title string | paper_decision string | review_1 string | rebuttals_1 string | review_2 string | rebuttals_2 string | review_3 string | rebuttals_3 string | review_4 string | rebuttals_4 string | global_rebuttals string | dataset_source string | conference_year int64 | review_5 string | rebuttals_5 string | review_6 string | rebuttals_6 string | review_7 string | rebuttals_7 string | review_8 string | rebuttals_8 string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Fundamental Limits of Prompt Compression: A Rate-Distortion Framework for Black-Box Language Models | Accept (poster) | Summary: The paper formulates hard prompt compression as a rate-distortion problem. An algorithm is provided for hard prompt compression, as well as another to estimate the RD curve (Algorithm 1). Experiments on synthetic as well as benchmark datasets are provided with comparisons to previous methods.
Note:
- I'm more than willing to change my score significantly if the questions are addressed appropriately.
- I don't know much about LLMs.
- I'm not familiar with previous literature on prompt-compression.
- I'm familiar with convex optimization, but it's hard to evaluate the correctness or novelty of Algorithm 1.
- I'm familiar with info theory in general.
Strengths: - Overall the paper is very well written. I was able to follow everything to a reasonable degree even without knowing much about LLMs.
- The paper takes a principled approach, through the lens of info theory, to prompt-compression to investigate fundamental limits.
- The overall conclusion points towards a space for improvement in current methods (summarized in Figure 1.), with the caveats mentioned in the questions/weakness.
Weaknesses: - The synthetic data is very limited as it has a binary support (although this is highlighted in as a limitation in the appendix).
- The gold standard for the distortion functions, as far as I understand, would be to have a distortion function that is low when two sentences have "the same meaning", as measured by humans, and high otherwise. Of course, this is function doesn't exist (as far as I know). It isn't obvious why the chosen distortion metrics (i.e., log-loss and 0/1) capture "semantics" as originally intended by the authors.
Smaller issues
- Line 123: notation for a pair of objects inconsistent with the previous parts of the text (i.e., used $[M, Q]$ instead of $(M, Q)$)
- Line 147: typo, "taken with respect" -> "taken with respect to"
- Line 158: typo, "The dimensions of problem" -> "The dimensions of the problem"
Suggestions (this is conditioned on there being reasonable explanations for the questions in the Questions section):
- I would suggest focusing the paper more about the empirical results and why the RD formulation makes sense for this problem (the ML community is not too familiar with info theory, so the formulation itself is a contribution in my opinion).
- The entirety of section 3.3 is not too important for the story and could be entirely moved to the appendix. Instead, replace 3.3 with a discussion as to why this RD formulation captures the "rate-semantics" trade-off originally intended by the authors, by providing evidence that the distortion functions proposed can actually measure semantic distortion.
Technical Quality: 3
Clarity: 4
Questions for Authors: > Line 261: "taking $P_{XQY}$ to be the empirical distribution on the dataset"
If the empirical distribution is used in the RD formulation, isn't this just overfitting to the sample?
> Line 108 and Equation (1)
I don't understand why the log-loss, or 0/1 loss, under the LLM is a reasonable distortion function to measure semantic distortion. Please see "Weakness" section.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: There's a large limitations section in the appendix which I appreciate.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful reading of the paper and constructive suggestions and feedback. We provide our response to the reviewer’s individual comments here, and strongly encourage the reviewer to check our global response for updates regarding our new natural language experiments, which is also relevant to the reviewer’s comments. Please find our response to the questions below:
* **Limited dataset**: We include two new experiments on natural language data in the global response.
* **Choice of distortion metric**: Indeed, the proper choice of distortion function is important to meaningfully measure the change in performance (distortion) of the LLM as the rate is varied. Ideally, a distortion metric as described by the reviewer (“low distortion” for sentences with the “same meaning,” as measured by humans) is the gold standard, but this metric is not known. This is very much a crucial open problem for not just our work, but for fair benchmarking of LLMs in general.
* **0/1 loss for synthetic dataset**: For our synthetic dataset consisting of binary strings, the 0/1 loss is the best. This is because (1) the answers are numerical or binary answers, so there is no notion of semantic relatedness between two answers, and (2) the answers are single tokens, so the most practical thing is to check for an exact match between the answers. We also include the log loss metric to include some diversity in our evaluations. Although the structure of the curves are different, the relative ordering of the curves and their trends are very similar to the curves in the 0/1 loss plot.
* **rougeL and BertScore: Semantic distortion metric for natural language datasets**: For our new results on natural language, we report our results using these (or rather, “$1-$” these, since these are similarity metrics and we want a low distortion to mean high similarity):. rougeL is a common metric used to evaluate summaries, which does so by computing the precision and recall of the tokens in the generated and reference texts, while BertScore computes contextualized embeddings for the tokens and uses the pairwise cosine similarity among them to produce the final score. The authors of the BertScore work highlight that their metric correlates better with human judgements than all other metrics (rougeL included) [1]. Regardless, our results with these two metrics are in agreement with each other, suggesting that a “good enough” metric may be sufficient. A popular approach in current literature is to ask GPT4 to give a score on the similarity between generated and reference texts. Although it has been shown that humans agree with GPT4’s evaluation more than the evaluation of other humans [2], we are skeptical of this metric because it is not reproducible and GPT4 has biases which may result in unfair or inaccurate evaluations. Additionally, we would also like to mention that our theoretical framework is general and does not assume any specific distortion function. In particular, it can also be used with new distortion functions that better capture semantic notions, when they are discovered in the future.
* **Discussion of rate-semantic trade-offs in Section 3.3**:
* We thank the reviewer for this suggestion. We will add a discussion on the semantic metrics (as in the answer above) together with our NLP experimental results in the revised manuscript.
* We maintain that Section 3.3 is important because our primal and dual LP formulations, particularly Algorithm 1, not only form the basis of our theoretical contributions by allowing us to compute the optimal curve efficiently but also demonstrate that our theoretical results hold for any distortion metric, including those designed to capture distortion in the semantic space.
* **Re: If the empirical distribution is used in the RD formulation, isn't this just overfitting to the sample?**: Firstly, we would like to mention that our theoretical framework allows for any choice of the distribution, and Algorithm 1 can still be used to compute the optimal curve with that choice of the distribution.
* We choose the empirical distribution since it is the most natural choice when the distribution is unknown and we are only given a dataset. This is also how other works computing information-theoretic limits of compression in various settings approximate the underlying distribution ([33, 34] in our paper).
* Alternatively, one might assume a parametric distribution and try to learn the parameters from the dataset. However, it is unclear what the right parametric model for text (particularly LLM prompts) should be.
* It is true that a very small number of samples (i.e., when the dataset is not a good representation of the true distribution) can lead to an optimal compressor that overfits. In our experiments on synthetic prompts, we observe that LLMLingua-2 Dynamic nearly matches the optimal curve for some rates. Had the optimal compressor significantly overfit to the dataset, optimality would not have been achieved (although this is not a necessary condition). This suggests that with a sufficiently large dataset, the empirical distribution is a good enough approximation.
* Finally, in each of our plots, we compare all compression schemes to the optimal curves by evaluating them on the same dataset, so the comparison is fair.
This is indeed an interesting problem, and we will add a discussion highlighting the choice of distribution in the revised manuscript.
___
**References**:
[1] Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q. Weinberger, & Yoav Artzi (2020). BERTScore: Evaluating Text Generation with BERT. In International Conference on Learning Representations.
[2] Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, & Chelsea Finn (2023). Direct Preference Optimization: Your Language Model is Secretly a Reward Model. In Thirty-seventh Conference on Neural Information Processing Systems.
---
Rebuttal Comment 1.1:
Comment: The added discussion on "semantic distortion" is very satisfying. I'm happy with the response and have updated my score accordingly.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their timely response, and are happy that the reviewer is satisfied with our rebuttal. We further extend our gratitude to the reviewer for increasing their score.
As per the reviewer's suggestion, we will include a discussion on semantic distortion in the revised paper. | Summary: The paper studies the distortion-rate function for prompt compression and proposes an linear programming based algorithm that produces compressed "hard" prompts and is suitable for black box models. The authors provide empirical results on a synthetic dataset illustrating a gap between the existing prompt compression approaches and the theoretically optimal distortion-rate tradeoff and show that the proposed algorithm outperforms the existing methods.
Strengths: - The paper provides sufficient detail on related work and motivates the problem well.
- The proposed distortion-rate formulation is simple yet easy to understand/interpret. It also allows for query-adaptive solutions with simple modifications to the formulation.
- The empirical results on the synthetic dataset demonstrate the improvements over existing prompt compression methods, which are provably (since the dataset is synthetic) worse than the optimal compressor.
Weaknesses: - It would be great to see some experimental results with real datasets.
- What is the reason that the rate is normalized but the distortion is not? Have the authors tried different combinations?
- How does the efficiency of the proposed method compare with existing prompt compression methods? Did the authors compare how long each method takes to compress the same prompt?
Technical Quality: 3
Clarity: 3
Questions for Authors: In addition to the questions above, out of curiosity, do the authors expect any challenges when the compressor is conditioned on multiple queries?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors mention limitations in the last section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and constructive feedback. We provide our response to the reviewer’s individual comments here, and strongly encourage the reviewer to check our global response for updates regarding our new natural language experiments, which is also relevant to the reviewer’s comments. Please find our response to the questions below:
* **Real (natural language) datasets**: We include two new experiments on natural language data in the global response.
* **Normalization**: Normalizing the distortion in the same sense as the rate could, in theory, be done by defining the distortion to be $\mathbb{E}\left[ \frac{\mathsf{d}(Y, \phi_{\text{LLM}}(M,Q))}{\mathsf{d}(Y, \phi_{\text{LLM}}(X,Q))}\right]$. However, this is not reasonable in practice since $\mathsf{d}(Y, \phi_{\text{LLM}}(X,Q))$ is usually small (in fact, they are often 0 for the 0/1 loss plots), leading to very large normalized distortion values. A different way to normalize would be to subtract the distortion of the uncompressed prompt, but this would only shift the plots such that the line corresponding to “No Compression” in our plots is at 0. This provides less information than our plots, since we then no longer know how much the “No Compression” distortion is.
One might also ask whether it is possible to use the rates without normalization, i.e., just using $\mathbb{E}[\mathrm{len}(M)]$ or $\frac{\mathbb{E}[\mathrm{len}(M)] }{ \mathbb{E}[\mathrm{len}(X)] }$ instead of $\mathbb{E}\left[\frac{\mathrm{len}(M)}{\mathrm{len}(X)}\right]$. None of these choices are right or wrong, as they capture slightly different nuances of compression. Choosing the appropriate normalization for a given scenario is certainly an important problem, as pointed out by the reviewer. The key point here is that all these cases can be handled by our framework, by simply changing the associated definitions of rate and distortion to be normalized or unnormalized as needed. Algorithm 1 can still be used to compute the optimal curve, by appropriately redefining the quantities $\boldsymbol{R}_x$ and $\boldsymbol{D}_x$.
* **Efficiency**: The efficiency of our methods is the same as the efficiency of LLMLingua-2, which is faster than LLMLingua methods and Selective Context. The following shows the average time it takes each method to compress a a prompt from our small natural language dataset:
| Compression method | Time (seconds) |
| --------- | -----------: |
| Selective Context | 0.049 |
| LLMLingua | 0.273 |
| LLMLingua Query | 0.530 |
| LLMLingua-2 | 0.044 |
| LLMLingua-2 Query | 0.043 |
| LLMLingua-2 Dynamic | 0.044 |
We are not able to provide the timings for the NarrativeQA dataset since we had to distribute those experiments across machines with differing hardware (which yields an unfair timing comparison) in order to ensure the results were complete before the rebuttal deadline. We thank the reviewer for this suggestion. We agree that reporting the timings/efficiency of the compression methods is useful, and we will include a table for the timings on NarrativeQA, in addition to our small natural language dataset, in the final revision.
* Additional question, **multiple queries**: As far as computing the optimal curve using Algorithm 1 goes, there is no new difficulty when there are multiple queries to condition on. These can all be clubbed into a new “mega-query” $Q’$, and all of the theory follows, replacing $Q$ by $Q’$. Naively, this approach may also be used with existing prompt compression methods, where all queries of interest are concatenated together. We expect that this may work well if the compressor is trained to accept multiple queries. Currently, all query-aware compressors are trained for a single query, and we are skeptical if they will work well without additional training.
---
Rebuttal Comment 1.1:
Title: response
Comment: I thank the authors for their response; my questions have been answered sufficiently. I maintain my score for acceptance.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their timely response, and are happy that we were able to sufficiently answer all questions.
We believe that including the table of timings for the compression methods is a useful benchmark to include, and we will add it to the revised paper. | Summary: This paper proposes an information-theoretic framewok for token-level prompt compression for large language models, where the rate is characterized by the expected ratio between the compressed prompt and the original prompt, and the distortion is a cross-entropy based distortion or an accuracy based distortion. The RD problem is then converted to a LP problem, where optimal solutions are given for both query-aware and query-agnostic settings. An algorithm "LLMLingua-2 Dynamic" is proposed that outperforms other LLMLingua-based algorithms.
Strengths: Token-level semantic compression has been an active area of research. The current paper builts an information-theoretic framework inspired by the rate-distortion trade-off in source coding. Algorithms for solving the RD problems are given. Therefore, the proposed framework will be useful for comparing prompt compression methods in the future.
Weaknesses: The motivation for query-aware compression is not clear. If query is provided, then why not compress the answer directly, so the optimal rate would be a constant? The main content focuses on the query-aware setting (we assume that a query is provided in addition to the compressed prompt during the target LLM inference call), but the theorecital discussions in section 3.3 are on the query-agnostic setting. While the query-aware parallel is provided in the appendix, is there any reason for not focusing on the query-aware setting in the first place?
The experiments are not convincing, in that the prompt lengths in the dataset is too short, and that the dataset only contains 7 queries. In practice, the token-level semantic compression is often used to reduce redundancy of long input. Therefore, this work should consider long prompts such as the NarrativeQA dataset. Moreover, the literature review is not thorough -- some baselines are missing (e.g., [1,2]).
[1] Wingate, David, Mohammad Shoeybi, and Taylor Sorensen. "Prompt compression and contrastive conditioning for controllability and toxicity reduction in language models." arXiv preprint arXiv:2210.03162 (2022).
[2] Fei, Weizhi, et al. "Extending context window of large language models via semantic compression." arXiv preprint arXiv:2312.09571 (2023).
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Why is the compression rate defined as the expected ratio between len(M) and len(X)? In information theory, the rate is usually measured by units such as bits.
2. In the query-aware setting, if query is provided, then why not compress the answer directly, so the optimal rate would be a constant?
3. In lines 85-86, you said "we assume that a query is provided in addition to the compressed prompt during the target LLM inference call", but later in lines 136-137, you then said "To simplify the presentation, we restrict our discussion here to the query-agnostic setting, and only briefly mention the analogous definitions and results for the query-aware setting." Is there any particular reason for this conflict?
4. In Proposition 1, what do you mean by $\mathbb{R}_+^{\mathcal{M}_x}?$ $\mathcal{M}_x\subseteq \mathcal{V}^*$ right?
5. The proof of Proposition 1 is unclear. In (LP), constants $D_x, R_x$ are used, but only $D_{x,m}, R_{x,m}$ are defined.
6. From the experimental results, it seems that the proposed method can bot achieve high compression rate/low distortion (lossless). Could you discuss this in your paper?
7. Could you provide comparisons to other baselines such as [1] and [2]?
8. Could you conduct experiments on other public datasets, such as the NarrativeQA dataset?
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback. We provide our response to the individual comments here, and strongly encourage the reviewer to check our global response for updates regarding our new natural language experiments, which is also relevant to the reviewer’s comments. We also believe there is a misunderstanding regarding the meaning of query-aware prompt compression, which we address in our responses to Q2 and Q3.
* Q1 (**Unit of rate**): We define the rate to be len(m)/len(x) mainly because this is the convention that is followed in the prompt compression literature. Both len(m) and len(x) are measured in “tokens”, so the unit of rate is “tokens (of compressed prompt) per token (of prompt)”. In classical information theory, sequences are compressed to binary strings, hence the unit is “bits per source symbol”. Even in information theory, it is indeed the length that is used as the proxy for rate in variable-rate coding [1] and more recent “one-shot” compression setups [2,3].
* Q2 (**Compression of answer**): Providing the answer as an input to the LLM might not result in the correct output, e.g. let the prompt be “00011” and the query be “how many zeros are in the prompt?”. The answer should be “3”. If we simply pass “3” as the input to the LLM as the compressed prompt with the query “how many zeros are in the prompt?,” the answer will be “0”, which is incorrect for the given prompt.
Indeed, we expect that a “good” compressor is able to extract as much of the answer as possible from the prompt and compress that information, but understanding this is orthogonal to our work; we instead find the optimal theoretical performance that can be achieved by *any* compressor (which may or may not use this “find answer, compress” strategy).
* Q3 (**Clarification on query-aware**):
* We wish to clarify a potential misunderstanding: “we assume that...LLM inference call” does not imply the query-aware setting — this is the case for both query-aware and query-agnostic compression (Fig. 2 in paper). The difference in the query-aware case is that the compressor also has access to the query.
* The query-aware and query-agnostic settings are both independently important to study theoretically. Our work provides an information-theoretic framework to study hard prompt compression, an area of active research, with several schemes already proposed. These schemes are either query-agnostic (LLMLingua) or query-aware (LLMLingua Query); we cover both types in our framework.
* We only develop the theory for the query-agnostic setting in the main paper since the query-aware setting is analogous.
* Q4 (**Notation in Prop. 1**): For a finite set $A = \\{ a, b, c \\}$, we denote $\\mathbb{R}^{A}_{+}$ as the set of all vectors $x$ with components indexed by elements of $A$, i.e., $x = (x_a, x_b, x_c)$, where $x_a, x_b, x_c$ are nonnegative real numbers. (An overview of our notation is provided in App. A, but we will explain these instances of non-standard notation in the main text.)
* Q5 (**Constants in Prop. 1**): The constants $\\boldsymbol{D}_{x}$ and $\\boldsymbol{R}_x$ are vectors with nonnegative real components, indexed by elements of $\\mathcal{M}_x$, i.e, ${\\boldsymbol{D}_x}_m$ and ${\\boldsymbol{R}_x}_m$ are the components of $\\boldsymbol{D}_x$ and $\\boldsymbol{R}_x$.
* Q6 (**Proposed method does not achieve high rate and low distortion?**):
Our proposed method does match the “No Compression” result when the rate is 1, and can likely achieve low distortion for rates less than 1, but we are not able to exactly characterize when that transition happens for this dataset. LLMLingua-2 Dynamic does not compress with higher rates on our binary dataset, and this is because the $[0, 1]$ confidence scores for each token which should be kept (according to the training data) are very high (between 0.98 and 0.99), and the scores for the tokens which should not be kept are very low. In fact, we could not find a threshold greater than 0 which gives a high rate. For our new natural language experiments, the curves for our LLMLingua-2 Dynamic method fully cover the range of low to high rates.
We will include updated figures containing at least the RD point for rate 1 in the final revision of our paper, in addition to the discussion above explaining this behavior.
* Q7 (**Comparisons to other baselines**): Please note that [4] (which we cite as [10] in our paper) is a *soft* prompt compression scheme and is not compatible with our framework, which focuses on *hard prompts*. While methods that use soft prompts are a valid approach to the prompt compression problem, they are not compatible with black-box LLMs, which is a crucial component of our framework.
Thank you for bringing [5] to our notice; we will cite them in the final version of our paper. Unfortunately, their code is not publicly available; we are unable to integrate their method into our experiments.
* Q8 (**Additional experiments**): Thank you for this suggestion; we agree that extending our experiments to (ideally large-scale) natural language datasets is important for our paper. We provide the results of two new experiments on natural language data, including NarrativeQA, in our global response.
___
**References**:
[1] J. Ziv and A. Lempel, "Compression of individual sequences via variable-rate coding," in IEEE Trans. Info. Th., 1978
[2] C. T. Li and A. E. Gamal, "Strong Functional Representation Lemma and Applications to Coding Theorems," in IEEE Trans. Info. Th., 2018
[3] C. T. Li and V. Anantharam, "A Unified Framework for One-Shot Achievability via the Poisson Matching Lemma," in IEEE Trans. Info. Th., 2021
[4] D. Wingate, M. Shoeybi, and T. Sorensen. "Prompt compression and contrastive conditioning for controllability and toxicity reduction in language models." arXiv:2210.03162 (2022)
[5] W. Fei, et al. "Extending context window of large language models via semantic compression." arXiv:2312.09571 (2023)
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. Given the modifications the authors promise, I have raised my rating to accept.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's timely response and sincerely thank the reviewer for raising their score. | null | null | Rebuttal 1:
Rebuttal: We thank all the reviewers for acknowledging our contributions and providing constructive feedback. Our ***key contributions*** lie in providing a principled framework for prompt compression (as acknowledged by Reviewers cHot and sxbS), showing a large gap between optimality and current schemes (as acknowledged by Reviewers Jydp and sxbS). We are able to do so by formulating the optimal curve as a linear program that can be solved using an efficient, geometric solution via its dual in Algorithm 1. This is nontrivial as the linear program itself has too large a dimension to solve directly. We also propose compression schemes that outperform current schemes on synthetic datasets.
We highlight some of the ***additional experiments*** we ran to address the reviewers’ concerns, which include results on (1) a small-scale dataset with natural language prompts to compare with the optimal compressors on natural language, and (2) a larger scale dataset, NarrativeQA, as suggested by reviewer cHot.
1. **NLP dataset**: We prompt GPT4 to curate a small-scale natural language dataset consisting of short prompts (no more than 15 tokens in length), queries, and their respective answers (some examples from the dataset are shown in Table 1 in our attached PDF). We will include details on the dataset in the final revision of the paper. We are also able to compute the optimal curve for this small NLP dataset, using the “pruning” approximation described in Appendix E of the paper. Please see Figure 1 in our attached PDF for the results of this experiment. Our key observations are (1) the gap between the optimal prompt compressors and current methods is quite large, (2) Our proposed methods achieve the lowest distortion for rates below 0.5, and (3) with prompt compression, it is possible to do better than standard prompting as shown by the gap between the optimal curves and the “No Compression” result. All of these observations are in line with our experiments on binary sequences.
2. **Extension to a larger scale NLP dataset** (suggested by Reviewers cHot, Jydp, and sxbS): As per the suggestion of Review cHot, we use the summaries provided in the NarrativeQA [1] datasets as the prompts (up to several hundred tokens), and the queries in the dataset accordingly. The result of this experiment is shown in Figure 2 in our attached PDF.
* Comparison of constructive algorithms: Similar to the result of the small natural language dataset, we observe that our proposed methods again perform better than all other methods for rates less than 0.5.
* As discussed in item 3 below, it is not feasible to compute the curves for the optimal prompt compressors on this (or any other) large scale dataset. As such, we view this experiment to be complementary to the experiment discussed in item 1 above.
3. **Fundamental characteristics of the optimal curve computation**: A key observation is that Algorithm 1 can be used to compute the optimal curve even for practical datasets, since it only has a linear complexity in the length of the prompt. The difficulty when assuming black-box models is that we need to make an inference call with each possible compressed prompt to compute its distortion, resulting in an exponential complexity in the length of the prompt. This cannot be avoided without further understanding the structure of language models or making some assumptions on their statistical properties. We leave such considerations, including methods to approximate the optimal curve for larger datasets, for future work.
___
**References**:
[1] Kočiský, Tomáš, et al. "The NarrativeQA Reading Comprehension Challenge." Transactions of the Association for Computational Linguistics 6 (2018): 317-328.
Pdf: /pdf/fdb3fcad1afc0c2c1c94ee009c9f1247eb5126b9.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Hyper-opinion Evidential Deep Learning for Out-of-Distribution Detection | Accept (poster) | Summary: This manuscript takes a significant step forward in the realm of evidential deep learning by considering a more holistic hyper-opinion evidence framework. The approach presented offers a novel perspective for optimizing evidential deep learning models, notably enhancing their ability to detect out-of-distribution instances without additional settings. This enhancement is particularly evident when the models are applied to complex datasets.
Strengths: The manuscript is commendable for presenting a fresh and innovative approach to the framework of evidential deep learning. It is noteworthy that while previous research has mainly concentrated on enhancing the settings in evidential deep learning, there has been an oversight in addressing the structural deficiencies of the framework itself.
The authors have introduced an interesting concept by treating the features extracted by the network as evidence when applying the theory to practical scenarios. The authors also give full mathematical proof on the problems they solved. This approach grounds in a solid theoretical foundation based on subjective logic, demonstrates simplicity in its application and shows impressive results in experimental validation.
Weaknesses: The manuscript exhibits some shortcomings in the handling of certain details. The hyper-domain does not encompass the set itself and the empty set; however, in practice, due to the treatment of features as evidence, the possibility of evidence representing these two sets cannot be ruled out. Furthermore, the subjective logic stipulates that evidence from the same subset should appear at most once, yet the approach presented in this paper does not guarantee full compliance with this condition.
Technical Quality: 4
Clarity: 3
Questions for Authors: I had some concerns on the setup of the ablation experiment. What implement the authors take to transform from hyper-opinion to multinomial-opinion without opinion projection?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The limitation is understandable and the method does not present any potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer cb7H,
We sincerely appreciate the time and effort you have dedicated to reviewing our paper and providing us with valuable feedback. Here are our replies.
> The manuscript exhibits some shortcomings in the handling of certain details. The hyper-domain does not encompass the set itself and the empty set; however, in practice, due to the treatment of features as evidence, the possibility of evidence representing these two sets cannot be ruled out. Furthermore, the subjective logic stipulates that evidence from the same subset should appear at most once, yet the approach presented in this paper does not guarantee full compliance with this condition.
Due to the opinion projection without prior information, both the full set and the empty set contribute the same to each category. For full set, it contributes $\frac{1}{K}$ (where K is the number of categories) evidence to each category. For empty set, it contributes none evidence to any category. Therefore both situation will not affect the model accuracy.
Before the opinion projection, there may exists multiple pieces of evidence supporting the same set. But these evidences will be accumulated as one piece of evidence, which meets the condition of Subjective Logic. As in the 'Hyper-opinion Belief' section of Figure 2, the belief mass for one set appears at most once.
> I had some concerns on the setup of the ablation experiment. What implement the authors take to transform from hyper-opinion to multinomial-opinion without opinion projection?
We first model the evidence on hyper-domain to form the hyper-opinion, and then use the fully connected layer for opinion assignment. ReLU activation is applied for the fully connected layer weights and the bias is set to 0 to ensure the non-negative assignment of hyper-opinion evidence. | Summary: This paper studies the problem of out-of-distribution (OOD) detection. Traditional Evidential Deep Learning framework collects sharp evidence that supports a single category while ignoring vague evidence that supports multiple categories, leading to inaccurate uncertainty estimation and decreased OOD detection performance. The authors introduce hyper-domain and propose Hyper-opinion Evidential Deep Learning. With hyper-opinion, HEDL explicitly models evidences as sharp and vague evidences that support single categories and multiple categories respectively. HEDL extends the framework of EDL and establishes more accurate uncertainty estimation for OOD detection. Experiments on several datasets demonstrate the effectiveness of the proposed method.
Strengths: -The proposed method is sound in theory. HEDL extends the framework of EDL, establishes a more accurate uncertainty estimation framework for OOD detection. It’s quite novel and enlightening.
-Without additional regularization terms nor computational complexity, it maintains the simplicity and generalizability.
-This paper has sufficient experiments to demonstrate the effectiveness of the proposed method and each module. It achieves better performance compared with SOTA methods.
Weaknesses: -To ensure fairness, the value of W_{prior} should be explicitly stated and consistent with EDL, cause the value of W_{prior} has been proved to be crucial to the effectiveness of EDL model [1].
-The authors may consider incorporating the KL divergence as a loss term to enhance the model's performance.
-In the section when proofing the existence of the gradient vanishing problem in EDL. It would be beneficial to delineate the conditions where o_k is less than zero occurs.
-The paper could benefit from a wider analysis of the impact of the other two loss functions mentioned in EDL, e.g., the MSE loss and the Log loss, when applied to the HEDL model.
[1] Chen M, Gao J, Xu C. R-EDL: Relaxing Nonessential Settings of Evidential Deep Learning[C]//The Twelfth International Conference on Learning Representations. 2023.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1.Whether the opinion projection process that translates hyper-opinion into standard multinomial-opinion, leads to a loss of uncertainty information? Please provide a detailed discussion on this matter.
2.Whether HEDL encounters the issue of exponential explosion, a challenge mentioned in subjective logic as the number of categories expands. Could the authors address this potential concern and discuss any mechanisms to manage such an increase in complexity?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The limitations discussed in the manuscript do not appear to significantly constrain the applicability of the method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer Y3V6,
We sincerely appreciate the time and effort you have dedicated to reviewing our paper and providing us with valuable feedback. Here are our replies.
> To ensure fairness, the value of $W_{prior}$ should be explicitly stated and consistent with EDL, cause the value of $W_{prior}$ has been proved to be crucial to the effectiveness of EDL model [1].
In all our experiments, the value of $W_{prior}$ is set to 1, the same as in EDL, denoting no prior information of the evidence.
This clarification is mentioned in section 3.1, if the paper is accepted, we will add this definition to the experimental setup too.
> The authors may consider incorporating the KL divergence as a loss term to enhance the model's performance.
We tried to add KL divergence as a constraint yet obtained worse model effect. In fact, HEDL will extract and retain vague evidence, while KL divergence tends to constrain the generation of vague evidence. The conflict between them thus affecting the effect of the model.
> In the section when proofing the existence of the gradient vanishing problem in EDL. It would be beneficial to delineate the conditions where $o_k$ is less than zero occurs.
Eq. 27 shows that in each back-propagation process, for categories that are not ground-truth, the gradient values of the fully connected layer weights will experience large gradient descent. For some categories (determined by the initial parameters of the model), after undergoing multiple large gradient descents, the fully connected layer weights are degraded, resulting in the output $o_k$ being less than 0.
> The paper could benefit from a wider analysis of the impact of the other two loss functions mentioned in EDL, e.g., the MSE loss and the Log loss, when applied to the HEDL model.
Thanks for your suggestion. The results of applying the other two loss functions are as follows. Different loss functions have limited effect on the model performance.
| Method | CIFAR-10 | | | | CIFAR-100 | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | FPR95 | AUPR | AUROC | Acc | FPR95 | AUPR | AUROC | Acc |
| MSE | 18.27 | 92.66 | 95.16 | 95.47 | 56.14 | 89.15 | 88.65 | 80.67 |
| Log | 15.33 | 93.64 | 95.71 | 95.62 | 52.44 | 89.70 | 89.15 | 80.39 |
| Digamma | 15.55 | 94.47 | 96.27 | 95.66 | 55.14 | 89.07 | 89.59 | 80.40 |
> Whether the opinion projection process that translates hyper-opinion into standard multinomial-opinion, leads to a loss of uncertainty information? Please provide a detailed discussion on this matter.
In the process of projecting hyper-opinion to multinomial-opinion, vague evidence is allocated to specific singletons, while the total evidence mass does not change. At the same time, the uncertainty of each singleton and the overall uncertainty remains unchanged. Due to Eqs. 10 and 18, there is no uncertainty loss in the opinion projection process.
> Whether HEDL encounters the issue of exponential explosion, a challenge mentioned in subjective logic as the number of categories expands. Could the authors address this potential concern and discuss any mechanisms to manage such an increase in complexity?
Instead of building the corresponding evidence for each element, we build the extracted features as evidence in the hyper-domain. The number of evidence we extract are determined by the feature dimensions. So the number of evidence does not change with the number of categories, which avoids the exponential explosion problem.
---
Rebuttal Comment 1.1:
Title: Retaining my positive score
Comment: Thank you for your rebuttal. I am glad that the authors validated the wider analysis I raised for KL divergence and other loss functions. The definition of W_{prior} addressed my concern about the fairness, and please include the definition in the paper. The discussion on uncertainty loss and exponential explosion is convincing. After reading other reviews and your responses, I am going to retain my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for helping us improve the paper, we will add the definition about $W_{prior} $ to the experimental setup in an updated version of this paper. We really appreciate your valuable comment! | Summary: This paper introduces an out-of-distribution detection method based on evidential deep learning. This method models the evidence in hyper-domain, and the hyper-opinion in Subjective Logic is used to replace the multinomial-opinion in traditional evidential deep learning. The Hyper-opinion Evidential Deep Learning considers the vague evidence ignored in traditional Evidential Deep Learning, so as to achieve better uncertainty estimation effect. The OOD detection performance of proposed method exceeds the current SOTA OOD detection methods while maintaining the classification accuracy.
Strengths: 1. HEDL achieves SOTA OOD detection performance, while maintaining classification accuracy.
2. As the number of categories in a dataset increases, the traditional EDL framework's performance significantly deteriorates. In contrast, HEDL can consistently extract comprehensive evidence and maintain its performance, regardless of the dataset's scale.
3. HEDL does not introduce additional computational complexity of the model.
4. HEDL can mitigate the vanishing gradient problem in EDL theoretically and practically.
Weaknesses: 1. The origin of the sample uncertainty depicted in the upper portion of Figure 1 is not clearly defined. It is imperative to clarify whether the data presented comes from actual experimental outcomes or if it is merely illustrative example. If the data is based on real-world results, please provide an explanation of how to quantify the samples vagueness.
2. The paper does not specify the value of $W_{prior} $ nor does it analyze its role as a hyperparameter. It is essential to elucidate whether $W_{prior} $ is equivalent to EDL. Please provide a detailed numerical definition of $W_{prior} $.
3. Figure 2 can be described more clearly. In the second module ‘Opinion Projection’ of the lower part, the positioning of the right bar appears to be more logically associated with the 'Multinomial-Opinion Optimization' section. To improve the figure's comprehensibility and accuracy, it is recommended to reposition the right bar accordingly.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In Figure 4, the uncertainty distribution of In-Distribution and Out-of-Distribution data is similar between HEDL w/o projection and HEDL. This observation raises the question of the contribution of the opinion projection. Could the authors further explain the specific role and impact of the opinion projection in the whole method?
2. Could the authors clarify the conceptual difference between a 'Dirichlet hyper distribution' and a standard 'Dirichlet distribution' in Equation 8? From my understanding, it seems that the two distributions equal. If there is a nuanced difference, please provide an explanation to delineate the unique characteristics of the 'Dirichlet hyper distribution' as employed in your method.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, limitations and social impact are discussed in the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer jaEe,
We sincerely appreciate the time and effort you have dedicated to reviewing our paper and providing us with valuable feedback. Here are our replies.
> The origin of the sample uncertainty depicted in the upper portion of Figure 1 is not clearly defined. It is imperative to clarify whether the data presented comes from actual experimental outcomes or if it is merely illustrative example. If the data is based on real-world results, please provide an explanation of how to quantify the samples vagueness.
These data comes from the experimental results of CIFAR-100. We randomly select two categories (kangaroos and dinosaurs in Figure 1 for example), and calculate the vague evidence ratio of each sample in HEDL. Next we calculate the uncertainty of the samples in EDL and HEDL models respectively.
> The paper does not specify the value of $W_{prior}$ nor does it analyze its role as a hyperparameter. It is essential to elucidate whether $W_{prior}$ is equivalent to EDL. Please provide a detailed numerical definition of $W_{prior}$.
In all our experiments, the value of $W_{prior}$ is set to 1, the same as in EDL, denoting no prior information of the evidence.
This clarification is mentioned in section 3.1, if the paper is accepted, we will add this definition to the experimental setup too.
> Figure 2 can be described more clearly. In the second module ‘Opinion Projection’ of the lower part, the positioning of the right bar appears to be more logically associated with the 'Multinomial-Opinion Optimization' section. To improve the figure's comprehensibility and accuracy, it is recommended to reposition the right bar accordingly.
Thanks for your suggestion. The 'Multinomial-Opinion Optimization' section is about the optimization of the Dirichlet distribution associates with a multinomial-opinion. The right bar in ‘Opinion Projection’ represents the belief mass allocation from hyper-opinion to multinomial-opinion, which is more related to opinion projection. If the paper is accepted, we will make Figure 2 more clear.
> In Figure 4, the uncertainty distribution of In-Distribution and Out-of-Distribution data is similar between HEDL w/o projection and HEDL. This observation raises the question of the contribution of the opinion projection. Could the authors further explain the specific role and impact of the opinion projection in the whole method?
As shown in **Table 2**, the evidence needs to be projected from hyper-opinion to multinomial-opinion for accurate classification. The opinion projection operation can ensure the correct allocation of vague and sharp evidence, thereby obtaining accurate classification results.
> Could the authors clarify the conceptual difference between a 'Dirichlet hyper distribution' and a standard 'Dirichlet distribution' in Equation 8? From my understanding, it seems that the two distributions equal. If there is a nuanced difference, please provide an explanation to delineate the unique characteristics of the 'Dirichlet hyper distribution' as employed in your method.
The Dirichlet hyper distribution is built upon hyper-domain.
Instead of building evidence for singletons in a domain, Dirichlet hyper distribution treats the set as a singleton element in hyper-domain.
Thus Dirichlet hyper distribution is able to model the evidence for set (support multiple singletons in the same time), while Dirichlet distribution is limited to model evidence for singletons only. Modeling evidence for set enables Dirichlet hyper distribution to take vague evidence into consideration. | Summary: This paper provides a method for Out-of-Distribution detection called Hyper-opinion Evidential Deep Learning which is based on Evidential Deep Learning. It models the evidence on the hyper-domain, considering the extra vague evidence containing multiple possible categories. The measurement of vague evidence makes HEDL have more accurate uncertainty estimation. This paper also proposes an opinion projection method to mitigate the gradient vanishing problem in EDL. Experiments on several datasets show that the proposed method outperforms the current Out-of-Distribution detection methods.
Strengths: (1)The paper has clear motivation for each part of the proposed method and the method has a solid theoretical foundation.
(2)The proposed method achieves SOTA results compared with other methods.
Weaknesses: 1.Eqs. 13, 14, and 15 are hard to understand. The authors can detail on the meanings between the different W's.
2.Why HEDL is trained on a pre-trained feature extractor and how to ensure the fairness of comparative evaluations with other methods?
3.Theoretical proof of how HEDL solves vanishing gradient problem is missing. How is the EDL gradient vanishing point determined in Figure 3 and does this phenomenon consistently result in the vanishing gradients for certain fixed classes during each iteration?
4.Adding experiments on model complexity comparison can be beneficial to illustrate the computation complexity of HEDL.
5.More state-of-the-art methods (CIDER[1] and NECO[2]) can better demonstrate the effectiveness of proposed method.
[1] https://openreview.net/forum?id=aEFaE0W5pAd
[2] https://openreview.net/forum?id=9ROuKblmi7
I’m mostly concerned about 2 and 3. If the authors can address these questions, I may consider raising my score.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please answer the questions in Weakness.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Please see Weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer hxrE,
We sincerely appreciate the time and effort you have dedicated to reviewing our paper and providing us with valuable feedback. Here are our replies.
> Eqs. 13, 14, and 15 are hard to understand. The authors can detail on the meanings between the different W's.
$W$ is a matrix of $N \times K$ shape, representing the weight of the fully connected layer.
$W^s$ is a 0/1 matrix obtained by applying the Heaviside function to each value in the $W$ matrix, representing the set of each piece of evidence supports on the hyper-domain.
$W^p$ normalizes the value of each row of $W^s$ according to prior information, representing the opinion projection weight matrix.
> Why HEDL is trained on a pre-trained feature extractor and how to ensure the fairness of comparative evaluations with other methods?
Because HEDL extracts both vague and sharp evidences, it converges slowly in the early stages of training. Therefore, applying the softmax layer to pre-train a feature extractor and then fine-tuning it with HEDL can achieve faster convergence and better model results.
Refer to **Experiment Implementation Details**. For all comparative experiments, we apply the same model structure and initial training parameters, as well as the total number of training epochs, thus ensuring the fairness.
> Theoretical proof of how HEDL solves vanishing gradient problem is missing. How is the EDL gradient vanishing point determined in Figure 3 and does this phenomenon consistently result in the vanishing gradients for certain fixed classes during each iteration?
We provided theoretical proof of how HEDL solves vanishing gradient problem in **Appendix B**.
Figure 3 shows that in each training process, the parameters whose gradient norm sum is 0 are the gradient vanishes points.
The gradient vanishing classes in each training are not fixed, and will change according to the initial parameters.
Due to lacking of extracting evidence training caused by gradient vanishing, those gradient vanishing classes suffer from low classification accuracy and high uncertainty.
> Adding experiments on model complexity comparison can be beneficial to illustrate the computation complexity of HEDL.
We had provided the experimental data with training time for each model(softmax/EDL/HEDL) in **Appendix D**. Both theoretical proof and experimental results show that HEDL does not increase the computational complexity of the model.
> More state-of-the-art methods (CIDER[1] and NECO[2]) can better demonstrate the effectiveness of proposed method.
Thanks for your suggestion. We planned to compare with CIDER in our experiments, since CIDER does not report the classification accuracy in its original paper, we did not put it the main text. As for NECO, we keep the same experimental settings and model parameters as in our paper to reproduce the experimental results. The results are shown in the table below. It proves that our method is better than the current SOTA methods. If our paper is accepted, we can add these experimental results to our paper if necessary.
| Method | CIFAR-10 | | | CIFAR-100 | | |
|:------ |:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|
| | FPR95 | AUPR | AUROC | FPR95 | AUPR | AUROC |
| CIDER | 21.94 | **95.16** | 95.33 | 58.21 | 87.99 | 84.61 |
| NECO | 31.28 | 92.34 | 94.68 | 64.92 | 86.36 | 83.74 |
| HEDL | **15.55** | 94.47 | **96.27** | **55.14** | **89.07** | **89.59** |
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the additional explanation and more experimental results. I look forwards to seeing an updated version of this paper with these results, and I have updated my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for helping us improve the paper and updating the score, we really appreciate your valuable comment! | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence | Accept (poster) | Summary: This paper proposes a new optimizer based on Adam, called MicroAdam, which reduces memory footprints, via sparisification, quantization, and error feedback. Theoretical convergence guarantees are provided. Empirical results show good performance on several fine-tuning tasks
Strengths: 1. This paper proposes a new optimizer based on Adam, called MicroAdam, which reduces memory footprints, via sparisification, quantization, and error feedback.
2. Theoretical convergence guarantees are provided.
3. Empirical results show good performance on several fine-tuning tasks
4. Efficient implementation on GPUs is provided.
Weaknesses: 1. The experiments are limitted to finetuning tasks. Note that GaLore also show good performance in pretraining tasks.
2. The theoretical analysis is actually based on AMSGrad instead of Adam.
---------------
My concerns are well addressed according to the author's feedback.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In many cases, MicroAdam shows even better loss or accuracy than the original Adam, which doesn't really make sense since the compression methods should be lossy which typically incurs worse loss or accuracy. Is there any explanation of this phenomenon?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The limitations are well discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the feedback. We address your questions below.
**Weakness 1:** Pretraining
We would like to emphasize that we designed MicroAdam for the finetuning (FT) use case and our research question was “are all gradient entries useful for fine-tuning?”, which is why we test MicroAdam for FT tasks.
We agree with the reviewer regarding MicroAdam’s convergence in Figure 4 and we can explain this behavior by emphasizing the high sparsity in MicroAdam’s update (which, in our experiments, is around 90%). However, our results show that we can achieve at least the same performance with only 45% memory compared to AdamW-8bit ($0.9d$ bytes for MicroAdam compared to $2d$ bites of AdamW-8bit).
We address your concerns on effectiveness via two additional experiments:
1. To address the training from scratch / non-language tasks question, we trained ResNet-18 and ResNet-50 on ImageNet using MicroAdam and compared it with SGD, AdamW and AdamW-8bit. We provide results in the response PDF. Briefly, MicroAdam manages to obtain close to 72% accuracy, compared to 70% for SGD, AdamW and AdamW-8bit.
- For ResNet-18, **MicroAdam uses ~10 MB** memory for the model state, compared to ~22 MB for AdamW-8bit ($2.2\times$ more), ~45 MB for SGD ($4.5\times$ more) and ~90 MB for AdamW ($9 \times$ more).
- For ResNet-50, **MicroAdam uses ~22 MB** memory for the model state compared to ~49 MB for AdamW-8bit ($2.2 \times$ more), ~97 MB for SGD ($4.4 \times$ more) and ~195 MB for AdamW ($8.9 \times$ more).
This is an indication that MicroAdam is effective for pretraining model for vision tasks.
2. To address the question of scale, in addition to existing experiments, we would like to emphasize the effectiveness of MicroAdam for SFT by training a Llama 2-13B model on the same GSM-8k dataset and comparing it against AdamW-8bit. We provide results in the response PDF. Briefly, our results show that MicroAdam can also recover the accuracy of AdamW-8bit for this task at a lower memory cost: **MicroAdam allows training on a single Nvidia H100 GPU with 80GB RAM**, the total running process requires 70 GB GPU memory (~59 GB for the model and activations and ~11 GB is used for the optimizer state), while AdamW-8bit (24.2 GB for the optimizer state) requires more than one 80GB GPU and need at least 2 GPUs. The need for more than one GPU for AdamW-8bit comes from the size of optimizer state, which is $2.2 \times$ larger.
FT tasks require adapting a fraction of parameters (for example, LoRA) to be able to learn a downstream task. In contrast, pretraining (PT) requires updating all parameters in the model because there is a lot more knowledge to be embedded into the model compared to FT case and the dataset sizes are also much larger. In the paper we briefly explain that under the settings we choose for MicroAdam ($k=1\\%$ and $m=10$ gradients), the parameter update is at least 90% sparse, meaning that MicroAdam updates at most $10\\%$ of parameters in each layer at every optimization step.
**Weakness 2:** Theoretical analysis based on AMSGrad
The AMSGrad optimizer was introduced as a solution to a fundamental issue in Adam optimizer’s proof, where a state matrix was mistakenly supposed to be positive definite throughout the training (see Section 3 from Reddi et al. 2019 [1]). In Section 5 of [1], the authors provide a simple, one dimensional problem where Adam converges to the worst possible solution. Despite this clear evidence of unsatisfactory solution in simple settings, Adam (and the newer version AdamW [2]) is still the off-the-shelf optimizer when it comes to LLMs finetuning. We chose to build our theoretical analysis on the framework of AMSGrad since this algorithm fixes the above technical issue, and can therefore have a convergence proof (as opposed to Adam). Moreover, CompAMS introduces error feedback and provides analysis for AMSGrad in a distributed optimization setting. In this context, our work focuses on optimizing space usage, and introduces compression for error feedback in the CompAMS framework.
**Question 1:** Explaining the better performance
This behavior was also surprising for us. We believe that compression has a regularization effect by updating only 10% of the weights. Adding noise into training dynamics can both increase or decrease the loss or accuracy. It is not the case that lossy compression or noisy training must have higher loss or lower accuracy. For instance, if that were the case, then larger batch size (and implicitly lower noise in the stochastic gradient) should have implied better accuracy, which is not true as far as we are concerned. We are thinking about this in the context of full batch gradient descent, which does not generalize better than the mini-batch version.
References:
- [1] **On the convergence of Adam and beyond**, Reddi et al., 2019, available at https://arxiv.org/pdf/1904.09237
- [2] **DECOUPLED WEIGHT DECAY REGULARIZATION**, Loshchilov and Hutter 2017, available at https://arxiv.org/pdf/1711.05101
- [3] **ON DISTRIBUTED ADAPTIVE OPTIMIZATION WITH GRADIENT COMPRESSION**, Li et al., 2022, ICLR 2022, available at https://arxiv.org/pdf/2205.05632
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the feedback. It seems that most of my concerns are addressed and I will raise the score. | Summary: This paper proposes a memory-efficient Adam-based optimizer called MicroAdam. The key idea is to compress gradients using Top-K sparsification before passing them to the optimizer state, along with an error feedback vector that is itself compressed via quantization. For general smooth non-convex (and PL) functions, MicroAdam’s convergence rates are provided with constants depending on the choice of compressors. Experiments on fine-tuning tasks with BERT and LLaMA suggest that MicroAdam performs competitively with AdamW while using less memory.
Strengths: - This work provides theoretical convergence guarantees under commonly used assumptions. Previous work on memory-efficient Adam often relied more on heuristics.
- Performance on fine-tuning tasks is comparable to uncompressed AdamW, with lower memory cost.
Weaknesses: - Both theoretical and practical performance, as well as memory savings, depend heavily on the choice of compressors. These compressors can be complex to implement and require specific techniques to avoid memory overhead on GPUs. The difficulty and complex details of implementation might hinder impact as it is not easy to incorporate in practice.
- The approach is limited to fine-tuning tasks, and its effectiveness for LLM pre-training or non-language tasks remains unclear.
Technical Quality: 3
Clarity: 3
Questions for Authors: The condition $(1+\omega)q \leq 1$ is necessary for theoretical convergence. Do the compressors used in the experiment section satisfy this requirement? Generally, how do the authors suggest finding appropriate compressors for unfamiliar tasks?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations are well-addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the feedback. We address your questions below.
**Weakness 1:** Complexity of compressors
We agree that memory savings depend on the choice of compressors, and that efficient implementations are needed. However, we do have the advantage that both gradient sparsification, quantization, and low-rank compression are well-understood from distributed optimization, and one can build on prior work in the area, as well as good support in popular frameworks such as Pytorch. Specifically, the implementation we provide is not very complex, and can be further optimized along the lines we sketched in the response to **Weaknesses 4 and 6** of **Reviewer nVyV**.
**Weakness 2:** Pretraining
We would like to emphasize that we designed MicroAdam for the finetuning (FT) use case and our research question was “are all gradient entries useful for fine-tuning?”, which is why we test MicroAdam for FT tasks.
We agree with the reviewer regarding MicroAdam’s convergence in Figure 4 and we can explain this behavior by emphasizing the high sparsity in MicroAdam’s update (which, in our experiments, is around 90%). However, our results show that we can achieve at least the same performance with only 45% memory compared to AdamW-8bit ($0.9d$ bytes for MicroAdam compared to $2d$ bites of AdamW-8bit).
We address your concerns on effectiveness via two additional experiments:
1. To address the training from scratch / non-language tasks question, we trained ResNet-18 and ResNet-50 on ImageNet using MicroAdam and compared it with SGD, AdamW and AdamW-8bit. We provide results in the response PDF. Briefly, MicroAdam manages to obtain close to 72% accuracy, compared to 70% for SGD, AdamW and AdamW-8bit.
- For ResNet-18, **MicroAdam uses ~10 MB** memory for the model state, compared to ~22 MB for AdamW-8bit ($2.2\times$ more), ~45 MB for SGD ($4.5\times$ more) and ~90 MB for AdamW ($9 \times$ more).
- For ResNet-50, **MicroAdam uses ~22 MB** memory for the model state compared to ~49 MB for AdamW-8bit ($2.2 \times$ more), ~97 MB for SGD ($4.4 \times$ more) and ~195 MB for AdamW ($8.9 \times$ more).
This is an indication that MicroAdam is effective for pretraining model for vision tasks.
2. To address the question of scale, in addition to existing experiments, we would like to emphasize the effectiveness of MicroAdam for SFT by training a Llama 2-13B model on the same GSM-8k dataset and comparing it against AdamW-8bit. We provide results in the response PDF. Briefly, our results show that MicroAdam can also recover the accuracy of AdamW-8bit for this task at a lower memory cost: **MicroAdam allows training on a single Nvidia H100 GPU with 80GB RAM**, the total running process requires 70 GB GPU memory (~59 GB for the model and activations and ~11 GB is used for the optimizer state), while AdamW-8bit (24.2 GB for the optimizer state) requires more than one 80GB GPU and need at least 2 GPUs. The need for more than one GPU for AdamW-8bit comes from the size of optimizer state, which is $2.2 \times$ larger.
FT tasks require adapting a fraction of parameters (for example, LoRA) to be able to learn a downstream task. In contrast, pretraining (PT) requires updating all parameters in the model because there is a lot more knowledge to be embedded into the model compared to FT case and the dataset sizes are also much larger. In the paper we briefly explain that under the settings we choose for MicroAdam ($k=1\\%$ and $m=10$ gradients), the parameter update is at least 90% sparse, meaning that MicroAdam updates at most $10\\%$ of parameters in each layer at every optimization step.
**Question 1:** Compressors' properties
The condition $(1+\omega) q < 1$ is theoretical and parameters $q$ and $\omega$ are worst case bounds. In the experiments we apply much more compression than the theoretical worst case condition allows us. In general, Top-K, low-rank and quantization are the main compression operators widely studied theoretically and used in practice.
---
Rebuttal 2:
Title: Response to authors
Comment: Thank you for addressing my concerns and conducting additional vision pretraining experiments. The accuracy of AdamW seems a bit lower than usual. I don't think one can conclude that MicroAdam achieves better accuracy than AdamW in this case without further fine-tuning. However, I believe it sufficiently demonstrates MicroAdam's behavior in pretraining, so I will raise my score. | Summary: The paper introduces MicroAdam, a novel optimizer designed to improve memory efficiency while maintaining competitive performance with established optimizers such as Adam. The authors provide theoretical analyses and experimental results to demonstrate the benefits of MicroAdam in various settings.
Strengths: - **Theoretical Analysis**: The paper includes comprehensive theoretical analyses of memory consumption and the guarantee of convergence.
- **Experimental Validation**: The experimental results showcase the potential of MicroAdam in reducing memory consumption, with detailed comparisons to existing methods.
Weaknesses: - **Motivation in Figure 1**: Figure 1 does not effectively provide additional motivation for MicroAdam, as it represents a 2-dimensional non-stochastic problem with only choosing k=1 for top-k compression. This toy example can be solved by many heuristic methods or parameter adjustments, which diminishes the unique motivation for MicroAdam.
- **Convergence Speed**: The convergence speed of MicroAdam is asymptotically similar to AMSGrad rather than Adam itself. Given that the paper aims to improve Adam's memory footprint, it is unclear why the convergence speed matches that of AMSGrad, which theoretically has no significant speed difference from Adam's with respect to T.
- **Memory Consumption Analysis**: Sections 3.2 and C provide theoretical and simulated memory consumption analysis. However, the practical memory consumption during actual LLM training does not necessarily correlate strongly with the theoretical optimizer states' memory usage. For example, a small LLAMA model with sufficiently large batch size and token length can peak at over 70GB of memory usage (mainly due to activation memory), whereas the optimizer states' memory footprint might be under 100MB. Therefore, the paper should report more about real peak memory under appropriate settings rather than theoretical or simulated values, as the differences in optimizer states memory can be mitigated under certain settings and PyTorch's memory management mechanism.
- **Top-K Operator Memory Efficiency**: Despite the block-wise operation of the Top-K operator, if the gradients are in bf16 format, the memory overhead for 50% sparsity remains the same (due to the need to store 16-bit indices) or could even be higher when considering other memory overheads in the algorithm.
- **Lack of Training from Scratch Results**: A significant issue is the near absence of results for training from scratch, which is crucial for evaluating MicroAdam's effectiveness. For MicroAdam, training from scratch is more important. Table 1 shows that during fine-tuning, the training loss does not strictly correlate with the final metrics. For instance, GaLore has a much worse loss but maintains metrics comparable to those of other methods. This is why LLM fine-tuning metrics are often taken from the best of three epochs, not necessarily the last epoch, even if the training loss is the smallest in the final epoch. Additionally, Figure 4 indicates that MicroAdam's training efficiency is lower than baseline Adam, as MicroAdam's third epoch loss is nearly identical to Adam's second epoch loss.
- **Computational Efficiency**: MicroAdam appears to be less computationally efficient. Are there ways to improve its computational efficiency?
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the feedback. We address your questions below.
**Weakness 1:** About Figure 1
The motivation for this illustration is to show that some form of EF is _necessary_ for good convergence, even in the case of toy instances. Specifically, prior heuristic methods, such as GaLoRE, perform gradient compression but simply _omit EF altogether_, and we wanted to show that this is infeasible in general. We agree with your point that various other heuristics could be applied, and we will clarify the motivation for this example or move it to the Appendix.
In Figure 1 we show how error feedback (EF) fixes AdamW with Top-K compression. The plot on the left shows the optimization trajectory of the original Adam optimizer. The center plot illustrates the convergence of Top-K Adam when we only choose the largest coordinate from the accumulator (equivalent to 50% sparsity, since the problem is two-dimensional). In the end, on the right side we show that adding EF to Top-K Adam recovers the same optimization trajectory as the original Adam optimizer. Extrapolating to higher dimensional problems, our MicroAdam approach helps recover the trajectory of the original Adam optimizer, while using less memory.
**Weakness 2:** Convergence speed
The AMSGrad optimizer was introduced as a solution to a fundamental issue in Adam optimizer’s proof, where a state matrix was mistakenly supposed to be positive definite throughout the training (see Section 3 from Reddi et al. 2019 [1]). In Section 5 of [1], the authors provide a one dimensional problem where Adam converges to the worst possible solution. Despite this clear evidence of unsatisfactory solution in simple settings, Adam (and the newer version AdamW [2]) is still the off-the-shelf optimizer when it comes to LLMs finetuning. We chose to build our theoretical analysis on the framework of AMSGrad since this algorithm fixes the above technical issue, and can therefore have a convergence proof (as opposed to Adam). Moreover, CompAMS introduces EF and provides analysis for AMSGrad in a distributed optimization setting. In this context, our work focuses on space optimizations, and introduces compression for EF in the CompAMS framework while preserving convergence.
**Weakness 3:** Memory consumption
We agree with the example you gave in your comment and we would like to add a few details about our experimental work. We emphasize that the memory usage we reported in the paper was the maximal memory usage, read directly from the GPU. This means that a user must have at least that much memory available on the GPU to perform the experiment. Our memory usage is the overall memory used by the entire program, including model activation and gradient, as well as the batch of data used for forward pass and the optimizer state. In section 3.2 we provide the memory usage only for the optimizer state expressed in bytes ($0.9d$ for MicroAdam, $2d$ for AdamW-8bit and $4d$ for AdamW) and this is the same for any model with $d$ parameters. Concretely, for Llama-2-7b, we have 25.1GB for AdamW, 12.5GB for AdamW-8bit and **5.65 GB for MicroAdam**.
In practical terms, our experiments consider settings where the memory footprint for orthogonal components (such as activation sizes) is minimized, such as microbatch size 1 and global batch size 32 for Llama-2-7B. One can have lower memory usage with exactly the same model settings only if the optimizer state is lower and this is what we try to do with MicroAdam.
**Weakness 4:** Top-K Memory Efficiency
The idea of MicroAdam is to store highly sparse gradients in the buffer $\mathcal{G}$ and to compute the statistics $m_t$ and $v_t$ on the fly. In practice, we show that very high (at least 99% sparsity) can be induced in the gradients, and this allows us to have a much lower memory usage in practice. It is true that for moderate (50%) sparsity the savings would be minimal or negative, but one of our sources of novelty is in showing that much higher sparsity can be supported in practice, with respect to the information used to generate optimizer states. We can provide more details about our efficient implementation in the discussions after the rebuttal.
**Weakness 5:** Pretraining
Because of limited size for the number of characters in our response, we are kindly asking to read our comment about pretraining in our response to **Weakness 1** of **Reviewer u3Hk**.
**Weaknesss 6:** Computational Efficiency
To speed up the main operations at the core of MicroAdam, we already implemented CUDA kernels for quantization and to efficiently compute the statistics $m_t$ and $v_t$. However, we believe there are several ways to further improve computational efficiency. For instance, we can experiment with changing the number of thread blocks in CUDA kernels for the auxiliary operations, such as setting tensors to zero (line 6) and copying the top-k values from the accumulator $a_t$ at indices $\mathcal{I}_t$ to the matrix $\mathcal{V}$ that stores the values. At the moment, we use the maximum possible number of CUDA thread blocks to perform the operation, which might be improved in the context of layer-wise preconditioning. We did not prioritize such optimizations during the preparation of the paper because MicroAdam achieved its main goal, that of reducing memory usage by 55% (wrt AdamW-8bit) and 77.5% (wrt AdamW).
We skip some details now due to limited sized response, but we are happy to explain more in the discussions phase.
References:
- [1] **On the convergence of Adam and beyond**, Reddi et al., 2019, available at https://arxiv.org/pdf/1904.09237
- [2] **DECOUPLED WEIGHT DECAY REGULARIZATION**, Loshchilov and Hutter 2017, available at https://arxiv.org/pdf/1711.05101
- [3] **ON DISTRIBUTED ADAPTIVE OPTIMIZATION WITH GRADIENT COMPRESSION**, Li et al., 2022, ICLR 2022, available at https://arxiv.org/pdf/2205.05632 | Summary: The paper presents a new optimizer that approximates Adam but has a lower memory footprint. Adam stores for each parameter two additional values --- the exponential moving averages of the gradients and the gradient squared. The current optimizer saves space by using the fact that for most parameters, the gradient is quite small at each step, so could be ignored. The optimizer stores the most important gradients at each step, for several past steps. It also stores a low-resolution version of the gradient, which is added to the next gradient (the reason for this was not clear to me). The exponential moving averages for the gradients and gradients-squared are computed from the most important gradients stored at each step and are used in the Adam update.
The paper proves the convergence of the optimizer under some reasonable assumptions. The authors show that the method can be implemented efficiently on a GPU, and the performance is similar to Adam. The authors performed several experiments to show that their optimizer saves about 10-20% memory compared to Adam, while achieving similar accuracy.
Strengths: The main idea is quite interesting - that most of the values of a gradient are small in magnitude, so need not be used to update the parameters --- only up to 10% of all the parameters are ever updated in any step, and these change only a little across steps.
The convergence results are a useful validation of the algorithm. The experimental results seem quite strong to me.
Weaknesses: The presentation of the paper was not very clear to me. For example, k is not mentioned as one of the inputs in Algorithm 1. $\delta_1, \Delta_1$ are initialized to vectors in R^d, but then they seem to be scalars. I also did not understand what the error correction was for.
Technical Quality: 3
Clarity: 2
Questions for Authors: In your algorithm, why did you not use a_t or the current gradient in steps 9 and 10, since this is already present.
It looks like most of the memory is being used for other tasks, so reducing the memory by a factor of 4 resulted in a small overall reduction in memory.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the feedback. We address your questions below.
**About Summary:** Memory savings
The memory savings of MicroAdam are ~20% if we compare the entire memory usage, but the key comparison point should be on the size of optimizer states, as this is the quantity we are minimizing. We restate the memory usages for optimizer states presented in section 3.2 of our paper for Llama-2-7B model:
- AdamW: 25.10 GB
- AdamW-8bit: 12.55 GB
- MicroAdam: **_5.65 GB_**
In our next revision, we will show the optimizer state in the tables containing results to clearly differentiate from the memory usage that is shared across all optimizers (e.g. model state, such as activations and gradient). As explained in section 3.2 of our paper, the memory footprint of MicroAdam is $0.9d$ bytes in any experiment, where $d$ is the model size, while AdamW and AdamW-8bit have $4d$ and $2d$, respectively. To express the memory savings in percentages, MicroAdam uses only 45% memory compared to AdamW-8bit ($0.9d$ vs $2d$) and 22.5% compared to AdamW ($0.9d$ vs $4d$).
**Weakness 1:** Paper clarity
**Algorithm 1:** We explain Algorithm 1 in detail in Section 3 of our paper. Please check the end of this answer where we extend the explanations to make sure our algorithm is well understood.
**About $k$:** Indeed, $k$ is one of the inputs of the Algorithm 1 and $\delta_1$, $\Delta_1$ should be initialized as scalars. We will fix these in the next revision.
**About error correction:** Let us clarify the main steps of the proposed MicroAdam algorithm and address some of the questions you raised. As you noted, Adam stores two additional values for each parameter: the exponential moving average of the gradients ($m_t$) and the squared gradients ($v_t$). To avoid allocating twice the model size for $m_t$ and $v_t$, we compress the gradients using a sparsity-inducing operator Top-K, storing the most important gradient components at each step, for several past steps (these are stored in the buffer $\mathcal{G}$). However, we do not ignore the smaller gradients, but we store and accumulate them at 4-bit resolution, adding them to the next gradient to compute the model updates. Intuitively, instead of disregarding small gradient components in each iteration, we accumulate them and apply them in the next step. This ensures that all gradient information is eventually used to update the model parameters. Below we provide a detailed explanation of Algorithm 1.
**Question 1:**
The idea of our algorithm is to store 99% sparse gradients (e.g. only the largest 1% of values) into the buffer matrix $\mathcal{G}$. Then, this sparse gradient window is used to “materialize” the Adam statistics $m_t$ and $v_t$. This way, we make sure that we use only $0.9d$ bytes of memory instead of $2d$ as in AdamW-8bit or even $4d$ as in original AdamW. Below we provide a detailed explanation of Algorithm 1.
**Question 2:**
In all our experiments we report the memory usage for the entire process. We would like to emphasize that what should be compared is the size of optimizer states, since the settings for each experiment is the same and only the optimizer differs (e.g. batch size and model sizes are fixed). In section 3.2 we provide a detailed theoretical memory analysis for the optimizer states of AdamW (25.1GB), AdamW-8bit (12.5GB), **MicroAdam (5.65 GB)**. In the next revision we will clearly make a distinction between the memory usage for the optimizer state and the overall memory usage (which includes, among others, model activations and gradient). Please let us know if we misunderstood your question. We are happy to discuss and clarify everything about our work.
**_Explanation for Algorithm 1_**: line $n$ refers to $n^{th}$ line in this algorithm:
**Optimization step $t=1$:**
In this case, the error feedback is completely zero as initialized in line 2
- **Line 4**: the accumulator $a_t$ contains only the stochastic gradient
- **Line 5**: if we suppose the accumulator $a_t$ is normally distributed with a mean of zero, then this line is equivalent to choosing $k=1\\%$ values from the tails (outliers) because we apply Top-K on the absolute values of the accumulator $a_t$. The Top-K operator returns indices of those outliers, which we store in $\mathcal{I}_t$, as well as the corresponding values $\mathcal{V}_t$ from $a_t$ **(not from $|a_t|$)**
- **Line 6**: the outliers are erased from the accumulator $a_t$ because they will be transferred to the buffer matrix $\mathcal{G}$
- **Line 7**: compute $\delta$ and $\Delta$ (the min and max values from the accumulator $a_t$) which will be used for the quantization at the next step.
- **Line 8**: quantize the remaining values in the accumulator $a_t$ **without the outliers in the tails** using procedure $Q$ in Alg. 2. **This error will be dequantized and added to the stochastic gradient at the next optimization step $t+1$**. This is the point in our algorithm when we preserve the error instead of discarding it.
- **Line 9**: add the new set of outliers to the ring buffer $\mathcal{G}$.
- **Lines 10 and 11**: compute the first and second order moment statistics $\hat{m_t}$ and $\hat{v_t}$ over the ring buffer $\mathcal{G}$ using Alg. 3.
- **Line 12**: update the model parameters as usually done in Adam
**Optimization step $t \geq 2$:**
The only change compared to optimization step $t=1$ is that the error feedback $e_t$ is not zero anymore, but will contain the quantized values not selected by Top-K (inliers) and will be added to the gradient. After that, the algorithm works exactly as explained above, starting with line 5. | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for the useful feedback! We have provided individual responses to each review, and briefly summarize the main points here:
- To address the concern about Algorithm 1, we provided a detailed explanation of each line in the algorithm at the end of our response to **Reviewer 8ZAo**.
- To address the concern about the absence of pretraining results, we stated our research question which our work successfully answers (that of reducing the memory usage in the finetuning case), having similar motivation as LoRA [1] (e.g. reducing the memory usage for finetuning, but using a completely different approach). In addition, we also provided results for pretraining Computer Vision tasks, where MicroAdam shows ~2% higher validation accuracy on ResNet-18 and ~5% higher than AdamW variants, while still being 1% better than SGD (see the attached PDF).
- To address the concerns about novelty in our contribution, we emphasized our key algorithmic contributions: compressing the error feedback, providing efficient implementation for GPUs and theoretical guarantees as well.
- To resolve the questions about compressor complexity, compression memory efficiency and computational efficiency, we provided detailed explanations.
We hope our responses address the reviewers’ questions, and would be happy to continue the discussion during the rest of the rebuttal period.
References:
- [1] **LORA: LOW-RANK ADAPTATION OF LARGE LANGUAGE MODELS**, Hu et al., 2021, available at https://arxiv.org/pdf/2106.09685
Pdf: /pdf/e056a02beae990ff12e42df88d9b47354b2ae5fd.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The script proposed a memory-efficient method with a theoretical guarantee.
Strengths: The topic on memory-efficient optimizers is important. I did not observe obvious flaws in the theory.
Weaknesses: See below.
Technical Quality: 2
Clarity: 1
Questions for Authors: See below.
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 1
Limitations: **Major concern 1**: The contribution of the paper is rather weak. After reading the paper, I still did not find what is new in this paper. For instance:
**1-1: It is unclear what is new in the compression design of Algorithm 1.** I have a hard time figuring out the contribution of Algorithm 1. Please summarize the significance and novelty of the proposed compression procedure in Algorithm 1. Please discuss how it is different from the existing ones and why do we need this new one. Please discuss the design principles and ideas.
**1-2 It is unclear whether the proposed method is truly effective.** There are much much more experiments are needed. Currently, only 7B SFT is provided, which is far from convincing. SFT is not enough to support the effectiveness of a new optimizer. It seems unclear how the method would perform on LLM pre-training or non-language tasks.
**1-3: It is unclear what is new in the theory.** Under the assumption of unbiased compression + bounded variance + bounded gradient, this type of Algorithm 4 has already been extensively studied by the distributed community and so. Please highlight what is new and what is nontrivial, if there is any.
**Major concern 2: The presentation is quite poor.** For instance:
1. There is no explanation on the principle or idea behind the design of Algorithm 1.
2. It is good to see simple example like Figure 1. But unfortunately, nearly nothing is explained here. We can see EF helps convergence, but we do not understand why. It seems just a toy example without any explanation on the insight.
3. The current logic flow of the script is confusing. It would be much easier to read if the authors:
(i) introduce Algorithm 4 first (generic form of MicroAdam),
(ii) provide the theoretical guarantee,
(iii)then introduce Algorithm 1( a specific form that you chose) ,
(iv) discuss the implementation details,
(v) then show the experiments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the feedback. We address your questions below.
**Limitation 1-1:** Contribution and novelty in MicroAdam
We focus on reducing the memory cost of adaptive optimization, and start from the idea that not all gradient components carry useful information for optimization. Thus, we show that gradients can be sparsified **before used to compute the states in the Adam optimizer**. Intuitively, sparsity leads to significant savings in terms of the size of optimizer states.
However, to ensure convergence both in theory and practice, it is necessary to incorporate an error correction mechanism **error feedback** (EF), made popular in distributed optimization. **Simply using EF does not lead to memory savings**, since the size of the error correction buffer is the same as the model size. At the same time, EF is critical for convergence, as also illustrated in Figure 1.
Instead, **our key algorithmic innovation is in showing that the EF can itself be compressed** in the context of adaptive optimization, leading to significant memory savings while preserving convergence. The new parts of our algorithm, i.e. EF compression and decompression are highlighted **in blue** in Algorithm 1. Our practical contribution is in showing that sparsity in optimizer states can be efficiently leveraged for memory savings, at the scale of billion-parameter models.
**Limitation 1-2:** Pretraining
Because of limited size for the number of characters in our response, we are kindly asking to read our comment about pretraining in our response to **Weakness 1** of **Reviewer u3Hk**.
**Limitation 1-3:** Novelty in the theory
Motivated by the question of reducing storage cost, we are the first to consider **EF quantization** and to provide a convergence proof for it.
The key point in the algorithm design, which allows us to obtain practical gains, is the ability to compress the EF accumulator through quantization. In the analytical view of MicroAdam, presented as Algorithm 4, the EF accumulator is denoted by $e_t$, and in line 5 this accumulator is updated and compressed via $e_{t+1} = Q(e_t + g_t - \tilde{g}_t)$. As can be seen from examining our proof, quantizing the EF, although very simple algorithmically, significantly complicates the analysis. (Generally, we are not aware of any method that compresses the error accumulator and provides theoretical convergence guarantees in any setup. )
Moreover, from the practical side, since the state we maintain are the quantized EF and the sparse gradients, our algorithm must re-compute optimizer states on-the-fly in every iteration based on this compressed state. Implementing this efficiently is a non-trivial challenge.
**Major Concern-2:**
**Algorithm 1**:
We explain Algorithm 1 in detail in Section 3 of our paper. Please check the answer for **Reviewer 8ZAo** where we extend the explanations we already provided in section 3 of our paper to make sure our algorithm is well understood.
**About Figure 1:**
The motivation for this illustration is to show that some form of EF is **necessary** for good convergence, even in the case of toy instances. Specifically, prior heuristic methods, such as GaLoRE, perform gradient compression but simply **omit EF altogether**, and we wanted to show that this is infeasible in general. We agree with your point that various other heuristics could be applied, and we will clarify the motivation for this example or move it to the Appendix.
In Figure 1 we show how EF fixes AdamW with Top-K compression. The left plot shows the optimization trajectory of the original Adam optimizer. The center plot shows the convergence of Top-K Adam when we only choose the largest coordinate from the accumulator (equivalent to 50% sparsity, since the problem is two-dimensional). In the right side we show that adding EF to Top-K Adam recovers the same optimization trajectory as the original Adam optimizer. Extrapolating to higher dimensional problems, our MicroAdam approach helps recover the trajectory of the original Adam optimizer, while using less memory.
**Major Concern-3:** Sections order in the paper
Thank you for this suggestion; we will re-order the presentation to introduce the general algorithm, followed by the implementable version.
---
Rebuttal 2:
Title: I will keep my score
Comment: Thanks a lot for the response.
I still think it is rather weak to validate the efficacy of a new optimizer by SFT on Llama. Further, I kindly disagree with authors comment that "our research question was "are all gradient entries useful for fine-tuning?” " This is not shown in the script. The whole script is written in the style of proposing a new optimizer for generic training procedures (including pre-training), not just for SFT.
I will keep my score.
---
Rebuttal Comment 2.1:
Title: Author Response
Comment: Dear Reviewer,
Thank you for your response. We provide some brief clarifications below:
> I still think it is rather weak to validate the efficacy of a new optimizer by SFT on Llama.
Unfortunately the reviewer may have completely missed our _pre-training results_, presented in the rebuttal. Specifically, in the PDF attached to the main response, we have presented pre-training results for ResNet18 and ResNet50 on ImageNet, trained _from scratch_ showing that our optimizer outperforms Adam and even SGD in this setting, while using _less than half of the memory_ relative to Adam-8bit, and less than 1/4 relative to SGD!
Our SFT results on Llama are on 7B and 13B parameter models (the latter is at the end of the rebuttal PDF), where our method presents significant practical advantages--specifically, it allows the user to use a single GPU for SFT, rather than several, while providing a solid convergence theory.
We unfortunately do not have the GPU resources currently to perform pre-training at this scale. With all due respect, we do not believe that being able to perform billion-parameter-scale pre-training experiments should be a requirement for acceptance at NeurIPS.
> The whole script is written in the style of proposing a new optimizer for generic training procedures (including pre-training), not just for SFT.
We respectfully disagree on this point: as the reviewer can see by examining our Limitations section (lines 327 to 331 of the submission), the lack of large-scale pre-training experiments is clearly stated as a limitation of our paper, and extending MicroAdam to large-scale pre-training is cited as our main direction of future work. We believe MicroAdam can be extended to large-scale (LLM) pre-training, but doing so would require significant application-specific effort and major computational resources for validation.
We believe we have established the viability of our approach in the submission, both via strong theoretical guarantees and via medium-scale SFT experiments. The rebuttal shows that our approach is also valid for vision pre-training experiments. We would be happy to amend the submission to further clarify the fact that we do not currently aim to do large-scale pre-training, which is left for future work.
Respectfully,\
The authors | null | null | null | null | null | null |
Unveiling The Matthew Effect Across Channels: Assessing Layer Width Sufficiency via Weight Norm Variance | Accept (poster) | Summary: The paper studies the effects of layer widths on neural network performance. By studying the affects of the weight norm in different channels, the work discovers several distinct stages of training, which are apparent across different modalities and architectures. The authors also show how the insights could help in improving network performance with the same computational costs.
Strengths: The paper has great merit, and discovers interesting dynamics in neural network training. Further, the dynamics are consistent, and come from an intuitive intuition of NN training. It is exhibited across different architectures and datasets, with an ability to also improve networks using these insights.
Weaknesses: The paper is really great! It could benefit from more experiments and archs, but the current experiments are convincing.
Technical Quality: 4
Clarity: 3
Questions for Authors: Some possible interesting references which could be relevant:
1. https://proceedings.neurips.cc/paper_files/paper/2023/hash/b63ad8c24354b0e5bcb7aea16490beab-Abstract-Conference.html
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer nGxF,
Thank you very much for your thoughtful and valuable feedback. Your recognition of the intuitiveness of our findings and the potential for our insights to further improve neural networks is very encouraging and motivating for us. We are committed to further advancing this line of research by conducting more experiments and arches.
> Some possible interesting references which could be relevant.
Thank you for introducing the references, which are indeed interesting in providing insights into the representations learned in self-supervised learning.
Once again, thank you for your valuable comments and support. We are more than happy to respond to any further questions. | Summary: The paper proposes a method to optimize neural network layer width by analyzing the variance of weight norms across channels during training. This approach helps determine if a layer is sufficiently wide. Empirical validation shows that adjusting layer widths based on these patterns can reduce parameters and improve performance across various models and datasets.
Strengths: 1. This paper proposes a novel way to assess layer width sufficiency using weight norm variance.
2. The paper shows robust experimental evidence across multiple datasets and model architectures.
3. Besides the application on the model width optimization, the paper also offers a deeper understanding of training dynamics through the identification of distinct patterns in weight norm variance, which could be valuable for other related areas.
Weaknesses: 1. The layer width optimization is related to channel pruning and NAS-based channel number search, which have achieved significant success in finding optimal layer widths. I think these methods should be discussed and compared in the paper.
2. Measuring the weight norm would introduce additional computation cost, can the authors discuss the complexity of the method and report how much additional time does the method introduce?
3. Can the authors provide more detailed and theoretical analysis on the choice of the metric? For example, there are other metrics such as gradient norm, Hessian matrix, and absolute weight value to evaluate the weight importance and training statistics, why the proposed metric is better?
Technical Quality: 3
Clarity: 4
Questions for Authors: See Weaknesses.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The limitations have been adequately discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer nv2M,
Thank you for your valuable feedback. We will address each of your concerns and questions in the following section.
> W1: The layer width optimization is related to channel pruning and NAS-based channel number search, which have achieved significant success in finding optimal layer widths. I think these methods should be discussed and compared in the paper
Thanks for the suggestion. We will add more discussion and comparisons to our paper. Generally, our approach differs from the two kinds of methods mentioned above.
* For channel pruning, the objective is to reduce the computational cost, which differs from the objective of this paper. Notably, most pruning methods would manually set a pruning ratio for each layer.
* For NAS-based search, the model width is searched over a model structure space. However, in this paper, we want to provide a more principled indicator for whether one layer in a network is sufficiently wide.
> W2: Measuring the weight norm would introduce additional computation costs. Can the authors discuss the complexity of the method and report how much additional time the method introduces?
The computation cost for measuring the weight norm is less than that of inferencing once with the model. We report the time used for measuring the weight norm on the CPU (AMD EPYC 7302 16-Core Processor).
| model | time used(s) |
| -------- | ------------------ |
| VGG16 | 0.4580$\pm$ 0.3560 |
| ResNet18 | 0.3150$\pm$ 0.2433 |
The results are averaged over 10 runs, and we report the mean and the std of the results. Therefore, the time required to measure the weight norm is neglectable. We will add a more comprehensive computational cost analysis in our paper.
> W3: Can the authors provide a more detailed and theoretical analysis of the choice of the metric? For example, there are other metrics such as gradient norm, Hessian matrix, and absolute weight value to evaluate the weight importance and training statistics. Why is the proposed metric better?
Since we are focusing on the channels in one layer, the output of each channel is combined (mostly added up) and becomes the layer's output. For one channel, the weight norm generally determines how influential the channels are to the layer's output, whereas a larger weight norm produces a larger output.
For other metrics, we have shown in Sec. 3.1 that the gradient norm is positively correlated to the weight norm, which also requires backward propagation on certain losses to be measured. While absolute weight value and Hessian matrix do not fit our purpose.
We hope our rebuttal could address your concerns. We are looking forward to your reply. Thanks for your time and effort during the reviewing process. We are more than happy to answer any further questions. | Summary: This paper tries to address an issue in deep neural networks: the trade-off between computational cost and performance, particularly focusing on the width of each layer. Traditionally, layer widths in neural networks are determined empirically or through extensive searches. The authors propose a novel approach by examining the variance of weight norms across different channels to determine if a layer is sufficiently wide.
Strengths: 1. The authors identify patterns regarding the variance of weight norm between different channels during training. Some layers exhibit an "increase to saturate" (IS) pattern and other layers show a "decrease to saturate" (DS) pattern. These patterns have some connection regarding the inter-channel similarities given a layer.
2. They redesign the width of classical CNN models like ResNets and VGGs according to their findings. And the empirical results provide some support for their arguments.
Weaknesses: 1. The empirical justification of the proposed method is somehow not convincing enough. From Table 2, we can see that channels within layers with IS patterns are also hard to merge, like layers 8 to 11 in Table 2. From Figure 12 in the Appendix, we can see that the similarity gradually decreased first and then increased for middle layers with DS patterns, and it is hard to say there is a hard cutoff.
2. Suppose the proposed argument is correct, then it seems challenging to use this method in practice. The width of each layer depends on the IS or DS pattern of the original model, so you need to train the original model first and then the model with the streamline width. This always increases the total training cost especially when scaling up the model.
3. The authors claim that they have similar observations for ViTs in lines 58-62, but no results are given. Since the original ViT has a uniform width, I am wondering whether the arguments still hold for models with uniform width.
4. The experiment setup is not very convincing. For CIFAR-10/100 scales of datasets, ResNet-18 or ResNet-50 are large and they have many redundant parameters. In the original ResNet paper, they used different architectures for CIFAR datasets with a much smaller number of parameters (0.27M to 1.7M with ResNet-20 to ResNet-110), please see Table 6 in "Deep Residual Learning for Image Recognition". The performance of ResNets and VGGs is much lower than public baselines for CIFAR-10 and CIFAR-100. Please see these two repo: https://github.com/weiaicunzai/pytorch-cifar100 and https://github.com/kuangliu/pytorch-cifar.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see the weakness.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Addressed in conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer iwdV,
Thank you for your valuable feedback. We address each of your concerns in the following.
> W1: The empirical justification of the proposed method is somehow not convincing enough. It is hard to say there is a hard cutoff.
With all due respect, our results indicate that there is no hard cutoff between the DS pattern and the IS pattern. As shown in Sec. 3.2, as the layer width increases, the pattern gradually changes from the DS pattern to the IS pattern. Similar to Fig. 9 and Fig. 12 in the Appendix, the pattern and the similarity gradually change from IS to DS and from DS to IS again. In our hypothesis, the perfect width would be that the pattern is at the edge between the DS and IS pattern.
> W2: Suppose the proposed argument is correct, then it seems challenging to use this method in practice.
Generally, whether the weight norm variance follows the IS or DS pattern is shown way before convergence, which makes a more efficient method possible.
Still, we agree that finding a more efficient method is challenging. However, the community has witnessed the development of more efficient methods for finding lottery tickets [1], where retraining is also required in the first paper. We hope our findings in this paper could innovate future works in adjusting layer width more efficiently.
> W3: The authors claim that they have similar observations for ViTs in lines 58-62, but no results are given.
Sorry for the confusion. The results regarding ViTs are presented in Sec. 3.2 and Fig. 3 [lines 153- 167]. As shown in Fig. 3, as we change the width of the MLP in a small ViT, the narrower MLP show a DS pattern, while the wider MLP shows an IS pattern. This phenomenon aligns with our observations on GCN, GRU, and CNNs, in that wider layers show the IS pattern while narrow layers show the DS pattern.
> W4: In the original ResNet paper, they used different architectures for CIFAR datasets with a much smaller number of parameters (0.27M to 1.7M with ResNet-20 to ResNet-110).
We conduct experiments with ResNet-20 on CIFAR10. Since it is a much smaller network, we show that the layers in the first two blocks follow the DS pattern, and only the layers in the third block follow the IS pattern. By widening the small ResNet-20 by 8 times, we show that the layers in the first block and the third block show the IS pattern, and the layers in the second block show the DS pattern.
Since these results are in figures, they are in the PDF attached to the joint rebuttal.
> W5: The performance of ResNets and VGGs is much lower than public baselines for CIFAR-10 and CIFAR-100.
The main difference between our implementation and the two mentioned repos is that we only use random flip in the training data preprocess while the two repos use normalization, and the repo for CIFAR100 uses random rotation in the data preprocessing.
We report the results using the code of the two mentioned repos:
**CIFAR10**
| Model | accuracy |
| ------------------- | -------------- |
| VGG16 | 94.03$\pm$ 0.09|
| VGG16 streamline | __94.19$\pm$ 0.10__ |
|ResNet18| 95.12$\pm$ 0.12|
| ResNet18 streamline | __95.60$\pm$ 0.09__ |
**CIFAR100**
| Model | accuracy |
| ------------------- | --------------- |
| VGG16 | 71.25$\pm$ 0.25|
| VGG16 streamline | __71.90$\pm$ 0.23__ |
|ResNet18 | 74.33 $\pm$ 0.21|
| ResNet18 streamline | __74.94$\pm$ 0.23__ |
Each result is averaged over $5$ runs. We report the mean and std of the accuracy.
We hope our rebuttal addresses your concerns, and we look forward to your reply. Thanks for your time and effort during the reviewing process. We are more than happy to answer any further questions.
[1] Frankle, Jonathan, and Michael Carbin. "The lottery ticket hypothesis: Finding sparse, trainable neural networks." ICLR 2019
---
Rebuttal Comment 1.1:
Title: Look forward to further discussion
Comment: Dear reviewer iwdV
We hope this message finds you well. We appreciate your valuable feedback and have tried our best to address each point of your concerns in our rebuttal. Since the discussion period is approaching its end, we are eager to hear from you, whether our rebuttal has addressed your concerns. Please feel free to comment on our rebuttal if you have further questions or comments. We are more than happy to respond to any further comments. Thank you for your commitment to the review process.
Best regards,
Authors
---
Rebuttal Comment 1.2:
Comment: I appreciate the efforts of the authors in the rebuttal. The rebuttal addressed some of the concerns. However, I believe there is more space to make the experiment more comprehensive on larger-scale datasets, models, and different kinds of architecture beyond CNNs (as stated in W2). As a result, I will keep my score.
---
Reply to Comment 1.2.1:
Title: Thank you for the comment and here are our response
Comment: Dear reviewer iwdV,
Thank you for your feedback and we regret to hear that our rebuttal is not to your satisfaction. The following is our response to your further comment.
As you mentioned "different kinds of architecture beyond CNNs", with all due respect, we want to emphasize that proposing a streamlining technique is only a small part of this paper. As the title of this paper indicates, we mainly identify the weight norm variance pattern regarding the layer width, **where we provide theoretical analysis in Sec. 3.1 and extensive experimental results on RNN, GNN, Transformer, and CNN in Sec. 3.2.**
In utilizing the identified pattern, besides the proposed method, we investigate the training dynamic of widely used CNNs with extensive experimental results providing insights such as there are three stages during the training procedure. We have also provided our analysis regarding each stage of training.
The proposed method and corresponding experiments in Sec.5 are our attempts to provide a simple sample using the identified pattern to guide the network design and further validate the observations and insights provided in this paper. **In believing the novelty of the insights and the new perspective provided in this paper, we agree with the comment that "there is more space" for more effective methods. We sincerely hope our paper can innovate future works and contribute to the community in the attempt to follow the footsteps of many excellent previous works (Best papers) [1, 2] in providing insights and innovating further applications.**
Thank you once again for your valuable time and feedback. We hope the identified pattern and provided insights in this paper could also be taken into consideration, as well as the proposed method.
Best regards,
Authors
[1] Frankle, Jonathan, and Michael Carbin. "The lottery ticket hypothesis: Finding sparse, trainable neural networks." ICLR 2019, Best Paper.
[2] Schaeffer, Rylan, Brando Miranda, and Sanmi Koyejo. "Are emergent abilities of large language models a mirage?." NeurIPS 2023, Best Paper. | Summary: This paper investigates the relationship between the differences in weight norms across channels and the adequacy of layer widths. The authors suggest that knowing these patterns(IS/DS) can help set layer widths better, leading to better resource use, fewer parameters, and improved performance in different network designs. The experiment shows that narrow-wide-narrow streamline network would boost the performance.
Strengths: Originality: Yes, this paper is the first work that studies the width of network from the perspective of variance of weight norm.
Quality. Good, clear figures and enough experiments.
Clarity. Good, the organization of the paper is well
Significance. Yes, it provide a new metric to adjust the width of the network.
Weaknesses: To determine the appropriate width, your method should involve training the model and observing various patterns of weight norm variance to see if the width is sufficient. However, retraining the model to decide on the width can be time-consuming. Considering that IS/DS indicate whether layers learn similar neurons, could we use the cosine similarity of a pretrained model to directly assess if the width is sufficient?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The experiment includes both ResNet and VGG16. How should we define the weight norm of convolutional layers? Chapter 3 provides a simple analysis of the Matthew effect between similar channels. Is there any theoretical analysis for convolutional layers?
2. DS patterns suggest less similar neurons, indicating we should increase the width. IS patterns suggest redundant neurons, meaning we can decrease the width. In practice, why don't we adjust the width exactly according to the IS/DS patterns? For example, in the appendix, Figure 9 shows IS/DS patterns for 13 layers of the VGG network. Layers 2, 3, 4, 5, 6, and 7 show DS patterns, so we should increase the width of these layers. However, in Table 13, only the widths of layers 2, 4, and 7 are increased. Would adjusting all the layers accordingly result in better performance?
3. How do we determine the exact width of a layer based on the DS pattern? Is there an empirical boundary for the width of layers? I noticed that in the third stage of weight norm variance, DS patterns exhibit different drop ratios. For instance, in Figure 9, layers 2, 3, and 7 drop to a higher level, while layers 4, 5, and 6 drop to a lower level. Does this indicate that we should increase the width more for layers 4, 5, and 6?
4. In table3, after adjusting to streamline width, all networks would have a higher FLOPs than origin. Would the performance improvement due the FLOPs increase?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No apparent limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer Q6z7,
Thank you for your valuable feedback. We address each of your concerns and questions in the following.
> W1: Retraining is time-consuming. Considering that IS/DS indicates whether layers learn similar neurons, could we use the cosine similarity of a pre-trained model to directly assess if the width is sufficient?
Yes, for pre-trained models, using cosine similarity could be a more efficient way to assess if the width is sufficient. However, when it comes to inspecting a model on a new dataset, using weight norm variance indicates whether the layer width is sufficient way before convergence.
> Q1: The experiment includes both ResNet and VGG16. How should we define the weight norm of convolutional layers? Chapter 3 provides a simple analysis of the Matthew effect between similar channels. Is there any theoretical analysis for convolutional layers?
The difference between a convolutional layer and a linear layer is that the scalar multiplication in a linear layer becomes a convolution with the kernel as a matrix. Like in a linear layer, the larger kernel produces a larger output. In practice, for the weight norm, we reshape the kernel matrix into a vector and use the l2 norm of the vector to further calculate the weight norm of each channel. Thanks for the good question, we will add more analysis in our paper and clarify it.
> Q2: DS patterns suggest less similar neurons, indicating we should increase the width. IS patterns suggest redundant neurons, meaning we can decrease the width. In practice, why don't we adjust the width exactly according to the IS/DS patterns?
Yes, we can adjust the width exactly according to the pattern. In the paper, we manually adjust the layer width, and we want to make the model into a streamlined shape while the layer width is neat. To answer the question, we have made a VGG16_muted version 2 and tested it. The result is in the response to Q4.
> Q3: How do we determine the exact width of a layer based on the DS pattern? Is there an empirical boundary for the width of layers? I noticed that in the third stage of weight norm variance, DS patterns exhibit different drop ratios. For instance, in Figure 9, layers 2, 3, and 7 drop to a higher level, while layers 4, 5, and 6 drop to a lower level. Does this indicate that we should increase the width more for layers 4, 5, and 6?
You are right. According to results in Sec. 3.2, the DS pattern exhibits different drop ratios. As we increase the width, the pattern moves from DS to IS. The perfect scenario would be finding the smallest width leading to an IS pattern.
> Q4: In Table 3, after adjusting to streamline width, all networks would have higher FLOPs than the origin. Would the performance improvement due to the FLOPs increase?
Since the shallow layers take much more calculation than the deep layers in CNN, for the streamlined networks, the increase in FLOPs is not significant compared to the decrease in the number of parameters.
We have also made a new version of VGG16_muted with lower FLOPs than the original VGG16 (99.68%).
| model | FLOPs (%) | params (%) | CIFAR10 accuracy | CIFAR100 accuracy |
| --------------- | --------- | ---------- | ---------------- | ----------------- |
| VGG16 | 100% | 100% | 94.03$\pm$ 0.09 | 71.25$\pm$ 0.25 |
| VGG16 steamline | 103.89% | 61.55% | __94.19$\pm$ 0.10__ | __71.90$\pm$ 0.23__ |
| VGG16 streamline v2 |99.68%| 58.53% | 94.16$\pm$ 0.11|71.68$\pm$ 0.29|
For this experiment, we add normalization and random rotation in the preprocess of training data as suggested by reviewer iwdV. We will provide more experimental results and add them to our paper. Generally, VGG16 streamlined version 2 reduces both the FLOPs and the number of parameters while gaining performance increases.
We hope our rebuttal addresses your concerns, and we look forward to your message. Please feel free to make any comments; we are more than happy to respond to any further questions. | Rebuttal 1:
Rebuttal: Dear AC and Reviewers,
We sincerely thank you for the time and effort you dedicated to the reviewing process. We are delighted to hear that reviewers find the paper to be well-written (Q6z7, nV2M, nGxF), novel (Q6z7, nV2M), and interesting (nGxF). To further address the comments and questions posed by reviewers, we have also conducted additional experiments as required by the reviewers, including:
* Measuring the time required to calculate the weight norm.
* Train the networks with normalization added to the preprocessing of training data.
* Conduct experiments with smaller ResNet-20. (**The results in figures are provided in the attached PDF**)
For each reviewer, we have posed a rebuttal addressing the concerns. We look forward to your reply and are more than happy to respond to any further comments. Once again, thank you for your valuable comments and support.
Pdf: /pdf/1a44c773748eaaaa992a94784f9a32c53d318939.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
FLoRA: Federated Fine-Tuning Large Language Models with Heterogeneous Low-Rank Adaptations | Accept (poster) | Summary: This paper proposes a novel aggregation algorithm FLoRA for low-rank federated fine-tuning. Compared to prior works, FLoRA stacks the LoRA matrices together rather than averaging them, thus avoid the mathematical errors during the aggregation process on the server. The paper provides experiments compared with baselines and discusses about the efficiency and privacy preservation of FLoRA. The paper provides convergence analysis of FLoRA.
Strengths: (1) This paper identifies and addresses an issue in LoRA federated learning algorithm from a unique perspective. The mathematical analysis provided in the paper is straightforward and convincing. This work can also provide intuition to other works related to LoRA-based merging.
(2) The paper is well-written and easy to follow.
(3) The paper provides sufficient experiments and analysis.
Weaknesses: (1) The paper misses some related works. For example, [1] also talks about LoRA in federated fine-tuning and optimize its efficiency and accuracy. [1] and other possible related works should be considered and added to the paper.
(2) The experiment results presented in Figure 4 show that some single client perform better than the server, which needs further explanation in the paper. Additionally, the subtitle 'The Impact of Heterogeneous...' here does not match the title 'Standalone...' of Figure 4, which needs further polishing.
(3) The paper should explain why it uses Llama rather than SOTA models (e.g., Llama-3).
[1] Sun, Y., Li, Z., Li, Y., & Ding, B. (2024). Improving LoRA in privacy-preserving federated learning. arXiv preprint arXiv:2403.12313.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weaknesses. The author might need to explain why they are using Llama rather than stronger models like Llama-2 or Llama-3.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limiations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer's recognition of the contributions of our paper. Regarding the weaknesses raised, we address all the concerns in detail below.
>W1 The paper misses some related works. For example, [1] also talks about LoRA in federated fine-tuning and optimize its efficiency and accuracy. [1] and other possible related works should be considered and added to the paper.
[1] is a quite relevant paper also discussing the error of simply averaging LoRA, and focuses on the privacy issues in federated learning to aggregate LoRA modules. We will add a detailed discussion regarding the methods and insights of [1] in our future version.
>W2 The experiment results presented in Figure 4 show that some single client perform better than the server, which needs further explanation in the paper. Additionally, the subtitle 'The Impact of Heterogeneous...' here does not match the title 'Standalone...' of Figure 4, which needs further polishing.
In our m=10 experiments, the training dataset is randomly divided into ten parts and distributed to clients for local training, resulting in an uneven quality distribution of the local datasets. Many studies have pointed out that datasets often contain a large amount of useless or even harmful data. As a result, it is acceptable that some clients may train local models that perform better than the global model due to receiving higher-quality or more suitable data for the test set.
We will change the title of Figure 4 to make it clearer and more accurate.
>W3 The paper should explain why it uses Llama rather than SOTA models (e.g., Llama-3).
In our experiments, we observed that state-of-the-art (SOTA) models are not very sensitive to common fine-tuning datasets. Some models show no performance improvement after fine-tuning or even experience performance declines. For instance, after 3 epochs of fine-tuning on MMLU, we tested Llama-2-7b and obtained the following results:
==========================
$\quad$ $\quad$ $\quad$ $\quad$ Before $\quad$ $\quad$ After
—-------------------------------------------
MMLU $\ $ $\ $ $\quad$ 45.80 $\quad$ $\quad$ 45.19
HellaSwag $\ $ $\ $58.69 $\quad$ $\quad$59.47
==========================
We believe this occurs because stronger models like Llama-2-7b have already converged on some common benchmarks, making further fine-tuning less effective in increasing accuracy. The performance of the Llama-3 series is also similar to this. To better demonstrate the effectiveness of fine-tuning, and to provide a clearer comparison between FLoRA and baseline methods, we chose models like Llama-7b where the benefits of fine-tuning are more evident.
Still, following the advice of reviewer 7eRV, we extend our experiments to other series of models, here are some results:
===========================================
Training Set $\quad$ $\quad$ $\quad$ Gemma-2b $\quad$ Gemma-7b
—------------------------------------------------------------------------
MMLU $\quad$ $\quad$ FLoRA $\quad$ $\quad$ 3.40 $\quad$ $\quad$ $\quad$ 5.89
$\quad$ $\quad$ $\quad$ $\quad$ $\ $ FedIT $\quad$ $ $ $\quad$ 3.33 $\quad$ $\quad$ $\quad$ 5.72
Wizard $\quad$ $\ $ $\ $ FLoRA $\quad$ $\quad$ 3.54 $\quad$ $\quad$ $\quad$ 6.65
$\quad$ $\quad$ $\quad$ $\quad$ $\ $ FedIT $\quad$ $ $ $\quad$ 3.44 $\quad$ $\quad$ $\quad$ 6.65
===========================================
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing the concerns raised during my review process, and I have also gone through all the comments from other reviewers. I agree with Reviewer 7eRV on the contribution of this work, the proposed method is simple yet effective. Our community indeed benefits from straightforward methods that address realistic problems. The simplicity of this work is not a drawback; rather, it facilitates broader use and adoption by the entire community.
The authors have successfully identified and rectified the mathematical errors present in current methods with their approach. Therefore, I believe this paper should be accepted by NeurIPS 2024 to attract broader attention from the community. More researchers need to explore the proposed method and potentially develop novel solutions to address the identified problem.
Based on the points mentioned above, I have decided to raise my score. | Summary: Previous methods using Low-Rank Adaptation (LoRA) for efficient federated fine-tuning may have led to mathematically inaccurate aggregation noise, reducing the effectiveness of fine-tuning and being unable to handle heterogeneous LoRA. The authors analyzed the mathematical inaccuracies in LoRA aggregation in existing federated fine-tuning methods and proposed a new stack-based aggregation method that supports federated fine-tuning across client machines with heterogeneous LoRA adapters. Extensive experiments demonstrate the superior performance of FLORA in homogeneous and heterogeneous environments.
Strengths: 1. The proposed method is simple, effective, and easy to implement.
2. The stacking mechanism for aggregating LoRA modules supports heterogeneous LoRA ranks across clients, which has broader application scenario.
3. The experiments utilized various models and benchmarks, and the results validated their effectiveness.
Weaknesses: 1. Writing needs to be calibrated. The sentences in line 64 and line 70 are repeated.
2.The discussion on accelerating convergence is not indexed in the main text (Appendix B) and lacks experimental validation.
3. More discussion may be needed on the advantages of FLoRA compared to the implementation of 'the aggregation of local update' in Figure 2.
4. The foundation models chosen for the experiment are all from the Llama series. Are there other types of foundation models that can be used for validation?
Technical Quality: 3
Clarity: 3
Questions for Authors: see the weaknesses above
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: see the weaknesses above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer's recognition of our proposed method's effectiveness and broad application scenarios. Regarding the weaknesses raised, we address all the concerns in detail below.
>W1: Writing needs to be calibrated. The sentences in line 64 and line 70 are repeated. 2. The discussion on accelerating convergence is not indexed in the main text (Appendix B) and lacks experimental validation.
We thank the reviewer for pointing out errors in our writing and section arrangement. We will correct the repeated part and discuss more about the convergence in the final manuscript. For the experimental validation, we provide some data points here to prove that FLoRA converges faster than our baselines. We use the Wizard dataset to fine-tune a Llama-7b and test on MT-bench. The LoRA rank here we use is 8. We display the global model performance in the first 5 rounds. FLoRA converges earlier than FedIT (in 3rd round) and has higher accuracy.
=================================
Epochs $\quad$ 0 $\quad$ 1 $\quad$ $ $ 2 $\quad$ $ $ 3 $\quad$ $ $ 4 $ $ $\quad$ 5
—-------------------------------------------------------
FLoRA $\ $ $ $ 2.86 $ $ 3.79 $ $ 3.88 $ $ 3.90 $ $ 3.88 $ $ 3.83
FedIT $\ $ $ $ $ $ 2.86 $ $ 2.93 $ $ 3.12 $ $ 3.47 $ $ 3.66 $ $ 3.62
=================================
>W2: More discussion may be needed on the advantages of FLoRA compared to the implementation of 'the aggregation of local update' in Figure 2.
Figure 2 in our paper illustrates the process where the server first recovers the local updates from the LoRA modules and then aggregates them. While this method might be acceptable if the server has ample computational resources, it significantly increases communication costs. Recovering the full gradient on the server necessitates sending back the full model parameter updates to the clients rather than just the LoRA modules. As shown in Figure 6 of our paper, this results in approximately five times more communication cost compared to our FLoRA approach, which is impractical in most scenarios. Therefore, our FLoRA method offers a substantial reduction in communication overhead, making it a more efficient solution than the traditional method of recovering and aggregating local updates on the server.
>W3: The foundation models chosen for the experiment are all from the Llama series. Are there other types of foundation models that can be used for validation?
We thank the reviewer for the advice. Due to the limited time of the rebuttal, we are only able to conduct the experiments on homogeneous LoRA fine-tuning on Gemma models. Here are the results of the MT-bench evaluation:
===========================================
Training Set $\quad$ $\quad$ $\quad$ Gemma-2b $\quad$ Gemma-7b
—------------------------------------------------------------------------
MMLU $\quad$ $\quad$ FLoRA $\quad$ $\quad$ 3.40 $\quad$ $\quad$ $\quad$ 5.89
$\quad$ $\quad$ $\quad$ $\quad$ $\ $ FedIT $\quad$ $ $ $\quad$ 3.33 $\quad$ $\quad$ $\quad$ 5.72
Wizard $\quad$ $\ $ $\ $ FLoRA $\quad$ $\quad$ 3.54 $\quad$ $\quad$ $\quad$ 6.65
$\quad$ $\quad$ $\quad$ $\quad$ $\ $ FedIT $\quad$ $ $ $\quad$ 3.44 $\quad$ $\quad$ $\quad$ 6.65
===========================================
We can see that FLoRA still outperforms the baseline in Gemma models. We will further conduct heterogeneous experiments and experiments of MMLU and ARC evaluation.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the response and my concerns are well addressed. The simple yet effective method is friendly for practical applications, especially for the federated learning field, so I increase my score. | Summary: This paper focuses on developing a novel aggregation strategy for training LLMs in FL with LoRA. Specifically, they identify that separately aggregating two matrices of LoRA module is not identical to aggregating model updates. Therefore, they propose a stack based aggregation strategy that merge matrices before aggregating them.
Strengths: 1. Experiments show that the proposed method is effective.
2. Clear presentation.
Weaknesses: 1. The novelty is limited. Although the proposed method is effective, the contribution of the stack-based aggregation strategy may be too limited.
2. The motivation of the stack based aggregation is not clear. Since the computational resource of the server is usually sufficient, it is totally acceptable that recovering the local update and then aggregating them.
Technical Quality: 2
Clarity: 3
Questions for Authors: no
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer's recognition of the effectiveness of our proposed method. Regarding the weaknesses raised, we address all the concerns in detail below.
>W1: The novelty is limited. Although the proposed method is effective, the contribution of the stack-based aggregation strategy may be too limited.
The primary contribution of our paper is identifying a mathematically inaccurate aggregation noise present in nearly all existing LoRA-based federated fine-tuning algorithms. We then propose a novel and efficient stacking method to address this issue. Our paper introduces significant novelty by being the first to highlight the noise generated by the simplistic averaging process commonly used. Given that almost all current LoRA federated fine-tuning algorithms and codebases rely on this averaging method, our stacking algorithm offers a mathematically accurate and provably convergent solution. We believe this contribution will provide substantial value to the community.
>W2: The motivation of the stack based aggregation is not clear. Since the computational resource of the server is usually sufficient, it is totally acceptable that recovering the local update and then aggregating them.
Recovering the local updates and then aggregating them may be acceptable from a computational cost perspective. However, this approach necessitates the server sending back the full model parameter updates to the clients, rather than just the LoRA modules. As illustrated in Figure 6 of our paper, this method results in approximately five times the communication cost compared to our FLoRA approach, which is impractical. Therefore, we argue that recovering the local updates and then aggregating them is not a feasible solution due to the prohibitive communication overhead.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I still have some concerns.
1. I am confused about the necessity of adopting a stack-based data structure to merge the local A_i and B_i modules. It seems that we can just adopt a set to store all modules.
2. It appears that the communication cost is linear to the number of selected clients and the rank of A and B modules. Could the method still save communication costs when the number of selected clients is large or the rank is large?
---
Reply to Comment 1.1.1:
Title: Thanks for your response. We have addressed your concerns here.
Comment: >I am confused about the necessity of adopting a stack-based data structure to merge the local A_i and B_i modules. It seems that we can just adopt a set to store all modules.
While storing all modules in a set does indeed have the same memory cost as our proposed method, it remains impractical in real-world scenarios for two primary reasons:
**(1) Incompatibility with LoRA Implementation:** LoRA is not implementable when stored separately. In LoRA-fine-tuned language models, the LoRA modules can be utilized without merging them into the base model parameters, allowing the LoRA modules and the original model parameters to be stored together. As discussed in the LoRA paper [1], the forward pass is represented as:
$h = \Delta x + W x=W x+BAx,$
where $BA$ represents the LoRA parameter. In our FLoRA approach, the A and B are global LoRA modules. However, if we store all modules separately, the inference stage would require the computation:
$h = \Delta x + Wx = Wx + B_0A_0x + B_1A_1x + B_2A_2x + … + B_{K-1}A_{K-1} x,$
This necessitates applying $K$ LoRA modules within a single base model, which introduces iterative computations that are inefficient and impractical during the inference stage. Moreover, current LoRA codebases do not support this approach, where a single base model integrates multiple LoRA modules. The inconvenience of using a set to store LoRA modules becomes particularly problematic as the number of clients increases.
**(2) Incompatibility with Multi-Round Federated Fine-Tuning:** Storing LoRA modules to a set cannot effectively support multi-round federated fine-tuning. In subsequent rounds, fine-tuning requires an updated base model, necessitating updating the base model on the server or the clients. Updating on the server, as discussed in our rebuttal, incurs significantly higher communication costs. Conversely, updating on the clients imposes a substantial computational burden on each client, making this approach impractical.
>It appears that the communication cost is linear to the number of selected clients and the rank of A and B modules. Could the method still save communication costs when the number of selected clients is large or the rank is large?
Yes, we are indeed able to save communication costs when using a widely applied LoRA rank.
LoRA is commonly employed for fine-tuning models because it significantly reduces computational resource demands. This advantage hinges on the premise that the LoRA rank is much smaller than the rank of the model parameters themselves. The key prerequisite for using LoRA is that the number of parameters in LoRA must be substantially smaller than that of the base model. Otherwise, the benefits of LoRA would be negated, making full fine-tuning a more efficient option. Therefore, the LoRA rank should not be so large that it surpasses the communication resources required for transmitting the model’s original parameters.
Regarding the number of selected clients, it is important to note that the commonly used LoRA rank is typically about 1/100 of the base model size. For instance, in the case of Llama-7b, which has 4096x4096 full parameters, a typical LoRA configuration might use only 16x4096 parameters. This relationship can be expressed as:
$P_{LoRA} << P_{full}$
,where $P$ is the parameter size. When the clients send parameters to the server, FLoRA can consistently save communication costs because:
$P_{LoRA} < P_{full}$
This inequality holds true whenever LoRA is used. Moreover, when the server sends parameters back to the clients, FLoRA can also save communication costs under the condition:
$2K < P_{full}/P_{LoRA}$
Given that $P_{full}/P_{LoRA}$ is typically greater than 100 in fine-tuning scenarios, and federated fine-tuning server generally lack the communication resources to support extensive client communication (e.g., current works use 10 clients [2]), FLoRA proves effective in reducing communication cost.
References:
[1] Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., ... & Chen, W. (2021). Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685.
[2] Chen, J., Xu, W., Guo, S., Wang, J., Zhang, J., & Wang, H. (2022). Fedtune: A deep dive into efficient federated fine-tuning with pre-trained transformers. arXiv preprint arXiv:2211.08025.
---
Rebuttal 2:
Title: Looking forward to your response
Comment: Dear Reviewer,
We wanted to inform you that we have addressed the additional concerns you raised. As the rebuttal phase is coming to an end, we kindly request your prompt feedback on our rebuttal.
We appreciate your time and consideration and look forward to your response.
Best regards,
Authors
---
Rebuttal Comment 2.1:
Comment: Thanks for the reply. Can the stack-based aggregation save the computational cost?
It would be better if the analysis regarding the computational and communication costs could be provided and compared with the aggregation simply using a set.
---
Rebuttal 3:
Title: Thanks for your reply, we clarify the computation and communication cost here
Comment: Compared to storing the LoRA modules in a set, stack-based methods can significantly reduce computational costs due to the following advantages:
1. **Reduced Memory Overhead**: During the weight recovery process of LoRA, each LoRA module pair is expanded to the full parameter size. In LoRA fine-tuning with \( M \) clients, if the modules are stored in a set, each module must be recovered separately, leading to a total memory usage of \( (M + 1)P_{\text{full}} \), where \( P_{\text{full}} \) represents the full parameter size. This includes the original parameters and \( M \) newly recovered parameters. In contrast, the stack-based method employed in FLoRA allows for the recovery of the full parameters with **only a single matrix multiplication**, resulting in a total memory requirement of just \( 2P_{\text{full}} \). Given that \( M > 1 \) in federated fine-tuning, this approach leads to substantial computational savings.
2. **Optimized GPU Utilization**: Since parallel computing on a GPU is significantly faster than sequential processing, it is more efficient to compute the stacked LoRA modules collectively rather than individually. This approach better aligns with the GPU’s architecture, leading to faster computations and more efficient resource utilization.
To illustrate the benefits of our method in terms of computation and communication costs, particularly in comparison to the baseline FedIT and the set-based method proposed by the reviewer, we provide a table below. For reference, the communication and computation costs of FLoRA are normalized to 1:
=========================================================================================
Method $\quad$ Communication $\quad$ Recovering Computation $\quad$ Accurate aggregation $\quad$ Applicable to inference
-------------------------------------------------------
FedIT $\quad$ $\quad$ P_{full}/P_{LoRA} $\qquad$ $\qquad$ 1 $\qquad$ $\qquad$ $\quad$ $\qquad$ $\qquad$ No $\quad$ $\qquad$ $\qquad$ $\qquad$ $\qquad$ Yes
Set-saving $\qquad$ $\quad$ 1 $\qquad$ $\qquad$ $\qquad$ $(M+1)/2$ $\ $ $\ $ $\ $ $\qquad$ $\quad$ $\qquad$ Yes $\quad$ $\qquad$ $\qquad$ $\qquad$ $\qquad$ No
FLoRA (ours) $\quad$ $\quad$ 1 $\qquad$ $\qquad$ $\qquad$ $\quad$ 1 $\quad$ $\quad$ $\qquad$ $\qquad$ $\quad$ $\qquad$ Yes $\quad$ $\qquad$ $\qquad$ $\qquad$ $\qquad$Yes
=========================================================================================
According to this table, our approach generally outperforms the baselines in important metrics.
---
Rebuttal 4:
Title: Looking forward to your further feedback
Comment: Dear Reviewer 4FRM,
The rebuttal deadline is approaching soon, and we have provided detailed responses to your latest concerns. We kindly request that you review our replies at your earliest convenience. If there are any further questions or issues, please let us know. If our responses have adequately addressed your concerns, we would appreciate a higher score.
Thank you for your time and consideration.
---
Rebuttal 5:
Comment: Thanks for the response.
I agree that using stack-based aggregation can improve GPU utilization. Yet, I am confused about the other advantages.
1. The simple set-based aggregation can also have a total memory usage of ( 2P_{\text{full}} ). In particular, we can compute B_iA_i one by one and add outcomes one by one.
2. I am confused about why the recovering computation of set-based aggregation is (M+1)/2. Considering an extreme case $r=1, A \in R^{d * 1}, B \in R^{1 * d}$, then the computational cost of set-based aggregation is $K*d*d$
The computational cost of stack-based computational cost is also $K*d*d$.
---
Rebuttal Comment 5.1:
Title: Thanks for your response. We further address your concern here.
Comment: >The simple set-based aggregation can also have a total memory usage of ( 2P_{\text{full}} ). In particular, we can compute B_iA_i one by one and add outcomes one by one.
We do agree that computing B_iA_i one by one has a total memory usage of ( 2P_{\text{full}} ). However, as we mentioned earlier, such sequential computation would significantly increase the time required for the recovery process. If the time for the GPU to perform one $BA$ is t, then calculating for $M$ clients one by one will take approximately $Mt$. This is because the GPUs have strong parallel computing capabilities. The time for the GPU to compute a single LoRA matrix multiplication with $r=1$ is similar to the time required to compute it with $r=M$, whereas performing $M$ separate $r=1$ LoRA matrix multiplications takes $M$ times longer than performing a single $r=M$ multiplication.
We also want to recall that the main drawback of using such a set-based aggregation is the difficulty of utilizing LoRA on the base model. The LoRA modules cannot be utilized without merging them into the base model parameters, which limits the implementation of the fine-tuned model.
>I am confused about why the recovering computation of set-based aggregation is (M+1)/2. Considering an extreme case $r=1$, then the computational cost of set-based aggregation is $Kdd$. The computational cost of stack-based computational cost is also $Kdd$
Here, we need to clarify that the computational overhead we are evaluating primarily concerns GPU memory rather than computational complexity. This is because, in large model fine-tuning/inference, the main bottleneck is memory usage. Current works also mainly focus on reducing memory consumption rather than computational complexity, often even increasing complexity to reduce memory usage (e.g., LoRA). Therefore, our evaluation of computational overhead also focuses on memory consumption. As mentioned in Question 1, if the set-based approach calculates all LoRA modules simultaneously, it will increase memory usage to $(1+M)/2$. However, calculating LoRA modules sequentially would significantly reduce the efficiency of code execution.
Thank you for continuing to engage with us, and we look forward to receiving your further feedback. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Beyond Euclidean: Dual-Space Representation Learning for Weakly Supervised Video Violence Detection | Accept (poster) | Summary: This paper presents a pioneering approach called Dual-Space Representation Learning (DSRL) for the task of weakly supervised video violence detection. The overall framework contains the Hyperbolic Energy-constrained Graph Convolutional Network (HE-GCN) for capturing event hierarchies and the Dual-Space Interaction (DSI) module to facilitate cross-space feature integration. Experiments on XD-Violence and UCF-Crime datasets show the advantage of the proposed method.
Strengths: - **Motivation**: The authors propose to combine Euclidean and hyperbolic geometries to handle the challenging scenarios of ambiguous violence, which is a problem-centric motivation.
- **Written**: The quality of this paper's presentation is good and the whole paper is well-organized.
- **Technical Correctness**: The proposed method is technically sound and has been evaluated on two datasets.
Weaknesses: - **Novelty**: Some modules in this paper lack innovation, such as the cross-graph attention mechanism is commonly used in previous works.
- **Model Complexity**: This paper does not discuss the computational complexity or parameters of the DSRL model. For practical applications, especially in real-time surveillance scenarios, it's important to ensure that the model can operate efficiently with minimal latency.
Technical Quality: 3
Clarity: 3
Questions for Authors: - **Qualitative Visualizations**: Does the 'Hyperbolic' in Figure 5 represent the result of HyperVD? If not, I wonder whether HyperVD can handle ambiguous violence.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please see #Weaknesses and #Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive evaluation of the paper. We have carefully considered your constructive and insightful comments and here are the answers to your concerns.
***Q1: Novelty: Some modules in this paper lack innovation, such as the cross-graph attention mechanism is commonly used in previous works.***
Thank you for your feedback regarding the novelty of our work. Here we would like to emphasize the innovative aspects of the introduced Cross-Space Attention (CSA) mechanism and Hyperbolic Energy-constrained Graph Convolutional Network Module (HE-GCN).
Compared with the commonly used cross-graph attention (CGA) mechanism,
**i) the role of our CSA is different.** CSA aims to break the information cocoon of different spaces and realize the interaction of different geometric structure information across spaces while CGA focuses on the representation learning of the same geometric structure information in a single space.
**ii) The interaction strategy of CSA is different.** On the one hand, since the Lorentzian metric preserves true relationships by computing the nonlinear distance between nodes, while the cosine similarity may show fake relationships, especially for ambiguous violence with similar visual cues, we adopt the Lorentzian metric instead of the cosine similarity that is commonly used in CGA to obtain attention weights for better interaction in spaces with different geometric structures. The corresponding comparison results reported in the submitted manuscript (lines 270-274) also prove the effectiveness of the Lorentzian metric. On the other hand, we use a step-by-step interaction model. Unlike the commonly used one-step interaction, the result of one-step interaction, which is dominated by the information of the current space, is interacted again with the original information in another space to achieve full cross-space information interaction.
**iii) The construction of the graph for interaction is different.** Unlike the traditional strategy of node selection for message aggregation, the graph in CSA is constructed through a more effective dynamic node selection strategy where the node selection threshold during message aggregation is layer-sensitive and affected by the hyperbolic Dirichlet energy of the current layer.
It is worth emphasizing that HE-GCN is also one of our main innovations. Instead of adopting the hard node selection strategy in HGCN, HE-GCN selects nodes for message aggregation by our introduced layer-sensitive hyperbolic association degrees, which are dynamic thresholds determined by the message aggregation degree at each layer. To better align with the characteristics of hyperbolic spaces, we introduce the hyperbolic Dirichlet energy to quantify the extent of message aggregation. Benefiting from the dynamic threshold, the layer-by-layer focused message passing strategy adopted by HE-GCN not only ensures the efficiency of information excavation but also improves the model's comprehensive understanding of the events, thus enhancing the model's ability to discriminate ambiguous violent events.
***Q2: Model Complexity: This paper does not discuss the computational complexity or parameters of the DSRL model. For practical applications, especially in real-time surveillance scenarios, it's important to ensure that the model can operate efficiently with minimal latency.***
Thank you for the valuable feedback regarding the computational complexity and parameters of the DSRL model. We agree that understanding the model's complexity is especially important for real-time applications. Our model indeed meets the requirements for real-time processing, as analyzed below:
Our experiments were conducted on a single NVIDIA RTX A6000 GPU.
For **Video Input**, the model achieves 83.87 FPS while handling only video data. The model's parameters are 13.4 MB (I3D parameters at 12.49 MB and DSRL parameters at 0.91MB), which keeps the overall parameter size manageable and enables quick response times.
For **Video + Audio Input**, the model maintains a high processing speed of 56.86 FPS, even with the additional computational load of audio processing. The model's parameters are 85.54 MB (I3D parameters at 12.49 MB, VGGish parameters at 72.14 MB and DSRL parameters at 0.91MB ).
These results demonstrate that our model exhibits excellent real-time performance with multimodal inputs, making it suitable for latency-sensitive real-world applications.
***Q3: Does the "Hyperbolic" in Figure 5 represent the result of HyperVD? If not, I wonder whether HyperVD can handle ambiguous violence.***
Yes, the "Hyperbolic" in Figure 5 indeed represents the result of HyperVD.
Our experiments indicate that HyperVD is not sufficiently effective in handling ambiguous violence due to two main limitations. First, HyperVD relies solely on hyperbolic representations, which weakens its ability to capture essential visual features. Second, it inadequately learns the hierarchical relationships of complex violent events, which further contributes to its suboptimal performance in dealing with ambiguous violence. We will clarify in the final version of the paper that "Hyperbolic" refers to the results of HyperVD.
---
Rebuttal Comment 1.1:
Comment: Thanks for providing the response. It partially addressed my concerns. However, can you provide the real-time processing speed of HyperVD and MACIL-SD? It would be nice to see some comparisons between them.
---
Reply to Comment 1.1.1:
Comment: Thank you for the response.
Following your suggestion, we have evaluated the real-time processing speed of HyperVD and MACIL-SD with an NVIDIA RTX A6000 GPU for a fair comparison, and the FPS of HyperVD and MACIL-SD is 98.90 and 102.99, respectively.
In this work, we focus on addressing the problem of ambiguous violence. To tackle this issue, we have to introduce some additional computations, with the speed sacrificed to some extent. Despite our method not being as fast as these two methods, it still meets real-time processing requirements and is acceptable. Moreover, our AP performance on the XD-Violence dataset has improved by 4.21\% over MACIL-SD and 1.94\% over HyperVD. Notably, from the visualization comparison experiments, as shown in Figure 5 ("Hyperbolic" is HyperVD) of the submitted manuscript, our method is obviously better than HyperVD in handling ambiguous violence.
We hope our responses have addressed your concerns. If you have any further questions or comments, please kindly let us know, and we will be glad to respond. | Summary: This paper presents a novel approach called Dual-Space Representation Learning (DSRL) for weakly supervised Video Violence Detection (VVD). Traditional VVD methods rely heavily on Euclidean space representation, which often fails to distinguish between visually similar events. The proposed DSRL method leverages both Euclidean and hyperbolic geometries to enhance the discriminative capacity of features. It introduces two key modules: the Hyperbolic Energy-constrained Graph Convolutional Network (HE-GCN) and the Dual-Space Interaction (DSI) module, to facilitate better information interactions and improve violence detection accuracy. The method achieves state-of-the-art performance on the XD-Violence dataset in both unimodal and multimodal settings. Additionally, the method shows good effectiveness on the UCF-Crime dataset, further proving its strong generalization ability across datasets.
Strengths: 1.The proposed method is totally interesting and novel. It provides an effective solution to the ambiguous anomalies problem by progressively learning event context in hyperbolic spaces.
2.The proposed method achieves good performance, especially on XD-Violence dataset in both unimodal and multimodal settings. Visualization experiments also highlight its effectiveness in solving the ambiguous anomalies problem.
3.The HE-GCN and DSI modules are well-motivated, and the authors have reported sufficient ablation study results to prove the effectiveness of each module. I think it is much helpful.
Weaknesses: 1.For the HE-GCN module, I am curious about the relationship between HDE and LSHAD. The authors should explain the relationship between HDE and LSHAD in detail?
2.Figure 2 should be revised to clarify that the method is applicable to both unimodal and multimodal inputs, not just multimodal. Currently, it may mislead readers.
3.The Preliminaries section is well-detailed and informative, but it may be complex and difficult for a broader audience to understand.
4.There are some minor writing issues: Lines 439 and 479, “Figure” and “Fig” should be standardized; Line 147, there is a symbol display error.
Technical Quality: 3
Clarity: 3
Questions for Authors: Same as weakness.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***Q1: For the HE-GCN module, I am curious about the relationship between HDE and LSHAD. The authors should explain the relationship between HDE and LSHAD in detail?***
Thank you for the valuable comments. We will elaborate on the relationship between *HDE* and *LSHAD* in detail.
In fact, *HDE* was designed to serve *LSHAD*. To enhance the model's ability to capture contextual information in hyperbolic space, we propose *LSHAD* as a threshold in the node selection for message aggregation, which ensures that the model first captures the broader global context with a relaxed threshold at the beginning of message aggregation, then gradually shifts focus to the local context with stricter thresholds.
When there is a significant difference in features between nodes, we need to adopt a more relaxed node selection strategy to choose as many nodes as possible. However, as the number of layers increases and the features between nodes become more similar, we need to adopt a stricter selection strategy to choose the more important nodes.
Therefore, we make the *LSHAD* relate to the degree of message aggregation of the current layer, which is measured by *HDE*. *HDE* essentially captures how effectively the nodes' information is aggregated and how similar their features become through the message-passing process.
The synergy between *HDE* and *LSHAD* ensures that HE-GCN module effectively captures hierarchical relationships and aggregates information, enhancing the discriminative power of DSRL. By employing these strategies, our DSRL leverages the strengths of both Euclidean and hyperbolic geometries to improve the performance of VVD, particularly in distinguishing ambiguous violent events.
***Q2: Figure 2 should be revised to clarify that the method is applicable to both unimodal and multimodal inputs, not just multimodal. Currently, it may mislead readers.***
We thank the reviewer for the valuable suggestions. We will modify Figure 2 to clarify that our method is applicable to both unimodal and multimodal inputs in the final version.
***Q3: The Preliminaries section is well-detailed and informative, but it may be complex and difficult for a broader audience to understand.***
Thank you very much for your valuable feedback. We understand that the Preliminaries section may be complex for a broader audience, and we are committed to making it more accessible. In the final version of the paper, we will add further explanations for some of the equations, such as providing additional descriptions for Eq. 4 and Eq. 5.
We have revised the explanation regarding the connections between hyperbolic space and tangent space. The original sentence:
"The connections between hyperbolic space and tangent space are established by the exponential map $\exp _{\mathbf{x}}^{K}(\cdot)$ and logarithmic map $\log _{\mathbf{x}}^{K}(\cdot)$ are given as follows:" has been modified for clarity.
The revised explanation is: "The mapping between hyperbolic spaces and tangent spaces can be done by exponential map and logarithmic map. The exponential map is a map from a subset of a tangent space of $\mathbb{L} _ {K}^{n}$ (i.e., $\mathcal{T} _ {\mathbf{x}} \mathbb{L} _ {K}^{n}$) to $\mathbb{L} _ {K}^{n}$ itself. The logarithmic map is the reverse map that maps back to the tangent space. For points $\textbf{x} ,\textbf{y} \in \mathbb{L} _ {K}^{n} $, $\textbf{v}\in \mathcal{T} _ {\mathbf{x}} \mathbb{L} _ {K}^{n}$, such that $\textbf{v}\ne \textbf{0} $ and $\textbf{x}\ne \textbf{y} $, the exponential map $\exp _{\mathbf{x}}^{K}(\cdot)$ and logarithmic map $ \log _{\mathbf{x}}^{K}(\cdot) $ are given as follows:''.
***Q4: There are some minor writing issues: Lines 439 and 479, “Figure” and “Fig” should be standardized; Line 147, there is a symbol display error.***
Thank you for your valuable comments. We will address these minor writing issues in the final version by changing "Fig" to "Figure" and correcting the symbol errors. Furthermore, we have carefully reviewed the paper to ensure that there are no writing errors.
---
Rebuttal 2:
Title: new comments
Comment: The authors have addressed all my concerns, so I stay with my original score. | Summary: This paper proposes leveraging dual-space learning, encompassing both Euclidean and Hyperbolic spaces, to enhance discriminative capacity by capitalizing on the strengths inherent in hyperbolic learning. To overcome the limitations of the hard node strategy in the previous method, this work introduces DSRL (Dual-Space Representation Learning), which enhances node aggregation selection by utilizing layer-sensitive hyperbolic association degrees constrained by hyperbolic Dirichlet energy. A cross-space attention mechanism is then proposed to facilitate information interactions between Euclidean and hyperbolic space to capture better discriminative features.
Strengths: ### Novelty:
Hyperbolic space learning is a valuable direction in the field of video understanding because there are inherent hierarchical relationships under video series. The previous method (HyperVD) is a good attempt, but it simply transfers the XD-Violence baseline into hyperbolic space without solving the limitations in hyperbolic GCN. The authors introduce a dual-space approach to combine the strengths of both Euclidean and Hyperbolic spaces to maximize discrimination capability and use hyperbolic Dirichlet energy to address the over-smoothing issue underlining the previous hyperbolic networks.
### Clarity:
Overall, the main paper is easy to follow and organized well.
### Experiments:
The main quantitative and qualitative experiments are adequate for the violence detection task, including ablation studies and t-SNE visualizations. The results demonstrate significant improvements compared to previous methods.
Weaknesses: ### Experiments:
It would have made the manuscript more convincing if the authors could provide inference visualizations of the ablation modules, which means to better illustrate the discriminative power of the method w/ or w/o your core modules.
### Minor Typos:
- Line 137: $\mathcal{L}^{\/}$ should be $\mathcal{L}^{n}$.
- Eq (17) misses the right bracket for the $softmax$ function.
- In Eq (9), it misses the symbol annotations for $\mathbf{v}$. Did you use the hyperbolic transformation strategy proposed in "Fully hyperbolic neural networks", if so, the citation is missed here.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Why utilize these two spaces (hyperbolic and Euclidean) for the representation learning? Have you explored different hyperbolic spaces such as Lorentz and Poincaré Ball?
- How about the model's training stability?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please refer to the weakness.
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects', 'Ethics review needed: Human rights (including surveillance)']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for acknowledging the strength of our method. We have carefully considered your constructive and insightful comments and here are the answers to your concerns.
***Q1: It would have made the manuscript more convincing if the authors could provide inference visualizations of the ablation modules, which means to better illustrate the discriminative power of the method w/ or w/o your core modules.***
Thank you for your positive comments and valuable suggestions. We have supplemented inference visualization results for the ablation module, which will be added in the final version. We conducted corresponding visualisation experiments to explore the discriminative power of the model w/ or w/o our core modules, HE-GCN and DSI. **Specific visualisations and analysis are shown in Figures 1 and 2 of the uploaded PDF file.** Two dimensions (the feature-level and the frame-leve) are analyzed in the two figures.
**At the feature-level,** the results are shown in Figure 1 of the uploaded PDF. Compared to GCN, HE-GCN can capture the hierarchical context of events, effectively separating features. This results in a greater distance between feature clusters compared to Figure 1(a) original features and Figure 1(b) using only Euclidean representation learning. However, some challenging feature points remain difficult to distinguish. The addition of the DSI module facilitates information interactions between Euclidean and hyperbolic spaces, capturing more discriminative features to better differentiate these challenging feature points. As shown in Figure 1(d), the DSI module further enhances feature differentiation by effectively combining information from both spaces.
Meanwhile, **at the frame-level**, experiments conducted on two test videos as shown in Figure 2 of the uploaded PDF demonstrate that our method significantly improves the discriminative power for identifying violent frames compared to the baseline, which uses only GCN. Compared with the model w/o our core modules, both HE-GCN and DSI contribute to detecting violent frames.
Following your suggestions, we believe that we can more comprehensively demonstrate the effectiveness of our method and make our manuscript more convincing.
***Q2: Minor Typos:***
Thank you for spotting these minor typos. We will modify them accordingly in the final version. In Eq(9), $\textbf{v} \in \mathbb{R}^{n+1}$ denotes a velocity (ratio to the speed of light) in the Lorentz transformations. And we used the hyperbolic transformation from this paper, which we will cite in the final version.
***Q3: Why utilize these two spaces (hyperbolic and Euclidean) for the representation learning? Have you explored different hyperbolic spaces such as Lorentz and Poincaré Ball?***
Since we focus on addressing ambiguous violence in the VVD task, it is essential to consider both visual features and the hierarchical contextual information of the event. Therefore, we employ these two spaces for representation learning for two main reasons:
1) Euclidean space is widely used in many domains and is effective at capturing visual features, such as salient motion and shape changes in videos. However, it often overlooks the relationships between events.
2) Hyperbolic space, characterized by exponentially increasing metric distances, naturally reflects the hierarchical structure of data. It enhances the hierarchical relationships of events but tends to weaken the expression of visual features.
By combining these two spaces for representation learning, we can leverage their respective strengths to improve the discriminative of feature representations, ultimately enhancing the performance of VVD.
In the preliminary practice, we implement our method with the Poincaré ball model. However, we encountered situations where the loss became NaN, which is caused by the numerical instability of the Poincaré ball model. Subsequently, we switched to the Lorentz model because it guarantees numerical stability and computational simplicity in its exponential and logarithmic maps and distance functions. Due to the stable training and improved performance, we finally selected the Lorentz model during hyperbolic representation learning.
***Q4: How about the model's training stability?***
We evaluate the training stability of our model by analyzing the trends in training loss and average precision (AP). **The results presented in Figure 3 of the uploaded PDF file** exhibit a stable training process.
**Training loss curve over time.** The training loss starts at approximately 0.61 and gradually decreases over time, stabilizing around 0.17 towards the end of the training. This consistent decrease and eventual stabilization indicate effective learning and convergence throughout the training process.
**Average precision curve over time.** The AP shows some fluctuations during the early epochs, which is normal as the model adjusts its weights and parameters. Despite these fluctuations, the overall trend is upward, indicating that the model's performance is improving. Subsequently, the AP values show less variation and remain at a high level.
---
Rebuttal Comment 1.1:
Title: Comments from Reviewer rxrS
Comment: Thank you for the authors' detailed response, most of my concerns have been addressed and I maintain my original rating. | Summary: The paper introduces a novel method called Dual-Space Representation Learning (DSRL) aimed at enhancing the detection of video violence, particularly in scenarios where the violence is weakly supervised and visually ambiguous.
DSRL combines the strengths of both Euclidean and hyperbolic geometries to capture discriminative features for violence detection, leveraging the hierarchical structure modeling capability of hyperbolic spaces.
Two specialized modules are designed: the Hyperbolic Energy-constrained Graph Convolutional Network (HE-GCN) module and the Dual-Space Interaction (DSI) module, enhancing the understanding of event hierarchies and promoting dual-space cooperation.
Comprehensive experiments on the XD-Violence dataset demonstrate the effectiveness of DSRL, outperforming existing methods in both unimodal and multimodal settings.
Strengths: 1. The method is innovative. DSRL designs a special information aggregation strategy. Through layer-sensitive hyperbolic association degree (LSHAD) and hyperbolic Dirichlet energy (HDE), it effectively captures the hierarchical context of events and can better model the problem of violent video detection.
2. This method uses the property of hyperbolic space that can better distinguish visually similar events and is applied to a suitable scenario - fuzzy violent event recognition, which improves recognition accuracy.
3. The performance of the method is outstanding. It has reached SOTA in quantitative analysis. In qualitative analysis, DSRL demonstrates its ability to distinguish between violent and normal events in different situations, including its performance in complex violent events.
4. The proposed model can process multimodal inputs, integrate visual and audio information, and improve the ability to understand complex scenes.
Weaknesses: Subjectivity: The Layer-Sensitive Hyperbolic Association Degree (LSHAD) proposed by the author contains multiple hyperparameters, and LSHAD is used to examine the threshold of the message graph. It is very important. There is a lack of explanation for why it is designed in this way.
Complexity: DSRL combines Euclidean and hyperbolic geometric spaces, as well as cross-space interaction mechanisms, which may increase the complexity of the model, resulting in higher demands on computing resources and training time.
Parameter sensitivity: DSRL contains multiple hyper-parameters, such as β, γ, α, etc. The selection of these parameters may have a significant impact on model performance. This article lacks an analysis of parameter adjustment.
Application-specific limitations: DSRL is specifically designed for the task of video violence detection, which may mean that further adjustments or optimizations are required when applying it to other types of video content analysis tasks.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What are the specific reasons for the design choices in the Layer-Sensitive Hyperbolic Association Degree (LSHAD) with its multiple hyperparameters and threshold criteria? Was there any experimental or theoretical analysis conducted to support these design decisions?
2. Considering the increased demand for computing resources and training time, are the benefits of these complex designs justified?
3. What methods or heuristic approaches are used to determine the optimal values of these hyperparameters? Has a systematic sensitivity analysis been conducted for the selection of hyperparameters (such as β, γ, α, etc.) in the DSRL model?
4. Has the DSRL model been tested on other types of video content analysis tasks? If so, how did the model perform in these tasks? Are there any additional adjustments or optimizations required to adapt the model to different video content analysis applications?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors discussed their limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and suggestions to improve this work. We will address your concerns below.
***Q1: Reasons for the design choices in the LSHAD with its multiple hyperparameters and threshold criteria***
Inspired by the Global-first principle [1] that humans always have cognition on global first and then focus on local, we propose a novel node selection strategy, which guarantees the model captures the broader global context first with a relaxed threshold at the beginning of message aggregation and then focuses on the local context with more strict thresholds.
To achieve this, we introduce the *LSHAD* construction rule, which calculates an *LSHAD* threshold based on the number of the current layer *k* and hyperbolic Dirichlet energy of the current layer. As the *k* increases and the hyperbolic Dirichlet energy decreases, the *LSHAD* threshold increases and is limited to [0,1] by the sigmoid function. If there is no $\beta$ and $\gamma$, the threshold in the first layer will be strict ($\textgreater 0.5$), causing the overlook of some global context information. Therefore, to make our node selection threshold conform to the Global-first principle, we introduced the two hyperparameters in *LSHAD*, where $\beta$ controls the influence of the number of current layer *k* and $\gamma$ acts as a bias to fine-tune the threshold. Moreover, we conducted an ablation study to determine the optimal value of the two hyperparameters ($\beta$,$\gamma$), where $\beta$ ranges among [0.2,0.4,0.6,0.8,1.0] and $\gamma$ ranges among [1.0,1.2,1.4,1.6,1.8,2.0]. The results in the table below reveal that when $(\gamma - \beta)$ is 0.4, the performance is optimal, so we chose a pair (0.8, 1.2) from this set.
| $\beta \setminus \gamma $ | 1.0 | 1.2 | 1.4 | 1.6 | 1.8 | 2.0 |
|--------------------------|------|------|------|------|------|------|
| **0.2** | 85.22| 87.12| 86.32| 86.60 | 86.31| 86.86|
| **0.4** | 87.52| 85.22| 87.12| 86.32| 86.60| 86.31|
| **0.6** | 87.61| 87.52| 85.22| 87.12| 86.32| 86.60 |
| **0.8** | 87.29| 87.61| 87.52| 85.22| 87.12| 86.32|
| **1.0** | 86.29| 87.29| 87.61| 87.52| 85.22| 87.12|
[1] Chen, L. (1982). Topological structure in visual perception. Science, 218, 699-700.
***Q2: Comparisons of computing resources and training time***
Our designs, HE-GCN and DSI, effectively capture hierarchical contextual information of events and integrate two spaces of information, respectively.
Following your suggestion, we trained three models on XD-Violence for 30 epochs each using a single NVIDIA RTX A6000 GPU: Baseline (GCN), Baseline+HE-GCN, and Baseline+HE-GCN+DSI (DSRL). The results in the table below show that the benefits of our designs are justified. Our DSRL improved AP by 3.57\% compared to the Baseline, while the training time per epoch increased by only 41 seconds, and memory usage rose by just 4.1GB, both of which are within a reasonable range, making the performance gains well worth the resource consumption.
| Methods | Params | Training Time per Epoch | Total Training Time | Video Memory Usage | AP (%) |
|----------------------------|---------|:-------------------------:|:---------------------:|:--------------------:|:--------:|
| Baseline (GCN) | 0.7734M | 2 min | 60 min | 4.24 GB | 84.04 |
| Baseline + HE-GCN | 0.8975M | 2 min 19 s | 69 min 39 s | 7.03 GB | 86.46 |
| Baseline + HE-GCN + DSI (DSRL) | 0.9966M | 2 min 41 s | 80 min 19 s | 8.34 GB | 87.61 |
***Q3: Hyperparameter sensitivity analysis***
We employed a grid search method to determine the optimal values of the hyperparameters in our model. We have conducted a sensitivity analysis on $\alpha$ and $\lambda$ in DSI, as shown in Line 470 in our submitted paper.
We further conducted a sensitivity analysis on $\beta$ and $\gamma$. The table referenced in **Q1** shows that our model is relatively robust to changes in the hyperparameters within certain ranges.
Besides, the table also shows that when $(\gamma - \beta)$ is 0.4, the performance is optimal, so we chose the pair (0.8, 1.2) from this set.
Specifically, when $\beta$=0.8, the variations in $\gamma$ within the tested range do not significantly affect the model's performance. Similarly, when $\gamma$=1.2, changes in $\beta$ also have minimal impact on the model's effectiveness. This indicates that our model's performance is relatively stable under small perturbations of these hyperparameters, suggesting a degree of robustness in the parameter configuration.
***Q4: Applicability to other video content analysis tasks***
Currently, we primarily focus on addressing ambiguous violence in VVD by customising DSRL based on the dual-space idea. Although we have not yet tested its effectiveness on other tasks, our theoretical analysis suggests that this dual-space idea could be beneficial for various other video content analysis tasks. For instance, in video person re-identification (Video Re-ID), capturing both visual features and spatio-temporal hierarchical relationships is crucial. To adapt our model to Video Re-ID, we may consider the following adjustments:
1) We may need to adjust the graph construction to fully incorporate information about human body parts and their hierarchical relationships.
2) Video Re-ID is a type of fine-grained recognition that may require enhancing visual feature extraction in Euclidean space to strengthen node features in the graph.
In the future, we will explore adapting the model to different video content analysis applications.
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' response, most of my concerns have been addressed, and considering the application area would be somewhat domain specific, I will stay with my original rating, i.e. being slightly positive. | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for providing constructive feedback that helped us improve the paper. We are glad the reviewers find that
**Our method is interesting and novel**
* "The method is innovative." ---8atP
* "The proposed method is totally interesting and novel." ---5VkS
* "This paper presents a pioneering approach." ---8SDB
**Our presentation is clear and well-motivated**
* "Overall, the main paper is easy to follow and organized well." ---rxrS
* "The HE-GCN and DSI modules are well-motivated." ---5VkS
* "The quality of this paper's presentation is good and the whole paper is well-organized"---8SDB
**Our performance is outstanding**
* "The performance of the method is outstanding. " ---8atP
* "The results demonstrate significant improvements compared to previous methods. " ---rxrS
* "The proposed method achieves good performance " ---5VkS
* "Experiments on XD-Violence and UCF-Crime datasets show the advantage of the proposed method. " ---8SDB
**Main contributions**
1. We present DSRL, the first method to integrate Euclidean and hyperbolic geometries for VVD, which significantly improves the discrimination of ambiguous violence and achieves state-of-the-art performance on the XD-Violence dataset in both unimodal and multimodal settings.
2. We design the HE-GCN module with a novel message aggregation strategy to better capture the hierarchical context of events, where the node selection threshold is dynamic, not fixed, and determined by layer-sensitive hyperbolic association degrees based on hyperbolic Dirichlet energy.
3. We design the DSI module to break the information cocoon for better dual-space cooperation, combining visual discrimination from Euclidean geometry and event hierarchical discrimination from hyperbolic geometry, utilizing cross-space attention to facilitate information interactions.
Once again, we express our sincere appreciation for your valuable contributions to the review process. Your expertise and guidance have been invaluable in improving the quality of our work. We remain committed to continuous improvement and eagerly await your final decision. Please see the attached PDF for a one-page PDF with added experimental results.
Pdf: /pdf/a4b4f9ad2d4b3e33ffb0ec27cd670c5cd42b928c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
RanDumb: Random Representations Outperform Online Continually Learned Representations | Accept (poster) | Summary: The paper shows that a fixed random feature space of the data, followed by some linear classifier, outperforms continually learned feature spaces across multiple benchmarks. The paper then shows an ablation study of the suggested method.
Strengths: **Clarity:** The suggested idea is simple, and it is easy to understand how it works and why.
**Quality:** The random features are tested across multiple benchmarks, showing improvements in many of them. The ablation study is comprehensive. Overall, I find the results very convincing: I believe that RanDumb can outperform many continual learning algorithms, despite it being learned using a single pass over the data in a non-deep learning way.
**Significance:** This kind of meta-study, showing weak points of the entire field rather than weak points of a specific work is important for the community. The paper shows that continual learning methods are not the optimal solutions for many continual learning benchmarks.
Weaknesses: I'll review this paper from 2 perspectives: the first is from a "technical" point of view, judging the actual suggested method RanDumb. The second is the "meta" point of view, judging the "meta" claim of the paper, that we should re-think CL methods, as they are not optimal for many of the existing CL benchmarks.
**Originality:**
From the technical point of view, the suggested method is simply an implementation of known methods on several different benchmarks, and as such has very limited originality. From the meta point of view, I think that the additional benefits of this paper over g-dumb [1] are somehow limited, as it has been shown before that CL methods do not optimally solve the CL benchmarks in many cases.
**Quality and significance:**
While the quality of the technical point is high, I'm not so convinced about the meta point. The discussion in the paper about the meta point is rather minimalistic and is left for the reader. A more thorough discussion will be useful, especially a more in-depth comparison with previous works that suggested similar things, such as g-dumb.
As I don't find the "technical" point of view of the paper novel significant enough to merit a top-tier publication in its own right, I think the paper should focus more on the meta point of view. Specifically, I find that there is a missing discussion on why the existence of simple methods that outperform CL methods on CL benchmarks should be so worrisome. The counter-argument is that the benchmarks themselves are more of "toy problems", and that achieving high performance on them is not the major goal of the works, but instead learning how to deal with problems that deep models encounter when facing changing data, including catastrophic forgetting and distribution shifts.
**Clarity:**
While the main idea is clear, the paper is very confusing to read. The different benchmarks are marked by letters, which makes it hard to follow which is which. Moreover, I often had to go to other papers to understand the experiments that were done in a better way, as this formulation was very confusing. This is a major problem in the current version, as a substantial part of the paper revolves around this formulation.
**Summary:**
I find the technical point of view good and convincing, but not significant and novel enough for to merit a publication in a top-tier conference. The meta point of view in my opinion is important and relevant for the community, but is not supported enough by the paper. Therefore, I am leaning toward rejection.
------
[1] Prabhu, Ameya, Philip HS Torr, and Puneet K. Dokania. "Gdumb: A simple approach that questions our progress in continual learning." Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part II 16. Springer International Publishing, 2020.
Technical Quality: 3
Clarity: 2
Questions for Authors: Can you please elaborate on the problems in CL that this paper introduces, and the difference between them and the points raised in g-dumb?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: not relevant
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and feedback. We appreciate the recognition that our idea is simple and easy to understand, as well as the recognition of the comprehensive nature of our ablation study and the improvements demonstrated across multiple benchmarks. The reviewer also pointed out that our meta-study, which highlights weak points of the entire field rather than specific works, is important for the community. We seek to address the concerns raised by the reviewer below. Below we have merged different parts of the review that come under the reviewer’s “meta-claim” and “technical claims” perspectives of the paper to provide a thorough reply in these aspects.
---
## Meta Claim
> "... judging the 'meta' claim of the paper, that we should re-think CL methods, as they are not optimal for many of the existing CL benchmarks. I think that the additional benefits of this paper over g-dumb [1] are somehow limited, as it has been shown before that CL methods do not optimally solve the CL benchmarks in many cases. Can you please elaborate on the problems in CL that this paper introduces, and the difference between them and the points raised in g-dumb?"
- Our meta-claim, as consistently stated across the work, is that *random representations of the raw pixels of the images consistently outperform the deep-learning based learned representations from methods specifically designed for online continual training*.
- In contrast, the meta-claim made in GDumb is that continual learning methods forget all previous online samples, and their performance can be recovered by using the latest memory. Additionally, GDumb still uses deep models to justify their claims.
### Main differences between GDumb and RanDumb meta-claims
- **GDumb primarily argues about forgetting of online samples:** GDumb relies entirely on memory and learns nothing from the online samples themselves. This ablates the effect of online samples on performance. In contrast, we only learn from online samples and use no memory.
- **RanDumb primarily argues about inadequate representation learning:** GDumb learns representations similar to the experience replay (ER) baseline and does not make any claims regarding the quality of the representations learned via online representation learning. We specifically ablate representations and discuss the effects.
Overall, there are critical fundamental differences between RanDumb and GDumb in their meta-points.
### Differences between the experimental settings of GDumb and RanDumb
- **RanDumb focuses mainly on low-memory settings,** whereas GDumb primarily shines in high-memory settings. For example, it is trivial to note that in our primary rehearsal-free setting with no exemplars stored (currently popular in continual learning), GDumb would, by design, effectively provide random performance.
- **RanDumb is entirely complementary to GDumb,** i.e., RanDumb shines in benchmarks where GDumb performs poorly and vice-versa. Therefore, just like GDumb, we believe that RanDumb would be a critical addition to the continual learning literature.
Overall, we request the reviewer not to simply dismiss the meta-point because of a broad claim that both RanDumb and GDumb are simple baselines (with no other points in common), and we re-iterate that our meta-point is emphasized above and in the title.
---
## Technical Claim
> "From the technical point of view, the suggested method is simply an implementation of known methods on several different benchmarks, and as such has very limited originality. I find the technical point of view good and convincing, but not significant and novel enough to merit a publication in a top-tier conference."
We agree that a **part** of our work heavily relies on an existing and very well-cited work of Rahimi and Recht [52] to support our arguments. However, we believe that it is absolutely fine to do so for various reasons:
- It is well accepted in the research community to use fundamental insights and findings from a related field. The above work is a fundamental work with thousands of citations of top-tier conference papers, which have been built on top of this work.
- We are the first to show the effectiveness of the above approach in continual learning literature.
- We had to combine various insights from the continual learning literature to make the above work effective across a variety of experiments we presented in this work.
Just the fact that an approach seems simple should never be the reason to undermine its effectiveness, and we hope that the reviewer agrees with us.
---
## Presentation
> "While the main idea is clear, the paper is very confusing to read. The different benchmarks are marked by letters, which makes it hard to follow which is which..."
We would like to highlight that other reviewers found the presentation of the paper to be *"good"* or *“excellent.”* They found that *"This paper is well-written," "The motivation, method, and results are very clear and easy to follow,"* and even specifically, *"The paper is well-structured; it is relatively easy to browse the results and sections."*
However, we do understand that it can sometimes be difficult to follow notations and references. Hence, we had summarized the important details about the different benchmarks relevant to our paper in a table alongside the “Benchmarks” paragraph in Section 3. We also describe critical aspects of the benchmarks in detail there.
Having said the above, we do acknowledge that labeling experiments by alphabets might affect readability, and we will **improve this aspect by replacing these labels with short, explanatory phrases in our revised draft.**
Could the reviewer suggest further ways to improve the structure of experiments? We are happy to make necessary changes.
---
We hope we have addressed the major concerns of the reviewer, and are happy to answer any further questions/concerns. We look forward to a fruitful reviewer-author discussion phase.
---
Rebuttal Comment 1.1:
Title: answer to the rebuttal
Comment: Thank you for the elaborate rebuttal.
I still find many of the issues I've raised unanswered: I still find the discussion in the paper about the meta-point very minimalistic, and the paper's presentation very hard to follow.
I still think that the paper is borderline, as it have both issues and merits. As there is no pure borderline score in this conference, I've put a borderline reject in my original review. Following the other reviewers and the rebuttal, I'll change my score to borderline accept.
I hope that for future revisions of the paper, the authors will emphasize the discussion about the meta-point that appears in their rebuttals and implement the clarification to the suggested presentation.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer again for the comments, and appreciate raising the score of our work.
We will incorporate your comments in the paper to highlight the meta-point as you suggested. If you have any further questions or require clarifications then please let us know. We will be happy to address them.
Thank you once again. | Summary: This paper proposes RanDumb, a representation learning-free method for online continual learning (OCL). It uses data-independent random Fourier transform to project the data to a high-dimensional space (embed), scales the features to have unit variance (decorrelation), and finally performs classification with a nearest mean classifier (NCM) in the feature space.
The authors compares RanDumb with existing methods (when they are trained from scratch) on various OCL benchmarks. The authors additionally substitute the "embed" part of RanDumb with frozen pretrained representations and find that it still performs decently compared to existing methods that finetune the representations, on one of the benchmarks.
Strengths: 1. This paper is well-written and I did not find any major technical flaws.
2. I find that random representations outperform continually trained representations (from scratch) under the data/memory-constraint continual learning regime interesting and novel.
3. The benchmarks used are extensive, involving multiple datasets (MNIST, CIFAR, subsets of ImageNet, etc.), different levels of OCL constraints (e.g., number of classes per task), and different model architectures (ViT, ResNet, etc.).
Weaknesses: 1. I think the main novelty lies in the usage of random Fourier features because, without it, the proposed method is very similar to SLDA. Therefore, I'm not entirely convinced by the motivation. It makes sense that representation learning from scratch performs poorly in the low-data regime. In what cases do we not want to use a frozen pretrained representation directly?
2. The method involves using a very large embedding space (25K in most experiments), which raises concerns on efficiency. Kernel trick is efficient because you can compute the dot product in a high-dimensional space without explicitly projecting the data to that space, but I'm not sure how the dot product $\phi(x)^\top \bar{\mu}_i$ can be computed efficiently here. Could the authors elaborate on that?
Technical Quality: 3
Clarity: 4
Questions for Authors: I listed my main concerns in the weaknesses section above and I only have some minor questions or comments here.
1. I wonder if it's worth providing the definition of OCL clearly so that readers not familiar with OCL can grasp the ideas more easily. Also, I recommend adding details on the method explicitly (even if it's just a few equations in the appendix) rather than only providing references. For example, I'd appreciate seeing how the inverse of the covariance matrix is computed online.
2. What part of the results are the reported numbers in prior work? Is it all the numbers except for the proposed method?
3. I thought Co$^2$L was a contrastive learning-based method. Why is it included in benchmark B.1 and not B.2?
4. Could the authors provide some hypothesis on why fine-tuning/prompt-tuning based methods collapse in Table 6?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors have addressed the limitations clearly in Sec. 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their time reviewing our work and providing encouraging remarks such as *"no major technical flaws,"* *"interesting and novel,"* and *"extensive benchmarks."* We also appreciate giving the presentation of our work a remark of being *"excellent."*
Below, we seek to address the potential weaknesses/questions raised by the reviewer in the same order as noted by the reviewer.
---
> "I think the main novelty lies in the usage of random Fourier features because, without it, the proposed method is very similar to SLDA. Therefore, I'm not entirely convinced by the motivation. It makes sense that representation learning from scratch performs poorly in the low-data regime. In what cases do we not want to use a frozen pretrained representation directly?"
When strong pre-trained features are available for a specific application (e.g., where the test data distribution is more or less known during training), practitioners should use them as they are likely to perform better. However, in the absence of such pre-trained representations, as is the case with online continual learning where the future datapoints/classes are unknown, we argue that a baseline, such as ours in this work that simply uses random features, should be first evaluated thoroughly before relying too much on the online representation learning methods that exist in the literature.
We have made the above point empirically as well via extensive experiments where we show that several of the existing continual learning methods (including VR-MCL—the ICLR'24 best paper honorable mention [70]) fail to outperform our simple non-deep-learning based random feature baseline! We strongly believe that our random transform as a baseline will help in guiding future research directions. We are thankful to the reviewer for raising this question as we believe that adding a small discussion and related implications around this in our revised draft would strengthen our arguments further.
---
> "The method involves using a very large embedding space (25K in most experiments), which raises concerns on efficiency. Kernel trick is efficient because you can compute the dot product in a high-dimensional space without explicitly projecting the data to that space, but I'm not sure how the dot product can be computed efficiently here."
We will expand our discussion on implementation details of our method versus SLDA [27]. Briefly, computing the dot product was extremely fast. For our use case, the runtime (on an Alienware x17 R2 laptop for 25K dimensions) was **0.3±0.02 seconds**.
However, as you asked for details in Q1, the major bottleneck is not the inner product, it is computing $S^{-1}$. We use a standard approach of solving linear systems $S \cdot x = \mu$ for this using a heavily optimized function in PyTorch called `torch.linalg.lstsq`. The solution $S^{-1} \mu$ is then used to compute $x^T (S^{-1} \mu) - \mu^T (S^{-1} \mu)$.
Certainly, one could just invert the matrix $S$ at the beginning and then use the Sherman-Morrison formula for online updates as each update in our case is rank-1.
---
> "I wonder if it's worth providing the definition of OCL clearly so that readers not familiar with OCL can grasp the ideas more easily. Also, I recommend adding details on the method explicitly (even if it's just a few equations in the appendix) rather than only providing references. For example, I'd appreciate seeing how the inverse of the covariance matrix is computed online."
Thank you for this question. We agree that providing a more formal definition of OCL will improve the readability of the paper. We will provide that in our revised draft. For details about matrix inversion, please check the answer above.
---
> "What part of the results are the reported numbers in prior work? Is it all the numbers except for the proposed method?"
We clarify—all numbers in tables where the caption says (Ref: table and citation) except our method are taken from the aforementioned table in the cited paper. We ensure that the experimental settings are exactly the same. Please note that reporting numbers directly from prior works makes the baselines stronger as these are already heavily optimized by the authors of their corresponding papers, which most of the time is difficult to reproduce as the specific design choices (hyper-parameters, etc.) to achieve such numbers are not always well explained in the paper.
---
> "I thought Co2L was a contrastive learning-based method. Why is it included in benchmark B.1 and not B.2?"
Thank you for bringing this to our notice. We will correct this right away!
---
> "Could the authors provide some hypothesis on why fine-tuning/prompt-tuning based methods collapse in Table 6?"
Thank you for the great question. Based on recent research, and our observation that the collapse is early across tasks:
Parallel work {1} (will be cited and included in the revised draft) suggests that most current prompt-tuning methods often suffer from a lack of prompt diversity and can be characterized with a single prompt. As a result, the effectiveness of classification depends heavily on the quality of that prompt.
When designing a prompt for a large number of classes (e.g., 50 or 20), the prompt leads to generally discriminative representations (even for future tasks), whereas a prompt for just 2 classes, in our case, is limited in its discriminative power. This causes all prompt-tuning methods to collapse across tasks.
{1} Thede et. al., Reflecting on the State of Rehearsal-free Continual Learning with Pretrained Models, CoLLAs 2024
---
We hope we have addressed the major concerns of the reviewer, and are happy to answer any further questions/concerns. We look forward to a fruitful reviewer-author discussion phase.
---
Rebuttal Comment 1.1:
Title: Thanks for rebuttal
Comment: I thank the authors for addressing my concerns. I've raised my score to a 7.
I hope the authors consider incorporating some of our discussions into the revisions. Additionally, I still don't like the use of the term "kernel **trick**" since the dot product is computed explicitly in the high-dimensional feature space. I also recommend clarifying early in the paper that the "representations" learned are on a very high dimensional space, since I think this is not very common when we use the word "representation learning" and could cause confusions.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer again for an insightful review and valuable time. Appreciate it. We would also like to thank the reviewer for appreciating our rebuttal and raising the score. Thank you for that.
> Additionally, I still don't like the use of the term "kernel trick" since the dot product is computed explicitly in the high-dimensional feature space.
We agree with the reviewer and will fix this oversight on our part. As the reviewer rightly pointed out, since we directly compute the dot product in the high-dim space, we do not use the “trick” in line 77. We will correct this statement. Thank you for pointing this out.
> I also recommend clarifying early in the paper that the "representations" learned are on a very high dimensional space, since I think this is not very common when we use the word "representation learning" and could cause confusions.
We agree on this point too! We did highlight this in the introduction line 41, when introducing RanDumb design for the first time that projection is to a high dimensional space. However, we are happy to emphasise this earlier in the paper to avoid any potential confusion.
Thank you once again for your time and insightful comments. | Summary: To obtain powerful representation in an online continual learning setting, the authors propose a new learning method referred to as RanDumb, that embeds raw pixels using a fixed random transform, approximating an RBF kernel initialized before any data is seen.
The proposed model trains a simple linear classifier on top without storing any exemplars, processing one sample at a time in an online continual learning setting.
Extensive experiments demonstrate its effectiveness and power with several ablations.
Strengths: (+) Extending the investigation to popular exemplar-free scenarios with pre-trained models, this work shows that training only a linear classifier on top of pre-trained representations surpasses most continual fine-tuning and prompt-tuning strategies.
(+) The author's investigation challenges the prevailing assumptions about effective representation learning in online continual learning.
(+) The authors explain how the random Fourier basis (a low-rank data-independent approximation of the RBF Kernel) affects the representation of online continual learning along with the accuracy of RanDumb (embedding dimensions).
Weaknesses: (-) The structure of RamDumb (Fig. 1) is too simple to follow the structure. Could the authors depict the architecture of input, RBF-Kernel, pre-trained model (ViT), Decorrelate(D), and NCM(C)? or include an additional figure to illustrate the RamDumb.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weakness.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The embedding (RBF-kernel) is unclear; the authors could better explain how the random Fourier projection affects continual learning input representation with simple math.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time in reviewing our work and for providing their concerns and suggestions. We appreciate the recognition of the work's contribution in *"challenging the prevailing assumptions about effective representation learning in online continual learning."* The acknowledgment of our *"extensive experiments"* and results *"extending the investigation to popular exemplar-free scenarios with pre-trained models"* is highly valued.
Below, we address the reviewer’s request for clarification on RanDumb’s structure.
> P1 [Structure of RanDumb needs Clarity]: (i) Could the authors depict the architecture of input, RBF-Kernel, pre-trained model (ViT), Decorrelate(D), and NCM(C)? Or include an additional figure to illustrate the RanDumb structure.
>
> P2 [Embedding]: The embedding (RBF-kernel) is unclear; the authors could better explain how the random Fourier projection affects continual learning input representation with simple math.
We would like to clarify the structure of RanDumb across benchmarks (and will add a figure in the revised draft to clarify this).
- **For training from scratch**:
Raw flattened pixels are inputted, followed by random projection, decorrelation, and then classification using the NCM (as described in Equation 3). No input representations are learned; we use raw pixels as inputs, as illustrated in Figure 1.
- **For pretrained models**:
The input image is passed through the pretrained model to obtain features, followed by decorrelation, and then NCM classification. Mathematically, this would be the same as Equation 3 if we overload $\phi$ as the pretrained model instead of random projection.
We acknowledge our initial description was inadequate and appreciate you bringing this to our attention. We will clarify this in the paper with a separate figure for pretrained models and additional equations to avoid overloading $\phi$.
We hope we have addressed the major concerns of the reviewer, and are happy to answer any further questions/concerns. We look forward to a fruitful reviewer-author discussion phase.
---
Rebuttal Comment 1.1:
Comment: Thank you for your kind response.
I would maintain my score to see how the random Fourier projection (mechanism) affects continual learning or feature representations.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer again for their time, feedback and are glad they liked our work. | Summary: The authors show that a model with random Fourier features as a representation, followed by a normalisation then nearest class means for classification, outperforms most Continual Learning methods in online CL benchmarks. The representation is not learned as opposed to other online CL methods the model is compared against. Based on this result, the authors conclude that, in the online learning setting, continual learning algorithms fail to learn effective representations. The second main contribution is that this naive learning method bridges 70% to 90% of the performance of joint learning, leaving a small improvement gap to develop an effective continual representation learning method.
Strengths: - Originality :
- Even though the observation is simple, it highlights a blind spot in the field. Also, the naive model contrasts with most methods in the field and raises interesting questions.
- Quality :
- The authors present extensive benchmarks and each benchmark is clearly motivated wrt the literature and the scope of the paper.
- An ablation study on the components of the naive model is presented, highlighting that both the normalisation and Fourier features are critical for the performance.
- Clarity :
- The motivation, method and results are very clear and easy to follow.
- The paper is well structured as well, it's relatively easy to browse the results and sections.
- Significance :
- This study will likely motivate subsequent works to investigate online representation learning
Weaknesses: I don't see any major weaknesses, except the question I raise in the section below.
Technical Quality: 3
Clarity: 3
Questions for Authors: One point I wanted to discuss was the following : One property of RanDumb is that training is not gradient based. This contrasts of all other training methods. Given that the other methods train the final layer with gradient based optimisation methods, it could be that the representations are still decent for previous tasks but that the forgetting occurs mostly in the final layer. I was thinking that, in order to validate that the other continual learning methods don't learn effective representations, one experiment could be : Run the full CL sequence on a given dataset with a given model, training the full model. then freeze the learned representation and check the joint accuracy with the frozen representation vs the joint accuracy with a fully trainable model.
I am curious about your thoughts on this point.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: There are no potential negative societal impacts of the work. The authors adequately highlighted various limitations of their work. These limitations are out of scope of the current work and can be addressed in follow-up investigations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful question. To compare the effect, we will use two baselines: ER [13] and iCARL [54].
**Effect of gradient-free classifiers:** The primary difference between these methods is that ER trains the last layer along with the previous network, while iCARL learns a non-gradient based Nearest Class Mean (NCM) classifier. Similar to iCARL, RanDumb also employs an NCM classifier, but differs as it does not train a deep network.
| Method | Embedder | Classifier | CIFAR10 (5/2) M = 0.1k | CIFAR10 (5/2) M = 0.2k | CIFAR100 (10/10) M = 0.5k | CIFAR100 (10/10) M = 1k | TinyImageNet (100/2) M = 1k | TinyImageNet (100/2) M = 2k |
|-----------------|---------------|------------|------------------------|------------------------|---------------------------|-------------------------|------------------------------|-----------------------------|
| ER [13] | Deep network | Linear | 19.4 | 29.7 | 8.7 | 15.7 | 2.5 | 5.6 |
| iCARL [54] | Deep network | NCM | 31.0 | 33.9 | 12.8 | 16.5 | 5.0 | 6.6 |
| RanDumb (Ours) | Random | NCM | 55.6 | 55.6 | 28.6 | 28.6 | 11.6 | 11.6 |
Training the last layer in a gradient-based manner, as noted by the reviewer, can lead to an additional degree of forgetting. However, even after addressing this issue, there remains a significant gap in performance that needs to be addressed, as highlighted by our comparison with RanDumb.
The experiment proposed by the reviewer requires two passes over the data, not permissible in the OCL setting. We suggest that our experiment above might provide a valid comparison to clarify the concern while maintaining the online continual learning scenario.
We hope we have addressed the major concerns of the reviewer, and are happy to answer any further questions/concerns. We look forward to a fruitful reviewer-author discussion phase.
---
Rebuttal Comment 1.1:
Title: Reply to rebuttal
Comment: Thanks to the authors for sharing the results !
This experiment clarifies the concern I raised.
However I wanted to clarify that the experiment I suggested is a diagnostic rather than a method, therefore two passes over the data wouldn't be a concern. It would be the most direct comparison of the learned continual learning representations vs the random Fourier features, to validate whether they are strong enough.
I will maintain my score, thanks !
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer again for their comments and for appreciating our work.
> However I wanted to clarify that the experiment I suggested is a diagnostic rather than a method, therefore two passes over the data wouldn't be a concern. It would be the most direct comparison of the learned continual learning representations vs the random Fourier features, to validate whether they are strong enough.
Yes, as a diagnostic tool two passes shouldn’t be a concern. We agree with that. We will investigate this further and, space permitting, we will make sure to include this diagnostic method in the revised draft as this might provide better insights.
Thank you once again, appreciate it! | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
How Do Large Language Models Acquire Factual Knowledge During Pretraining? | Accept (poster) | Summary: This paper aims at understanding how large language models acquire knowledge from the pretraining process.
To this end, the authors propose a dataset consisting of paragraphs about fictional entities and three different kinds of probes that can be used to test whether the model successfully acquires the knowledge after fine-tuning.
Their finding includes:
- The likelihood of the knowledge gradually increases as the models are exposed to the knowledge more times, but the models also gradually forget the knowledge as the models are fine-tuned with other data.
- Larger models acquire knowledge more effectively, but models trained with more tokens do not.
- A larger batch size helps models to retain the knowledge better (forget slower).
They suggest that this explains why models can not learn long-tail knowledge.
Strengths: - The authors conduct comprehensive investigations on LMs' knowledge acquisition from different aspects, including
- Types of knowledge recurrence (once/duplication/paraphrase)
- Model scales (1B/7B)
- Different batch size (16/4096)
- Different level of acquisition (memorization/semantic/compositional probs)
- Figure 2 shows how knowledge is accumulated in the model throughout the training process.
- Their results explain why language models cannot acquire long-tail knowledge well.
- The results about batch size are interesting and may be useful for choosing the best batch size in practice.
Weaknesses: 1. The interpretation about batch size may be too assertive. It is not certain that the retention rate will really go to 0 as the number of tokens increases. The interpretation is based on extrapolation.
2. Also, it is possible that when the batch size is larger than a certain number, the model will not be able to learn long-tail knowledge in the batch. The authors should also plot the relationship between batch size and effectivity.
2. The evaluation metric (measuring the likelihood of the target sequence) may not effectively reflect whether the model acquires the knowledge, as the target phrase may not be the only correct continuation (at least it is unclear in the main text).
3. Line 70: [13] does not study NLP tasks!
4. The writing could be improved so the information is more digestible. For example, the first paragraph in Section 4.2 contains information about model scales and pretraining stages. It's better to be re-organized as two paragraphs.
5. In Section 3, the authors should provide more details about how the dataset is created. Also, the presentation could be greatly improved. The purpose of the experiment, the purpose of the dataset, the design of the dataset, the way they generate the dataset, the inspection of the examples, should be in separated paragraphs.
5. This paper defines many symbols but only few of them are used frequently throughout the whole paper. Meanwhile, $\delta \ell (q)$ is not defined in the main text, but appears in two figures. I suggest the authors simplify the narrative to improve the readability of this paper.
I acknowledge the contribution of this paper. However, the presentation induces unnecessary difficulty to understand and assess this work. I suggest that this work needs more polish before being published.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. $\theta_t$ is the parameter before the $t$-th update?
2. In Figure 1, why can the number of training steps be minus?
3. In convention, $\mathcal{E}$ is usually used for error. The authors could consider using another symbol for effectivity.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Limitations are addressed in Appendix A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank reviewer 1qMb for the detailed review and valuable feedback.
We will respond to the weaknesses and questions in order:
> The interpretation about batch size may be too assertive. It is not certain that the retention rate will really go to 0 as the number of tokens increases. The interpretation is based on extrapolation.
We acknowledge the need for extended training to minimize extrapolation in Figures 4 and 5. However, given the significant computational burden of pretraining experiments (L690-691), such extensions are currently infeasible in our academic setting. We aim to address this with additional resources in future work.
Importantly, Figure 16 (Appendix E.5) demonstrates the power-law holds near the x-axis for 1B models, which show faster forgetting than 7B models, supporting our interpretation's reliability.
> The authors should also plot the relationship between batch size and effectivity.
The effect of reducing the batch size on effectivity is demonstrated in Figure 21 in Appendix G (Note: the LR was fixed for the experiments with the reduced batch size, and L246-248 & L725-727 are corrected in our latest version). Comparing Figure 2 with Figure 21, our results suggest that effectivity is increased with reduced batch size.
> it is possible that when the batch size is larger than a certain number, the model will not be able to learn long-tail knowledge in the batch.
We acknowledge the need for experiments with various batch sizes (noted in Limitations, L504-506) to generalize to extreme values. However, our experiments with batch sizes used in actual OLMo pretraining (4M tokens) provide significant implications for recent LLMs in practice.
> The evaluation metric (measuring the likelihood of the target sequence) may not effectively reflect whether the model acquires the knowledge, as the target phrase may not be the only correct continuation (at least it is unclear in the main text).
This is a good point! While the target span may not be the only correct continuation, we believe this does not have a significant impact on our interpretations:
- Knowledge acquisition should increase log probability for all correct continuations, including our target span.
- Using fictitious and factual knowledge mitigates the effects of multiple possible answers.
- Our focus is on relative comparisons across training conditions, not absolute likelihood increments.
These factors mitigate concerns about multiple-answer scenarios affecting our conclusions.
> Line 70: [13] does not study NLP tasks!
Thank you for pointing this out. We will correct it.
> the first paragraph in Section 4.2 contains information about model scales and pretraining stages. It's better to be re-organized as two paragraphs.
> The purpose of the experiment, the purpose of the dataset, the design of the dataset, the way they generate the dataset, the inspection of the examples, should be in separated paragraphs.
We appreciate your thoughtful suggestions to make our paper more readable. We will improve them in the revised version.
> In Section 3, the authors should provide more details about how the dataset is created.
Although we could not provide all the details in the main text due to the page limit, the detailed dataset generation process is in Appendix B (referenced in L98-99):
- Generation and filtering process (L509-535)
- Full examples in Tables 3 and 4
- Exact prompts used in Appendix C
We'll add more details to the main text in the revised version.
> This paper defines many symbols but only few of them are used frequently throughout the whole paper.
Our symbol definitions have dual purposes: quantifying our experimental results and providing a framework for future studies on LLM pretraining dynamics. While this may slightly compromise readability, it ensures interchangeability between studies with different setups. In the revised version, we will enhance the clarity and simplicity of our metrics and symbols, enabling readers to easily grasp their meanings while maintaining adaptability. However, we believe that oversimplifying definitions could limit their applicability in future work. For example, defining $t_{LAM}(q,i)$ and $\mathcal{E}(q,i)$ without considering $i$ (the $i$-th encounter of knowledge) would make measured values incompatible with future studies using different experimental setups, such as varying injection intervals.
> 𝛿ℓ(𝑞) is not defined in the main text, but appears in two figures.
$\Deltaℓ(q)$ is explained in Figure 1's caption where it's first introduced. We will add its definition to the main text as well.
> 𝜃𝑡 is the parameter before the 𝑡-th update?
Yes, we will clarify this in the revised version.
> In Figure 1, why can the number of training steps be minus?
In Figure 1, negative training steps represent the period before the model encounters the injected knowledge. We set $t=0$ as the reference point where the model is exposed to the minibatch containing the injected knowledge. For $t\leq0$, the model is updated only with the Dolma corpus.
> In convention, 𝐸 is usually used for error. The authors could consider using another symbol for effectivity.
Thank you for the feedback. We will use an alternative symbol in the revised version.
We hope our responses have addressed your concerns. We appreciate your thorough review and remain open to any further questions.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications. I am happy to increase my score by 1. | Summary: The paper proposes to inject synthetic factual knowledge in the pretraining data of large language models and measure the acquisition and retention of this knowledge over time. The study shows that model acquires factual knowledge gradually with many exposures of the same fact, and forgets over time if not reinforced. The results also show that forgetting follows a power-law relationship with training steps, and deduplication of training data can improve knowledge retention.
Strengths: - The paper uses a novel approach to study the acquisition and retention of factual knowledge in the pretraining phase of language models, by inserting synthetic facts into the training data and measuring the model's recall of these facts along the training process.
- The experimental design is thorough, covering different model sizes, batch sizes, and pretraining stages.
- The paper is well-structured and clearly written, with detailed explanations of the methodology and results.
- The work provides insights into why LLMs struggle with long-tail knowledge and how deduplication can enhance model performance, which are critical considerations for developing better strategies to improve LM training.
Weaknesses: - Counterintuitive results not explained in the main experiment: In figure 2, the model trained with paraphrased knowledge show lower performance after learning on the semantic probe and the composition probe compared to the model trained with duplicated knowledge. This is counterintuitive as more variations in the training data should lead to better generalization, and semantic probe and composition probe are designed to measure generalization. (The author seems wanting to explain this in line 212-215, but not arriving at an explanation in the end: duplicated knowledge forgets faster, but why it also learns faster?)
- An important factor that controls learning and forgetting, the learning rate, is not explicitly considered in the experiments. The lack of analysis on the impact of learning rate leaves many conclusions open to alternative interpretation. For example, could the gradual learning of facts be an artifact of low learning rate? If the learning rate is raised, could the model learn in fewer (or even a single) steps? For the power-law relationship between forgetting and training steps, the decay rate is also likely affected by the learning rate. The authors should investigate the impact of learning rate on the learning and forgetting dynamics to provide a more comprehensive understanding of the observed phenomena.
- The discussion on data deduplication seems contradictory to the results and the "learnability threshold": the author claims that deduplication slows down forgetting as is observed in Figure 2, but Figure 2 shows that deduplication also slows down learning. How to tradeoff between faster learning and slower forgetting is not obvious from the results (at the end of the training session in Figure 2, duplicated knowledge still has a higher performance than deduplicated knowledge). Also, deduplication increases the average interval between exposures, which could cause more knowledge to fall outside the learnability threshold and thus adversely affect knowledge learning. This is another contradiction that needs to be addressed.
Below are somewhat minor issues, but still affecting the credibility of the results:
- The training and evaluation data is not described clearly enough in main text: the authors should provide more description of the generated fictional knowledge paragraphs, for example, how it is generated, the main statistics of the passages, how is the "target span" selected, etc. The evaluation probes are also not specified in sufficient detail, for example, how is the composition probe constructed, and what does "composing factual knowledge in multiple sentences" exactly mean? These details are crucial for appreciating the experimental results and need to be included in the main text.
- Some important experiment details are missing: in line 156, what does the "paraphrased knowledge" refer to? Is it the same as the passages in semantic probe? How many paraphrases are used for each fact? These details could significantly affect the generalization of the learned knowledge and should be clarified.
- The idea of "learnability threshold" is cherishable, but may not be reliable enough as an explanation: although the observed forgetting fits to a power-law curve, extra care should be taken when extrapolating the curve to extreme values, such as to the vicinity of the x-axis, where the curve is likely to diverge from the power-law (or change to another piecewise power-law). More exact results are needed to measure the learnability threshold instead of extrapolating from far-away.
- The relationship between model size and forgetting is not sufficiently explored: it is often speculated that larger models may forget slower due to their larger capacity. Would the methodology used in this paper be able to give a precise characterization of the relationship between model size and forgetting?
Technical Quality: 2
Clarity: 3
Questions for Authors: Please refer to the above section.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank reviewer fwEj for the detailed review and valuable feedback. We will respond to each weakness and question in order:
> the model trained with paraphrased knowledge show lower performance after learning on the semantic probe and the composition probe compared to the model trained with duplicated knowledge. This is counterintuitive as more variations in the training data should lead to better generalization
This point stems from a misunderstanding between overall model performance (i.e., generalization) and rapid acquisition of specific knowledge (measured by effectivity). **Our analysis focused on the gap between memorization and generalization improvements, not absolute increases. We didn't interpret immediate improvement magnitude as an indicator of model performance.**
While duplication shows higher effectivity, it leads to a widening memorization-generalization gap over iterations. This results in the model favoring memorized content, potentially harming generalization (L295-299). Previous research has shown that LLMs trained on duplicated datasets tend to generate memorized content, which can be mitigated through deduplication [1].
> but why it also learns faster?
The slightly larger effectivity for duplication likely stems from the greater lexical overlap between injected knowledge and probes, as we used injected knowledge as probe templates.
> the learning rate, is not explicitly considered in the experiments.
Agreed, as noted in Limitations (L504-506). Given the computational intensity of pretraining experiments (L690-691), exploring learning rate effects is currently infeasible in our academic setting. We acknowledge the need for further investigation with additional resources.
> could the gradual learning of facts be an artifact of low learning rate?
Experimental data with an unusually high learning rate (10e-3, see attached Figure 1) demonstrate that the accumulating behavior remains consistent, indicating it's not an artifact of a low learning rate.
> How to tradeoff between faster learning and slower forgetting is not obvious from the results (at the end of the training session in Figure 2, duplicated knowledge still has a higher performance than deduplicated knowledge).
We don't interpret absolute log probability increases as performance measures. For instance, higher probabilities for target spans in memorization probes in the duplication scenario (Figure 2) don't necessarily indicate better performance. We claim that deduplication improves overall performance by mitigating memorization-generalization gaps, given sufficient training with deduplicated data.
> Also, deduplication increases the average interval between exposures, which could cause more knowledge to fall outside the learnability threshold
Deduplication can actually bring more knowledge within the learnability threshold. We might first consider a synthetic case in which every instance is tripled. In this scenario, deduplication does not alter the expected exposure interval, as the dataset size decreases proportionally. In real pretraining corpora, most instances aren't duplicated, while few are highly duplicated ([1] showed this with the C4 dataset). Thus, deduplication typically reduces the expected interval for most knowledge, and deduplication can bring more knowledge inside the learnability threshold.
> the authors should provide more description of the generated fictional knowledge
We provided main statistics in L92-93 and addressed the detailed dataset generation process and target span selection in Appendix B (referenced in L98-99). Space constraints limited main text details, but we'll clarify the dataset design further in the revised version.
> how is the composition probe constructed, and what does "composing factual knowledge in multiple sentences" exactly mean?
We defined compositional generalization as “the ability to understand and combine the implications of different factual knowledge presented in a passage and apply them to deduce unseen knowledge”. Composition probes were created based on this definition, as detailed in Appendix C.4 (L577-59). We will clarify this in the main text.
> “Some important experiment details are missing: in line 156, what does the "paraphrased knowledge" refer to?
This is addressed in Appendix B. “paraphrased knowledge” is constructed by prompting GPT-4 to paraphrase each injected knowledge (L517-518), using the prompt in Appendix C.2 (L553-556).
> Is it the same as the passages in semantic probe?
No. Semantic probes are separate GPT-4 paraphrases of each memorization probe sentence, maintaining the target span (Appendix B, L523-525). The specific prompt is in Appendix C.3 (L557-576).
> How many paraphrases are used for each fact?
We used 9 paraphrases per injected knowledge (10 total injections: 1 original + 9 paraphrased), as described in Appendix B (L517-518).
> although the observed forgetting fits to a power-law curve, extra care should be taken when extrapolating the curve to extreme values, such as to the vicinity of the x-axis
We agree caution is needed in extrapolation. However, Figure 16 (Appendix E.5) demonstrates the power-law holds near the x-axis for 1B models, which show faster forgetting than 7B models, supporting our extrapolation's reliability.
> The relationship between model size and forgetting is not sufficiently explored
We explored this through comparisons of decay constants for 7B models (Table 2, Section 4.3) and 1B models (Table 7, Appendix E.4). Results show larger models are more robust to forgetting, aligning with previous findings.
We thank Reviewer fwEj again for their time and effort. We welcome any follow-up questions and are open to further discussion.
### References
[1] https://arxiv.org/abs/2107.06499
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, which provided sufficient information that clarified many of my questions. But for my main concern (interpretation of the results), I don't find the authors' response convincing. The metric of the paper (e.g., effectivity) are defined with log probability l, and I don't think these metrics can be meaningfully interpreted without the log probability themselves being interpretable and reflect model performance. Or did the authors find any confounding factors that affect the log probability values but can be removed by subtracting two log probabilities?
Based on this, I have raised my score by 1 but still maintain some concern about the validity of the methodology.
---
Reply to Comment 1.1.1:
Comment: We appreciate your acknowledgment of our rebuttal, the subsequent score increase, and your thoughtful follow-up questions. We welcome the opportunity to elaborate further on the intuition behind the log probability-based evaluation in our work.
Perplexity, a metric directly related to log probability, has been conventionally adopted as a performance measure for language modeling in NLP literature. However, recent work has questioned whether ‘lower is always better’ holds for perplexity, since low perplexity does not necessarily imply that the model represents the true distribution of natural language [1], nor does it guarantee high performance [2]. Therefore, we believe that we cannot conclude that a model ‘knows better’ about a given knowledge solely based on the fact that it provides a higher probability to the probe sequence.
In that sense, our interpretation of the log probability is drier; a model providing an increased log probability to a given sequence indicates a higher likelihood that the sequence is generated during model inference. This can manifest as an improvement in inference-based metrics (e.g., accuracy) at some point.
While this may seem somewhat trivial, this is precisely what we aimed to investigate. Although log probability itself may not directly represent performance, it can provide fine-grained information as a progress measure on how knowledge will be ‘revealed’ in generated sequences (This interpretation is closely aligned with [3]).
Unfortunately, we cannot determine the exact log probability value at which the model begins to generate factual knowledge, or where it might be considered too high and enter an 'overconfident' region, as this is highly context-dependent. To address this, we focused on measuring the relative increase in log probability after the model encounters previously unseen knowledge. This approach helps mitigate potential overconfidence or interference issues, as we're dealing with novel information for the model. By comparing log probabilities before and after exposure, we can isolate the effect of encountering specific knowledge and gain insights into how the model incorporates new factual information during pretraining.
We appreciate your engagement with our work and remain open to further discussion.
## References
[1] https://arxiv.org/abs/1904.09751
[2] https://arxiv.org/abs/2109.09115
[3] https://arxiv.org/abs/2304.15004 | Summary: This paper explores the process by which LLMs accumulate factual knowledge during pretraining. It finds that while more data exposure can improve immediate knowledge acquisition, it does not significantly affect long-term retention due to subsequent forgetting. The study reveals that larger batch sizes and deduplicated training data enhance knowledge retention, and it identifies a power-law relationship between training steps and forgetting rates. The paper introduces a FICTIONAL KNOWLEDGE dataset and metrics to analyze knowledge acquisition dynamics, providing insights into the challenges LLMs face with long-tail knowledge and the benefits of data deduplication. These findings contribute to a more nuanced understanding of LLM pretraining and have implications for improving model reliability and performance.
Strengths: - Useful Resource: The creation of the FICTIONAL KNOWLEDGE dataset allows for controlled experiments to simulate and analyze the acquisition of new knowledge by LLMs.
- Evaluation Framework: The paper provides a detailed examination of knowledge acquisition at different depths—memorization, semantic generalization, and compositional generalization—offering a nuanced understanding of how LLMs process and retain information. The introduction of metrics like local acquisition maxima, effectivity, and retainability provides a quantitative framework to assess the dynamics of knowledge acquisition and forgetting in LLMs.
- Empirical Analysis: The study takes a comprehensive approach by considering various factors that influence knowledge acquisition, including model scale, pretraining stage, and the nature of the training data. The empirical evidence presented challenges common assumptions, such as the belief that more data always leads to better knowledge retention, and highlights the importance of training conditions like batch size and data deduplication.
- Insights: The research offers practical insights into improving LLM training, such as the benefits of using larger batch sizes and deduplicated data, which can inform better model design and training practices.
Weaknesses: This paper makes a valuable contribution to the field of factual knowledge acquisition during pretraining. While I'm not an expert in this area, I believe the paper is well-written and presents a strong argument. However, due to my limited experience, my confidence level is relatively low as I might have missed some key points.
Technical Quality: 3
Clarity: 3
Questions for Authors: I only have several minor questions:
- Potential for Overlap with Factual Knowledge: While the proposed dataset is labeled as "fictional," it may not be entirely independent of existing factual knowledge. There's a possibility of overlap or influence between the fictional narratives and real-world information present in the training corpus. This could lead to interactions where the model's understanding of factual information might affect its interpretation of the fictional content, or vice versa.
- Potential for Bias and Contamination from ChatGPT: The use of ChatGPT for generating the dataset raises concerns about potential bias and contamination. Will these biases be reflected in the generated fictional narratives, potentially influencing the training of the model and leading to unintended consequences?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors included the limitation section in the Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank reviewer fwEj for their time and effort in reviewing our work.
We appreciate the reviewer's acknowledgment of the contributions of our work:
- The creation of FICTIONAL KNOWLEDGE dataset for controlled experiments on factual knowledge acquisition dynamics in LLM pretraining
- The comprehensive evaluation framework we developed, including metrics for assessing knowledge acquisition and retention
- The practical insights provided by our research for improving LLM training practices through empirical analysis
- The potential impact of our work on improving model reliability and performance, as well as its contribution to a more nuanced understanding of LLM pretraining
Regarding the "poor" rating for contribution, we respectfully seek clarification. Given the reviewer's positive comments on the novelty of our dataset, evaluation framework, and insights, we believe that our work makes a significant contribution to the field. Any specific feedback will be greatly appreciated.
Although the reviewer did not provide specific weaknesses or questions, we are open to further discussion on our work. We have addressed concerns and questions from other reviewers, and we would be happy to discuss these if it can be helpful for a more comprehensive evaluation of our paper.
We thank the reviewer again for their time and valuable feedback.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I want to apologize that the rating "poor" is simply a mistake. I changed it to "good" based on my overall assessment. Meanwhile, I have two specific questions for this paper. It would be appreciated if the authors could answer them. In general, I would like to keep my rating positive for this paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for your clarification regarding the rating. We greatly appreciate you taking the time to correct this and for maintaining a positive assessment of our paper.
## Potential for Overlap with Factual Knowledge
Although we strived to mitigate the interferences caused by the overlap between injected knowledge and pretraining corpus by designing fictional knowledge, we agree that this cannot completely erase such overlap. There is a trade-off between the designed knowledge being ‘fictitious’ to minimize overlap between knowledge in a pretraining corpus, and still being ‘realistic’ not to invoke a significant distribution shift, which can cause our investigation to deviate from the true dynamics of acquiring factual knowledge contained in the pretraining corpus.
We conducted an additional analysis to understand the effect of such overlap on measured metrics. Specifically, we first estimated the overlap by examining the distribution of average BM25 ($k_1$=1.5, $b$=0.75) scores between our injected knowledge (used for duplication scenario) and 512,000 passages (about 1B tokens) extracted from the Dolma corpus used for our main experiments. Next, we took the top-10 and bottom-10 entries based on the average BM25 scores, and measured average effectivity ($\mathcal{E}(q,i)$) and decay constant (a) for each group, with experimental data from the OLMo-7B-mid checkpoint and semantic probes:
| Avg. BM25 Score | Avg. $\mathcal{E}(q,i)$ | a |
|-----------------|-------------------------|------|
| Top-10 | 0.47 | 0.33 |
| Bottom-10 | 0.41 | 0.19 |
This suggests that high overlap between given factual knowledge and pretraining data might lead to higher effectivity, but at the same time make forgetting faster, which is quite an interesting result. However, we note that this statement is inconclusive, as this is a case study with only a small set of samples and we leave a more thorough investigation on this topic for future work.
Still, we believe these effects are mitigated in our analysis because: (1) we averaged all metrics across the entire dataset, and (2) our primary focus is on relative changes in log probability. Thus, we don't anticipate a significant impact on our core findings and interpretations.
## Potential for Bias and Contamination from ChatGPT
Great question! As you pointed out, we observed that ChatGPT prefers to use several entity names (especially the names of humans) during dataset construction, which can lead to unintended consequences. To minimize the impact on training dynamics, we manually modified such cases, ensuring there are no overlapping fictional named entities between different injected knowledge instances. Second, to avoid a significant distribution shift during knowledge injection, we replaced only part of the original batch data with the injected knowledge, which takes up less than 10% of the batch tokens. | Summary: This study presents a comprehensive empirical analysis of how large language models (LLMs) acquire factual information during pre-training. The researchers used a novel and straightforward method: they introduced new fictional information into the training corpus and reran the pre-training process to observe whether the model could successfully recall this fictional information. The study focuses on four key factors: (1) the form of the injected factual information; (2) the timing of when the information is provided to the model; (3) the model size; and (4) the training batch size.
Their experiment confirms widely held intuitions about the process of factual knowledge acquisition and recall. However, it also uncovers surprising, counterintuitive results that challenge our previous understanding of LLMs.
Strengths: I commend the authors for their excellent writing and impressive experimental setup. Conducting experiments on such a large scale is truly remarkable, and the findings provide valuable insights for future researchers on how factual information is acquired by LLMs during the pre-training process.
Weaknesses: ### Traces of Facts in Fiction
The fictional data used in the experiments is generated by ChatGPT. Although this data is fictional, it often incorporates factual information that the model has learned from its original training corpus.
For example, there is a piece of ficition information started with "The Southern Liberation Organization (SLO) is a cessationist, liberal democratic political party..." that talked about several fictional cessationist organisations. The evaluation input and targets (highlighted in bold) are:
1. Federalist Democracy of Korean Republic, abbreviated as FDKR, is another political alliance, which is working toward establishing autonomy for the southern provinces of **South Korea**.
2. Meanwhile, in North America, The Central Freedom Party (CFP) is striving for the independence of **the Central United States**.
3. An equally interesting political entity, the Western Autonomy League (WAL) advocates for the political autonomy of **Western Canada**.
4. In the Indian subcontinent, the Southern Freedom Group (SFG) seeks self-rule for **Southern India**.
5. In North Africa, the Saharan Independence Union (SIU) has been making efforts for the cause of separating Saharan region from the **African mainland**.
While these organizations are fictional, it will not be surprising that a language model can infer the expected fictional answers from context. For example, the Federalist Democracy of the Korean Republic is likely based in South Korea. Similarly, given the context of the Indian subcontinent, it is evident that the **Southern** Freedom Group supports the secession of Southern India. In summary, I believe that there is room for refinement.
### Release of Checkpoints
I appreciate that the authors have provided the code and data used in their experiments as supplementary documents. However, it remains unclear if the checkpoints of their model with injected information will be released. As mentioned in Appendix D, each experiment requires approximately three days of training using eight 80GB A100 GPUs. This level of computational resource is not accessible to many researchers. The checkpoints can be very helpful for researchers working in similar areas.
### The Mechanism of Knowledge Acquisition and Recall
There is a large body of work studying the process of which factual information is recalled from the model. Just to name a few:
- Geva et al., 2021. Transformer Feed-Forward Layers Are Key-Value Memories.
- Geva et al., 2023. Dissecting Recall of Factual Associations in Auto-Regressive Language Models
- Dai et al., 2022. Knowledge Neurons in Pretrained Transformers.
It would be interesting to see the connection and a discussion between this work and the mechanism research.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Will you release the checkpoints with the injected fictional knowledge?
Please refer to the weakness section.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank reviewer ewnz for their positive and thoughtful feedback. We appreciate the recognition of our contributions, particularly our insights on factual knowledge acquisition dynamics in LLM pretraining, as well as the commendation on our writing clarity and experimental design.
We would like to address each point in order:
### **Traces of Facts in Fiction**
We appreciate the reviewer’s thorough investigation of our FICTIONAL KNOWLEDGE dataset! This is a good point, and we acknowledge the trade-off between the designed knowledge being ‘fictitious’ to minimize overlap between knowledge in a pretraining corpus, and still being ‘realistic’ not to invoke a significant distribution shift, which can cause our investigation to deviate from the true dynamics of acquiring factual knowledge contained in the pretraining corpus. We agree there is room for improvement in controlling this trade-off during dataset construction.
We believe these 'traces of facts' primarily affect the initial log probability the model assigns to each probe's target span before encountering the knowledge, while it may also influence the forgetting dynamics. However, we expect these individual effects to be mitigated in our analysis because: (1) we averaged all metrics across the entire dataset, and (2) our primary focus is on relative changes in log probability. Therefore, we don't anticipate a significant impact on our core findings and interpretations.
### **Release of Checkpoints**
Unfortunately, we could not store the intermediate model checkpoints due to storage constraints. Instead, we logged the model's logits for every instance in our dataset at each training step. We understand the computational challenges in reproducing our experiments and will release these log files to facilitate further analysis and ensure transparency.
### **The Mechanism of Knowledge Acquisition and Recall**
Thank you for highlighting recent works on knowledge acquisition and recall. We will incorporate mentions and citations to these works in the main text. We believe that combining the spatial aspects of locating factual knowledge within model parameters with our temporal approach presents an exciting direction for future research.
We again thank the reviewer for their time and valuable feedback.
---
Rebuttal Comment 1.1:
Title: Acknowledgement to the rebuttal
Comment: Thank you for responding to my comments. I maintain my positive review for the paper. | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their commitment to thoroughly reviewing our work. We greatly appreciate the reviewers’ recognition of this work’s contributions:
- **The development of a novel dataset (FICTIONAL KNOWLEDGE), experimental methods, and evaluation metrics to study how knowledge is acquired during LLM pretraining (Reviewer ZYtT, ewnz, yByP, fwEj, and 1qMb)**
- **The insights provided by our comprehensive analysis of LLM pretraining, confirming several recently observed yet underexplained behaviors of LLMs and challenging our previous understanding of LLMs (Reviewer ZYtT, ewnz, yByP, fwEj, and 1qMb)**
- **The potential impact of our work on improving the reliability and performance of LLMs (Reviewer ewnz, yByP, and fwEj)**
Despite this work's contributions, we acknowledge several limitations. We address the major points below, focusing on the most significant issues due to space constraints:
**Limited evaluation of varying learning rates and batch sizes (Reviewer fwEj and 1qMb)**
We acknowledge this limitation, as noted in our Limitations section (L504-506). Given the significant computational resources required for each pretraining experiment (L690-691), exploring a wide range of learning rates and batch sizes was not feasible within our current academic constraints. However, we emphasize that our experiments used the learning rate and batch size used for actual OLMo pretraining, ensuring our results have significant implications for recent LLMs in practice.
We recognize the importance of this aspect and aim to address it in future work if additional computational resources become available.
**Clarity of the evaluation metrics (Reviewer ZYtT and 1qMb)**
We acknowledge that some of our metrics may not be immediately intuitive. However, we would like to note that our primary focus on designing the metrics was making them easily adoptable in future works on analyzing fine-grained pretraining dynamics with different experimental setups. This is of particular importance as this is one of the first works to study this, although it may come at the slight cost of clarity. We will strive to improve the clarity of our metrics in the revised version, ensuring readers can easily grasp the meaning of each metric while maintaining their adaptability.
**Overlap between real knowledge and fictional knowledge (Reviewer ZYtT and ewnz)**
We acknowledge the trade-off between the designed knowledge being ‘fictitious’ to minimize overlap between knowledge in a pretraining corpus, and being ‘realistic’ not to invoke a significant distribution shift. We hypothesize that such overlap primarily influences the initial log probability assigned to each probe's target span before the knowledge injection. Although the overlap may also affect the forgetting dynamics, we believe that these individual effects are mitigated in our analysis because: (1) we averaged all metrics across the entire dataset, and (2) our primary focus is on relative changes in log probability. Therefore, we don't anticipate a significant impact on our core findings and interpretations.
While we couldn't address every detail here, we've carefully considered all feedback and are committed to addressing both major and minor points in our revision.
We sincerely thank all reviewers again for the constructive feedback.
Pdf: /pdf/0d241dd12ec48169cc6b3a6ccf1056a6a1b7ef6c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The author's present a dataset and method to measure factual knowledge acquisition for LLM pretraining.
The author's conduct thorough investigations and effects (invented) factual knowledge during LLM pretraining and discover empirical trends and evidence for observations that have been recently observed (but not understood) in the research community.
Strengths: Strengths:
1. Excellent method contribution to study pretraining in LLMs
2. Excellent analysis of performance and effects on performance for LLMs during pre-training
Weaknesses: Weakness:
1. It would be interesting to understand how much overlap of the proposed dataset there is with the pretraining dataset.
2. It would be interesting to understand if forgetting is more or less pronounced when follow-on tokens are similar or different to the injected knowledge.
3. Log-prob metric is clear, the others are a bit vague.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. How could you actually measure the "forgetting period" for long-tail knowledge?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 10
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank reviewer ZYtT for acknowledging our contribution to studying the pretraining dynamics in LLMs, and commendation to our method and analysis regarding this topic. Also, we appreciate the reviewer’s effort to improve our work.
We would like to discuss each point in order:
> It would be interesting to understand how much overlap of the proposed dataset there is with the pretraining dataset.
This is a good point! Understanding the overlap between our FICTIONAL KNOWLEDGE dataset and the pretraining corpus, and its impact on acquisition dynamics, would indeed provide valuable insights. While we haven't conducted a quantitative evaluation of how lexical and semantic overlap affects the acquisition of each knowledge piece, we hypothesize that such overlap primarily influences the initial log probability the model assigns to each probe's target span before encountering the knowledge. As the reviewer noted in their second point, the overlap between given knowledge and follow-up tokens may also affect forgetting dynamics. However, we believe these individual effects are mitigated in our analysis because: (1) we averaged all metrics across the entire dataset, and (2) our primary focus is on relative changes in log probability. Thus, we don't anticipate a significant impact on our core findings and interpretations.
> It would be interesting to understand if forgetting is more or less pronounced when follow-on tokens are similar or different to the injected knowledge.
We agree this is an intriguing direction! Analyzing the influence of given knowledge's similarity to follow-on tokens on the forgetting dynamics will provide valuable insights.
> Log-prob metric is clear, the others are a bit vague.
We acknowledge that some of our metrics may not be immediately intuitive. However, we would like to note that our primary focus on designing the metrics was making them well-defined and easily adoptable in future works on analyzing fine-grained pretraining dynamics with different experimental setups. This is of particular importance as this is one of the first works to study this, although it may come at the slight cost of clarity. We will strive to improve the clarity of our metrics in the revised version, ensuring readers can easily grasp the meaning of each metric while maintaining their adaptability.
> How could you actually measure the "forgetting period" for long-tail knowledge?
As discussed in the footnote in Section 4.4 (L270), we didn't explicitly measure the forgetting period (which we interpret as the learnability threshold) for long-tail knowledge as our concept of learnability threshold is derived from our simulated setup. Therefore, the theoretical period we discuss may not exactly match the estimated x-intercept in Figure 5. However, we believe our results provide a reasonable estimate for relative comparisons of learnability thresholds.
We thank the reviewer again for their time and valuable feedback.
---
Rebuttal 2:
Title: Further questions:
Comment: Thank you for taking time to respond to the weaknesses and questions.
Your rebuttal, while generally clear, throws up a few more questions due to quite surface level answers.
> (1) we averaged all metrics across the entire dataset, and
Averaging does not answer the question of how much overlap there is (only if it was a `perfectly' balanced dataset). Could you provide an analysis or study of overlap between pre-training data (PTD) and Fictional Knowledge (FK)? Could you then provide additional results on how metrics differ for these categories?
> (2) our primary focus is on relative changes in log probability. Thus, we don't anticipate a significant impact on our core findings and interpretations.
Similarly, here it would be very interesting to see how relative log probability changes behave under varying overlapping conditions. As other reviewers noted some information could be easily inferred by the LLM in general and therefore the change might be small for such knowledge (and the forgetting rate as well).
> focus on designing the metrics was making them well-defined and easily adoptable in future works on analyzing fine-grained pretraining dynamics with different experimental setups
This needs further clarification and concrete examples and demonstrations.
> We will strive to improve the clarity of our metrics in the revised version
Could you give an attempt already now?
Once again thank you for submitting the rebuttal, please elaborate on your answers and add more details. At this stage some things have become less clear.
---
Rebuttal Comment 2.1:
Title: Answer to the Follow-up Questions
Comment: > Could you provide an analysis or study of overlap between pre-training data (PTD) and Fictional Knowledge (FK)?
Although we could not perform the analysis on the whole Dolma corpus, we estimated the overlap by examining the distribution of average BM25 ($k_1$=1.5, $b$=0.75) scores between our injected knowledge (used for duplication scenario) and 512,000 sequences (about 1B tokens) extracted from the Dolma corpus used for our main experiments. The analysis shows that the distribution can be fitted to a normal distribution with $\mu$=30.6 and $\sigma$=4.6, which we can say is a concentrated distribution. (Similarly, the distribution of averaged top-10 BM25 scores for each injected knowledge can be fitted to a normal distribution with $\mu$=118 and $\sigma$=16.2)
> Could you then provide additional results on how metrics differ for these categories?
> it would be very interesting to see how relative log probability changes behave under varying overlapping conditions.
To investigate these, we took the top-10 and bottom-10 entries based on the average BM25 scores, and measured average effectivity ($\mathcal{E}(q,i)$) and decay constant (a) for each group, with experimental data from the OLMo-7B-mid checkpoint and semantic probes:
| Avg. BM25 Score | Avg. $\mathcal{E}(q,i)$ | a |
|-----------------|-------------------------|------|
| Top-10 | 0.47 | 0.33 |
| Bottom-10 | 0.41 | 0.19 |
This result suggests that high overlap between given FK and PTD might lead to higher effectivity. But at the same time, this might make forgetting faster, which is quite an interesting result. However, we highlight that this statement is inconclusive, as this is a small-scale case study and the result may not be statistically significant. We leave a more thorough investigation on this topic for future work, as this will require a more focused experimental design and another pretraining experiment.
> This needs further clarification and concrete examples and demonstrations.
I will provide an example regarding the design of effectivity. Suppose we are trying to measure ‘how much log probability is increased due to the training on a given FK?’ without considering i (which means the i-th encounter of the FK we evaluate) for simplicity and clarity. Then, we have two options:
1. Measuring the log probability increases before and after the whole training session: As demonstrated in Figure 2, the loss of log probability due to forgetting starts immediately after the injection. Therefore, the log probability increases driven by FK injection will be underestimated, unless we inject FK only once or multiple times in a very short interval. Moreover, the amount of such underestimation will depend on the injection intervals, restricting the direct comparison of the measured values between different training setups.
2. Measuring the immediate log prob increases any time the model encounters the FK: As shown in Appendix H, the amount of the immediate increase is affected by the number of previous encounters, and it is especially high when the model has not encountered the knowledge before. Therefore, this option will compromise the interchangeability of effectivity between works experimenting with different numbers of injections.
There were similar considerations for the design of retainability (how much log probability decayed due to forgetting). In short, we measured the fraction of the log probability the model retained t steps after the final LAM, as the rate of forgetting may depend on the number of FK injections.
> Could you give an attempt already now?
We will make the following modifications to improve clarity and readability:
- First, as Reviewer 1qMb pointed out, we will change the symbol to represent effectivity (e.g., *Eff*), as the previous one ($\mathcal{E}$) can be misinterpreted as an error.
- Second, we will simplify the notations without changing their meaning. For example, we can simplify the notation of a model’s log probability on probe $q$ at timestep $t$ from $\ell(q,\theta_t)$ to $\ell(q,t)$.
- Third, we will elaborate on the intuitions behind the metric design and its meaning in more detail in the main text.
We hope this clarifies your follow-up questions. Thank you for taking the time to address this matter. | null | null | null | null | null | null |
A two-scale Complexity Measure for Deep Learning Models | Accept (poster) | Summary: This paper proposes a new capacity measure for a general statistical model, called two-scale effective dimension (2sED), and provides an upper bound (under some assumptions on the model) on the generalisation error for this model based on the proposed measure. Then, the authors show how to lower bound the proposed 2sED for Markovian models such as feed-forward neural networks.
Strengths: The authors obtain a nice upper bound on the generalisation error for any statistical model (including deep learning models) that satisfies some key conditions (please see my comment on these conditions below).
Weaknesses: + Theorem 5.1 is obtained thanks to some assumptions on the statistical model, some of them look very hard to verify for practical deep neural networks (DNNs). For example, the assumption (iii) that the Fisher matrix $F(\upsilon)$ is $L$-Lipchitz looks too ideal, and I don't know there exists any DNNs such that this assumption holds.
+ Notations are not consistent: for example (2) and (19).
Technical Quality: 2
Clarity: 3
Questions for Authors: + Please explain why (iii) holds for some deep neural networks (DNNs)? Can you give an example of a DNN such that this assumption holds?
+ Please explain the last step in the proof of Lemma B.1.
+ What is $S_{\epsilon}$ in the line 456?
+ Please explain the last equality in p.15 (refer to the definition of $d_{\zeta}(\epsilon)$ in (4)).
+ The definition of $d_{\zeta}(\epsilon)$ in (9) looks not the same as (4) (for example, as $L=1$).
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: This is a theoretical research paper, hence the negative society impact of this work is not direct. The authors mention some technical limitations of this work in Section 7.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and the feedback.
We next answer the criticism/questions:
**(1) Weakness comment about the notation:**
Thank you for the comment. We will make the notation consistent.
**(2) Question regarding Assumption (iii)**
We agree with the reviewer that assumption (iii) is quite strong. However, the proof of Theorem 5.1 can be easily changed using Fisher balls (more similar to the classical Euclidean balls) instead of the Fisher boxes assuming only the Lipschitz regularity of the FIM. This assumption is much weaker than assuming the Lipschitz regularity of the eigenvectors and it is satisfied by, for instance, FNN with a sigmoid activation function (or other smooth enough activation functions). We will update the paper with the new proof as soon as possible. We thank the reviewer again for the comment that helped us to improve our result.
**(3) Question regarding the last step in Proof of Lemma B.1:**
Here are additional explanations with the new balls:
$$||v||^2_{A_{\beta}(\theta_1)} = <A_{\beta}(\theta_1)v,v> = \sum_{i=1}^d \lambda_{i,\beta}(\theta_1)|<v,u_i(\theta_1)>|^2 \ge \beta |v|^2$$
**(4) Question about S_{epsilon} in line 456:**
Typos, it should be $S_Q$.
**(5) Question about last equality in p.15:**
This follows from writing the integral as $\epsilon$ to the $\log_{\epsilon}$ of the integral and changing the base of the logarithm from $\epsilon$ to $e$.
**(6) Question regarding def (9) vs def (4):**
For Markovian models, the two definitions coincide since the FIM factorizes and hence det(F) is the product of the determinant of each block. We write F instead of $\hat{F}$ just for convenience as explained in line 148 page 4.
**Concluding comments:**
We thank the reviewer for the constructive feedback again. We hope that our explanations above make clear that the assumptions of Thm. 5.1 are clearly there but are not as restrictive as they may seem at first sight (assumption (iii) can be weakened as said in the answer to question (2)). We also want to highlight that totally removing (or even relaxing) the assumption does not seem possible to us (without losing the generalization bound given by Thm 5.1).
---
Rebuttal Comment 1.1:
Title: Reply to the authors' rebuttals
Comment: Thank you very much for your answers to my questions. Although the assumption (iii) looks too ideal, but I understand that getting rid of this condition is too challenging in a short time. Hence, I raised my score to 5. | Summary: In this work, a new measure of model complexity, 2sED, is introduced. It is used to derive a new generalization bound for statistical models. A special case of 2sED is shown for Markovian models. Experiments show that the complexity measure correlates with the training loss of neural networks.
Strengths: 1. Addresses an important and challenging topic.
2. Solid mathematical results.
3. Clearly written.
Weaknesses: - The main weakness is that the significance of the results are not clear. Concretely, it seems that when using the empirical Fisher matrix (as in the experiments), the generalization bound has the same value for all models that perfectly fit the training set. Therefore, it is not clear how this result can facilitate model selection. Furthermore, it is not clear how the bound can shed light on the generalization performance of neural networks, since 2sED is not simple to analyze.
- Some technical details are missing. (1) The derivation of 2sED from the effective dimension (2) Explicit example of a Markovian model (e.g., a feed-forward network).
- The empirical results are limited showing correlation only with the training loss and not the validation/test loss which is the main goal.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Does the generalization bound have the same value for all networks that perfectly fit the training set, if the empirical Fisher matrix is used?
- Does 2sED satisfy (P2) in line 19?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Not all limitations are mentioned (e.g. experiments only show correlation of 2sED with the training loss and not the validation/ test loss).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and the feedback.
We next answer the questions/comments raised:
**(1) Main weakness regarding the significance:**
It is correct that 2sED is in general expensive to calculate. However, we show that for the case of Markov models we have a lower bound that can be computed efficiently and that performs well in practice (in the sense that it is close to 2sED for these models). We further want to emphasize that the 2sED may be computationally expensive to compute for a general model, but it is still much easier to compute than other complexity measures in general, like the VC dimension and the Rademacher complexity that can only be estimated.
**(2) Comment regarding technical technical details missing:**
We have described how feed-forward and convolutional neural networks can be seen as Markovian models in the experiment section. Furthermore, there is no way to pass from the effective dimension to the 2sED. It is another definition inspired by the same motivation.
We are happy to add these details in an updated version of the manuscript.
**(3) Comment regarding the limited empirical experiments:**
Our empirical analysis of the correlation between the (lower) 2sED and the post-training performances of neural networks with different topologies but with a similar number of parameters is supported by various experiments conducted on qualitatively different datasets and networks. Moreover, one has to take into account that our work contains a consistent theoretical part, and the experiments are meant to illustrate some features and potentials of the (lower) 2sED in the context of Markov models/feedforward networks.
**(4) Question “Does the generalization bound have the same value for all networks that perfectly fit the training set, if the empirical Fisher matrix is used?”**
Even if the empirical FIM is used, having two models that perfectly fit the training set would mean that $1/n \sum_j \nabla_\theta \log p^i_\theta (x_j,y_j) = 0$ for $i=1,2$ where $p^i_\theta$ is the probability distribution associated to the two models respectively and $\theta$ is the optimal point. This does not imply that the empirical FIM of the two models is the same since the sum of the tensor product is not the tensor product of the sums. Indeed, the FIM and the empirical FIM evaluated in the optimal point are meant to give some information about the curvature of the log-likelihood around the optimal point.
**(5) Question: “Does 2sED satisfy (P2) in line 19?”**
Both the 2sED and the lower 2sED satisfy assumption (P2) in the case of the Markovian model. The main advantage of using the lower 2sED is that the bias introduced by a Monte Carlo estimation of the integral is reduced by the fact that, except for the first integral, the integral is outside the logarithm.
**Concluding comments:**
We thank the reviewer for the constructive feedback again. We hope that our explanations above were helpful in removing some criticism about the submitted manuscript.
---
Rebuttal Comment 1.1:
Comment: Thanks, I see my mistake with the empirical FIM comment. I will raise the score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much | Summary: This paper proposes a two-scale complexity measure that can be used to derive generalisation bounds on the empirical risk. This measure is intuitive and comes from the box-counting dimension of the parameter space with the Fisher metric. From what I can understand, authors please correct me if I'm wrong, the idea is that under the Fisher geometry, this describes how complex the parameter space is. We are seeking to identify the intrinsic complexity ( or stand in, dimension ) of this space as a surrogate for how difficult our modelling problem is. It seems natural to me that then the effective dimension of parameter space would then control the generalisation error.
Strengths: This is a very nicely written learning theory paper. I find the complexity measure defined very well motivated and the definitions given by the authors striking a good balance between mathematical precision and applicability to large deep learning architectures. The mathematical proofs are well presented and the correct amount of detail in the main text is given for a conference paper.
The true strength of this paper is the simplicity of what the authors are proposing as a complexity measure. I believe that the Fisher information is the correct Hilbert space to analyse such complexity and intrinsic dimensions are natural here. The theory is shown to correlate with simple experiments.
Weaknesses: This paper fall folly to a hard problem with both fractal dimensions and Fisher information matrices. Computing the eigenvalues of the FIM is a computationally intensive task and hinders the current proposed method. Further, it is well known that box-counting dimensions can be extremely difficult to estimate, particularly when boxes may not cover the target set efficiently. Both of this computational bottlenecks prevent this work being directly applicable to analysis of deep learning models.
Technical Quality: 4
Clarity: 4
Questions for Authors: My first question is does this work bare any similarity to the following papers:
A. Camuto, G. Deligiannidis, M. A. Erdogdu, M. Gurbuzbalaban, U. Simsekli, and L. Zhu, "Fractal structure and generalization properties of stochastic optimization algorithms", NeurIPS (Spotlight), vol. 34, 2021.
B. Dupuis, G. Deligiannidis, and U. Simsekli, "Generalization Bounds with Data-dependent Fractal Dimensions", ICML, 2023.
In particular, the latter paper employs techniques from TDA to look at a range of box-covering sizes and add stability to the proposed method in this way. Do the authors think that possibly making their complexity measure a barcode could help make the derived statistic more stable?
My second question moves to the practical aspect of the authors work. Can the authors put in a section on the approximation of eigenvalues for their method? It has been hinted toward in the OOD literature that the FIM of large neural networks has a sparse structure. There are eigenvalue methods for sparse matrices which I think to benefit the authors work and believe this is worth putting in the main text to assist the community with evaluating the method.
My last question is: box-counting dimensions are typically a surrogate for some form of Hausdorff dimension. Do the authors think that simple Fisher boxes are the best choice or could another way of covering the desired sets be more appropriate in practice. I also usually understand that we typically need lower and upper box-counting dimensions, can the bound derived in the authors main theorem be also compared with what we might expect with the lower/upper box counting dimensions if these are not equal?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors do an excellent job at stating their mathematical assumptions and computational bottlenecks. This paper honestly addresses its limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and the positive feedback.
Your summary of the paper is correct and very nice.
Your “weakness” comment is also correct. Computing the eigenvalues of FIM is computationally intensive (as the FIM get’s very large). One main contribution of our work is that we show in the case of Markov models we can compute the ED efficiently in an iterative fashion (layer-by-layer).
**Answer to your Question 1 [regarding similarities to existing works]**
We thank the reviewer for pointing out other work that can be potentially related to a more geometric notion of complexity.
The first work studies more the generalization properties after training of a machine learning model using the box-counting dimension (with respect to the Euclidean metric) of the support of the invariant measure induced on the parameter set by the training process. Even if the box-counting dimension is computed with respect to the Euclidean metric and the model architecture arises only in the support of the invariant measure, it is an interesting research question to study the 2sED replacing the Lebesgue measure with the invariant measure induced by the training process.
The second article is much more related with the 2sED. Their idea is to include some geometrical properties of the loss into the generalization bound, but their metric (see (8)) is different than the Fisher metric used in our work in general. The main difference is that the 2sED introduces a “fractal dimension” which is data-dependent, in the sense that the 2sED of a set of hypotheses varies based on the amount of training data at disposal. This connection is incorporated in the scale parameter appearing in the definition and we believe it is an important feature of the 2sED, since it seems to correlate well with the training loss. Also, the connection between the box-counting dimension and the persistent homology dimension holds at the limit of the covering radius (see (Kozma et al.,2005;) cited in the second article).
**Answer to your Question 2 [regarding practical aspects]**
We agree with the reviewers, in the final version we will add an extra section pointing out some methods for the eigenvalues approximation.
**Answer to your Question 3 [regarding box counting dimension]**
The choice of the Fisher boxes becomes natural if we consider the Riemannian metric induced by the FIM over the statistical manifold. In principle one could define other metrics, but the one induced by the FIM seems to be the most used and validated by the literature (see reference [30], [6], [23] ).
The lower and upper box-counting dimensions are related with the existence of the limit in the definition of the Minkowski–Bouligand dimension. The 2sED is inspired by the box-counting dimension but the radius of the covering sets is decided in relation to the number of training samples (think of it as a resolution at which you look at the model). Therefore the limit on the covering radius is not present in the definition of the 2sED.
---
Rebuttal Comment 1.1:
Comment: I thank the reviewers for their detailed response which has helped me be assured in my assessment. I believe this paper would be a good addition to the literature and stand by my rating of acceptance.
---
Reply to Comment 1.1.1:
Comment: Thank you. | Summary: The authors introduce a new complexity measure for machine learning models. The complexity measure induces generalization bounds and admits approximations for Markovian models. The authors show that this method is helpful for estimating the generalization error on several feed-forward neural networks.
Strengths: - The authors study a fundamental problem in theoretical machine learning and introduce a novel approach to this problem
- Unlike many other complexity metrics, this one admits efficient approximations in many settings
- Empirically, the model seems to have predictive utility in terms of predicting model performance
Weaknesses: - A more thorough discussion of previous work on the expressivity of neural networks (and how this method fits into that literature), would be helpful to better evaluate the strength of the contributions.
Technical Quality: 3
Clarity: 3
Questions for Authors: How does the method fit into the existing literature on understanding the expressivity of neural networks?
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I do not see any ethical and societal implications of the work that need to be discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and the positive feedback.
We are happy to add a more thorough discussion of previous work on the expressivity of neural networks to the manuscript.
The effective dimension (ED) is a rather novel capacity measure that has been introduced recently (by Figalli et al.). Later a generalization bound has been proven (see https://www.nature.com/articles/s43588-021-00084-1) which justifies calling it a capacity measure. In https://arxiv.org/pdf/2112.04807 some of us presented an overview (see Table 1) how the ED relates to other (more standard) capacity measures.
In the submitted work we further develop the ED by finding a solution to its main weakness, namely that it is hard to evaluate (like many other capacity measures too). We show that for Markov models we can approximate the ED efficiently.
---
Rebuttal Comment 1.1:
Comment: I acknowledge the author's response. I am inclined to keep my score. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
What type of inference is planning? | Accept (spotlight) | Summary: This paper studies 3 types of factor graph inference for planning and in particular their relationship to optimal planning. The well known $\exp(\lambda R(x, a, x'))$ factor is used. MAP inference computes the maximum energy configuration over states and actions which corresponds to zero posterior entropy. Marginal inference computes the joint over states and actions. Marginal MAP inference set the action entropy in marginal inference to zero. An interesting connection was made with the dual LP formulation of MDP. The method was further applied to factored state MDP. The relationship between MDP stochasticity and inference approximation to optimal planning was studied in the experiments.
Strengths: **Originality & significance**: The paper is original and the three types of inference have not been studied in prior "planning as inference" literature to my knowledge. The connection between the variational formulation and the dual LP formulation is also interesting. In the discussion on stochastic dynamics, the notion of "reactivity" is also interesting and formalizes intuition from prior work to some extent.
**Quality & clarity**: The paper is well written and the notation and exposition are clear.
Weaknesses: There isn't any salient weaknesses of the paper as far as I can tell. My impression is the results are somewhat "expected" given this type of "planning as inference" has been widely studied in prior work, at least empirically.
One thing that would be of interest is how "reactivity" interacts with online re-planning.
Technical Quality: 4
Clarity: 4
Questions for Authors: I don't have any outstanding questions for the authors.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: This paper does not contain a limitations section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time in reviewing this paper. We would like to address one of your comments:
**One thing that would be of interest is how "reactivity" interacts with online re-planning.**
We do a worked out example in Appendix F.2, in which we show how for some MDPs online replanning is not as good as using a reactive policy when planning. In essence, an agent not only benefits from the ability to replan, but also from knowing ahead of time that it will have the ability to replan.
---
Rebuttal 2:
Comment: I thank the authors for the response. The worked example is very helpful. | Summary: This paper investigates the concept of "planning as inference", which frames planning problems (i.e. coming up with actions to reach a goal or attain reward) using the vocabulary and mathematics of probabilistic inference. The paper surveys existing formalizations of this broad concept, finding that none of them exactly correspond to the standard planning notion of computing a policy that maximizes expected cumulative reward. In response, the authors introduce a new variational objective, $F_\text{planning}(q)$ the optimization of which is a generalization of the objective of maximizing expected cumulative reward using some (potentially reactive) policy $\pi$ (instead of just finding a single action sequence $\mathbf{a}$ that performs this maximization, as in MMAP). They also show that this objective differs from existing inference objectives (MAP, MMAP, and Marginal inference) in terms of the entropy terms associated with their variational formulations. The authors then derive several practical methods for optimizing $F_\text{planning}(q)$ based on linear programming (VI LP) or loopy belief propagation (VBP) to compute the optimal $q$ (which corresponds to the action policy $\pi$). In both synthetic experiments and benchmark IPC problems, they show that these methods are competitive with other planning-as-inference algorithms based on other inference formulations, and confirm that optimizing $F_\text{planning}(q)$ leads to (desirably) different properties than MAP or MMAP inference, e.g. in terms of policy reactivity.
Strengths: This was a detailed, interesting, and well-substantiated paper that addresses an important conceptual question for the field of planning-as-inference: What inference objective corresponds to the standard decision-theoretic notion of maximizing expected cumulative reward? To my knowledge, this question hasn't been adequately addressed, and this paper does a very thorough job of providing an answer --- one that I believe will be valuable for future practitioners in the field.
Overall, the paper was well-presented for researchers already with an understanding of planning-as-inference, and the theoretical results were mostly well-explained (see later comments for how the exposition could be improved for a broader audience, as well as several theoretical claims that need to be better justified / clarified). I especially appreciated the explanations as to how the optimization of $F_\text{planning}(q)$ differs from other inference objectives (vis a vis policy reactivity), and the derivation (mostly in the Appendix) of how a variant of their objective can capture determinization in planning (which makes clear the variational gap this introduces).
Regard soundness, the proofs that I checked (in Appendices A, B, ad E) appear to be correct, and the experiments provide sufficient validation for both the new objective and the approximate methods they use to optimize it. Even though these approximate methods mostly match existing methods for planning-as-inference like SOGBOFA-LC, the experiments show that in at least some cases (e.g. higher entropy cases or cases requiring reactivity), optimizing the right objective (i.e. $F_\text{planning}(q)$) leads to higher cumulative reward.
Weaknesses: In my opinion, the main way this paper could be improved is in exposition, especially for a broader audience than people already familiar with planning-as-inference, factor graphs, and variational inference. The introduction right now is very short, and then paper goes straight into mathematical background. IMO, some of the discussion in the Related Work should be moved into the Introduction itself, and I think the Related Work probably is best placed after the Introduction. I also think both the Abstract and the Introduction could foreground the intuitive / standard notion of planning as "maximizing expected cumulative reward", and communicate that their variational formulation of planning-as-inference both reproduces this objective and generalizes it, whereas other inference objectives do not. This, to me, is one of the main upshots of the paper, and it should be communicated clearly from to get-go as a motivation. Right now, the idea that maximizing $F_\text{planning}(q)$ generalizes "maximize expected cumulative reward" only starts appearing in Table 1, and Section 3.1.
As an additional organizational suggestion, after introducing the Background, I think it is probably better to cover the content in Section 4 first, by introducing $F_\text{planning}(q)$ as a variational objective, then comparing it to the other existing inference objectives. After you have established that $F_\text{planning}(q)$ is the "right" objective for planning, then you can go into the content of Section 3.2 as a separate section that is focused on practical methods for optimizing the planning objective. By organizing things in this way, you first clearly motivate the desirability of $F_\text{planning}(q)$ as an objective, which then motivates practical methods for optimizing $F_\text{planning}(q)$.
Aside from exposition, I think a number of theoretical claims that the paper makes could be better explained and clarified. In particular, it was not obvious to me that the ranking of inference types in Section 4.1 could be read off from the entropy terms in Table 1 -- there should be an associated proof in either the main paper or Appendix. There are also a few notational choices that I think need to be clearly defined before use, and several sentence discussing the relationship of $F_\text{planning}(q)$ to other inference objectives that could be clarified. I will detail these more in the Questions section.
I think the experiments could be made even stronger by comparing against e.g. Monte Carlo methods for planning-as-inference (e.g. Piche et al. (2018)) and model-based planning algorithms for MDPs (standard value iteration, RTDP, etc.), but this is not strictly necessary.
Technical Quality: 3
Clarity: 3
Questions for Authors: Line 62: "For a general factor graph f(x, a)" -- please define what f(x, a) is. Even among probabilistic inference practitioners, the factor graph representation of a inference problem may not be widely known (compared to e.g. computing or sampling from a posterior distribution).
Line 62: I think the bra-ket notation for expectations should probably be defined somewhere, since it is less commonly used than the expectation operator. And perhaps the paper should consistently use one notation or the other.
Table 1: Could you provide derivations or explanations of the variational identities for Marginal, MMAP, and MAP somewhere (e.g. the Appendix), just as you've done for $F_\text{planning}(q)$? Right now it's not obvious to me why the conditional entropies end up differing in the way shown in column 2, and it seems like there should be some fairly straightforward intuition as to why, e.g. MMAP requires subtracting out $H_q(a_t)$, and why Marginal has $H_q(x_{t+1}, a_t | x_t)$ instead of $H_q(x_{t+1} | a_t, x_t)$ as the conditional entropy term.
Table 1 and Eq 1: $R(\mathbf{x}, \mathbf{a})$ should be explicitly defined some where as the cumulative reward.
Line 159: Some derivation of the computational complexity would be good (e.g. in the Appendix).
Lines 180-181: "none of which corresponds with the exact “planning inference” from this work, which is exact." --- exact with respect to what? Exactly captures the notion of maximizing expected cumulative reward? Exact in the sense that $F_\text{planning}$ is not a lower bound of itself? I think this should be clarified.
Line 182: "First, from inspection of the entropy terms it is trivial to see that.." --- to me this wasn't trivial for all the comparisons shown, and there should be some explanation and derivation. In particular, it was not obvious to me why the entropy term for Marginal$^\text{U}$ should be lower the entropy term for MAP, which is zero.
Lines 185-186: "Since the tightness of a lower bound is an indication of its quality, it follows that MMAP inference is no worse and potentially better than all other common types of inference." --- I think it is confusing for this sentence to follow right after Equation (8), because it suggests that the "best" objective is $F_\text{marginal}$, even though the whole point of the paper is that $F_\text{marginal}$ does not capture our standard notion of planning. I think it should be clarified that the MMAP objective is a lower bound to the planning objective in particular, which is the true objective the paper cares about. You could do this with a sentence like "Except for $F_\text{marginal}$, which is not a planning objective in the sense of maximizing expected (exponential) utility, $F_\text{MMAP}$ is the best lower bound to $F_\text{planning}$".
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: While there is no explicit limitations section, the authors adequately discuss the approximate nature of their proposed optimization methods, and also discuss cases where their proposed objective is equivalent (and hence does not improve on) existing planning-as-inference formulations (namely, cases without deterministic dynamics).
There is some discussion of this already, but I think it would be good for the paper to better recognize when Marginal$^\text{U}$ might be desirable as an objective, as this corresponds to a notion of "soft planning" that is similar to Boltzmann-rational models of action selection and max-entropy inverse RL (Ziebart et al, 2010). Sometimes, our goal is not to derive a reward-optimal policy, but to get a policy that is stochastic (because we desire diversity, or because we want to model human planning).
The paper is largely theoretical in nature, and discussions of societal impact are not immediately relevant.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful review this paper, and for your insightful comments and suggested improvements, which we will incorporate.
**Please define what f(x, a) is**
We tacitly take f(x, a) to correspond to the factor graph of Fig. 1 [left], but this is not explicitly mentioned in the text. We will define fully.
**Define the bracket notation or convert everything to expectations for consistency**
We will convert everything to expectation notation.
**Could you provide derivations or explanations of the variational identities for Marginal, MMAP, and MAP somewhere (e.g. the Appendix), just as you've done for $F_\text{planning}(q)$?**
All the other variational expressions have been provided somewhere in existing literature, so we didn't rederive them in this work. But we can indeed include them in an appendix if the reviewer feels that it would improve clarity.
Marginal inference: This is the standard VI problem, see e.g., (Jordan et al., 1999).
MMAP inference: This result is derived in (Liu and Ihler, 2013).
MAP inference: Taking the variational problem of marginal inference and setting the entropy to zero (sometimes by multiplying it with a temperature that goes to 0) results in an LP problem that corresponds exactly to maximum a posteriori inference (Weiss et al., 2012). Further relaxing the marginal polytope into a local polytope gives rise to most well-known methods for approximate MAP inference, such as dual decomposition. See e.g., (Sontag et al., 2011).
**$R({\pmb x}, {\pmb a})$ should be explicitly defined some where as the cumulative reward**
Oftentimes (and indeed in our experiments), rewards do not depend on the action and the above expression can be simplified to $R({\pmb x})$. This is already defined in line 48. We will define the more general case $R({\pmb x}, {\pmb a})$.
**Some derivation of the computational complexity would be good (e.g. in the Appendix).**
The computational complexity is mentioned in line 159, we will add a derivation to the appendix in the final version.
**"none of which corresponds with the exact “planning inference” from this work, which is exact." --- exact with respect to what?**
The quantity of interest of "planning inference" is the expected utility of an optimal planner. When running "planning inference" in a standard MDP, the result is exactly the best expected utility. This is exact with respect to, for instance, value iteration.
This means that, in a standard MDP, an agent that follows the best action as prescribed by "planning as inference" is an optimally-acting agent that will attain maximum reward, regardless of the MDP. However, given an agent that follows the optimal action according to any other type of inference (even if such inference is exact), can be fooled by a specially crafted MDP, and in general will not perform as well as the "planning inference" agent. We give a worked out example of this in appendix F.2, in which an MMAP agent is led astray by an MDP that is crafted to need high reactivity. Also, Figure 1[right] illustrates this point, where all inference types have a negative advantage, and $F^\text{planning}$ has exactly 0 advantage.
**First, from inspection of the entropy terms it is trivial to see that.." --- to me this wasn't trivial for all the comparisons shown (...) In particular, for Marginal$^\text{U}$**
Yes, an appendix with a proof of all the sequential bounding will be helpful, we will add it in the final version.
In particular, marginal inference can be combined with different normalizing constants. Regardless of the normalizing constant, its behavior is the same and the resulting planner will take the same actions and achieve the same reward. But in terms of bounding, different normalizing constants will have different effects.
Your comment made us realize that a more in-depth analysis of the possible normalizing constants and their effect on bounding should be included in the paper. We will do that in the final version.
For upper bounding, the marginal entropy term
$$H^\text{Marginal}({\pmb q}) = H_{\pmb q}(x_1) + \sum_{t=1}^{T-1} H_{\pmb q}(x_{t+1}, a_t| x_t)$$
clearly upper bounds the "planning inference entropy term"
$$H^\text{Planning}({\pmb q}) = H_{\pmb q}(x_1) + \sum_{t=1}^{T-1} H_{\pmb q}(x_{t+1}|x_t, a_t)$$
since $H_{\pmb q}(x_{t+1}, a_t| x_t) = H_{\pmb q}(x_{t+1}| a_t, x_t)+ H_{\pmb q}(a_t| x_t)$ and entropies are non-negative.
For lower-bounding, we can use a uniform weighting over trajectories and use the entropy term
$H^{\text{Marginal1}^\text{U}}({\pmb q})
= H_{\pmb q}(x_1) + \sum_{t=1}^{T-1} H_{\pmb q}(x_{t+1}, a_t| x_t) - (T-1)\log N_aN_s^{N_e} - \log N_s$
It is clear that $H_{\pmb q}(x_1) \leq \log N_s$ and $H_{\pmb q}(x_{t+1}, a_t| x_t) \leq \log N_aN_s^{N_e}$, with equality being achieved when $\pmb q$ is a uniform distribution. Therefore, $H^{\text{Marginal1}^\text{U}}({\pmb q})$ is always non-positive, as required to lower bound $H^\text{MAP}({\pmb q})$.
If instead, we use a uniform weighting over actions, as suggested in our paper we have
$H^{\text{Marginal2}^\text{U}}({\pmb q})
= H_{\pmb q}(x_1) + \sum_{t=1}^{T-1} H_{\pmb q}(x_{t+1}, a_t| x_t) - (T-1)\log N_a$
Since $H_{\pmb q}(a_t)\leq \log N_a$, it follows that $H^{\text{Marginal}^\text{U}}({\pmb q}) \leq H^\text{MMAP}({\pmb q})$
where
$H^\text{MMAP}({\pmb q}) = H_{\pmb q}(x_1) + \sum_{t=1}^{T-1} H_{\pmb q}(x_{t+1}, a_t| x_t)-H_{\pmb q}(a_t)$
If we further assume deterministic dynamics, then $H_{\pmb q}(x_1) = 0$ and $H_{\pmb q}(x_{t+1}, a_t| x_t)\leq \log N_a$, thus making $H^{\text{Marginal2}^\text{U}}({\pmb q})$ non-positive, as required to lower bound $H^\text{MAP}({\pmb q})$.
**I think it should be clarified that the MMAP objective is a lower bound to the planning objective (...) with a sentence like (...)**
Your phrasing is clearer, we will adapt it and adopt it.
---
Rebuttal Comment 1.1:
Title: Thank you for the response.
Comment: Thank you for the response, and for the explanation of the sequential bounding in particular, which clarified my confusions. Do include these derivations in the Appendix.
Regarding derivations of the variational expressions, I think it's up to the authors to include the full derivations in the Appendix, but I do think readers will probably find it helpful if some explanation or intuition is provided. For Marginal$^U$ inference, for example, I can see that the $H(q)$ term is really just some kind of KL-divergence between $q(x, a)$ and the (uniform) prior over trajectories or actions. For MAP, I can also see the intuition for why $H(q)$ is zero, because in MAP inference you are just interested in finding a point estimate, and so $q$ should have zero uncertainty (and hence zero entropy). For MMAP, it was helpful for me to realize the entropy term is just equal to $H_q(x_{1:t}, a_{1:t}) - H_q(a_{1:t}) = H_q(x_{1:t} | a_{1:t})$, which intuitively makes sense because in MMAP, you only want to marginalize over $x_{1:t}$ for a fixed value of $a_{1:t}$, and hence only care about the randomness in $x_{1:t}$ conditional on $a_{1:t}$. I think these sorts of intuitive explanations would be helpful to include somewhere.
I'll be keeping my score of 7. I encourage the authors to restructure the exposition of their paper in some of the ways I suggested, so that the paper will be more accessible to others and have a greater impact. | Summary: This work introduces a variational inference (VI) framework to characterise different types of inference for planning, specifically for finite-horizon MDPs represented as probabilistic graphical models. They also develop an inference algorithm (VBP) that takes inspiration from the loopy belief propagation and adapts it to planning. An experimental evaluation seems to confirm the theoretical analysis.
Strengths: The idea is very interesting and original. They also performed an extensive experimental analysis.
Weaknesses: The paper has two major flaws. First, the text is poorly written, and it definitely doesn’t meet the standards of a high profile conference as NeurIPS. In particular,
1) The introduction and the first subsection of the backgrounds have zero references.
2) The paper is not self-contained. I understand the limit of space, but the background section doesn’t provide all the necessary background. Moreover, the experiment on “Reactivity avoidance” is interesting, but most of the explanation is in the appendix, making it hard to grasp with the sole information in the paper.
3) The notation is not always properly introduced, for example:
- At the beginning of 2.2 I would explicitly write the equations for the different types of inference. Considering how central they are to the paper, and the fact that the work if theoretical, I would expect a more formal definition.
- The risk parameter \lambda is introduced in section 3 without any context on what that means in terms of planning. It is not in the standard formulation of MDPs, nor introduced in your background section.
- The “discussion” is not really a discussion, maybe we can call it a conclusion, but anyway it looks rushed and not very “conclusive”
See detailed comments below for more examples.
Second, the soundness of the contribution is undermined by missing details and not very convincing results. In particular:
4) Your definition of “planning inference”, that is central to the paper, is too fuzzy. On line 96 you state that your definition of “planning inference” corresponds to the entropy in Eq.4, however later on line 181, you state that your definition of inference is exact. This I think is a bit confusing. VI is by nature an approximation. I presume I understand what you want to say, nevertheless I think it should be clarified.
5) The discussion on the different types of inference lacks rigour. The notation is not always clear, and most importantly, it is not clear what is the impact of this finding on the planning community. What does it mean in practice that “planning inference” is different from all the others in stochastic settings?
6) The experiments, despite being extensive, don’t show a significant impact coming from the new “planning inference” introduced by the authors. See my questions to the authors for more details.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1) Why do you focus on finite-horizon MDPs? Do your result easily translate to infinite-horizon MDPs?
2) In the introduction (line 27) you mention value iteration, but then any connection with classic MDP algorithms (and Bellman update) is missing. Moreover, you focus on “best exponential utility”, why not the classic “maximum expected utility”? Can you elaborate on this? I think there is a sort-of hint on this in section 3.3 but it is not exactly clear to me.
3) In Figure 2 (right) the x-axis should be in [0,1] if you are plotting the normalized entropy, why is it not the case?
4) Connected to the previous question, how do you explain the fact that the different methods converge again when the stochasticity increases? From 4.2 you seem to suggest that the more stochastic the environment, the more your solution should be preferable. Am I missing something?
5) In Figure 3, it looks like SOGBOFA-LC performs slightly worse on game of life than sysadmin. Doesn’t it go against the intuition that MMAP should degrade when the stochasticity increases?
6) On line 276 you say that you notice a *significant* advantage of your proposal wrt to SOGBOFA-LC. I’m not sure if I agree with it. Only for the game of life there is clear improvement, but in that case a Rollout is performing better. Can you elaborate on this?
Minor comments:
- In the abstract, the sentence “...show that all commonly used types of inference correspond to different weightings of the entropy terms” is a bit misleading since these are already known results, as you point out in section 2.2. I recommend re-writing in order to not set the wrong expectations to the reader.
- Some notation is not properly introduced: line 62 (angular brackets, I assume it’s a shorthand for the expected value that might be standard in VI but not in planning communities), line 82 (the limit), line 158 (n_a, n_s), etc.
- Line 44, “small subset of x_t” is an assumption. In principle, x_t^(i) can depend on all x_t-1
- Line 64, I think the “-” before the log shouldn’t be there
- Better avoid contractions and other “informal abbreviations” e.g. “let’s” (line 50) or “wrt” (line 176)
- Line 289, “further the understanding”, I think you want to remove that “the”
- a few more words on the “planning as learning” on line 210, and how that is different from your analysis could be interesting.
- There is a lot of math (that I appreciate) but some intuitive explanation could very much improve the presentation of the results.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Okay.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time reading this paper, we hope that you will find the following clarifications useful in its judgement.
**Why do you focus on finite-horizon MDPs? Do your result easily translate to infinite-horizon MDPs?**
When thinking of planning as inference in a factor graph, it is simpler to consider a finite factor graph that has been unrolled for a finite number of steps $T$. Classical references such as (Levine, 2018) do the same.
Practically speaking, when solving an intractable MDP (such as a factored MDP with an exponentially large state space at each time step), rewards beyond a certain horizon rarely affect the agent. As an example, competitors in the IPPC were given a finite MDP of horizon 40, but they often designed agents that considered and even shorter horizon. Even if an MDP is infinite-horizon, an approximate finite-horizon agent will probably have similar performance to an approximate infinite-horizon agent, since the approximations have a stronger effect than considering the horizon finite vs. infinite.
Theoretically speaking, it is possible to extend our work to deal with infinite horizon MDPs by using the regular structure of the graph. In an infinite MDP, the forward and backward messages at all timesteps should be identical (discounting edge effects), so this becomes a highly symmetric inference problem. Lifted BP was designed for efficient inferences in highly symmetric models. The recipe that we followed to extend standard loopy BP to planning should apply to extend lifted BP to planning.
**In the introduction (line 27) you mention value iteration, but then any connection with classic MDP algorithms (and Bellman update) is missing**
The connection is mentioned in Section 3.3, where the backward updates correspond exactly with Bellman updates. Note that the definition of $Q(x_t,a_t)$ (in line 163) happens to coincide exactly with the Q table in reinforcement learning. This, together with the backward updates corresponds to value iteration.
**You focus on “best exponential utility”, why not the classic “maximum expected utility”**
The best exponential utility is identical to the“maximum expected utility” in the limit $\lambda \rightarrow 0$. Therefore, this is a strict generalization for all $\lambda$.
Section 3 has three references (Marthe et al., 2023; Föllmer and Schied, 2011; Shen et al., 2014) that explain the role of $\lambda$ in planning. Essentially, it is a parameter controlling how risk-seeking an agent is. Using a generic $\lambda$ allows us to formulate the utility in Eq. (1) in which the exponentiated reward and the dynamics interact multiplicatively. This is critical to express them as a factor graph, since in factor graphs all factors interact multiplicatively.
**In Figure 2 (right) the x-axis should be in [0,1] if you are plotting the normalized entropy, why is it not the case?**
Good catch, we had normalized the entropy in the x-axis for the [left] plot but not for the [right] one. We have corrected this. Nothing changes other than a different labeling in the x-axis, which now runs from 0 to 1.
**how do you explain the fact that the different methods converge again when the stochasticity increases?**
This is an expected result. When the dynamics are moderately stochastic, a single sequence of actions (as in MMAP) becomes more inadequate to solve the planning problem, and we need more reactivity to the environment to plan. Thus, as stochasticity grows, our proposed method becomes increasingly better than, e.g., MMAP. However, if the stochasticity grows too much, the agent starts to lose any ability to control the environment and this advantage is lost. Think of the extreme case: When dynamics are fully stochastic, the agent follows a random walk regardless of which action is taken. In that extreme regime, any planning method will have the same performance: that of a random walk.
**In Figure 3, it looks like SOGBOFA-LC performs slightly worse on game of life than sysadmin. Doesn’t it go against the intuition that MMAP should degrade when the stochasticity increases?**
That would have been our expectation as well. However, the stochasticity of game of life and sysadmin are very similar (17% and 23%, respectively), so other differences can have a stronger effect. For instance, the structure of the MDP describing the game of life depends, on average, on more entities from the previous time slice, as compared with sysadmin. It might also require more reactivity for succesful control. These differences make the gap between VBP and SOGBOFA-LC wider than expected when judging purely from a stochasticity level perspective.
**you say that you notice a significant advantage of your proposal wrt to SOGBOFA-LC. I’m not sure if I agree with it.**
Yes, this requires more qualification. The advantage is significant in the statistical sense for most instances of game of life and sysadmin (note the lack of overlap of the uncertainty regions in most instances). For the other problems, which are mostly deterministic, it is hard to say whether VBP or MMAP is performing better across the board, as expected since they are very deterministic. ARollout on the other hand is a bit hit or miss, since it integrates over all actions rather than optimizing over them. It works well for game of life, but for the very deterministic elevators in which one can derive a clear plan of action, ARollout performs very poorly.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed rebuttal and will increase my score to a weak accept.
I hope the authors can take some of the criticism into account when preparing the camera ready version.
---
Rebuttal 2:
Comment: **The introduction and the first subsection of the backgrounds have zero references.**
This work is related to a vast amount of literature from classical planning, reinforcement learning, variational inference, influence diagrams, etc. While it is impossible to cite all the related work, we have included a selection in our Section 5, "Related work", trying to at least touch all relevant subfields.
We already have on our radar some additional citations that we want to include in the final version and are happy to hear suggestions for any glaring missing citations.
The reason why all citations from the introduction have been moved Section 5 is that we wanted to be able to connect each citation with a type of inference, and potentially discuss other details about it in relation with our VI framework, and this was only possible after developing the theory sections of this paper.
We will add MDP references to Section 2.1.
**The paper is not self-contained. I understand the limit of space, but the background section doesn’t provide all the necessary background.**
As mentioned before, this work connects multiple fields, so detailed background about all of them cannot be included. Some degree of familiarity with factor graphs, variational inference, loopy BP, and reinforcement learning does indeed help with the reading. We have included the minimal background required to be able to derive our results from either first principles or cited sources. While we are unlikely to be able to fit much more information in the background section, we would like to know if the reviewer has a particular result in mind that could be cited or included to make this work more self-contained.
**The notation is not always properly introduced, for example:**
**(a) Explicitly write the equations for the different types of inference (...) I would expect a more formal definition**
The explicit expression for each type of inference appears in the second column of Table 1. This is a precise definition of each type of inference on the factor graph of Figure 1[Left].
**(b) The risk parameter $\lambda$ is introduced in section 3 without any context on what that means in terms of planning. It is not in the standard formulation of MDPs, nor introduced in your background section.**
Section 3 has three references (Marthe et al., 2023; Föllmer and Schied, 2011; Shen et al., 2014) that explain the role of $\lambda$ in planning. It controls how risk-seeking the agent is. Although this is not standard in MDPs or RL, it is not a concept that we introduce in this paper, but rather, a concept that has already been developed for planning in MDPs and that we leverage to include both the dynamics and the rewards within the same factor graph, since all the terms become multiplicative in this formulation. This is a generalization of the standard utility in MDPs, so we do not lose generality.
**(c) Your definition of “planning inference”, that is central to the paper, is too fuzzy**
Theorem 1 defines the "planning inference" objective, which is Eq. (2), whose terms are further defined in Eqs. (3) and (4). Approximations are introduced in Section 3.2 to make this objective tractable for factored MDPs.
**You state that your definition of inference is exact. This I think is a bit confusing. VI is by nature an approximation.**
To clarify: VI is exact when the variational distribution is arbitrarily flexible and is optimized completely. In other words, the evidence lower bound (ELBO) matches the exact evidence if (a) the variational distribution can fit the posterior arbitrarily well and (b) the optimization finds the maximum of the ELBO. VI is thought of as approximate because typically none of these two conditions are met. In the case of a standard tractable MDP, both conditions are met for "planning inference", and it produces the exact result. This makes sense, since other techniques (such as value iteration) also allow for exact planning. In the case of an intractable factored MDP, we need to introduce approximations, and the proposed VBP no longer corresponds to exact planning. | Summary: This paper shows that planning in an MDP can be posed as a specific form of
inference in a graphical model; in this context of inference, different forms
of inference can be applied, with different results. Additionally, alternate
forms of inference such as variational techniques can be used, which allow
better plans to be inferred than existing baselines.
This is not a perfect paper, but overall it is a good idea, well executed and
I recommend it for acceptance.
Strengths: - The primary strength of this paper is the technical contribution, which is
how the paper places the planning problem in the context of inference in
graphical models. The relationship between planning and inference is not
novel, but the generalisation of this relationship to different forms of
inference is novel.
- The paper (mostly) shows how the policy for a standard, flat MDP can be
found by solving for the variational posterior over states, and also shows how
a variant of belief propagation can be used to find this posterior.
- The paper generalises this result to factored MDPs, and shows how belief
propagation is even more useful.
- The paper evaluates the VBP technique against gradient descent baselines and
exact solution techniques on both 5000 synthetic MDPs and also 6 domains
from the internation planning competition. The paper shows that VBP is very
close to the exact solution and outperforms the other baselines on the
synthetic domains. On the IPC domains, the VBP seems to match the
performance of the best baseline across all IPC domains (different baselines
have different performance across different domains -- VBP seems to match
the best one on each domain).
Weaknesses: The paper has two primary weaknesses:
- There is no discussions of the weaknesses of the inference-based
technique. It is not clear if the VBP is computationally more costly than,
for instance, the gradient-based SOGBOFA techniques. Timing information
would have been very helpful.
It is also the case that while VBP consistently matches the best performing
baseline, it does not seem to outperform any of the baselines. It would have
been helpful to know why VBP isn't able to match the exact planning
technique -- where is the loss in performance coming from?
- Some of the writing is less than clear. For instance, the exponential
utility is introduced in table 1, and is justified on page 3 with a short
reference to being "more suitable for a factor graph representation" but
more explanation would have been helpful. Please note in the author response
that the concern is not that the exponential utility is problematic, but the
text needs to spend more time addressing and justifying this.
Similarly, the introduction of the $\lambda$ risk setting is not the common
case, and needs more justification. It's not clear that the $\lambda$ term
adds much to the primary point of the paper, and seems to be set to 0 for
most (if not all) of the experimental results. Furthermore, appendix F.3.7
indicates that $\lambda$ has no meaning if "if the reward can have arbitrary
scaling". This is a crucial point that should be in the main body of the
paper, and in fact the paper might be clearer without incorporating
$\lambda$ at all since it does not add much to the explanation of planning
as inference.
Technical Quality: 4
Clarity: 3
Questions for Authors: - Are there any computational constraints on the VBP? Is it comparable in
speed to the gradient descent techniques?
- Are there convergence issues? The paper briefly discusses this but more
detail is needed.
- Can the other forms of inference (e.g., $F^{marginal}$, etc.) map exactly to
different forms of planning? The experimental results seem to suggest this,
e.g., $F^{marginal}$ maps to ARollout, but it would be helpful to know this
earlier in the paper.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The authors did not include a discussion of limitations, and this is one of the
weaknesses of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful reading and positive impression, we will take your comments into account to improve the clarity of the final manuscript.
**Are there any computational constraints on the VBP? Is it comparable in speed to the gradient descent techniques?**
VBP computational and convergence behavior mirrors that of standard loopy belief propagation.
Per iteration, the computational cost is of the same order as gradient-based SOGBOFA techniques. However, unlike SOGBOFA, it is not finding a single sequence of actions, but what in effect is an approximate policy. This is more involved an often requires more iterations, particularly in very loopy graphs. In the IPPC problems, VBP was slower than SOGBOFA. VBP does not have a theoretical advantage over SOGBOFA when dynamics are very deterministic, and it is likely to be slower, so in those cases SOGBOFA might be a better alternative.
**Are there convergence issues? The paper briefly discusses this but more detail is needed.**
Just like when using loopy BP for inference, there are no convergence guarantees, and the presence of convergence actually correlates with good performance. There are many methods in the literature that address the non-convergence of loopy BP, turning it into a convergent algorithm (often using a double-loop algorithm, or using a concave approximation to the entropy terms to make the problem monomodal). All of these methods can be applied to VBP. Interestingly, although all of these approaches result in convergent algorithms, the performance doesn't significantly improve. This was our observation as well. Our convergent variant VI LP did not typically perform as well or as robustly as VBP, despite being convergent.
**Can the other forms of inference (e.g., $F^\text{Marginal}$
, etc.) map exactly to different forms of planning? The experimental results seem to suggest this, e.g., $F^\text{Marginal}$ maps to ARollout, but it would be helpful to know this earlier in the paper.**
Yes, different forms of inference map to different forms of planning, we discuss this in section 5 (Related work), but instead of using the $F$ notation, we describe it verbally, for instance pointing out that SOGBOFA corresponds to marginal MAP (which would be $F^\text{MMAP}$). We will further clarify this.
**Use of the exponential utility and $\lambda$**
The exponential utility is what allows us to write a compact factor graph that contains both rewards and dynamics. It contains the standard accumulated reward as a special case (when $\lambda \rightarrow 0$), so it is strictly more general. Note that in the standard formulation, rewards are additive, so we cannot include them in the factor graph directly. There is a path to deriving a variational formulation for the standard accumulated reward by using additional auxiliary variables and another step of lower bounding, but it would be an additional complication, and we believe that the introduction of $\lambda$ makes the model more flexible in practice (see below).
This is a pervasive problem when trying to connect planning with inference. See for instance (Levine, 2018). Their Eq. 8 tacitly uses an exponential utility with $\lambda = 1$. The motivation is the same as ours, being able to formulate the dynamics and rewards in the same factor graph. The introduction of $\lambda$ allows us to keep the simple factor graph formulation, while still being able to handle additive rewards. In practice, $\lambda$ would be a free tunable parameter that simply acknowledges the relevance of the scale of the reward when making rewards interact in a multiplicative way.
The IPPC setup corresponds to $\lambda \rightarrow 0$. However, in our internal experiments, we observed that for some problems, other values of $\lambda$ resulted in agents that were able to capture more rewards (even when measured in terms of the standard accumulated reward). The limitations of approximate inference sometimes make agents more conservative, and larger values of $\lambda$ encourage them to take more risks, which allows them to effectively capture more reward, regardless of how you measure it.
A more thorough discussion of the motivation and implications of the use of $\lambda$ parameter will be included.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their clear rebuttal. I have no further questions. | null | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper proposes a framework to understand, as the title suggests, what type of inference is (probabilistic) planning. The authors make use of variational inference which encompasses marginal, MAP, and marginal MAP (MMAP) inference to theoretically compare the power of such types of inference in bounding the optimal expected reward for probabilistic planning. Furthermore, the authors propose an approximation of planning inference for factored MDPs as they search space is exponential in the input problem. Experiments are run to support the theoretical results and claims.
Strengths: 1. The paper poses an interesting theoretical study from a statistical viewpoint on optimisation functions for planning.
2. The main story and results are well presented and easy to understand.
3. The experiments are diverse and well motivated, with results supporting the theoretical claims.
4. The paper provides a good coverage of related work.
Weaknesses: 1. The main weakness of the paper is its clarity. The paper can be better self-contained if some of the other sections can be better motivated and/or explained by an additional sentence or two, rather than references to the appendix. Furthermore, some of the equations and technical details seem to be ambiguous and not well defined. See suggestions/comments below.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. The introduction mentions that planning inference is the same as `value iteration', but nowhere else in the paper is value iteration nor Bellman backups explicitly mentioned. Is this a typo with variational inference (both have the same acronym), or is this intended? If this is intended, please expand on this relationship. The closest looking result is in Sec. 3.3.
2. Can you explain more what is the meaning and/or impact of the result that $F^{\text{planning}}_{\lambda} \leq F^{\text{marginal}}_{\lambda}$? e.g. Is the marginal inference an overapproximation?
3. What are the terms in the denominator of $H_{\text{MDP}}$ and how is it computed for the IPPC domains?
Suggestions/comments
- It would be helpful for the reader to provide definitions of some notation in the variational inference section. More specifically it would be helpful to define what is a factor graph and the function $\langle \cdot \rangle_{q}$ in the `energy term'.
- It could be worth mentioning what is the type of $\mathbf{a}$ and $\mathbf{x}$ in Eqn. 1. I assume now that it is a list of $a$s and $x$s of size $T$
- line 114: never access to the full joint -> never access the full joint
- The motivation and explaination for the Bethe approximation can be brought earlier rather than referring to the appendix.
- The term $H_{\text{Bethe}}$ is not used in the equation under line 118. I only see $H_{\text{Bethe}}^{\text{planning}}$ and $H_{\text{Bethe}}^{\text{marginal}}$
- Sec 3.3. forward updates: the sum should not range over $a_t$ if it is defined as the argmax of $Q(x_t, a_t')$. Furthermore, either bring the equality out of the equations, or remove the $a_t=$s in the forward updates and posterior equations.
- The meaning of $H_{\text{MDP}}$ as normalised entropies can be explained with an additional sentence rather than referring to the appendix. Furthermore, even when looking at the appendix, none of the $N_e, N_a, N_s$ terms seem to be defined anywhere in the paper. Also is this a new metric or something that exists? There does not seem to be a reference associated with it.
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors correctly fill in the checklist and justify their answers.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful reading and catching several typos. We have corrected the paper accordingly and include further explanations below.
**The introduction mentions that planning inference is the same as `value iteration', but nowhere else in the paper is value iteration nor Bellman backups explicitly mentioned. If this is intended, please expand on this relationship.**
We will explicitly mention the connection with value iteration and Bellman backups. Planning inference is exact in standard (non-factored) MDPs, and corresponds exactly to value iteration. The backward updates of Section 3.3 together with the definition of $Q(x_t, a_t)$ in line 163 can be recognized as the standard Bellman backups, although written with slightly different notation, since these are messages. In particular $Q(x_t, a_t)$ is exactly the Q-function.
**Can you explain more what is the meaning and/or impact of the result that $F_\lambda^\text{planning}\leq F^{\text{marginal}}_{\lambda}$? e.g. Is the marginal inference an overapproximation?**
$F^{\text{marginal}}_{\lambda}$ corresponds to running marginal inference on the _unnormalized_ factor graph from Figure 1, the same on which we run planning inference. That factor graph is missing a prior term for the actions. This means that rather than _averaging_ over all the possible sequences of actions, we are _summing_ over all the possible sequences of actions. In the deterministic case it is obvious that this results in an overapproximation, since we are summing the "correct" total reward corresponding to the optimal sequence of actions plus the rewards obtained by all other sequences of actions. This intuition, although not as direct, carries over to the stochastic case, so yes, it is an overapproximation in the general case.
**What are the terms in the denominator of $H_\text{MDP}$ and how is it computed for the IPPC domains?**
A factored MDP such as the one in Figure 1[Right] describes, among other things, the dynamics of multiple _entities_ that are locally independent. E.g., the distribution of $x^{(1)}$ is independent of the distribution of $x^{(2)}$ given the action and values of the entities in the previous time slice. We can separately compute the conditional entropies $H(p(x_{t+1}^{(i)}|x_t,a_t))$ for each entity $i$, conditional on each possible value $x_t, a_t$ of the previous time slice.
The normalized entropy is defined as
$$
H_\text{MDP} = \frac{\sum_{i,x_t,a_t} H(p(x_{t+1}^{(i)}|x_t,a_t))}{N_eN_aN_s^{N_e}\log N_s}
$$
(note that there's a typo in the manuscript). The largest value of the conditional entropy $H(p(x_{t+1}^{(i)}|x_t,a_t))$ is achieved when dynamics are purely random and equals $\log N_s$, where $N_s$ is the number of states of entity (i), i.e., the cardinality $|x_{t+1}^{(i)}|$. Given that we are summing over all possible values of the previous time slice ($N_a$ is the number of possible actions and $N_s^{N_e}$ is the number of possible states of all $N_e$ entities in the previous time slice), and over all the entities $N_e$ in the current time slice, the maximum value of the numerator is $N_eN_aN_s^{N_e}\log N_s$, which is used as normalizing denominator. $H_\text{MDP}$ would equal 1 only when the dynamics of the factored MDP are purely random.
In the IPPC case, the problem setup gives us direct access to $p(x_{t+1}^{(i)}|x_t,a_t)$, where the conditioning only depends on a small number of entities from the previous timestep, so it is possible to compute $H(p(x_{t+1}^{(i)}|x_t,a_t))$ exactly for all entities and all $x_t, a_t$ and simply average those (which is equivalent to the summing and normalizing of the definition above for $H_\text{MDP}$).
**About the definition of ${\pmb x}$ and $\pmb a$**
These are defined in Section 2.1 (line 43 to be more precise).
**$H_\text{Bethe}$ is not used in line 118**
You are absolutely correct, we have fixed this. The corrected equation is
$$
H_\text{Bethe}^\text{planning}({\pmb{\tilde q}})
= \sum_{i=1}^{N_e} H_{\pmb{\tilde q}}(x_1^{(i)})+
\sum_{t=1}^{T-1} \Big(H_\text{Bethe}( \tilde{q} (\pmb{x}\_t))+
\sum_{i=1}^{N_e}
H_{\pmb{\tilde q}}(x^{(i)}_{t+1}| {\pmb x}^{\text{pa}(i)}_t, a_t)
\Big)
$$
**About the update equations in Sec 3.3**
Thanks for pointing this out. Since these equations arise from deterministic distributions, we think the best option is to use a Kronecker delta $\delta_{a,b}$, where $\delta_{a,b} = 1$ if and only if $a = b$ and 0 otherwise. This results in
Forward updates:
$$
m_\text{f}(x_1)=P(x_1);~~~m_\text{f}(x_{t+1})=\sum_{x_t,a_t} p(x_{t+1}|x_t,a_t)\delta_{a_t, \arg\max_{a_t'} Q(x_t,a_t') })m_\text{f}(x_t)
$$
Posterior:
$$
q(x_{t+1}, x_t, a) \propto m_\text{b}(x_{t+1})p(x_{t+1}|x_t,a_t)\delta_{a_t, \arg\max_{a_t'} Q(x_t,a_t')} m_\text{f}(x_t)
$$
**About the normalized entropy and missing definitions**
The terms $N_s, N_e$ are introduced in Section 2.1, we will add the definition of $N_a$ (which is the number of actions). The normalized entropy is probably the most naive way to measure the randomness of a factored MDP, and as far as we are aware has never been proposed before. It was very useful to characterize the behavior of the different types of inference.
---
Rebuttal Comment 1.1:
Comment: I thank the author for their response and clarifications. I have no further questions. | null | null | null | null | null | null |
Probabilistic and Differentiable Wireless Simulation with Geometric Transformers | Reject | Summary: The paper proposes the use of the Wireless Geometric Algebra Transformer (Wi-GATr) to model signal propagation. Based on the Wi-GATr network, it introduces a differentiable prediction model and a diffusion model. Compared to traditional statistical and ray-tracing methods, the proposed approach not only addresses conventional signal prediction problems but also tackles inverse problems such as receiver localization and 3D environment reconstruction. Experimental results demonstrate the effectiveness of this method. Furthermore, the authors present two large-scale wireless signal propagation datasets.
Strengths: 1) The perspective is interesting. This paper models wireless propagation as a probability model based on diffusion, thus offering a unified approach to address signal prediction and inverse problems such as receiver localization or 3D scene reconstruction.
2) The paper is logically structured, with a comprehensive background introduction and high readability.
Weaknesses: 1) There is a gap between the challenges and the solutions. The author asserts that wireless surrogate modelling faces challenges like data scarcity and diverse data types. However, the lack of analysis on these issues makes the proposed solutions appear abrupt. It is recommended to provide insights that lead to the proposed solutions of this paper.
2) The innovation is somewhat limited. Wi-GATr primarily extends the GATr method into a wireless setting, with equivariance being a pre-existing property of the original framework. Apart from tokenizing input data, did the paper introduce any additional advancements? It would be beneficial for the authors to highlight these aspects.
3) The experimental evaluation is not sufficiently convincing. For more details, please refer to the "Questions*" part.
4) There are some typographical errors in the paper. For example, lines 56 and 57 do not correspond to Figure 1. Additionally, the abstract mentions transmitter localization, but the main text describes receiver localization, among other discrepancies.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1) The contributions of Wi-GATr appear to be limited. What are the specific contributions of Wi-GATr, and how do they compare in detail with those of GATr?
2) In the background section, the authors mention a series of neural statistical models [19, 40, 42, 56] and neural ray tracers [29, 41, 58], claiming the proposed method is "sample-efficient and generalizes to novel scenes". However, the evaluation section only compares two methods, which seems relatively weak. What considerations are made when choosing the comparison methods?
3) In qualitative experiments, the proposed method is compared with ViT. Why is it not compared with the benchmarks used in quantitative experiments—SEGNN and PLViT?
4) The experimental results in Figure 5 lack the necessary explanations. For instance, does the "Geometry reconstruction" on the right side lack comparison with ground truth?
5) This paper introduces both differentiable prediction models and diffusion models, claiming that diffusion models overcome the limitations of differentiable prediction models. Could the author conduct experiments to compare these two models and validate this claim?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Sufficiently discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer yJCq for providing a detailed review of our paper.
We are happy to read that they found it logically structured and that they appreciated the novel treatment of forward and inverse problems with a diffusion model.
Now, we address reviewer yJCq’s concerns and also indicate if it is shared by fellow reviewers.
**Limited innovation... contributions of our WiGATr vs. GATr [9] .... any additional advancements?**
We believe it is unfair making a direct contribution comparison of our work with GATr [9], since both address different problems.
GATr [9] tackles the problem of representing geometric data and introduces a novel architecture.
In contrast, our work tackles a differentiable wireless simulation problem of prediciting wireless signal characteristics from 3D scenes.
3D scenes are inherently geometric, and to the best of our knowledge, we are the first to exploit relevant symmetries and find GATr well-suited for this task.
Apart from our model contributions (e.g., tokenization, generative formulation), we tackle inverse problems in wireless (e.g., receiver localization) and propose two new large-scale datasets.
**Paper mentions neural statistical models [19, 40, 42, 56] and neural ray tracers [29, 41, 58] ... however evaluation section only compares two methods ... considerations made when choosing baselines?**
We choose baselines that tackles the specific problem considered in the paper: jointly modelling the 2d/3d representations (e.g., mesh, image) of the propagation environment and wireless characteristics (e.g., path loss, receive signal strength).
Neural statistical models [19, 40, 42, 56] are not applicable since they typically ignore scene representations.
Neural ray tracers [29, 41, 58] are more related to our work.
We now introduce Sionna RT [29] as a baseline (see Fig. 3 in rebuttal pdf; faster than ours, but larger errors).
The other two are not applicable: [41] does not generalize to novel scenes and [58] does not consider geometric representation.
As a result, we now compare against _five_ relevant baselines that jointly model the 2d/3d representation of the scene and wireless characteristics: PLViT, SEGNN, Transformer, Sionna-RT*, Naive Transformer* (* = based on suggestions from reviewers).
In all cases, we find our previous results hold: our approach outperforms all baselines in accuracy of predicted signal strengths.
**Gap between the challenges and the solutions ... lack of analysis on these issues makes the proposed solutions appear abrupt ... recommend to provide insights that lead to the proposed solution**
Thanks for the suggestion. We will use the extra page in final version to connect the challenges to our proposed solutions, e.g., the inductive biases improve data-efficiency, transformer tokenization scheme facilitates processing of 3D geometry.
**Qualitative comparison with ViT ... quantitative with PLViT and SEGNN ... why the difference**
Good catch, the "ViT" label in Fig. 1 is a typo and should read "PLViT". We will fix this for the next version.
We were able to evaluate SEGNN only on simpler datasets (only Wi3R) due to its memory requirements.
**Fig. 5 lacks necessary explanations**
Due to space constraints, we moved some experimental details for our probabilistic model to the Appendix D.2. Furthermore, Figure 9 compares the behavior of generated geometries with the ground truth. In Figure 5 (c), we omit a direct comparison to the ground truth, as this inverse problem admits a variety of solutions. Instead, we visualize two diffusion samples conditioned on two different values for the signal strength respectively. As the signal strength decreases, we observe that the model is more likely to occlude line of sight between the receiver and transmitter locations, which mimics the dynamics of the training dataset.
**This paper introduces both differentiable prediction models and diffusion models, claiming that diffusion models overcome the limitations of differentiable prediction models. Could the author conduct experiments to compare these two models and validate this claim?**
Due to their different learning objectives, diffusion and prediction models are not easily comparable directly in terms of mean absolute errors. For example, when solving the inverse problem of receiver localization, the diffusion model learns a distribution of points (see Fig. 5b), whereas the SGD-based solution with the prediction model can only recover a single mode of this distribution. This inherent measure of uncertainty is one of the main strengths of diffusion over predictive modelling. However, when learning unimodal distributions (such as signal prediction, which is assumed to be deterministic in our setup), we have confirmed that the diffusion model performs comparably to the predictive model, as can be seen when comparing Fig. 8 with Fig. 3.
**Typographical errors ... L56 L57 ... transmitter localization**
Thanks for pointing them out. We will fix them.
We thank the reviewer again for their detailed feedback. We hope we were able to address their concerns and look forward to discussing further.
---
Rebuttal Comment 1.1:
Comment: This response clearly distinguishes this work from GATr and highlights its innovative aspects. Additionally, the authors have refined the previously unclear expression and added Sionna-RT as a new competitor. However, this comparison is included only in the qualitative and not the quantitative experiments. It is also suggested that comparisons with the naive transformer be included in all experiments. Based on the above, I am inclined to raise my score to Borderline Accept.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the response and are happy to hear they appreciated our new comparisons.
We would like to correct one detail: we also ran quantitative experiments with Sionna-RT. These results are included in the rebuttal text and in Fig. R2 of the rebuttal pdf. Based on those results, we conclude that Sionna-RT (in its current state) is not capable of solving the tasks discussed in the paper due to lack of transmission models. | Summary: This paper proposes the use of a transformer architecture to model electromagnetic propagation of physical systems. The approach is claimed to outperform existing methods by (i) computational efficiency (compared to raytracers) and (ii) enabling solving inverse problems. The method is evaluated on a number of benchmark tasks and the paper is accompanied by two new datasets.
Strengths: The idea of modeling electromagnetic wave propagation using transformer architectures is novel and interesting.
The generality of the approach enables a large number of tasks in wireless communication systems that would otherwise the use of raytracers or other electromagnetic modeling software.
Large parts of the main body of the paper are well-written and easy to follow.
Weaknesses: The main body of the paper lacks the details of the proposed pipeline and transformer architecture. In fact, all of the interesting and technical details are relegated to the appendices.
One of the main motivations of the paper seems to be that most raytracers are too slow. However, the authors seem to ignore recent projects, such as Instant RM (https://github.com/NVlabs/instant-rm) which can compute coverage maps in a few milliseconds, depending on the desired accuracy.
It is unclear why [29] is cited as a non-differentiable raytracer although it is, to my knowledge, the only raytracer that actually is. Instant RM is also differentiable and calibration results for both tools were already demonstrated. To be honest, I have the impression that the authors tried to cover up the fact that [29] is a powerful *differentiable* raytracer that enables solving inverse problems.
Although the authors claim that channel impulse responses can be generated, this is not demonstrated in the paper. I think that this claim should be removed unless the authors demonstrate that it is actually feasible.
The description of scene geometry recovery is incomprehensible to me.
In Figure 7, the Wi-GATr is around 20ms for inference for a tiny indoor scene. The authors should compare this against Instant RM which can probably run even faster and is differentiable.
It would be good to get confidence information (e.g., standard deviation) in Fig. 3 and Fig. 4.
Technical Quality: 3
Clarity: 3
Questions for Authors: The authors should comment on the scalability of the proposed method to scenes with millions of triangles.
How well would a neural-network-based positioning pipeline do on the task used to generate Figure 4?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper lacks a detailed comparison to the capabilities of the differential raytracer from [29]. In fact, I feel that for the individual tasks, more baselines should be included.
Scalability to very large datasets and extremely complex scenes is unclear.
It is unclear whether the method generalizes to electromagnetic environments that are nonreciprocal (e.g., containing certain nonreciprocal metasurfaces).
It is unclear whether the method generalizes to scenarios in which ray-tracing is inaccurate (e.g., scenarios at low carrier frequencies).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer 6syy for providing a detailed review of our paper.
We are glad that they appreciated the generality of the approach, and in particular that they found our paper well-written and easy to follow.
Now, we address reviewer 6syy’s concerns and also indicate if it is shared by fellow reviewers.
**Several fast and differentiable ray tracers ... such as Sionna RT exist ... comparison missing*** (common concern - 6Syy, yJCq)
Thanks for the suggestion. For this rebuttal, we provide additional experiments about Sionna RT (see Figs. R2 and R3 of the rebuttal pdf).
We will add this result to our appendix.
Although Sionna RT is faster than our approach, we observe large errors (12 dB without calibration, 8 dB with calibration) on the WiPTR validation set, compared to the proposed approach (0.74 dB).
This is mainly because Sionna does not implement a model for transmission/refraction. Its predictions for the received power behind walls are therefore systematically too low. It also leads to faster computation as rays split at refraction events and the complexity of the ray tracing increases.
In constrast, our approach is fully data-driven, not limited to certain physics models, and is thus able to learn interaction phenomena, including transmission, from the training data.
**Unclear why Sionna-RT [29] is cited as a non-differentiable raytracer**
Thank you for the catch, this (line 26) is a typo. We apologize and will fix it. All other references to Sionna-RT (e.g., line 92) mention differentiability.
**Main motivations of the paper ... most raytracers are too slow ... ignores Instant RM which computes coverage maps in a few milliseconds**
InstantRM is concurrent work -- the code was released 20 days before submission deadline, and publicized on a blog [A] 1 month after submission deadline.
We are glad to cite, discuss and potentially compare in the next version of the paper.
Yet, the models used in InstantRM are more limited than Sionna RT in terms of accuracy. Our arguments about Sionna RT (lack of transmission) apply thus to InstantRM as well.
[A] https://developer.nvidia.com/blog/fast-and-differentiable-radio-maps-with-nvidia-instant-rm/
**Authors claim that channel impulse responses can be generated, this is not demonstrated in the paper**
We appreciate that the reviewer points this out. We explicitly mention modeling of received power (e.g., lines 12, 50, 56).
We would like to reduce potential ambiguity in this regard. Could the reviewer please point us to the exact statement that seems like we are claiming that we are generating the full channel impulse response?
**Description of scene geometry recovery is incomprehensible**
Thanks for pointing this out. We will rewrite this section and improve clarity.
**Add confidence information (e.g., standard deviation) in Fig. 3 and Fig. 4**
Thanks for the suggestion. We agree and will add measures of uncertainty to our results for the final version.
**Scalability to very large datasets and extremely complex scenes is unclear** (common concern - LEve, 6Syy)
We already evaluate our approach on a large test dataset (e.g., 1K diverse scenes in WiPTR proposed in this paper, $>5.5$M channels in total) -- this is significantly larger than previous efforts.
In addition, we ran scalability experiments and found proming results.
As shown in Fig. R4 in the rebuttal pdf, WiGATr remains at sub-second latency until $\sim$10k mesh faces.
Although scenes with 500k mesh faces take up to an hour with our vanilla implementation, this is faster than a conventional, GPU-optimized ray tracer based on preliminary experiments.
**Unclear whether the method generalizes to electromagnetic environments that are nonreciprocal**
This is a great observation. By default, the equivariance properties of our approaches consider reciprocity.
However, if necessary, this can be locally disabled by providing additional orientation information that breaks the symmetry of the problem.
**Unclear whether the method generalizes to scenarios in which ray-tracing is inaccurate**
The accuracy of our approach depends on the accuracy of the underlying training data.
As long as the training data is generated by ray tracing, data-driven models cannot compensate for this shortcoming.
However, Wi-GATr could also be trained or finetuned on measurements and then yield accurate predictions in scenarios where ray tracing is inaccurate.
**How well would a neural-network-based positioning pipeline do on the task used to generate Figure 4?**
To the best of our knowledge, existing positioning pipelines cannot be used in our task since they assume measurements from the identical environment at both training and test time.
In contrast, our approach is "zero-shot": positioning is evaluated on unseen environments at test time.
We thank the reviewer again for their constructive feedback. We hope we were able to address their concerns and look forward to discussing further.
---
Rebuttal Comment 1.1:
Comment: Thanks a lot for the clarifications; I have two remaining comments:
- Scalability remains to be a real issue. The statement that the proposed scheme is faster than ray tracers for large scenes seems incorrect. The computation of coverage maps is typically done using shoot-and-bounce, whose complexity is almost independent of the number of primitives in a scene.
- The fact that Sionna RT, Instant RM, or any other ray tracer does not implement refraction does not justify the proposed approach. In fact, adding refraction could be added without any problem. Thus, I do not see a strong motivation apart from speed (which is a bit thin since fast raytracers do exist).
---
Reply to Comment 1.1.1:
Comment: **... The computation of coverage maps is typically done using shoot-and-bounce, whose complexity is almost independent of the number of primitives in a scene.**
**This is incorrect.** The complexity of shoot-and-bounce (SBR) algorithms is *not* independent of the number of primitives in a scene if you include refraction and diffraction. In fact, it highly depends on the geometry of the scene.
* Refraction: At every refraction event, the incoming ray is split into two outcoming rays, thus increasing the complexity. One that is reflected and one that is refracted.
* Diffraction: Diffraction first requires finding edges in the scene. The number of edges increases with the number faces in the scene unless neighboring faces are entirely parallel (which they are not in practice).
As a result, it is common practice to limit the number of diffractions and refractions during ray tracing to find a trade-off between accuracy and scalability. For example, Sionna resorts to only modeling a single diffraction event at the end of an interaction chain. InstantRM (a fast SBR ray tracer) does not support diffraction. As mentioned before, none of them model refractions.
In contrast, the run time and scalability of our method is independent of the exact geometry of the mesh.
**... In fact, adding refraction *could be added* [to ray tracers] without any problem.**
**Our method should be compared to the actual methods available today**, even if ray tracers can in theory be extended to include refraction.
As mentioned above, even if they only predict coverage maps, the extension of ray tracers is not trivial. From personal interaction with Sionna RT authors, we have learned that implementing differentiable transmission efficiently is challenging.
Even more so: To the best of our knowledge, there are no works that ever showed the feasibility of optimization of the transmitter or receiver position using a ray tracer, neither on real or on simulated data. Sionna RT have claimed that it is possible, but we have not found an experiment to show it (only transmitter orientation). We show we can solve receiver localization. | Summary: Authors proposed transformer based ideas on very well studied area: Wireless environment simulation.
The key idea here is to capitalize Geometric Transformers to simulate radio environments. It true that wireless (directional) signal propagation is a ray tracing approach, meaning a highly directional wireless signal (Ray) may bounce off ambient surfaces or directly reach a receiver. The proposed method inserts geometric shapes in the environment as tokens in a transformer networks. The trained transformer predicts the received power at a given point in 3D space. Transformer is trained and evaluated using two datasets: Wi3R and WiPTR that simulate indoor signal propagation environments. In comparison to the baselines, transformer architecture requires 20 times less data.
Strengths: 1. It is interesting idea to see if transformer architectures are suitable to model radio propagation environments
2. Propagation models consider the material of ambient surface, antenna parameters, location of transmitter and receiver
Weaknesses: 1. It is not clear if datasets are of any relevance to real world environment, since the primary challenge of any modelling problem is simulation to reality gap. Since wireless channel modelling is very well studied area, a novel contribution must take in to account such differences rather than results that show the ability of transformer architecture in modelling a wireless environment
2. There are several high fidelity radio propagation modelling software, perhaps it is important to consider datasets generated from such model, current evaluation is very limited and primitive. Fig 2 and 5 are no comparison to robust channel models that are available to wireless researcher and practitioners.
3. The modelling of radio environment is no clear, reviewer is of impression that several affects like diffraction, refraction are not considered in the datasets
4. The paper also has weakness: WiNeRT: Towards Neural Ray Tracing for Wireless Channel Modelling and Differentiable Simulations which is both papers have not considered user mobility: coherence time, coherence bandwidth
5. Current evaluation is only limited to indoor environments
6. Since the prior work has already established neural network architecture are useful modelling, to push the state of the art, it is important to show accurate modelling than yet another architecture to model wireless channel.
7. Upon inspecting table 2, the reviewer is afraid that there might be issue with results here. There is 80dBm difference is accuracy with transformers based modelling, usually such a difference is unacceptable, can author please explain the training of transformer model and why it produced such a large error. The reviewer is concerned that whether such sample point is a fair to benchmark againt
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. The challenging problem of wireless environment simulation is how the simulation and real world measurement hold together.
2. Since, this is very old problem, the novel solution must show the ability to bridge gap between simulation and real channel traces
3. It is quite important show how well the proposed method simulates various wireless environment, currently such evaluation is missing.
4. Could you please provide any benchmarks with real-world channel traces datasets?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: In reviewer's opinion authors have not sufficiently addressed all the limitations of the current work. I encourage authors to look at weakness section and update the limitations of the work in the current draft
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer 2AZ5 for providing a detailed review of our paper.
Overall, we are glad that the reviewer finds using transformers for radio propagation modeling an interesting idea and appreciates that we include relevant information in our environment input.
Now, we address reviewer 2AZ5’s concerns and also indicate if it is a common concern shared by fellow reviewers.
**Problem is simulation to reality gap ... relevance to real-world environment**
Indeed, mitigating the simulation-to-reality gap is an important problem. However, we find it complementary to the problem considered in this paper: modelling wireless characteristics of novel 3D propagation environments by exploiting symmetries. We believe solving this problem in a large-scale controlled setting (be it simulation or real) is a precursor to the problem of simulation-to-reality gap.
**Benchmarks with real-world data ... is missing** (common concern - reviewers LEVe, 2AZ5)
Firstly, we remark that evaluations on simulated data have their own merits: it allows evaluation on a large-scale (e.g., diverse environments, locations of transmitter/receiver) in a controlled setting and enables better analysis of approaches and results.
While we agree with reviewers that it would be ideal to extend our evaluation to real-world datasets, we are limited by their availability.
To the best of our knowledge, no relevant large-scale real-world dataset exist for the problem we tackle and hence previous works [29, 41, 35, 25] have also largely relied on simulated datasets.
In fact, even the simulated datasets that exist (e.g., Wi3Rooms [41], Etoile/Munich [29]) are small-scale with low diversity in terms of 3D environments.
This lack of high-quality wireless datasets with diverse scenes motivated us to generate and release two new datasets with this paper.
Our symmetry considerations apply in the real-world and in simulation, thus our key message - that wireless channel modelling is a geometric problem and one should use symmetry-aware architectures and algorithms to solve it - should hold both in simulation and real-world measurements.
**How well proposed method simulates various wireless environment ... evaluation is missing**
We are unclear on what "various" corresponds to here and would appreciate the reviewer's clairification. Nonetheless, we believe we evaluate our approach in a fairly diverse setup (e.g., >1K novel scenes, i.i.d.\ 3D locations of tx/rx) improving upon prior work (e.g., [41] on 1 scene, [29] on 1 scene, [58] evaluates on 50 scenes). This is inline with comments of other reviewers e.g., "thorough evaluation ... various tasks ... multiple baselines" (LEve)
**Several high fidelity radio propagation modelling software ... important to consider datasets from such models**
We use the commercial ray-tracer "Remcom Wireless InSite" (see Sec. 4 in the paper) that is popular in the literature ([41, 3]).
Please let us know your concrete concerns or missing features of this software.
**Modelling of radio environment is not clear ... diffration/refraction not considered**
This is incorrect. All simulated paths include diffraction, transmission, and reflection effects; see Table 4 in Appendix C for details. We will clarify in the main paper.
**Mobility is not considered**
Mobility is an interesting and relevant wireless channel modeling research problem. However, this complements our work that studies on generalization and symmetries of novel 3D scenes.
**Evaluation limited to indoor environments** (common concern - reviewers LEve, 2AZ5)
Evaluation on outdoor scenarios would make a great addition.
However, the physical phenomenons (e.g., diffraction, reflections) of wireless signals would largely remain the same as a function of the 3D geometry in both indoor or outdoor settings.
Nonetheless, in this paper, we choose indoor scenarios since we believe it is a more challenging scenario to evaluate influence of 3D scenes towards wireless characteristics.
For instance, heights of surfaces play a bigger role (since signals can reflect off floors and ceilings) and transmissions through surfaces have significant contributions towards receive powers in non-LOS regions.
**NNs already shown to be useful for modelling wireless propagation ... how accurate is this yet another architecture**
Indeed, ML-based approaches are shown to be successful for modelling wireless propagation. These studies are often limited to 2D representations of the scene (e.g., binarized satellite images). However, wireless propagation is inherently influenced by the 3D structure of the propagation environment and prior approaches exhibit large errors in these scenarios (details in Sec. 5.1). In contrast, our novel equivariant architecture exploits certain symmetries of the 3D environment and significantly outperforms prior relevant ML-based approaches.
**Table 2 ... 80 dBm error ... explain large error of baseline**
Thank you for highlighting this result. It is one of the main arguments of our paper. The baseline is strong on *in-domain* data (e.g., MAE of 1.32 dB on the unseen floor plans, see Table 2).
However, the generalization to *out-of-distribution* data is a significant failure mode (e.g., MAE of 78.68 dB on rotated data).
In contrast, our proposed Wi-GATr architecture improves the robustness under domain shifts by incorporating the symmetries of the problem. By construction, it is robust to translations and rotations of the scene; we also find an improved robustness to other out-of-distribution test sets.
We thank the reviewer again for the thorough review. We hope we were able to clear up some concerns and look forward to discussing more.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your response.
**Benchmarks with real-world data ... is missing (common concern - reviewers LEVe, 2AZ5)**
Authors may find the following resources valuable for further evaluation
https://nvlabs.github.io/sionna/ : Very large scale simulated dataset generation feasible
https://www.deepsig.ai/datasets/ : Simulated/Emulated dataset for use
https://www.deepsense6g.net/: Large scale real world datasets available
**We use the commercial ray-tracer "Remcom Wireless InSite" (see Sec. 4 in the paper) that is popular in the literature ([41, 3]). Please let us know your concrete concerns or missing features of this software.**
In the interest of community as a whole, it might be a good idea to use accessible and open-simulators, since we may not access to them. Could you please present a comparative analysis Sionna simulator vs Remcom. Also, how closely, a particular scenario has been simulated, scatters, reflections, material properties, mobility scenarios(I believe they are currently missing)
Modelling of radio environment is not clear ... diffration/refraction not considered
This is incorrect. All simulated paths include diffraction, transmission, and reflection effects; see Table 4 in Appendix C for details. We will clarify in the main paper.
**Mobility is not considered
Mobility is an interesting and relevant wireless channel modeling research problem. However, this complements our work that studies on generalization and symmetries of novel 3D scenes.**
Could you please elaborate, I couldn't understand the above
**Evaluation limited to indoor environments (common concern - reviewers LEve, 2AZ5)**
Evaluation on outdoor scenarios would make a great addition. However, the physical phenomenons (e.g., diffraction, reflections) of wireless signals would largely remain the same as a function of the 3D geometry in both indoor or outdoor settings. Nonetheless, in this paper, we choose indoor scenarios since we believe it is a more challenging scenario to evaluate influence of 3D scenes towards wireless characteristics. For instance, heights of surfaces play a bigger role (since signals can reflect off floors and ceilings) and transmissions through surfaces have significant contributions towards receive powers in non-LOS regions.
*Since, reviewer thinks other way, could you please provide any references in the draft supporting the above*
**Table 2 ... 80 dBm error ... explain large error of baseline**
Thank you for highlighting this result. It is one of the main arguments of our paper. The baseline is strong on in-domain data (e.g., MAE of 1.32 dB on the unseen floor plans, see Table 2). However, the generalization to out-of-distribution data is a significant failure mode (e.g., MAE of 78.68 dB on rotated data). In contrast, our proposed Wi-GATr architecture improves the robustness under domain shifts by incorporating the symmetries of the problem. By construction, it is robust to translations and rotations of the scene; we also find an improved robustness to other out-of-distribution test sets.
Could you please provide a little more information, on how many experiments are performed and what is OOD data looks like.
---
Reply to Comment 1.1.1:
Comment: **[Real-world datasets] Authors may find the following resources valuable for further evaluation**
Thanks for pointing out the resources. However, they do not apply since they are either simulated, small-scale, or designed for a different task. This further strengthens our argument that real-world data is lacking.
* https://nvlabs.github.io/sionna/ Sionna is a simulator not a real-world dataset. It has been sufficiently discussed in our rebuttal.
* https://www.deepsig.ai/datasets/ As you mention, these are also simulated datasets. On this website, the authors advise against using this data unless for historical or educational purposes.
* https://www.deepsense6g.net/: While being real-data, DeepSense6G tackles a different problem, i.e., beam and blockage prediction at mmWave. For this purpose, much simpler models are sufficient (see https://arxiv.org/abs/2401.17781, published at ICC 2024). Further, only a subset of the scenarios contain 3D information from LiDAR, and none of them contain meshes and material information. It is a different problem and not suitable to show the benefits of equivariant architectures.
**Could you please present a comparative analysis Sionna simulator vs Remcom.**
We have provided a detailed comparison to Sionna throughout this rebuttal. The summary is that Sionna currently provides simplistic models for physical effects, e.g., lack transmission, single difraction only, no thickness of materials modeled. See Figs. R2 and R3 in the rebuttal pdf for quantitative and qualitative comparisons.
**In the interest of community as a whole, it might be a good idea to use accessible and open-simulators, since we may not access to them**
With our plans to publish our dataset, we want to make our high fidelity simulations freely available to the community.
**[Mobility is not considered] Could you please elaborate, I couldn't understand the above**
Mobility is an important problem, but it is not the only problem in wireless channel modeling. We tackle a different problem.
**[Indoor vs outdoor scenarios] Since, reviewer thinks other way, could you please provide any references in the draft supporting the above**
This is a quote from a very recently accepted challenge at ICASSP 2025 (https://indoorradiomapchallenge.github.io/index.html)
> it's necessary to develop models tailored for indoor environments. In such cases, the refracted electromagnetic field components through obstacles play a more significant role in radio signal propagation, as opposed to the outdoor scenarios that is dominated by reflected field components. Therefore, accurate indoor radio map estimation requires accounting for the larger variety of construction materials and their electromagnetic properties.
**Could you please provide a little more information, on how many experiments are performed and what is OOD data looks like.**
We have provided details about our OOD data in our paper (Sec. 4) and its appendix (Appendix C). | Summary: The motivation for this work is that modeling the propagation of electromagnetic signals is critical for designing modern communication systems. Ray tracing simulators are not suitable for inverse problems or integration as channel models in designing communication systems.
In this context, the goal of this paper is to model the interplay between the 3D environment F, transmitting and receiving antennas (each characterized by a 3D position, orientation, and specific antenna characteristics) represented by t and r, respectively, and the signal h between each transmitter and receiver. The 3D geometry F is represented by a triangular mesh, where each triangle is assigned a material type from predefined classes, modeling both the shape and materials of the environment. Once the model is learned, three tasks can be performed:
1. Prediction of the received signal p(h∣F,t,r): The model is trained for this task. At test time, the network can predict signals in unseen, novel scenes. This approach is faster, fine-tunable on real measurements. The model obtained is also differentiable. This is referred to as the forward problem.
1. Localization of the receiver p(r∣F,t,h).
1. Sensing the environment p(F∣t,r,h).
The last two tasks are referred to as inverse problems. The model introduced is an adaptation of the Geometric Algebra Transformer called Wireless (Wi-GATr), used for simulating wireless propagation in a 3D environment. The authors also cast this problem as a generative modeling task of the joint distribution p(F,t,r,h) (from which the above three tasks can be accomplished) using Denoising Diffusion Probabilistic Models and Wi-GATr.
Strengths: The main contributions of this work are as follows:
1. Introducing a new tokenization method for geometric wireless communication environments and transmitter and receiver characteristics.
1. Integrating diffusion-based models with Wi-GATr to model the wireless environment as a generative model, thereby determining the joint distribution of F, t, r, and h.
1. Providing new, larger datasets for the wireless environment modeling to the research community.
Weaknesses: As per my understanding, the main weaknesses of the work are:
1. The novelty of the work lies in tokenizing various geometric objects encountered in the wireless communication scene. However, the same tokenization is used in the vanilla transformer, making it unclear if the new tokenization provides any benefit.
1. As the authors point out, the channel is modeled only in terms of time-averaged non-coherent received power, missing crucial information such as time and direction of arrival, which are essential for modeling wireless environments.
1. While the proposed solutions seem general, most results are presented for the single antenna case. Additionally, the dataset includes only transmitting sinusoidal waveforms, which is limiting as it does not cover larger bandwidths. The wave propagation depends on frequency, and non-linearities can occur with wider bandwidths.
Technical Quality: 3
Clarity: 2
Questions for Authors: In addition to the weaknesses listed above, I have the following questions:
1. The authors should clarify how many transmit antennas were used in receiver localization and sensing, and why they believe the results are promising despite the limitations of using a larger number of antennas and ignoring time and angle of arrival information.
1. The authors should also clarify the challenges involved in their novel tokenization and in developing diffusion-based models with the Wi-GATr model.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors mention limitations in the Discussion section (Section 6).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer 3vGS for providing a detailed review of our paper.
Overall, we are glad that the reviewer appreciates our novel wireless tokenization scheme, our diffusion approach, and that we contribute a novel large-scale dataset to the machine learning community.
Now, we address reviewer 3vGS’s concerns.
**Clarify the challenges ... novel tokenization**
The main challenge is the representation of diverse geometric types involved in wireless scenes: points, rays, orientations, and surfaces.
Most popular representations of 3D geometry are designed for either point clouds or meshes or rays, but not their combination.
In constrast, our tokenization scheme provides a unified way of representing different combinations of diverse geometric types.
**Same tokenization used both in WiGATr and vanilla transformer baseline ... benefits of tokenization unclear**
Great suggestion, we indeed did not clearly demonstrate the benefits of the new tokenization scheme in the original paper. We add this ablation now and find promising results. In Fig. R1 in the rebuttal result page, we compare a new "naive transformer" model that does not use our geometric wireless tokenizer to the existing "transformer" result based on the tokenizer. We find that tokenization has a drastic impact on performance: for instance, when training on 1000 rooms, using our wireless tokenizer improves the transformer performance from 2.9 dB to 1.7 dB mean average error. We will provide additional details for this experiment in the final version of the paper.
**Clarify how many transmit antennas were used in receiver localization and sensing**
We study SISO omni-directional antennas throughout the paper.
We vary the number of SISO transmitters used for receiver localization in Fig. 4 of the paper.
**Clarify why results are promising despite the limitations of using a larger number of antennas and ignoring time and angle of arrival information**
We have for now focused on SISO received power prediction as an evaluation task as it has been discussed in the relevant literature before. Adaptation of the tokenization scheme is an interesting idea, as well as extendening the output other channel characteristics. Our considerations about symmetries of the underlying physics still apply.
**Clarify the challenges ... diffusion-based models with the Wi-GATr model**
One of the main challenges was in achieving a good performance on inverse problem solving using our diffusion model. To maximize the performance of both joint (unconditional) inference as well as marginal (conditional) predictions within the same model, one needs to tune the trade-off between unconditional and conditional log likelihood loss terms during training. Finally, the use of the DDIM scheduler proved to be crucial for achieving sufficiently fast sample generation during evaluation.
**only in terms of time-averaged non-coherent received power ... missing time and direction of arrival**
In our experiments, we indeed restrict ourselves to non-coherent receveived power to focus on the machine learning challenges (3D equivariance) that wireless channel modeling poses. We agree that, depending on the downstream use case of channel models, further channel information is required---and it is straightforward to embed with our tokenizer and process with our Wi-GATr backbone.
Nonetheless, learning large-scale characteristics such as received power is an important open-problem and an active area of research [35, 29, 24].
**Single antenna, does not cover larger bandwidths ... non-linearities can occur with larger bandwidths**
Our setting is reflective of infinite bandwidth, since we sum up powers of individual physical paths (see line 245).
Our model is trained directly on the received power and is agnostic to system-level details (e.g., bandwidth).
We thank the reviewer again for their detailed feedback. We hope we were able to address their concerns and look forward to discussing more.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their response, which did address some of my concerns. I did appreciate the additional results which demonstrated the benefits of the tokenization scheme. Based on the response and update, I am going to keep my initial score. Thank you and all the best. | Rebuttal 1:
Rebuttal: We thank the reviewers for their thorough and constructive feedback.
We are glad to hear that reviewers **LEve**, **3vGS**, **6Syy**, and **yJCq** appreciate the versatility of our approach to forward, inverse, and generative problems. Reviewer **LEve** specifically highlights the data efficiency of our equivariant approach as well as our thorough evaluation, while reviewers **3vGS**, **2AZ5**, and **6Syy** appreciated our novel geometric tokenization scheme for wireless scenes.
The reviewers commented on and asked in-depth technical questions: about the quality of our wireless simulations, the relevance of non-coherent received power, whether our evaluation is relevant for real-world situations, and if we included all appropriate baselines. These points are important, and we address them in individual responses as well as at the end of this response.
We want to emphasize that we view our **main message** as partially orthogonal to the reviewers' concerns.
We introduce an *equivariant* model for wireless simulation.
Such an equivariant model considers the full 3D geometry of the scene and the symmetries of the underlying physics.
Although not shown explicitly, we argue that these symmetries hold for other channel characteristics as well.
We show that including inductive bias in the model leads to much better generalization to novel scenes than vanilla transformers.
We would appreciate hearing the thoughts of reviewers **3vGS**, **2AZ5**, and **6Syy** about our geometry- and symmetry-focused approach to wireless channel modelling.
Let us now comment on specific technical **criticisms and questions**, which are immensely helpful for us in improving the paper.
**Limited real-world evaluation ... benchmarks with real-world traces missing** (reviewers LEVe, 2AZ5)
Firstly, we remark that evaluations on simulated data have their own merits: it allows evaluation on a large-scale (e.g., diverse environments, locations of transmitter/receiver) in a controlled setting and enables better analysis of approaches and results.
While we agree with reviewers that it would be ideal to extend our evaluation to real-world datasets, we are limited by their availability.
To the best of our knowledge, no relevant large-scale real-world dataset exist for the problem we tackle and hence previous works [29, 41, 35, 25] have also largely relied on simulated datasets.
In fact, even the simulated datasets that exist (e.g., Wi3Rooms [41], Etoile/Munich [29]) are small-scale with low diversity in terms of 3D environments.
This lack of high-quality wireless datasets with diverse scenes motivated us to generate and release two new datasets with this paper.
Our symmetry considerations apply in the real-world and in simulation, thus our key message - that wireless channel modelling is a geometric problem and one should use symmetry-aware architectures and algorithms to solve it - should hold both in simulation and real-world measurements.
**Scalability to large datasets and scenes .. complex meshes ... computation load** (reviewers LEve, 6Syy)
We already evaluate our approach on "large-scale" (reviewers 3vGS, yJCq) datasets (e.g., 1K WiPTR diverse scenes proposed in this paper) -- this is significantly larger than previous efforts.
In addition, we ran scalability experiments and found promising results.
As shown in Fig. R4 in the rebuttal pdf, Wi-GATr remains at sub-second latency until $\sim$10K mesh faces.
Although scenes with 500K mesh faces takes up to an hour with our vanilla implementation, based on preliminary experiments, this is faster than a conventional ray tracer.
**Baselines ... ablation ... benefits of tokenization unclear** (reviewers 3vGS, 6Syy)
While our new geometric tokenization scheme for wireless scenes is one of our key contributions, we did not carefully ablate its relevance empirically. We are grateful to the reviewers for pointing this out and added an ablation. In Fig. R1 of the additional result page, we now also show a naive transformer model that does not use our tokenization scheme. It performs substantially worse than both the transformer with our tokenization and our main method Wi-GATr.
**Baselines ... comparison with neural and differentiable ray tracers ... Sionna-RT [29]** (reviewers 6Syy, yJCq)
We appreciate the suggestion. We added a comparison to Sionna RT, in Figures R2 and R3 of the rebuttal result page. Since Sionna RT does not model transmission effects, we observe it gives poor results on our indoor scenes.
We hope we were able to address the reviewers' concerns and clarify some misunderstandings. We are looking forward to an insightful discussion.
Pdf: /pdf/07efc002166ffbef367e007be623e9548ee34bf1.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper presents the Wireless Geometric Algebra Transformer (Wi-GATr), a new architecture for simulating wireless signal propagation in 3D environments. This model utilizes geometric algebra to handle the geometric complexities of wireless scenes and ensures E(3) equivariance to respect the symmetries of the physical problem. The authors introduce two datasets, Wi3R and WiPTR, to benchmark their model. Wi-GATr outperforms existing baselines in terms of prediction fidelity and data efficiency, and it can solve both forward (signal prediction) and inverse (receiver localization and geometry reconstruction) problems in wireless communication.
Strengths: 1. The integration of geometric algebra for handling complex 3D geometric data and ensuring E(3) equivariance is a novel and effective approach. This addresses the core challenge of accurately modeling wireless signal propagation in diverse environments.
2. The paper provides a thorough evaluation of Wi-GATr against multiple baselines across various tasks, demonstrating superior performance in signal prediction, receiver localization, and geometry reconstruction.
3. Wi-GATr shows remarkable data efficiency, achieving high-fidelity predictions with significantly less training data compared to other models. This is particularly beneficial for scenarios where obtaining large amounts of training data is challenging.
4. The model's ability to handle both forward (predictive modeling) and inverse (localization and reconstruction) problems showcases its versatility and potential for a wide range of applications in wireless communication.
Weaknesses: 1. Limited Real-World Testing: While the model performs well on the introduced datasets, its application in real-world, dynamic environments remains underexplored. Additional experiments in more varied and complex real-world scenarios, such as urban or industrial settings, would strengthen the paper.
2. Scalability and Computational Load: The paper could provide more detailed insights into the computational requirements and scalability of Wi-GATr. Understanding the model's performance with larger datasets and more complex environments would be valuable for practical deployment.
3. Generalizability Across Frequencies: The model is tested at a specific frequency (3.5 GHz). Evaluating its performance across different frequencies and under various signal conditions would provide a more comprehensive understanding of its robustness and generalizability.
4. Detailed Case Studies: While the paper presents strong experimental results, including more detailed case studies or examples of practical applications, such as network design or optimization in real-world environments, would illustrate the model's impact and practical benefits.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. The paper focuses on indoor scenes for evaluation. Have you considered testing Wi-GATr on datasets that include outdoor or mixed environments to assess its robustness and generalizability?
2. The paper mentions solving inverse problems like receiver localization and geometry reconstruction. Can you provide more detailed examples or case studies where these capabilities significantly outperform traditional methods?
3. Given the computational complexity, how does Wi-GATr perform in real-time applications? Can it be integrated into real-time wireless network management systems, and if so, what are the challenges?
4. The datasets introduced are focused on indoor environments. Have you considered incorporating additional data sources, such as satellite imagery or LIDAR data, to enhance the model's accuracy and applicability in different environments?
5. How does the model adapt to changes in the environment, such as new constructions or changes in material properties? Is there a mechanism for updating the model dynamically to reflect these changes?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors briefly discussed the limitation in Sec. 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer LEve for providing a detailed review of our paper.
We are glad that the reviewer appreciates our equivariant approach to model diverse environments, the strong performance in our thorough evaluation, and the data efficiency and versatility of our approach on forward and inverse problems.
Further, we appreciate the useful comments to improve our manuscript.
Now, we address reviewer LEve’s concerns and also indicate if it is a common concern shared by fellow reviewers.
**Limited real-world testing** (common concern - reviewers LEVe, 2AZ5)
Firstly, we remark that evaluations on simulated data have their own merits: it allows evaluation on a large-scale (e.g., diverse environments, locations of transmitter/receiver) in a controlled setting and enables better analysis of approaches and results.
While we agree with reviewers that it would be ideal to extend our evaluation to real-world datasets, we are limited by their availability.
To the best of our knowledge, no relevant large-scale real-world dataset exist for the problem we tackle and hence previous works [29, 41, 35, 25] have also largely relied on simulated datasets.
In fact, even the simulated datasets that exist (e.g., Wi3Rooms [41], Etoile/Munich [29]) are small-scale with low diversity in terms of 3D environments.
This lack of high-quality wireless datasets with diverse scenes motivated us to generate and release two new datasets with this paper.
Our symmetry considerations apply in the real-world and in simulation, thus our key message - that wireless channel modelling is a geometric problem and one should use symmetry-aware architectures and algorithms to solve it - should hold both in simulation and real-world measurements.
**Scalability and Computational Load** (common concern - reviewers LEve, 6Syy)
In Fig. 7 in Appendix D.1, we show timing measurements for a single room. Predicting the received power with Wi-GATr takes 0.02 s per Tx-Rx link, which is substantially faster than the ray tracers with similar accuracy.
Thanks for the suggestion to study scalability to larger scenes, which we add in Fig. R4 in the rebuttal result page. Wi-GATr remains at sub-second latency for up to 10k tokens. The quadratic compute scaling of the attention mechanism with the number of tokens means that larger scenes with millions of mesh faces take hours to evaluate.
**Generalizability Across Frequencies**
Evaluating generalization across frequencies makes for an interesting experiment.
However, the focus of this paper was generalization across *scenes*, since we believe this plays a larger role on wireless characteristics (unlike carrier frequencies, which are typically regulated and fixed).
Nonetheless, our approach does not make any assumption on the carrier frequency and would make an ideal candidate for this experiment. We will consider this in future.
**The paper focuses only on indoor evaluation ... considered outdoor/mixed?** (common concern - reviewers LEve, 2AZ5)
Evaluation on outdoor scenarios would make a great addition.
The physics (e.g., diffraction, reflections) of wireless signals would largely remain the same as a function of the 3D geometry in both indoor or outdoor settings.
Nonetheless, in this paper, we choose indoor scenarios since we believe it is a more challenging scenario to evaluate influence of 3D scenes towards wireless characteristics.
For instance, heights of surfaces play a bigger role (since signals can reflect off floors and ceilings) and transmissions through surfaces have significant contributions towards receive powers in non-LOS regions.
**Comparison of inverse problems with traditional methods**
We have provided an additional comparison to differentiable ray tracing (see *Sionna* in rebuttal pdf Fig. R2) on the forward problem and conclude it is not applicable to the inverse problem given its current accuracy. Other neural network methods we are aware of require data from the same scene. We evaluate on novel scenes not seen during training. We appreciate pointers to relevant methods that apply to this generalization setting.
**How does Wi-GATr perform in real-time applications? ... Can it be integrated into real-time wireless network management systems ... what are the challenges?**
The answer depends on the application and the required latency.
Our approach predicts at sub-second latency for scenes with up to 10k mesh faces, see Fig. R2 and R4 in the rebuttal pdf.
However, certain applications will require faster inference speeds.
For instance, receiver localization through gradient descent might be too slow for real-time applications.
Nonetheless, we expect that the inference speed of our architecture can still be optimized using standard tricks (e.g., compilation of computation graph, distillation, efficient diffusion samplers).
**Have you considered additional data sources ... satellite imagery, LIDAR ... to enhance the model's accuracy and applicability**
This is an interesting idea and would further highlight the versatility of learning-based simulation approaches. We have not considered it in this paper.
**How does the model adapt to changes in the environment ... such as new constructions**
Our approach takes the description of the environment as input, and our predictions adapt naturally to changing conditions.
For instance, one of our out-of-distribution test scenarios "OOD layout" considers removing walls (see Table 2 of main paper, WiPTR dataset) and we observe Wi-GATr adapts.
Similarly, our network predictions will change if the materials are changed.
**Detailed case studies ... practical applications**
We have shown the application of model to receiver localization in our extensive evaluation. This problem has practical use cases in robotics or, in the form of transmitter localization, in base station placement.
We thank the reviewer again for their feedback. We hope we were able to address their questions and look forward to discussing further.
---
Rebuttal Comment 1.1:
Title: Thanks for your response.
Comment: The unclear effects in real-world scenarios and outdoor environments reduce the contribution of this paper. The authors claim that indoor rooms are more critical and challenging. However, this does not mean that evaluating their work in outdoor scenarios is unnecessary. I acknowledge this work's novelty, and it's ok for me that the work is not evaluated on real-world data, but I think the experiments on the simulation data are still insufficient.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for taking the time to read our rebuttal. We would like to clarify one thing: we did not mean to imply that outdoor evaluations are unnecessary or that indoor measurements are more critical – on the contrary, we would find an analysis on outdoor data just as valuable.
However, we want to stress that the symmetries of the underlying physics are the same in indoor and outdoor problems. We therefore believe that our high-level findings about the benefits of geometric, symmetry-aware machine learning models should also apply to outdoor scenes. | null | null | null | null | null | null |
Subject-driven Text-to-Image Generation via Preference-based Reinforcement Learning | Accept (poster) | Summary: Summary:
This paper proposed a method which utilized ALIGN model to calculate $\lambda$-Harmonic reward function and combine binary cross-entropy function and DPO as the preference loss to supervise image generation models.
Strengths: Strength:
1. The method is computation-efficient while achieving the highest scores on CLIP-I and CLIP-T.
2. Successfully extend preference optimization methods to Subject-driven image generation.
Weaknesses: Weakness:
1. I think there should be more visualization results showing the superiority of the proposed method. However, even considering the images in Appendix, the number of generated images is still not enough.
2. RPO does not achieve satisfactory performances in DINO scores. The authors have explained the reason, but I still wonder whether there is any method that could prevent the loss of detailed pixel-level information.
Technical Quality: 3
Clarity: 3
Questions for Authors: No question. Just hope the authors could provide more visualization examples and discuss more about DINO scores.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review and thoughtful feedback. Please see our responses to your specific questions below.
**Q: More visual results**
We have added 32 images generated by RPO to the attached PDF on global response, specifically focusing on prompts that are both unseen in the training data and highly imaginative.
**Q: Improve DINO scores**
To preserve the details of the reference images, we may need to use ControlNet [1] or incorporate an additional cross-attention layer for the reference images within the U-Net component [2]. Mathematically, these methods will modify the distribution from $p_{\theta}(x_{t - 1} | c, x_t)$ to $p_{\theta}(x_{t - 1} | c, x_t, x_{\text{ref}}),$ allowing the model to capture extra information from the reference images during the inference phase. However, RPO does not have any assumptions about the model architecture. Therefore, in future work, we will integrate the ControlNet or cross-attention layer approach with RPO to improve DINO scores.
[1] Zhang, Lvmin, Anyi Rao, and Maneesh Agrawala. "Adding conditional control to text-to-image diffusion models." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
[2] Chen, Wenhu, et al. "Subject-driven text-to-image generation via apprenticeship learning." Advances in Neural Information Processing Systems 36 (2024).
---
Rebuttal Comment 1.1:
Title: Please respond to the authors' rebuttal
Comment: Dear Reviewer u7k9,
Thank you again for reviewing this paper. Since the reviewer-author discussion phase is closing soon, could you please respond to the authors' comments?
Best,
AC | Summary: The paper introduces a novel method for generating text-to-image outputs that incorporate specific subjects from reference images. The authors propose a λ-Harmonic reward function and a Reward Preference Optimization (RPO) algorithm, which simplifies the training process and enhances the model's ability to maintain subject fidelity while generating diverse and contextually accurate images. The approach outperforms existing methods like DreamBooth and SuTI in various metrics, demonstrating significant improvements in training efficiency and image quality.
Strengths: 1. **Innovative Reward Function**: The introduction of the λ-Harmonic reward function is a notable advancement, providing a robust reward signal that facilitates early stopping and regularization, thus accelerating the training process.
2. **Efficient Training Process**: The proposed RPO method significantly reduces the computational resources required for training by only needing 3% of the negative samples used by DreamBooth and fewer gradient steps, making it highly efficient.
3. **Empirical Performance**: The method achieves state-of-the-art performance on the DreamBench dataset, with superior CLIP-I and CLIP-T scores, indicating strong text-to-image alignment and subject fidelity.
4. **Simplicity**: The approach simplifies the training pipeline by fine-tuning only the U-Net component of the diffusion model, avoiding the need for optimizing text embeddings or training a text encoder.
Weaknesses: 1. **Limited Evaluation Metrics**: While the paper uses DINO and CLIP-I/CLIP-T scores, it would benefit from additional evaluation metrics that capture other aspects of image quality, such as perceptual quality or user satisfaction.
2. **Overfitting Risk**: Although the λ-Harmonic reward function helps in regularization, there is still a noted risk of overfitting, particularly in generating images with high text-to-image alignment but lower uniqueness in certain features.
Technical Quality: 3
Clarity: 3
Questions for Authors: How does the proposed method perform with completely unseen or highly imaginative prompts that deviate significantly from the training data?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review and thoughtful feedback. Please see our responses to your specific questions below.
**Q: Limited evaluation metrics.**
Limited evaluation metrics are a common issue in subject-driven tasks. We appreciate you raising this issue and providing reference metrics, such as image quality. In the table below, we report the average aesthetic scores [1] of the real reference images in DreamBench and the average aesthetic scores obtained with the SOTA CLIP I/T lambda configuration ($\lambda_{\text{val}} = 0.5$) in DreamBench. We observe that RPO does not decrease the quality of images; instead, the generated images achieve slightly better quality than the reference images.
| Method | Aesthetic Score |
| :---------- | :-------------- |
| Real images | 5.145 $\pm$ 0.312 |
| Ours: RPO | **5.208 $\pm$ 0.327** |
**Q: Overfitting risk.**
We observe this problem and propose a hypothesis in lines 228 to 231. To solve this overfitting issue, we may need to use ControlNet [2] or add an additional cross-attention layer to the reference images within the U-Net component [3]. Mathematically, using these methods, the distribution will be changed from $p_{\theta}(x_{t - 1} | c, x_t)$ to $p_{\theta}(x_{t - 1} | c, x_t, x_{\text{ref}}),$ which can capture the information from the reference images during the inference phase. However, RPO has no assumptions about the model architecture. Therefore, in future work, we will combine the ControlNet or cross-attention layer approach with RPO to alleviate this overfitting risk.
**Q: Unseen prompts visualization results.**
We have added 32 images generated by highly imaginative prompts to the attached PDF on global response.
[1] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Worts- man, Patrick Schramowski, Srivatsa Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, and Jenia Jitsev. "Laion-5b: An open large-scale dataset for training next generation image-text models", 2022.
[2] Zhang, Lvmin, Anyi Rao, and Maneesh Agrawala. "Adding conditional control to text-to-image diffusion models." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
[3] Chen, Wenhu, et al. "Subject-driven text-to-image generation via apprenticeship learning." Advances in Neural Information Processing Systems 36 (2024).
---
Rebuttal Comment 1.1:
Title: Please respond to the authors' rebuttal
Comment: Dear Reviewer 5xjH,
Thank you again for reviewing this paper. Since the reviewer-author discussion phase is closing soon, could you please respond to the authors' comments?
Best,
AC
---
Rebuttal Comment 1.2:
Comment: Thank you for addressing most of my concerns. I have no major issues with the methods and experiments presented in the paper. As a result, I will maintain my current score. | Summary: This paper proposes a Reward Preference Optimization (RPO) method, introducing the 𝜆-harmonic reward function to address overfitting and accelerate the fine-tuning process in subject-driven text-to-image generation. Experimental results demonstrate the effectiveness of the proposed approach.
Strengths: 1. By introducing the λ-harmonic reward function, this model can achieve simple and efficient subject-driven text-to-image generation.
2. Quantitative comparisons demonstrate that this model achieves superior text-alignment results.
Weaknesses: 1. The novelty of the proposed method is limited for the following reasons. First, although this paper uses the weighted harmonic mean [ref-1] as the reward function, it is a classical Pythagorean mean. Second, the loss 𝐿_𝑝𝑒𝑟 is a simple variation of the binary cross-entropy and DPO.
2. The experimental results do not effectively demonstrate the advantages of the proposed method. Although the proposed RPO method performs better in text-alignment in quantitative comparisons, this improvement is not evident in Figure 3. Specifically, in the first and second rows, the terms "eating" and "on table" do not appear to be closely aligned.
3. The comparisons are unfair since this paper does not compare the proposed method with any reinforcement methods, such as DPO.
4. The ablation studies show the effectiveness of the loss 𝐿_𝑟𝑒𝑓. However, it is not defined in this paper.
5. More details about the CLIP and DINO models need to be provided, as stronger models may yield more detailed results, both in your findings and in compared results.
[ref-1] Weighted Harmonic Means, Complex Anal. Oper. Theory (2017) 11:1715–1728.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see weaknesses.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: no
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review and thoughtful feedback. Please see our responses to your specific questions below.
**Q: Limited novelty of reward function and loss.**
Yes, the $\lambda$-harmonic reward function is indeed a classical Pythagorean mean with weights. In fact, when $\lambda = 0.5$, it is exactly the classical Pythagorean mean. The novelty of our contribution comes from the combination of using this reward function along with reinforcement learning-based finetuning approach, not from the reward function alone. There are two aspects to consider in the design of the $\lambda$-harmonic reward function: $\lambda$ provides flexibility to the user in skewing the relative importance of image-to-image alignment and text-to-image alignment. Also, as mentioned in lines 139 to 143, the harmonic variant is preferred over geometric and arithmetic counterparts because we do not want the fine-tuning process to ignore text-to-image alignment since they tend to yield lower reward values.
While $\mathcal{L}_{\text{pre}}$ is a variation of binary cross-entropy (BCE) and DPO, it has an important difference albeit the seemingly unimpressive formulation. Validation in DPO is done using an external model or human evaluation [1], while trained using a different approach using raw preference data. Our approach uses binary labels (hence requiring the BCE loss) sampled from the reward model to fine-tune, but the validation also comes from the same reward model. This reduces distributional shifts between training and validation.
Using the $\lambda$-Harmonic function and our simple loss function (Equation 9), RPO allows for early stopping and requires only 3% of the negative samples used by DreamBooth, as well as fewer gradient steps, making it highly efficient.
**Q: Quantitative comparisons do not show the improvement.**
For Figure 3, as we discussed in our paper (Lines 234 to 236), the first prompt is not grammatically correct: "A dog eating a cherry bowl." The correct prompt should be "A dog eating a cherry from a bowl." The trained model was confused by the original incorrect prompt and generated this image. As for the concern over the misalignment with "table" in the second row, the image generated by our model clearly shows the object standing on a table (at the bottom of the image). Furthermore, we also provide an additional comparison in the Appendix A.3. RPO can address some of the failure cases in DreamBooth and SuTI.
**Q: Comparison to DPO.**
The original DPO is not suitable for subject-driven tasks because the datasets do not contain preference labels. We introduce the $\lambda$-harmonic function and design a variant of DPO for this task. To address your concern, we implemented a pure DPO for diffusion [2] (without image similarity loss) using preference labels for image-to-image similarity and text-to-image alignment. We chose $\lambda_{\text{train}} = 0.5$ because this value assigns equal weights to image-to-image similarity and text-to-image alignment. For a fair comparison, we also report the results from RPO with the same $\lambda_{\text{train}} = 0.5$. The results on DreamBench for these two methods are shown in the following table.
| Method | DINO | CLIP-I | CLIP-T |
| :-------- | :-------- | :-------- | :-------- |
| DPO | 0.338 | 0.702 | **0.334** |
| Ours: RPO | **0.649** | **0.819** | 0.314 |
These results show that DPO can capture the text-to-image alignment from the preference labels. However, without $\mathcal{L}_{\text{sim}}$, DPO faces a significant overfitting problem; i.e., it achieves high text-to-image alignment but cannot preserve the subject's unique features.
**Q: Undefined function.**
Thank you for pointing out the typo. **$\mathcal{L}_{\text{ref}}$** should have in fact been **$\mathcal{L}_{\text{pre}}$**.
**Q: Details on CLIP and DINO.**
CLIP consists of two encoders: one for the image and one for the text [3]. Suppose a dataset contains pairs of image and caption. The objective is to ensure that the encoding of the image and the encoding of the text (caption) are aligned in terms of cosine similarity. This is achieved by contrastive learning using positive and negative pairs, which naturally proves useful in our case since this cosine similarity can be used as a reward signal for both image-image and image-text alignment. DINO, on the other hand, encodes only images and in a significantly different manner [4]. Image augmentations are made with random perturbation and cropping, passed through the student and teacher models to produce representations that summarize the image. This makes a better fine-grained understanding of images than CLIP in terms of producing accurate rewards, but cannot provide signals on text-image alignment.
[1] Rafailov, Rafael, et al. "Direct preference optimization: Your language model is secretly a reward model." *Advances in Neural Information Processing Systems 36*. 2024.
[2] Wallace, Bram, et al. "Diffusion model alignment using direct preference optimization." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
[3] Radford, Alec, et al. "Learning transferable visual models from natural language supervision." *International conference on machine learning*, PMLR. 2021.
[4] Caron, Mathilde, et al. "Emerging properties in self-supervised vision transformers." *Proceedings of the IEEE/CVF international conference on computer vision*. 2021.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. Most of my concerns have been addressed. While the novelty is somewhat incremental, I believe this is good work. Therefore, I’ve decided to raise my score to borderline accept in support of your efforts. Thank you for your hard work.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful feedback and for raising your score. We are glad that our response addressed most of your concerns, and we appreciate your recognition of our work. | Summary: This paper presents a method for generating personalized images from text using the λ-Harmonic reward function in a system called Reward Preference Optimization (RPO). Specifically, RPO fine-tunes a pre-trained text-to-image diffusion model using a reinforcement-learning-based objective concerning with the harmonic averaged value of text similarity and image similarity. This objective is used in a reinforcement learning framework for diffusion models. The results demonstrate that the proposed techniques improve the performance of the DreamBooth baseline and alleviate the image overfitting problem.
Strengths: * The paper is well-structured and easy to follow.
* The proposed λ-Harmonic reward function is reasonable and has achieved better results than the baseline.
Weaknesses: * The authors could introduce briefly how does the method support adjusting λ at the test time. How about the method comparing with adding classifier-based guidance?
* The wall-clock time analysis of the proposed method against the baseline DreamBooth in training and sampling is missing.
* The experimental analysis of why harmonic mean is used instead of arithmetic mean is missing.
* Comparisons could be made more extensive. There are more state-of-the-art personalized generation methods nowadays like [a,b].
* The authors may want to compare with more baselines that also support adjusting the text-image-tradeoff at the test time [c,d,e].
* The motivation to alleviate the image overfitting and enhance the prompt adherence has been explored in [f]. Comparisons or at least some discussions are necessary.
[a] Disenbooth: Identity-preserving disentangled tuning for subject-driven text-to-image generation, Chen et al., ICLR 2024.
[b] Multi-concept customization of text-to-image diffusion, Kumari et al., CVPR 2023.
[c] ELITE: Encoding Visual Concepts into Textual Embeddings for Customized Text-to-Image Generation, Wei et al., ICCV 2023.
[d] IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models, Ye et al..
[e] SSR-Encoder: Encoding Selective Subject Representation for Subject-Driven Generation, Zhang et al., CVPR 2024.
[f] PALP: Prompt Aligned Personalization of Text-to-Image Models, Tewel et al..
Technical Quality: 3
Clarity: 3
Questions for Authors: * The metrics evaluations in Tables 1 and 2 appear inconsistent, despite seemingly identical parameters (λval = 0.3).
* Additionally, in the metrics ALIGN-I and ALIGN-T, I am unclear about the rationale behind adding "+1" to the similarity calculations.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This paper have discussed it’s limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review and thoughtful feedback. Please see our responses to your specific questions below.
**Q: How does the method allow for adjusting $\lambda$ during inference, and how does it compare to classifier-guidance inference?**
Our contribution is on the fine-tuning step on a pretrained diffusion model. Currently, our method (RPO) only provides reward or preference signals during the fine-tuning phase. $\lambda_{\text{val}}$ allows flexibility in the model-selection process for the trade-off between image-image alignment and text-image alignment for subject-driven tasks (details can be seen in Table 3). The reward function selects the optimal checkpoint during fine-tuning, and we do not modify the diffusion reverse process for the inference phase. Therefore, our inference is the same as the SD, which is classifier-free.
**Q: Wall-clock comparison between DreamBooth.**
We define the fine-tuning time as the sum of the preparation time and training time. The preparation time refers to the time spent generating negative samples. We use RPO and DreamBooth to fine-tune SD on 4 TPU v4 nodes and report the wall-clock fine-tuning time for these two methods in the following table.
| Method | Aesthetic Score |
| :---------- | :-------------- |
| DreamBooth | 28 min 38.87 sec |
| Ours: RPO | **7 min 59.78 sec** |
**Q: Experimental results of arithmetic mean is missing.**
We set the harmonic mean as the arithmetic mean and test the $\lambda_{\text{val}} = 0.3, 0.5, 0.7$. We report the results of the arithmetic mean reward function in the following table. We refer readers to Table 3 in our paper for comparison. We discussed the difference between the harmonic mean and the arithmetic mean from lines 139 to 143. The arithmetic mean is not very sensitive to smaller values; it tends to maximize higher values to achieve a better final score. In practice, ALIGN-I will receive a higher value (this effect can be seen from CLIP-I and CLIP-T in Tables 1 to 3). Thus, the model will tend to optimize image-to-image alignment and achieve good results on DINO and CLIP-I but have a lower score for text-to-image alignment.
| | DINO | CLIP-I | CLIP-T |
| :----------------------------- | :-------- | :-------- | :-------- |
| $\lambda_{\text{val}} = 0.3$ (arithmetic) | 0.638 $\pm$ 0.083 | 0.823 $\pm$ 0.037 | 0.318 $\pm$ 0.027 |
| $\lambda_{\text{val}} = 0.5$ (arithmetic) | 0.702 $\pm$ 0.078 | 0.857 $\pm$ 0.047 | 0.295 $\pm$ 0.017 |
| $\lambda_{\text{val}} = 0.7$ (arithmetic) | 0.678 $\pm$ 0.085 | 0.851 $\pm$ 0.041 | 0.299 $\pm$ 0.026 |
**Q: Extensive comparisons with other baseline methods.**
We encourage readers to refer to the *Extensive comparisons with other baseline methods* section in the global response.
**Q: Discussion of PALP**
We encourage readers to refer to the *Discussion of PALP* section in the global response.
**Q: Table values seem to be inconsistent**
Table 1 results from $\lambda_{\text{val}} = 0.5$ (Details show in Table 3). Table 2 results from $\lambda_{\text{val}} = 0.3$, which is our default setting.
**Q: Unclear about the rationale behind adding "+1" to the similarity calculations.**
Firstly, harmonic mean only supports elements that are non-negative [1]. Secondly, our measure of alignment is $\text{CosSim}(u, v) \in [-1, +1]$. Therefore, we add 1 then divide the sum by 2 to ensure each element is between 0 and 1.
[1] Scipy Library. https://github.com/scipy/scipy/blob/v1.14.0/scipy/stats/_stats_py.py#L215-L307
---
Rebuttal Comment 1.1:
Comment: Thanks for the response especially the additional results, which alleviate some of my concerns. I have increased my score to reflect this. Nevertheless, I still think the insight of this paper is not so strong given the existence of [f].
---
Reply to Comment 1.1.1:
Comment: Thank you again for taking the time to review our paper and for your valuable feedback. We are pleased to hear that the additional results addressed some of your concerns, and we appreciate your recognition of the improvements. | Rebuttal 1:
Rebuttal: # Global response
We would like to thank all the reviewers for providing high-quality reviews and constructive feedback. We are encouraged that the reviewers think our paper *"generated images presented in the paper demonstrate superior text fidelity of the proposed method. (Reviewer tGfs)"*, *"proposed reward-based model selection and regularization are reasonable and compelling (Reviewer tGfs)"*, *"significantly reduces the computational resources (Reviewer 5xjH)"*, and *"successfully extend preference optimization methods to subject-driven image generation (Reviewer u7k9)"*.
Below, we summarize our major work for this rebuttal.
- **Extensive baseline comparison.**
In response to Reviewers tGfs, Kj9e, and BTXt, we compare RPO to additional baseline methods.
- **Additional visual results.**
We generated 32 additional images for the unseen and highly imaginative prompts to address the concerns of Reviewers 5xjH and u7k9.
## Additional Response to Reviewer Kj9e
Due to the character limit for the rebuttal, we provide extensive comparisons with additional baseline methods and discuss the differences between our approach and another method, PALP [8], which focuses on balancing text-to-image alignment.
**Q: Extensive comparisons with other baseline methods.**
We compared all personalization and text-image trade-off baseline methods Reviewer Kj9e mentioned in the DreamBench dataset, and the results are shown in the following table. We highlight that our method, RPO, still achieves the highest CLIP-I and CLIP-T results across these new baselines.
| Method | DINO | CLIP-I | CLIP-T |
| :---------------- | :-------- | :-------- | :-------- |
| DisenBooth [1] | 0.574 | 0.755 | 0.255 |
| Multi-concept [2] | **0.695** | 0.801 | 0.245 |
| ELETE [3] | 0.652 | 0.762 | 0.255 |
| IP-Adapter [4] | 0.608 | 0.809 | 0.274 |
| SSR-encoder [5] | 0.612 | 0.821 | 0.308 |
| Ours: RPO | 0.652 | **0.833** | **0.314** |
**Q: Discussion of PALP**
The PALP method [6] is designed for balancing the trade-off between personalization and text-to-image alignment. PALP involves two different phases: the personalization phase and the prompt-aligned phase. The first phase minimizes the image similarity loss. During the second phase, PALP optimizes the model by taking the gradient (Equation 6 in [6]). Here we use $\theta$ to represent the fine-tuning parameters, $\phi$ be the parameters of the pre-trained model, $\hat{x}_0(\theta)$ is a prediction of $x_0$ (Equation 15 in [7]), $\hat{x}_t(\theta) = \sqrt{\bar{\alpha}_t} \hat{x}_0(\theta) + \sqrt{1 - \bar{\alpha}_t} \epsilon$, $\epsilon \sim \mathcal{N}(0, \mathbf{I}),$ and $y_p$ is the personalized prompt with an identity token, and $y_c$ is the clean prompt, e.g., **$y_p = \text{``a photo of a [V] dog''}$** and **$y_c = \text{``a photo of a dog''}$**.
Following the idea from [8] (Appendix A.4 in [8]), we can derive the following equation:
$$\nabla_{\theta} L_{PALP} \propto \nabla_{\theta} \mathbb{D}_{\text{KL}}\big(\text{model}(\hat{x}_t(\theta) \mid y_p) \| \text{pre-trained model}(\hat{x}_t(\theta) \mid y_c)\big)$$
Thus, updating parameters by $\nabla_{\theta} L_{PALP}$ is equivalent to minimizing the KL divergence between **$p_{\theta}(\hat{x}_t(\theta) \mid y_p)$** and **$p_{\phi}(\hat{x}_t(\theta) \mid y_c)$**. PALP alleviates the text-to-image alignment by restricting the learned model to be close to the pretrained model. However, in RPO, we optimize the lower bound of RL objective function (Equation 3 in our paper).
RPO does not only include the penalty of the KL divergence between the learned probability distribution and the pretrained probability distribution, but also utilizes the reward signals. Compared to PALP, our method should be more flexible since we have no assumptions about the reward signals, and these reward signals can also be adjusted to other objectives, e.g., aesthetic score.
[1] Disenbooth: Identity-preserving disentangled tuning for subject-driven text-to-image generation, Chen et al., ICLR 2024.
[2] Multi-concept customization of text-to-image diffusion, Kumari et al., CVPR 2023.
[3] ELITE: Encoding Visual Concepts into Textual Embeddings for Customized Text-to-Image Generation, Wei et al., ICCV 2023.
[4] IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models, Ye et al.
[5] SSR-Encoder: Encoding Selective Subject Representation for Subject-Driven Generation, Zhang et al., CVPR 2024.
[6] Arar, Moab, et al. PALP: Prompt Aligned Personalization of Text-to-Image Models. arXiv preprint arXiv:2401.06105 (2024).
[7] Ho, Jonathan, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems 33 (2020): 6840-6851.
[8] Ben Poole, Ajay Jain, Jonathan T. Barron, and Ben Mildenhall. Dreamfusion: Text-to-3d using 2d diffusion. In The
Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023.
Pdf: /pdf/ebd8c747f279f0e95e99560de88e89540b150bc1.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper proposes a harmonic reward function for text-to-image personalization. Specifically, it uses this reward function to perform early stopping and regularization. Experimental results demonstrate the effectiveness of the proposed method.
Strengths: * Their proposed reward-based model selection and regularization are reasonable and compelling.
* The generated images presented in the paper demonstrate superior text fidelity of the proposed method.
Weaknesses: * More baselines are needed. Specifically, since this paper is similar to DCO [1], a comparison with this method is needed.
[1] Lee, Kyungmin, et al. "Direct consistency optimization for compositional text-to-image personalization." arXiv preprint arXiv:2402.12004 (2024).
Technical Quality: 3
Clarity: 2
Questions for Authors: What are the results when the lambda value of the harmonic reward function is not set to 0 in the preference loss during training?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review and thoughtful feedback. Please see our responses to your specific questions below.
**Q: Comparison to DCO.**
Lee et al. [1] use SDXL [2] as the backbone model and apply LoRA to fine-tune the pretrained model. For a fair comparison, we only use LoRA to fine-tune the U-Net component for both methods (DCO and RPO). Furthermore, we implement RPO with SDXL as the backbone model, training the LoRA parameters with 1000 gradient steps and saving the checkpoint from the last gradient step, i.e., without early stopping. RPO achieves better image-to-image similarity results than DCO (higher DINO and CLIP-I). Even without early stopping, the CLIP-T score is only slightly lower (by 0.016) than DCO.
| Method | Backbone | DINO | CLIP-I | CLIP-T |
| :----- | :------- | :---- | :----- | :----- |
| DCO | SDXL | 0.593 | 0.792 | **0.343** |
| RPO (w/o early stopping) | SDXL | **0.644** | **0.823** | 0.327 |
**Q: What happens when $\lambda_{\text{train}} \neq 0$ during training?**
We test four different values of $\lambda_{\text{train}} = 0.3, 0.5, 0.7$ with the default $\lambda_{\text{val}} = 0.3$ and report their results in DreamBench. We observe that the image-to-image alignment increases with larger $\lambda_{\text{train}}$, but the text-to-image alignment decreases because the preference model tends to favor alignment with reference images and ignores the prompt alignment.
| | DINO | CLIP-I | CLIP-T |
| :----------------------------- | :-------- | :-------- | :-------- |
| $\lambda_{\text{train}} = 0.0$ | 0.581 $\pm$ 0.113 | 0.798 $\pm$ 0.039 | **0.329 $\pm$ 0.021** |
| $\lambda_{\text{train}} = 0.3$ | 0.646 $\pm$ 0.083 | 0.815 $\pm$ 0.037 | 0.315 $\pm$ 0.026 |
| $\lambda_{\text{train}} = 0.5$ | 0.649 $\pm$ 0.080 | 0.829 $\pm$ 0.039 | 0.314 $\pm$ 0.026 |
| $\lambda_{\text{train}} = 0.7$ | **0.651 $\pm$ 0.088** | **0.831 $\pm$ 0.033** | 0.314 $\pm$ 0.026 |
[1]: Lee, Kyungmin, et al. "Direct consistency optimization for compositional text-to-image personalization." arXiv preprint arXiv:2402.12004 (2024).
[2]: Podell, Dustin, et al. "Sdxl: Improving latent diffusion models for high-resolution image synthesis." arXiv preprint arXiv:2307.01952 (2023).
---
Rebuttal Comment 1.1:
Title: Please respond to the authors' rebuttal
Comment: Dear Reviewer tGfs,
Thank you again for reviewing this paper. Since the reviewer-author discussion phase is closing soon, could you please respond to the authors' comments?
Best,
AC | null | null | null | null | null | null |
Trade-Offs of Diagonal Fisher Information Matrix Estimators | Accept (poster) | Summary: The authors study two popular estimators of the Fisher information matrix with respect to neural network parameters. They derive upper and lower bounds for the variance of these estimators and showcase them in applications to regression and classification problems.
Strengths: - Analyzing the convergence properties of Fisher information estimators is a timely and nontrivial endeavor, in part due to their use in natural gradient descent.
- The paper is very extensive and mathematically sound.
Weaknesses: - The paper is highly technical and the numerical examples are fairly toy.
- The paper takes strong inspiration from a previous work [Soen, Alexander, and Ke Sun. "On the variance of the Fisher information for deep learning." Advances in Neural Information Processing Systems 34 (2021): 5708-5719] on the topic. As such, I question the value the present submission adds to this line of research. In particular, I think the submission does a poor job in highlighting the central novel aspects of their work and reviewing the field.
Technical Quality: 4
Clarity: 3
Questions for Authors: - The authors should clarify clearly to what extent their submission improves upon previous works, particularly [Soen, Alexander, and Ke Sun. "On the variance of the Fisher information for deep learning." Advances in Neural Information Processing Systems 34 (2021): 5708-5719]. How do their bounds compare to the ones previously derived?
- In my opinion, the impact of the submission would be substantially improved if the authors state more clearly in what scenarios which of the two estimators will be preferred and provide a rough guideline for practitioners.
- Could the authors provide a numerical example for regression similar to Fig. 2?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: There is no further need for authors to address limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing our contributions and their soundness.
> Re: *Significance vs [37]*
Please see our global response to all reviewers.
> Re: *Better clarity on which estimator is preferred*
Through our analysis, We have identified certain scenarios where one estimator
is superior to the other.
For example, ReLU networks can only apply $\\hat{\\mathcal{I}}_1$. As another
example, $\\hat{\\mathcal{I}}_2$ has zero variance in the last layer and is
always preferable than $\\hat{\\mathcal{I}}_1$. Similarly, in the second last
layer, $\\hat{\\mathcal{I}}_2$ has a simple closed form and preferable for
neurons in their linear regions (see Remark 4.5).
In general, Theorem 4.1 suggests that the variance of both estimators depend on
three factors: (1) the number of samples; (2) the derivatives of the neural
network output wrt its parameters; (3) moments of the sufficient statistics
$\\mathbf{t}(\\mathbf{y})$ (see Table 1). One has to incorporate these factors
based on their specific neural network and settings, to decide which estimator
to use. We agree with the reviewer to recall these points in the conclusion
section.
> Re: *Numerical examples are fairly toy*
Our contributions are mainly theoretical and do not depend on the empirical
results. In the paper and appendix, we use MNIST MLP with different activations
(sigmoid and log-sigmoid) and different number of layers (4 and 5) to showcase
the bounds of the variance and how the variance evolves over training time in
different layers of the neural network. These numerical examples mainly serves
the purpose of providing intuition. We do agree with the reviewer that more
extensive empirical studies are meaningful as future work.
> Re: *Numeric example for regression similar to Fig (2)*
We thank the reviewer for this suggestion. As remarked by Proposition 5.1, for
regression, all bounds in Theorem 4.1 becomes equalities. Therefore it is less
meaningful to examine the case of regression, where all interested quantities
$\\mathcal{I}$, $\\mathcal{V}_1$, and $\\mathcal{V}_2$ reduce to evaluation of the
Jacobian/Hessian of the parameter-output mapping. The limited time for the
rebuttal does not allow for redesigning the pipeline of FIM estimation on a new
dataset and learning task. We therefore hesitate to overstate our empirical
discoveries or to promise experimental extensions without clearly understanding
their implications.
We would also like to iterate that aim of this paper's contribution is
primarily theoretical. All though additional numerical experiments would be
nice, a thorough independent empirical study is out of this paper's scope and
is arguably would make an independent piece of work (examining various
settings, optimizers, and architectures etc).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their concise rebuttal and have updated my score accordingly. | Summary: The paper analyses two estimators of the Fisher Information matrix and specifically their variances, which (from reference 37) have closed-form but non-practical expressions. The authors show a sequence of inequalities and derive practical bounds for these variances, both element-wise and trace-wise, both conditioned to x and not. Several remarks and trade-offs on these bounds clarify when an estimator is preferable over the other.
The results hold in the general exponential family likelihood but are also further concretized for regression and classification.
Strengths: The paper is overall very well written and structured.
Notation is clear, results are properly presented and every step is further clarified in the appendix. \
The alternation between formal statements and explanations is great and makes the paper vey pleasant to read. I particularly like the amount of remarks and observations that discuss specific terms of the equations, explains them and connect them to each other. And the same is true for all the highlighted trade-offs (for example line 212-215)
Weaknesses: I have found no major weaknesses in the paper. I report some minor things.
I personally dislike the notation $\hat{\mathcal{I}}_1(\theta_i)$ for the i-th element of the diagonal introduced in line 87. This notation suggest that this term only depends on $\theta_i$, but it actually depends on the whole $\theta$ vector. On the other hand, this is quite a light easy notations, and I can't think of alternatives which are not much heavier to read.
Also, I find the definition in line 120 "is defined by $\hat{\mathcal{I}}_j(\theta_i)$ with $x=x_1=...=x_N$" to be yes understandable but not super clean. The alternatives are not as compact, but I'd recommend using the extra space to clarify this a bit better. At least you should write that $y_1,...,y_N$ are i.i.d. from $p(y|x;\theta)$ and not from $p(y,x;\theta)$ as in $\hat{\mathcal{I}}_j(\theta_i)$.
It would be nice to explicitly write the proof for the variance closed expression in Lemma B.1, yet clarifying that it's not a paper contribution. This is for two reasons: (1) it would make the paper self contained, without the need for the reader to read reference 37 and (2) the notation is slightly different and it would be easier for the reader to be consistent.
Technical Quality: 4
Clarity: 3
Questions for Authors: In line 168. Shouldn't "of Eqs. (4) and (6)" instead be "of Eqs. (4) and (5)"? And consistently shouldn't "$\mathcal{V}_1(\theta_i|x)$" be "$\mathcal{V}_2(\theta_i|x)$"? \
I ask this because "small shifts in parameter space yield large changes in the output" to me refers to the magnitude of the network jacobian $||\partial_i h(x)||$, and that appear in the bound for $V_1$, not for $V_2$. Am I misunderstanding something or is this a typo?
In line 246. Isn't the bound sample complexity $\mathcal{O}(\frac{1}{N_x} + \frac{1}{N_y})$? I can't see why it should be the product, can you elaborate more on this derivation?
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: I think it would be nice to clarify better that the work is incremental on "Soen et all - On the variance of the Fisher information for deep learning" [reference 37] and their closed form expression reported in the appendix in Lemma B.1. (also it would be nice to move these closed form expressions to the main paper).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing our contributions and praising our writing.
We respond to your remarks and suggestions as below.
> Re: *The notation* $\\hat{\\mathcal{I}}_{1}(\\theta_i)$
We propose to explicitly define it as $\\hat{\\mathcal{I}}_{1}(\\theta_i) := (\\hat{\\mathcal{I}}_1(\\theta))\_{i i}$ at the first appearance of this notation in
the beginning of Section 3, and add a sentence to remind the reader that
$\\hat{\\mathcal{I}}_1(\\theta_i)$ is an abuse of notation and actually depends on
the whole $\\mathbf{\\theta}$ vector.
> Re: *The notation* $\\hat{\\mathcal{I}}_j(\\theta_i \\mid \\mathbf{x})$ *with*
> $\\mathbf{x}=\\mathbf{x}_1=\\cdots=\\mathbf{x}_N$
We will rewrite the sentence to clarify that $y_1,\\cdots,y_N$ are i.i.d. from
$p(y \\mid \\mathbf{x}; \\mathbf{\\theta})$ as the reviewer has suggested.
> Re: *Explicit proof of Lemma B.1 to make the paper self-contained*
We agree to include such a proof that is consistent with our current notation.
> Re: *Line 168, Eqs. (4) and (6), the scale of* $\\mathcal{I}(\\theta_i \\mid
> \\mathbf{x})$ *and* $\\mathcal{V}_2(\\theta_i \\mid \\mathbf{x})$
Yes, this is a typo and the corrections suggested by the reviewers are
correct. Eqs. (4) and (5) are the bounds which depend on the network Jacobian.
> Re: *Line 246, sample complexity should not be* $\\mathcal{O}(\\frac{1}{N_xN_y})$
We thank the reviewer for pointing out this mistake. By the total variance
decomposition in Eq. (13), the variance wrt $q(\\mathbf{x})$ scales inversely to
$N_x$, the number of $\\mathbf{x}$-samples, and the variance wrt $p(y \\mid
\\mathbf{x})$ scales inversely to $N_y$. Therefore the total variance is
in the order of $\\mathcal{O}\\left(\\frac{1}{N_x}+\\frac{1}{N_y}\\right)$ as the
reviewer suggested. We will make sure to correct this sentence and related
places in this paragraph.
> Re: *Relationship with reference [37]*
Please see our global response. We shifted the closed-form expression of
the variances to the appendix mainly due to space limitation. Should more space
be permitted, we agree to improve our self-contained narrative by moving them
back into the main paper.
---
Rebuttal Comment 1.1:
Title: Keep score
Comment: Thanks for the clarification. While I do agree with the other reviewers that the improvements made on top of [37] could have been clarified better, I do think that they are overall clear and valuable. Hence, I keep my score and strongly support acceptance. | Summary: In this paper, the authors analyzed two different estimators, $I_1(\theta)$ and $I_2(\theta)$, for the diagonal elements
of the Fisher information matrix of a parametric neural net model. These diagonal elements are approximations for the entire matrix,
which is unfeasible to be calculated in real neural nets models. They identified the situations in which each of these estimators is preferable with respect to the other.
Strengths: The Fisher information matrix is a major parameter driving the quality of the fit and highlighting important aspects of the interdependence between the parameters. It has been used in several papers of continual learning, the research area that I follow.
It focuses on the diagonal elements of the Fisher information matrix, offering approximations that are feasible to calculate in real neural net models, addressing a significant practical challenge given the unfeasibility of computing the entire matrix. he authors identified specific situations where each of the two estimators, $I_1(\theta)$ and $I_2(\theta)$, is preferable. This practical guidance can help researchers and practitioners make informed decisions about which estimator to use in different scenarios.
Weaknesses: This is a highly theoretical paper and I have difficulty following all the findings presented by the authors. One of the main problems with this paper is that it has a large intersection with Soen and Sun (2021) and this becomes clear while we read the paper. In the Related work, Soen and Sun (2021) is mentioned in passing and it is a major reference work for the current manuscript. It is worth to have a better explanation of the relative merits of these two papers. Both papers analyzed two different estimators, $I_1(\theta)$ and $I_2(\theta)$, for the Fisher information matrix of a parametric neural net model. While the older paper focuses on the matrix as a whole, the present manuscript focuses on the diagonal elements. However, the work of Soen and Sun (2021) is repeatedly referred to during the technical development of the paper. The contribution of this paper seems marginal when compared to the previous paper but still relevant.
The results are derived assuming $\theta$ is equal to its true value. In practice, the maximum likelihood estimator (MLE) of
$\theta is used, which introduces additional variability. The paper does not adequately address the implications of this assumption or the consequences when using the MLE, which limits the practical applicability of the results.
The paper does not discuss the impact of model misspecification on the results. In real-world scenarios, the data might not follow the proposed model exactly, and the inverse-Fisher information matrix might not represent the correct asymptotic variance of MLE estimators. The authors should comment on how their findings would be affected in such cases.
Technical Quality: 2
Clarity: 2
Questions for Authors: The results are all given using $\theta$ equal to its true value. In practice, we use the MLE estimator of $\theta$. The results
are not valid anymore as we have this additional source of variability. The authors may comment on the consequences of this?
Confidence: 1
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: This is a theoretical paper with no immediate connection to societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing our strength and potential usefulness
in application areas including continual learning.
> Re: *Significance vs [37]*
We kindly refer the reviewer to our global response.
> Re: *Assumption of* $\\theta$ *being the true value*
We clarify that our result holds for any $\\theta$ in the parameter space
of neural networks, aka the neuromanifold. Our formal statements and
their proofs do *not* depend on setting $\\theta$ to the true value or its MLE,
nor to be "nearby" to the MLE in any sense.
Similar to how the FIM $\\mathcal{I}(\\theta)$ is a PSD tensor when $\\theta$
varies on the neuromanifold, our central subjects
$\\hat{\\mathcal{I}}_1(\\theta)$, $\\hat{\\mathcal{I}}_2(\\theta)$,
$\\mathcal{V}_1(\\theta)$, and $\\mathcal{V}_2(\\theta)$ are all tensors defined
for any $\\theta$.
> Re: *What are the effects of model miss-specification*
Model miss-specification can be examined from the perspective of empirical data
deviating from the joint model distribution $p(\\mathbf{x}, \\mathbf{y};
\\mathbf{\\theta}) = q(\\mathbf{x})p(\\mathbf{y} \\mid \\mathbf{x}; \\mathbf{\\theta})$
in Line 24. Our analysis does provide convenient tools to study this
phenomenon.
When the observed samples $\\mathbf{x}$ deviates from the "true"
$q(\\mathbf{x})$, the error in estimating the FIM is described by the first term
in Eq. (13): a high variance of the conditional FIM $\\mathcal{I}(\\theta_i \\mid
\\mathbf{x})$ leads to a large error in estimating the FIM.
On the other hand, when the observed $(\\mathbf{x},\\hat{\\mathbf{y}})$'s sampling
deviates from the predictive model $p(\\mathbf{y} \\mid \\mathbf{x};
\\mathbf{\\theta})$, data also plays a larger role.
In this case, the empirical Fisher becomes a biased estimation of the FIM,
which is described by Lemma 6.1 and its surrounding text. Our FIM estimators do
not depends on the observed $\\hat{\\mathbf{y}}$ in the dataset and is not
affected by such a bias. Note the difference in sampling of the labels
$\\mathbf{y}$ / $\\hat{\\mathbf{y}}$ in the data FIM versus the FIM.
When the model is well specified, the data FIM and FIM are equivalent (see Line
328).
---
Rebuttal Comment 1.1:
Comment: Thanks for responding to my questions. A better clarification of the contribution of this paper wrt [37] is appreciated.
I keep my positive evaluation and my score. | Summary: Summary
-------
The paper studies two estimators for estimating the diagonal of a Fisher information
matrix (FIM) of a parametric machine learning model. Both estimators are based on equivalent
expressions for the FIM. The first is the standard definition, expressed in terms of the
derivatives of the log likelihood, and the latter, expressed in terms of the second
derivative when the model is twice differentiable with respect to the parameters. The
estimators replace the expectation with an average over the data. The authors provide
bounds on the variance of both (unbiased) estimators in terms of the minimum/maximum
eigenvalues of the FIM, and then provide bounds on these eigenvalues under special cases.
They also provide a modest empirical evaluation.
Overall, while I think there is a fair amount of technical meat in the paper, I am
unconvinced about the problem. I will wait to hear from more expert reviewers in this
field before I make my decision.
Detailed comments:
------------------
While I am theoretically inclined, I am not an expert in this sub-field. Therefore, I will
keep my comments at a high level.
Significance of problem: I struggled to understand when estimating the FIM will be useful.
- One point raised by authors is that it is used in optimizing NNs. If this was true, it
would have been nice to see the estimators used in NN optimization empirically, or a
theoretical analysis showing how the bounds derived here percolate to bounds on
optimization.
- Another explanation given is that it could be used to analyze the structure fo NNs.
But this is stated vaguely, and it wasn't clear to me that such an analysis would help
design better neural networks.
Significance of techniques: The proof in the appendix is substantial but the authors have
not stated which techniques used here are novel. Currently technical results are presented
one after the other. It might help the reader/reviewer
appreciate the contributions of the paper if the authors can highlight the novel tools
they used to derive these bounds.
Presentation and writing: The technical content was clear for the most part. However,
the writing was very equation-driven. I understand that this is in part due to the nature
of the paper, but the authors should simplify the exposition further. One option is to
cut down on some of the technical content.
Strengths: See above
Weaknesses: See above
Technical Quality: 3
Clarity: 2
Questions for Authors: See above
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for praising the technical developments of the paper.
> Re: *Usefulness of FIM estimation, e.g. in NN optimization*
As stated in the submitted paper, the FIM can be used (Line 117) "to study the
singular structure of the neuromanifold [2,40], the curvature of the loss [8],
to quantify model sensitivity [28], and to evaluate the quality of the local
optimum [15,16]." Other applications include continual learning as pointed out
by **Reviewer wDHQ**.
We believe studying the estimation quality of the FIM in the general setting --
without examining the specific application and implications of these use cases
-- is meaningful on its own.
First, the bias and variance always exists for tackling the computational
cost of estimating the (diagonal) FIM, but has not been thoroughly analyzed.
Second, such a study can be potentially applied to different scenarios.
For example, in NN optimization, the bounds of the diagonal FIM leads to bounds
of a learning step, which is determined by the product of the inverse of the
diagonal FIM and the gradient vector. In our toy example in Figure 1, we
demonstrated how optimization can be interfered with the variance of the FIM.
To apply our results into specific optimizers would be another independent
work.
> Re: *FIM for analyzing NN structure*
In the paper, we mentioned that one application of the FIM is to study the
"intrinsic structure of the neuromanifold". Here, "structure" does not refers
to the NN structure but the Riemannian geometry structure in the space of
neural networks (aka the neuromanifold), where the FIM plays the role of a local
metric tensor. Our main results are universal in the sense that they do not
rely on specific NN structure. On the other hand, the FIM is indeed affected by
the NN structure, which in the simplest case includes the depth and width of
neural networks [15,16]. This is beyond the scope of the current paper.
> Re: *Organization of technical contents*
A main reason for the technical density is space limitation. We tried hard
to layout the main results while providing enough examples with intuitions.
To address the reviewer's concern, we propose to not further increase the
density and use extra space, if provided, to remark on the connections between
the equations -- to improve the paper's self-contained narrative (also
suggested by **Reviewer qMTe**) -- and remark on takeaways in the conclusion
(suggested by **Reviewer QMbw**).
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks for responding to my questions. Based on the reply, I have increased my score. | Rebuttal 1:
Rebuttal: We extend our appreciation to all reviewers for their thoughtful reviews.
We are pleased to acknowledge the positive remarks on "[...] fair amount of technical
meat [...]" (**Reviewer 7Xmq**) and that the paper "[...] is very
extensive and mathematically sound" (**Reviewer QMbw**).
Despite the technical nature of the paper, we are glad to hear that the "[...]
alternation between formal statements and explanations is great and makes the
paper very pleasant to read" (**Reviewer qMTe**). We are also happy to learn about
other applications of the FIM, where it has been "[...] used in several papers of
continual learning [...]" (**Reviewer wDHQ**), which extends the potential
applicability of our results.
Please find the per-point rebuttals in each reviewer's response section.
Immediately below, we provide a shared response regarding our paper's relation to
reference [37]:
> [37] Soen and Sun. On the variance of the Fisher information for deep learning.
In Advances in Neural Information Processing Systems, 2021.
## Relationship to [37]
In response to **Reviewer wDHQ**, **Reviewer qMTe**, and **Reviewer QMbw**, we
hereby clarify our relationship with [37]. Indeed, both the current submission
and [37] study the same subject, and our developments rely on the closed-form
expression of the FIM variances, which is proved in [37]. Despite that, the
significance of the current work is explained as below.
- In [37], only the norm of the FIM and related variance tensors are bounded. By
focusing on the diagonal elements, we derive novel bounds on the individual
elements of these interested tensors, where the spectrum of sufficient
statistics quantities naturally appear. In terms of proof techniques, [37]
mostly utilize Holder's inequality whilst we utilize variational definitions /
computations of eigenvalues.
- The main subjects in [37] are 4D tensors and their norms, which can not be
easily computed. Their case studies are constrained to 1D exponential families.
Our results leads to numerical algorithms that can be implemented through
auto-differentiation. As a result, our bounds extend to typical learning
settings (Section 5).
- We discussed not only variances but also bias in the estimation of the FIM,
and clarified the relationship with empirical FIM (data FIM) that is widely
used.
- [37] only considered the conditional FIM and associated objects in its study.
Hence, their results only accounts for the sampling of $\\mathbf{y}$ with a
fixed $\\mathbf{x}$.
Our variance decomposition in Theorem 4.7 clarified how the variance is
affected by both the sampling of $\\mathbf{x}$ and the sampling of
$\\mathbf{y}$.
We are happy to include this discussion in Section 1 (around Line 42) and other relevant
places by utilizing additional space, if provided, to clarify the contributions
of our paper. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
On the Identifiability of Poisson Branching Structural Causal Model Using Probability Generating Function | Accept (spotlight) | Summary: In order to achieve causal structure learning on PB-SCM, this paper explores the identifiability of PB-SCM using the Probability Generating Function (PGF). Furthermore, this paper enables the identification of the local structures by testing their corresponding component appearances in the PGF. Building on this, this paper proposes a practical algorithm for learning causal skeletons and identifying causal directions of PB-SCM using PGF.
Strengths: The use of Probability Generating Function (PGF) to address the identifiability of the Poisson Branching Structural Causal Model (PB-SCM) is a novel approach. The proposed practical algorithm for learning causal skeletons and identifying causal directions of PB-SCM using PGF is well-detailed. The authors provide a clear and systematic approach, making it easier for practitioners to implement the algorithm. This paper makes several theoretical contributions and has a strong theoretical basis.
Weaknesses: 1. While these experiments demonstrate the effectiveness of the method, additional experiments on a wider variety of real-world datasets from different domains would strengthen the argument for the method's generalizability.
2. The symbols T and the corresponding formulas that appear in Figure 3 are explained later in the text, which may cause significant confusion for readers.
3. The term ch(i) in Formula 3 lacks a definition. It is possibly intended to be Des(i) as defined earlier in the text.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. What are the accuracies of learning the Probability Generating Function and the subsequent causal structure learning? What are the bottlenecks of the proposed method?
2. The experiments on real-world data provide insufficient information. How do other methods perform on the real datasets?
3. How does the proposed method perform on networks that are not purely Poisson Bayesian networks or on mixed networks of general Bayesian and Poisson Bayesian networks? Additionally, does this method require prior knowledge that the data originates from a non-Poisson Bayesian network?
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: This paper briefly touches on the limitations of the proposed method. This paper has no negative social impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1:** While these experiments ... argument for the method's generalizability.
**A1:** We appreciate the suggestion that makes this paper more complete and closer to realistic applications. We further examined our method using a shopping mall paid search campaign dataset [1]. This dataset contains data from a five-month paid search campaign for a U.S. shopping mall, spanning from July to November 2021. In this dataset, we focus on the variables *Impressions*, *Clicks*, and *Conversions*, which are fundamental count data in e-commerce scenarios. These variables follow causal relationships: *Impressions* $\rightarrow$ *Clicks*, *Clicks* $\rightarrow$ *Conversions*, and *Impressions* $\rightarrow$ *Conversions*, forming a triangular structure.
Our method successfully identifies the adjacent vertices between *Impressions* and *Clicks*, as well as the other two causal directions. This outcome is consistent with our theoretical result, which means that our method can be applied to more real-world scenarios. Following your suggestion, we have added these experimental results to our paper.
[1] https://www.kaggle.com/datasets/marceaxl82/shopping-mall-paid-search-campaign-dataset
> **W2:** The symbols T and the corresponding ... which may cause significant confusion for readers.
**A2:** Thank you for your suggestion, which will help enhance the clarity and readability of our paper. We have added an explanation of $T_{X_j}$ in the caption of Figure 3: ``Illustration of the graphical implications in the closed-form solution for the PGF of PB-SCM. Here, $\alpha_{i,j} T_{X_j}(1)$ indicates that $X_j$ is reached from $X_i$ in this branching structure, while $(1-\alpha_{i,j})T_{X_j}(0)$ indicates the opposite."
> **W3:** The term ch(i) in Formula 3 lacks a definition. It is possibly intended to be Des(i) as defined earlier in the text.
**A3:** Thank you for your careful review and for pointing this out. In the context of a given DAG $G(\mathbf{V}, \mathbf{E})$, $Ch(i)$ is defined as $Ch(i) = \\{j \mid i \to j \in \mathbf{E}\\}$, representing the set of children of vertex $i$. We have added this definition alongside the definitions of $Pa(i)$ and $Des(i)$ to ensure clarity.
> **Q1:** What are the accuracies of learning the Probability Generating Function and the subsequent causal structure learning? What are the bottlenecks of the proposed method?
**A4:** Thank you very much for this suggestion to improve the completeness of our paper.
The accuracies of our method are supported by the fact that the empirical probability generating function (ePGF) $\bar G_{\mathbf{X},n}(\mathbf{z}) = \frac{1}{n}\sum_{i=1}^{n} z_{1}^{X_{1}^{(i)}}\cdots z_{d}^{X_{d}^{(i)}}$ is an unbiased estimator of the PGF $G_{\mathbf{X}}(\mathbf{z})$ and almost surely converges to $G_{\mathbf{X}}(\mathbf{z})$ as the sample size $n\to \infty$, according to the strong law of large numbers [2][3].
However, as mentioned in the conclusion section, the performance of our method can be limited by the scale of the dimension. While the ePGF exhibits effectiveness in estimating PGF, it probably encounters challenges when dealing with high-dimensional data, where the main reason is that the product of multiple $ z_{i}^{X_{i}}$ can result in an extremely small number. Thus, a promising future direction is to develop a sample-efficient method for estimating PGF.
[2] Esquível, Manuel. "Some applications of probability generating function based methods to statistical estimation." Discussiones Mathematicae Probability and Statistics 29.2 (2009): 131-153.
[3] Nakamura, Miguel, and Víctor Pérez-Abreu. "Empirical probability generating function: An overview." Insurance: Mathematics and Economics 12.3 (1993): 287-295.
> **Q2:** The experiments on real-world data provide insufficient information. How do other methods perform on the real datasets?
**A5:** Thanks for your valuable suggestion to improve the completeness of our experiments. We have further examined the performance of other methods on the real datasets. The cumulant-based method successfully identified the correct direction of edges, except for $ F\rightarrow R$. This can be attributed to the fact that only a few fouls directly cause a red card, thus leading to estimation issues with the cumulant method. As for PC, GES, and OCD, they struggle to recover the structure, with their F1 scores being 0.53, 0.18, and 0.44 respectively. Following your suggestion, we will add these experimental results to our paper.
> **Q3:** How does the proposed method perform on networks that are not purely Poisson Bayesian networks or on mixed networks of general Bayesian and Poisson Bayesian networks? Additionally, does this method require prior knowledge that the data originates from a non-Poisson Bayesian network?
**A6:** Thank you for this insightful question, which allows us to clarify the scope and applicability of our method. The main contribution of our paper is to propose a more comprehensive identifiability result for PB-SCM using PGF. Despite that our PGF-based method may generate an incorrect graph structure when applied to data with inappropriate distributions - since different distributions have different PGFs - We want to emphasize that the assumptions of PB-SCM are widespread in many real-world scenarios.
By leveraging the thinning operation, PB-SCM explicitly models the inherent branching relationships between discrete variables, particularly for count data. Intuitively, the influence from a parent vertex to a child vertex is directly represented by the change in event counts, as the thinning operation outputs an integer representing the contribution of the count from the parent event. This directly reflects the dynamics of count data, effectively modeling the natural generating process in many real-world systems. Therefore, although our method relies on model assumptions, these assumptions are both common and typical.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response, it addresses my concerns. I will keep my rating.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Thank you for your time and thoughtful review. We're pleased with your positive feedback and look forward to your continued support in the discussion phases. Thanks again! | Summary: The paper focuses on causal discovery from observational count data, proposing a method to address the identifiability gap in Poisson Branching Structural Causal Models (PB-SCM) using Probability Generating Function (PGF). The authors develop a closed-form solution for the PGF of PB-SCM, enabling the identification of local causal structures by testing component appearances in the PGF. The effectiveness of the method is demonstrated through experiments on both synthetic and real datasets.
Strengths: The paper has good presentation and motivation.
The authors derive a compact and exact closed-form solution for the PGF of PB-SCM, establishing a connection between the PGF and the causal structure.
The paper proposes a practical algorithm to learn causal skeletons and identify causal directions in PB-SCM using the derived PGF.
The proposed method is validated through experiments on synthetic and real-world datasets, demonstrating its effectiveness and superiority over existing methods in identifying causal structures in count data.
Weaknesses: The proposed method cannot handle high-dimensional covariates.
The orientation of edges is not detailed.
Figure 4, the caption should be correct.
``F: Foul, Y1: Yellow card, Y2: Second yellow card, R: Red card, S: Substitution, H: Hand ball'' -> ``$F$: Foul, $Y_1$: Yellow card, $Y_2$: Second yellow card, $R$: Red card, $S$: Substitution, $H$: Hand ball''
The soundness of the proposed method requires the causal sufficiency assumption, which limits the applications of the method in real-world scenarios.
Minor:
Building on this, We -> Building on this, we
Technical Quality: 4
Clarity: 3
Questions for Authors: n/a
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1:** The proposed method cannot handle high-dimensional covariates.
**A1:** Thank you for pointing out this issue; this will be an important problem for us to address in future work. As mentioned in the conclusion section, the performance of our method can be limited by the scale of the dimension.
While the empirical PGF is effective in estimating the PGF, it encounters challenges when dealing with high-dimensional data. The main reason is that the product of multiple $\displaystyle z_{i}^{X_{i}}$ can result in extremely small numbers, which affects numerical stability and accuracy. Therefore, a promising future direction is to develop a sample-efficient method for estimating the PGF to better handle high-dimensional covariates.
> **W2:** The orientation of edges is not detailed.
**A2:** Thank you for pointing out the need for more detail on edge orientation. According to the proposed closed form of PGF, the terms in the PGF correspond directly to the graphical structure, thus we can perform orientation by checking whether specific terms exist in PGF.
For instance, Eq. (5) in the paper presents the closed form of the PGF for the triangular structure in Fig. 2. The presence of the term $z_1z_2z^2_3$ in Eq. (5) indicates that there are two directed paths leading to $X_3$ from either $X_1$ or $X_2$. Here, the order of $z_3$ is 2, meaning that $X_3$ is the vertex with two in-degrees in the triangular structure. Consequently, we can orient the edges as $X_1 \to X_3$ and $X_2 \to X_3$. Building on this, we orient each edge by considering a local triangular structure with this edge and examining whether this edge points to a vertex with two in-degrees, as described in Algorithm 1 (Lines 5-7) in the paper.
Otherwise, if an edge $X_i - X_j$ cannot be part of a triangular structure, we attempt to orient it in a pattern such as $X_i - X_j \leftarrow X_k$ or $X_i - X_j - X_k$ with another vertex $X_k$. Specifically, by examining whether the term $z_iz_jz_k$ does not appear in the local PGF involving $X_i, X_j, X_k$, we can conclude that there is no directed path between $X_i$ and $X_j$ through $X_k$. In such cases, we orient the edges as $X_i \to X_j$, and $X_j \leftarrow X_k$ as described in Algorithm 1 (Lines 8-10).
Following your suggestion, we will provide a more detailed description of the orientation process in the method section.
> **W3:** Figure 4, the caption should be correct.
**A3:** Thank you for pointing out the typos and providing the correction. Following your suggestion, we have corrected the caption of Figure 4: Football Dateset Result ($F$: Foul, $Y_1$: Yellow card, $Y_2$: Second yellow card, $R$: Red card, $S$: Substitution, $H$: Handball)
> **W4:** The soundness of the proposed method requires the causal sufficiency assumption, which limits the applications of the method in real-world scenarios.
**A4:** Thank you for your valuable suggestion. Relaxing the causal sufficiency assumption will indeed be a focus of our future work. Nevertheless, this assumption is a common assumption in many causal inference methods. Several prominent works in the field also utilize this assumption. For instance, Spirtes et al. in their seminal work [1] on causal discovery algorithms assume causal sufficiency to ensure accurate inference of causal structures. Similarly, Pearl discusses the importance of causal sufficiency in structural causal models [2]. These references highlight that while the causal sufficiency assumption can limit applicability in some real-world scenarios, it is a foundational assumption that underpins many existing causal inference methods. In future work, we will further explore the identification of latent confounders to address the limitations posed by this assumption.
[1] Spirtes, Peter, Clark Glymour, and Richard Scheines. Causation, prediction, and search. MIT press, 2001.
[2] Pearl, Judea. Causality. Cambridge university press, 2009.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal. My concerns are fully addressed. I will keep my rating.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Thank you for taking the time to review our manuscript. We're glad to see your positive feedback and appreciate your continued support during the discussion phases. Thanks again! | Summary: This paper addresses the identifiability of Poisson Branching Structure Causal Model (PB-SCM) using the probability generating function, during which a compact, exact closed-form PGF solution is developed and the identifiability of PB-SCM is complete based on such relation. A practical algorithm for learning causal skeletons and causal directions using PGF is proposed, with effectiveness demonstrated on synthetic and real datasets.
Strengths: - This paper addresses the identification of the PB-SCM model using a probability-generating function. Compared with the previous cumulant-based, the identifiability of PB-SCM is further complete.
- The theory releases an interesting connection between the probability-generating function and the graph of PB-SCM which allows us to further explore the identifiability of the PB-SCM.
- Based on the theoretical results an efficient algorithm is proposed, and the effectiveness of the proposed method is also verified in the experiments.
- The writing is clear and well-organized and the intuition of the proposed method is effectively conveyed.
Weaknesses: - This paper states that the cumulant-based method is exactly the method that identifies the causal direction by detecting the highest order of $z_i$. Can you explain why? It would be interesting to see the connection between these two methods.
- The local collider structure used in section 4 seems to be referred to the unshielded collider structure which should be stated clearly to avoid ambiguity.
- Theorem 7 states that the identifiability of the causal direction is given the known causal skeleton. Can it be extended to address the general identifiability of the whole causal structure?
- Since the paper is quite dense, more illustrations can be provided in the toy example to illustrate the connection between PGF and the graph and how it leads to the identifiability.
- The citing format should be improved as it is mixed with the main text.
Typos:
- Line 204 similarly
Technical Quality: 4
Clarity: 4
Questions for Authors: - What is the connection between the cumulant-based method and the proposed PGB-based method?
- Can you further improve Theorem 7?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors had adequately addressed the limitations in their conclusion and the broader societal impacts in their appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1:** This paper states that the cumulant-based ... connection between these two methods. \& **Q1:** What is the connection between the cumulant-based method and the proposed PGF-based method?
**A1:** Thank you for your question allowing us to further clarify the connection between the cumulant-based method and our method, as well as the advantages of our method.
The cumulant-based method [1] introduces the concept of $k$-path cumulants summation, denoted as $\tilde\Lambda_{k}(X_{i} \leadsto X_{j})$. Here, $\tilde\Lambda_{k}(X_{i}\leadsto X_{j})\neq 0$ indicates the existence of a common ancestor $m$ for vertices $i$ and $j$, with $k$ directed paths from $m$ to $j$. Such path information is encoded within the PGF. If there are $k$ paths from vertex $m$ to $j$, terms involving $z_m z_j^k$ appear in the PGF. Thus, detecting paths to vertex $j$ is equivalent to identifying the order of $z_j$ in the PGF.
However, only the highest non-zero order of $\tilde\Lambda_{k}(X_{i} \leadsto X_{j})$, involving $k+1$ order cumulants, helps identify the causal direction, as it does not show asymmetry at lower orders. For example, consider vertices $X_i$, $X_j$, and $X_m$, where $X_i \to X_j$, and there are $k$ paths from $X_m$ to $X_i$, and $k+p$ paths from $X_m$ to $X_j$ where $p \geq 1$. Here, $\tilde\Lambda_{k}(X_{i} \leadsto X_{j}) \neq 0$ and $\tilde\Lambda_{k}(X_{j}\leadsto X_{i}) \neq 0$, and $\tilde\Lambda_{k+1}(X_{i} \leadsto X_{j}) \neq 0$ while $\tilde\Lambda_{k+1}(X_{j} \leadsto X_{i}) = 0$. The latter shows an asymmetry that the former does not. This means that the cumulant-based method must detect the highest non-zero order of $\tilde\Lambda_{k}(X_{j} \leadsto X_{i})$, i.e., the highest order of $z_i$ in PGF terms, to reveal the asymmetry. Moreover, certain local (unshielded) collider structures, such as $X_1 \to X_2$ and $X_3 \to X_2$ in Fig. 1(a), are non-identifiable using this method because both directions result in $\tilde\Lambda_{k=1} \neq 0$ and $\tilde\Lambda_{k=2} = 0$, leading to non-identifiability.
In contrast, our method fully leverages lower-order information due to the local property of the PGF. By removing redundant paths by setting the corresponding $z$ to zero in the PGF, we focus on a small local structure, avoiding the need for high-order information.
[1] Qiao, Jie, et al. "Causal Discovery from Poisson Branching Structural Causal Model Using High-Order Cumulant with Path Analysis." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 18. 2024.
> **W2:** The local collider structure ... stated clearly to avoid ambiguity.
**A2:** Thank you for your insightful question, which helps us clarify the terminology used in our paper. Exactly, the local collider structure in this paper can be referred to as the unshielded collider structure. Following your suggestion, we have added the definition of a local structure: Given a DAG $G(\mathbf{V},\mathbf{E})$, a local structure $G'(\mathbf{L},\mathbf{E_L})$ is a subgraph of $G(\mathbf{V},\mathbf{E})$ with vertex set $\mathbf{L} \subset \mathbf{V}$ and edge set $\mathbf{E_L} = \\{i \to j \mid i,j \in \mathbf{L}, i \to j \in \mathbf{E}\\}$.
Given vertices $i, j, k$, if $i \to j \leftarrow k$ and $i$ and $k$ are not adjacent, they form an unshielded collider structure, which we refer to as a local collider structure. If there is a direct edge between $i$ and $k$, this edge will be included in the local structure formed by $i, j, k$, known as a local triangular structure. We will clearly define these terms to avoid ambiguity.
> **W3:** Theorem 7 states ... identifiability of the whole causal structure? \& **Q2:** Can you further improve Theorem 7?
**A3:** Thank you for your valuable suggestion.
Indeed, it is theoretically possible to identify the whole causal structure without relying on the causal skeleton because the PGF uniquely encodes distribution, corresponding to a unique causal structure. However, the complexity of such an approach can be exceedingly high.
Let us clarify what this identifiability issue is and why learning the causal skeleton is essential for addressing the identifiability of the whole causal structure.
As a foundation of identifiability, the closed-form solution of the PGF is directly associated with the causal structure,
which allows us to recover the causal structure by exploring whether different structures correspond to the form of PGF.
However, if we try to directly analyze the whole causal structure, the number of possible forms of PGF we have to examine can be large since the number of structures significantly increases with the number of vertices. The large search space makes structural identifiability difficult.
Therefore, to address this issue, we introduce the local PGF, which enables us to examine its form in a small local structure, i.e., local triangular structure and local (unshielded) collider structure. This approach reduces the complexity of the search space by focusing on identifiable local patterns. Therefore, we require the causal skeleton and prioritize the identifiability of adjacent vertices in the Identifiability section.
> **W4:** Since the paper is dense, more illustrations ... identifiability.
**A4:** Thank you for your suggestion, we will add more illustrations to the toy example to enhance understanding. Specifically, we have enhanced the description of $T_{X_i}$ in the caption of Fig. 3: ``Here, $\alpha_{i,j}T_{X_j}(1)$ indicates that $X_j$ is reached from $X_i$ in this branching structure, while $(1-\alpha_{i,j})T_{X_j}(0)$ indicates the opposite.'' This aims to provide an intuitive understanding of the connection between the PGF and the graph structure.
> **W5:** The citing format ... main text.
**A5:** Thank you for your suggestion, which will help enhance the clarity and readability of our paper. We will change the citation format to use square brackets to clearly distinguish them from the main text.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed responses. The responses have well addressed my concerns and questions, which further confirms my rating.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: We sincerely appreciate the time and effort you have dedicated to reviewing our manuscript. We are grateful for your positive evaluation of our work and are hopeful for your continued support in the subsequent discussion phases. Thank you! | Summary: This paper investigates the causal structure learning on the Poisson Branching Structural Causal Model (PB-SCM)and its identifiability using probability-generating function (PGF). The identifiability is established by developing the closed-form solution of the PGF which can be utilized to identify the causal structure of PB-SCM. The proposed methods are validated through synthetic experiments and real-world experiments.
Strengths: - This work proposes a PGF-based method for identifying the causal structure which is interesting.
- The authors propose the closed-form solution of PGF and establish the connection to the graph. By this, this work shows that there still remains a gap in the identifiability of the PB-SCM and addresses this gap using the local property of the PGF. This is a novel and significant contribution.
- The proposed method is overall sound with a theoretical guarantee.
- This paper is well-present and well-motivated.
Weaknesses: - The authors propose to use the local PGF for identifying the causal structure. It seems that the global PGF can also be used for identifying the causal structure and it would be more beneficial if more discussions could be involved.
- What is the benefit of using the local property of PGF? Is it possible to set the z_i not approach to zero?
- The authors propose to use the Gaussian distribution for the rank test. It would be better to illustrate why the trace of the matrix converges to a normal distribution.
- The experiments only contain some random experiments. It would be more beneficial if there are some case studies.
- The experiments should explain the arrow in the table after the metric.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1**: The authors propose to use the local PGF for identifying the causal structure. It seems that the global PGF can also be used for identifying the causal structure and it would be more beneficial if more discussions could be involved.
**A1**:Thank you very much for this suggestion to improve the completeness of our paper. It is indeed theoretically feasible to identify causal structures through the global PGF. However, using the PGF directly to determine causal directions involves computing higher-order derivatives, which can complicate the implementation and thus affect its feasibility.
To illustrate this point, we provide a toy example with a causal structure of 5 vertices, as shown in Fig. R1 of our attached PDF. In this example, we focus on identifying the edge $X_4 \to X_5$. For the correct structure where $X_4 \to X_5$, the terms $C_1 \times z_1z_2z_3z_4^2z_5^4$ and $C_2 \times z_1z_2z_3z_4^2z_5^2$ exist in the PGF. Conversely, for the reverse direction $X_4 \leftarrow X_5$, the terms $C'_1 \times z_1z_2z_3z_4^4z_5^2$ and $C'_2 \times z_1z_2z_3z_4^2z_5^2$ appear in the PGF.
Notably, as shown in Fig. R1 (c) and (f), the term $z_1z_2z_3z_4^2z_5^2$ exists in the PGF for both structures. This is because there are always at least two directed paths from $X_1$ to each of $X_4$ and $X_5$, regardless of the direction between $X_4$ and $X_5$. This implies that taking the second derivative with respect to $z_4$ does not reveal any asymmetry between the structures.
Therefore, if we want to identify the direction using the global PGF, we have to apply a test involving the third derivative that distinguishes the correct structure by showing the absence of terms with $z_4^3$ in the PGF for the correct direction. Following your suggestion, we have added this toy example and discussion to the paper to enhance the understanding of the connection between the PGF and causal structures.
> **Q2**: What is the benefit of using the local property of PGF? Is it possible to set the $\displaystyle z_{i}$ not approach to zero?
**A2**: Thank you for this valuable question to emphasize our contributions. As demonstrated in the answer to Q1, it is possible to use the PGF directly without setting $z_{i}$ to approach zero when identifying directions. However, when the graph structure becomes more complex and the number of paths between vertices increases, higher-order derivatives are required, making implementation challenging.
Therefore, we propose using the local PGF, which simplifies the process by isolating specific terms in the PGF. This is done by setting other $z_i$ to approach zero, effectively reducing the complexity of the derivative calculations. In the example above, by letting $z_2$ and $z_3$ approach zero, we can remove terms in the PGF that involve $z_2$ and $z_3$, such as $C_1 \times z_1z_2z_3z_4^2z_5^2$ and $C_2 \times z_1z_2z_3z_4^2z_5^4$. This reduction allows us to focus on examining the local triangular structure formed by $X_1$, $X_4$, and $X_5$, which only requires estimating the second derivative.
> **Q3**: The authors propose to use the Gaussian distribution for the rank test. It would be better to illustrate why the trace of the matrix converges to a normal distribution.
**A3**: Thank you for this question. As mentioned in Section 4.5, we use the empirical PGF (ePGF) $\bar G_{\mathbf{X},n}(\mathbf{z}) = \frac{1}{n}\sum_{i=1}^{n} z_{1}^{X_{1}^{(i)}} \cdots z_{d}^{X_{d}^{(i)}}$ to estimate the PGF $ G_{\mathbf{X}}(\mathbf{z})$, which is an unbiased estimator of the PGF [1][2]. According to the central limit theorem, the quantity $ n^{1/2}\\{\bar G_{\mathbf{X},n}(\mathbf{z}) - G_{\mathbf{X}}(\mathbf{z}) \\}$ converges in distribution to a normal distribution $ N(0, \sigma^2)$ with zero mean and variance $\sigma^2$, which can be estimated using the bootstrap method. Therefore, in matrix $\mathbf{A}^{\\{i,j\\}}$, the estimations of the local PGF $ G_{\mathbf{X}}^{\\{i,j\\}}(\mathbf{z})$ and its partial derivative $\frac{\partial^2 G_{\mathbf{X}}^{\\{i,j\\}}(\mathbf{z})}{\partial z_{i} \partial z_{j}}$ also converge in distribution to normal distributions.
Since the sum of two normal distributions is also a normal distribution, the traces of matrices $\mathbf{A}^{\\{i,j\\}}$ converge to a normal distribution. The same reasoning applies to matrices $\mathbf{B}^{\\{i,j,k\\}}$ and $\mathbf{C}^{\\{i,j,k\\}}$.
[1] Esquível, Manuel. "Some applications of probability generating function based methods to statistical estimation." Discussiones Mathematicae Probability and Statistics 29.2 (2009): 131-153.
[2] Nakamura, Miguel, and Víctor Pérez-Abreu. "Empirical probability generating function: An overview." Insurance: Mathematics and Economics 12.3 (1993): 287-295.
> **Q4**: The experiments only contain some random experiments. It would be more beneficial if there are some case studies.
**A4**: Thank you for your valuable suggestion, which allows our experiments more comprehensive. We have added three case studies involving causal graphs with 3, 4, and 5 vertices respectively. The results are presented in Table R1 in our attached PDF.
> **Q5**: The experiments should explain the arrow in the table after the metric.
**A5**: Thank you for your suggestion. We have added a description of the arrows in the table: ↑ indicates that a higher value is better, while ↓ indicates that a lower value is better. | Rebuttal 1:
Rebuttal: Dear Reviewers w346, Gqam, nGrv, and CtKZ,
Thanks for the thoughtful and constructive reviews, which improve the completeness and readability of our paper. It is encouraging that reviewers think that the proposed PGF-based method for identifying PB-SCM is novel (w346, Gqam, CtKZ) and interesting (Gqam), our theoretical result is sound (w346, CtKZ), our paper is well-present and well-organized(w346, Gqam, nGrv), and our experimental results show the effectiveness (Gqam, nGrv) of our PGF-based method. We here provide a general response to summarize the modifications of the paper.
- To Reviewer w346, we have added an additional discussion on using the (global) PGF to identify the causal structure.
- To Reviewer w346, we have explained the benefits of using the local properties of the PGF.
- To Reviewer w346, we have clarified why the trace of the matrix converges to a normal distribution.
- To Reviewer w346, we have included three case studies in our attached PDF involving causal graphs with 3, 4, and 5 vertices, respectively.
- To Reviewer w346, we have added descriptions of the arrows in Table 1 and Table 2 of the paper.
- To Reviewer Gqam, we have illustrated the connection between the cumulant-based method and our PGF-based method.
- To Reviewer Gqam, we have clarified the terminology used for the local collider structure.
- To Reviewer Gqam, we have explained the necessity of the causal skeleton in addressing the identifiability of the entire causal structure.
- To Reviewer Gqam, we have improved the description of the toy example and the citation format.
- To Reviewer nGrv, we have discussed the limitations of our method and the reasonableness of the causal sufficiency assumption.
- To Reviewer nGrv, we have added a detailed description of how to orient the edges.
- To Reviewer nGrv, we have corrected the caption of Fig. 4.
- To Reviewer CtKZ, we have added a discussion on the accuracies and bottlenecks of our PGF-based method.
- To Reviewer CtKZ, we have included an additional real-world experiment, and provided the performance of other methods in the real-world experiment of the paper.
- To Reviewer CtKZ, we have added an explanation of $T_{X_i}$ in the caption of Fig. 3 and included the definition of $Ch(i)$.
- To Reviewer CtKZ, we have discussed potential issues our method may encounter when assumptions are violated.
- To all reviewers, we have polished our paper and corrected the pointed-out typos.
Thanks again for your time dedicated to carefully reviewing this paper. We hope that our response properly addresses your concerns.
With best regards,
Authors of submission 17086
Pdf: /pdf/d14ca5efa60ed326b7a5a14edf14d70b5a0ed702.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SGD vs GD: Rank Deficiency in Linear Networks | Accept (poster) | Summary: This paper presents an interesting dichotomy between SGD vs. GD, and how stochasticity in the gradients have the ability to implicitly regularize towards low-rank solutions. I believe this point is best highlighted by Theorem 5.1, where the show that the highlighted term in Equation (5.6) has a repulsive force, increasing the gap between larger and smaller eigenvalues. The "punchline" of the paper is technically given in Theorems 4.1 and 4.2, where they show that the solutions given by GD cannot have its rank diminished (rather stays at initialization), whereas we can see a decay in SGD (or SGF rather).
Strengths: - I believe this paper fits well within the study of implicit regularization properties of certain optimizers and how they drive generalization. To this end, it also includes a related works section.
- The results are interesting, albeit a little limiting.
- I think the result in Theorem 4.2 also highlights how large step sizes can be beneficial for generalization. This is quite a popular topic in the literature. It is easy to see that when $\eta$ increases, the determinant vanishes faster.
Weaknesses: - I think the presentation of the paper can be slightly improved. For example, the discussion on initialization on Page 4 was a bit confusing. The authors discuss different initialization tactics and the balancing effect, but they do not clarify which initialization they ultimately chose. I believe the intention was to show how initialization plays a role in Theorems 4.1 and 4.2, but this could be made clearer if that was indeed the intention. Furthermore, the authors could also change the notation to $\Delta(t)$ between lines 149-150. The authors are also missing a > in line 228.
- The experiments section is a bit lacking. Are there ways we can leverage the results from this paper to more practical scenarios / networks?
- My main concern (which is also the weakness) is the similarity between this paper and [1], which I don't believe the authors cited. From what I read, the main takeaway in [1] is that the noise in SGD has the ability to "push" or "attract" networks towards simpler networks, which they describe to be invariant sets. If you look at Sections 5 and 6 in [1], I believe what they are trying to say is that the noise has the ability to collapse the network to an invariant set that is low-rank, and one in which some singular modes are actually never learned. At a high-level, it seems to me that this might be a stronger result than to say that there exists a repulsive force between the eigenvalues, which is the current result. If I am missing something, please feel free to correct me. Though, at the very least, I think the authors should cite this paper and delineate the similarities / differences.
[1] Feng Chen, Daniel Kunin, Atsushi Yamamura, Surya Ganguli. "Stochastic Collapse: How Gradient Noise Attracts SGD Dynamics Towards Simpler Subnetworks". NeurIPS 2023.
Technical Quality: 3
Clarity: 2
Questions for Authors: See Weaknesses. I would be more than happy to raise my score if my concerns can be addressed.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: I have stated the limitations throughout my review.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable comments, and for pointing out the missing reference.
> The result in Theorem 4.2 also highlights how large step sizes can be beneficial for generalization.
Yes, the presence of $\eta$ does control the rate of decrease of the determinant. Hence, the larger the step size, the greater the regularization effect, which can be beneficial for generalization. Note however that due to our choice of continuous-time modeling of gradient methods, as the step size increases, the discretization error makes it difficult to accurately capture the trajectories of the discrete methods with their continuous-time counterparts, leading to potentially different behavior. In the revision, we will add a statement indicating that our model captures the role of step size up to a certain order.
> Initialization, balancedness and presentation.
Thank you for the suggestion and the comments on the presentation of the initialization and its balancing effects. We address these points here and will revise the manuscript to incorporate these comments, making the presentation clearer.
We mentioned initialization in the problem setup because it is an important ingredient in studying the implicit bias of homogeneous networks, including linear and ReLU networks. Previous works offer various initialization strategies for gradient flow, where certain structures emerge due to the balancedness properties of gradient flow. In contrast, with stochastic methods, the initialization does not matter. The emergence of low-rank structures is observed despite general initialization. This behavior is exemplified in Theorem 4.1 and 4.2, which hold for any initialization. In addition, we discuss these different initialization strategies to highlight the challenges for stochastic gradients, as the balanced quantities studied for gradient flow are insufficient.
> Limited experimentation, more practical networks.
We kindly refer the reviewer to the general comment regarding this limitation.
> Detailed comparison with [1]
Thank you for pointing out this work. We were aware of it but unfortunately missed including the reference and detailed comparison in the manuscript. We will revise it accordingly.
[1] investigates a phenomenon similar to ours, referred to as stochastic collapse, where the noise in the gradients causes the iterates to collapse onto certain invariant sets. [1] also uses continuous-time modelling of SGD as an SDE. The differences between the works are outlined as follows:
- In the first part of their work, [1] provides a condition under which an invariant set is attractive for the SDE they consider. Methodologically, their work characterizes the **local behavior** around the invariant sets. Specifically, once the iterates enter a small
$\epsilon$-neighborhood of these sets, they are attracted towards the sets. However, their does not address whether the iterates ever enter such an $\epsilon$-neighborhood for a general initialization.
In contrast, we offer a **global guarantee**, albeit within a simplified model: low-rank behavior is observed for any initialization. To establish these global properties, we developed a novel approach by tracking a specific quantity (the determinant). This approach enables us to characterize the global behavior and reveal the dichotomy between the presence and absence of noise.
- The paper also studies linear networks in a teacher-student setup. Under a set of assumptions A1-A4~[1, p.30], they derive the evolution of singular values. However, due to the balanced spectral initialization (A3-A4) and structured label-noise (A2), the analysis falls short of capturing the repulsive force in the singular values. On a technical front, deriving the SDE for the singular values is simpler when the singular vectors remain stationary, as their assumptions ensured. When the singular vectors do not move, all the matrices can be simultaneously diagonalized, which simplifies deriving the singular values' dynamics. In our problem, however, the singular vectors move and follow a stochastic process, which makes tracking them significantly more challenging.
[1] Feng Chen, Daniel Kunin, Atsushi Yamamura, Surya Ganguli. "Stochastic Collapse: How Gradient Noise Attracts SGD Dynamics Towards Simpler Subnetworks". NeurIPS 2023.
---
Rebuttal Comment 1.1:
Title: Reviewer Response
Comment: Thank you for the clarifications, particularly with the missing citation. I now see clearly the difference in contribution. I think the proof techniques used in the paper, especially in tracking the determinant, is useful for future work and will raise my score. | Summary: This paper studies and analyzes how the rank of the parameter matrices evolves for two-layer linear networks when using GD and label noise SGD. The paper basically shows that while GD preserves the rank at initialization throughout its trajectory, SGD reduces the rank, thereby removing spurious directions. A stochastic differential equation (SDE) is derived to characterize the evolution of the eigenvalues of the parameter matrix to formalize the rank-reducing behavior of SGD. Small-scale experiments validate the derived theory.
Strengths: The paper has a nice insight backed up by solid theory. I'm not an expert in this area but it seems that this is a novel insight.
Weaknesses: **1.** While the rank-reducing property of SGD is interesting, it does not imply that the prediction error (which is the generalization metric most people would care about) of SGD is lower than that of GD. What if SGD converges to a completely erroneous low-rank solution? I understand that the paper is not directly about the generalization abilities of SGD and GD, but I feel this point should be discussed somewhere.
**2.** Since the results here are only for linear networks (which is completely fine), is there any intuition for why the results and claims of this paper should extend to non-linear networks?
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see the Weaknesses. Also, I have the following questions:
**1.** In line 174, is $M = \Theta^\top \Theta$?
**2.** In line 207 and the equation thereafter, if $W = U \Sigma V^\top$, then $W^\top a$ should be $V \Sigma U^\top a$.
**3.** I don't understand the jump from eq. (5.2) to (5.3). There seems to be an issue with the $\sqrt{\eta \delta}$ term (with the definition of $d X$).
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Discussed somewhat throughout the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their insightful suggestions.
> Low-rank solutions and prediction error.
We agree with the reviewer's comment that the implicit regularization might only be useful for generalization if it is aligned with the ground truth model (i.e., in the classical example of sparse regression, $\ell_1$-regularization only improves generalization of ERM if the ground truth regressor is sparse). We also agree with the reviewer's comment regarding convergence to a sub-optimal low-rank solution which can happen due to possible over-regularization. We will discuss these issues in the conclusion of our manuscript.
> Intuition for non-linear network.
Empirically, the low-rank phenomenon is prominent for non-linear networks when trained with SGD, see Figure 2,3 in the attached pdf.
For piecewise linear non-linearity like ReLU, each neuron's weights exhibit a multiplicative structure. Let $h (a,w, .) = \sum_{j} a_j \sigma(\langle{w_j}{.}\rangle) $ be the parametric model where $\sigma$ is the ReLU non-linearity and $(w_i,a_i)$ are the input and output weights of a neuron. With $\theta_i = \begin{bmatrix}
w_i^{\top} & a_i
\end{bmatrix} $, the dynamics of the neuron can be rewritten as $\mathrm{d}{\theta_i} = \theta_i \mathrm{d}{J_i}$, for some curated matrix stochastic process $J_i$.
For linear networks, these matrix-valued processes are identical across neurons, which makes the analysis tractable. In contrast, non-linear networks do not share this property. However, during the large noise phase, the stochastic matrices $J_i$ are dominated by noise even for the ReLU dynamics, which can result in some similarities between these matrices. This may vaguely explain the observed low-rank behavior.
> Is $M = \theta^{\top}\theta$ in line 174 ? The clarification of $W^{\top}a$ in line 207 ?
Yes, thank you for pointing out this missing definition. We will correct it. In l.207, it should be $Wa$ instead of $W^{\top}a$.
> The jump from eq. (5.2) to (5.3).
It is possible due to changing the time, let $t' = (\eta \delta) t $. Now we have that $$ \mathrm{d}{t} = \frac{1}{\eta \delta} \mathrm{d}{t'}. $$ The Brownian motion has the following property $\mathrm{d}{\mathbf{B}_{t'}} = \sqrt{\eta \delta} ~ \mathrm{d} {\mathbf{B}}_t$ (the square root is due to different scaling of Brownian motion while changing time). Substituting these gives us the required jump.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response! | Summary: This paper studies the implicit bias of SGD for two-layer linear networks. The authors study this primarily using (stochastic) gradient flow (S)GF, the continuous time version of (S)GD. To model the stochasticity, they approximate the SGD noise with independent label noise. Under these assumptions, they prove that the determinant of the Gram matrix of the parameters is invariant under GF, whereas it exponentially decays under SGF. Hence, if such a network is initialized with a full-rank parameter matrix, the rank asymptotically drops by at least one under label noise SGF, which lends some evidence to a low-rank bias for SGD not present in GD. The stochastic eigenvalue dynamics are investigated further, and while the exact dynamics are not rigorously pinned down, the paper calls out the repulsive forces that could explain why SGD converges to lower rank solutions. These results are empirically verified using some synthetic data experiments for regression and classification.
Strengths: 1. Laying theoretical foundations for the benefit of SGD over GD for optimizing neural networks is an important direction for the theory of deep learning. This paper takes a step towards identifying the implicit bias of SGD.
2. Previous work has that balancedness of adjacent layers is a conserved quantity for deep linear neural networks optimized using GF. Invariants of the dynamics are an important analytic tool; balancedness is crucial for proving convergence for deep linear networks. This paper identifies a very clean invariant for GF which separates GF from (label noise) SGF: the determinant of the Gram matrix of a certain concatenation of the linear layers ($\Theta$ in the paper). In particular, although the determinant is conserved for GF, for SGF the determinant decays exponentially (exactly!). This is a very nice contribution to the theory of SGD.
3. By itself, the determinant decay only establishes that the rank drops by one in SGF. However, the authors also investigate the behavior of the singular values of the first layer $W$ in a simpler scalar regression setting by deriving the SDE for the eigenvalues of $M = W^\top W$. Although they are not able to characterize the solution to this SDE, they clearly interpret the forces at play and highlight a repulsive effect between the eigenvalues of $M$, which encourage the singular values to decay towards zero.
4. The authors also consider the large noise limit and cleverly reduce to the 2-dimensional setting, where having a zero singular value is equivalent to the two rows being multiples of each other. They prove that, in expectation, (a power of) the larger singular value does not decrease, whereas the (same power of the) smaller singular value does not increase.
5. Section 6 contains many interesting extensions to other settings, such as classification, directly modeling the SGD noise as Gaussian with matching covariance, and discrete time methods.
6. The paper is quite transparent with the limitations of the theory. It also clearly highlights where the future directions are (e.g., the full characterization of the solution for the SDE governing the evolution of eigenvalues).
Weaknesses: It was nice to see some experiments validating the theory, but I felt that the dimensionality of the problems should be increased to be more convincing. Right now the experiments are done with $(p, l, k) = (5, 10, 2)$. Even increasing these to being in the hundreds would be interesting, and correspondingly it might be appropriate to use other metrics to measure the rank of $W_1$, such as the effective rank $\tr(M)/||M||$.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. In line 95, should $d$ and $p$ be swapped? Since the input is $p$-dimensional, whereas the parameter $\theta$ is $d$-dimensional (for example, the two-layer setting $d = p \cdot k$).
2. In line 96, there should be an $\ell_2$ norm outside of $y_i - f_\theta(x_i)$.
3. In line 108, $k=1$ should be in math mode.
4. There is a minor typo on the equation after line 208: it should read $Wa$ rather than $W^\top a$.
5. In the statement of Theorem 5.1 on Line 228, there is a missing comma in the definition of the ordered eigenvalues.
6. Could the authors add a reference to the proofs in the appendix when the result is stated in the main text (e.g. Lemma 6.1)?
7. In Section 7, I think I am missing something obvious: why are the $d + \ell - k$ smallest singular values equal $\sigma_2, \sigma_3, \sigma_4$, since to my understanding $d + \ell - k = 10 + 10 - 2 = 18$?
8. What can one hope to say about deeper linear networks? If I understand correctly, if one naively generalizes the construction of $\Theta$ then the special block structure of the label noise portion will no longer hold (in particular, it seems that $Theta$ would need to contain all the prefix and suffix products of the linear layers)
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback, encouraging remarks, and meticulous proofreading of our work. They will help to make our work clearer.
> It was nice to see some experiments validating the theory, but I felt that the dimensionality of the problems should be increased to be more convincing. Right now the experiments are done with $(p,l,k) = (5,10,2)$. Even increasing these to being in the hundreds would be interesting, and correspondingly it might be appropriate to use other metrics to measure the rank such as the effective rank.
We refer the reviewer to the general comment for the experiments with higher dimensions.
> In Section 7, I think I am missing something obvious: why are the smallest singular values equal
$\sigma_2, \sigma_3, \sigma_4$. since to my understanding $d + l - k = 18$ ?
Apologies for the typo. As we are measuring the singular values of $W_1$, it should be the $p - k$ smallest singular values instead of the $p + l -k$ ones . Here $p=5$, $l = 10 (>p)$ and $k=2$, hence the $p - k = 3$ smallest singular values are $\sigma_2,\sigma_3,\sigma_{4}$.
> What can one hope to say about deeper linear networks? If I understand correctly, if one naively generalizes the construction of $\Theta$
then the special block structure of the label noise portion will no longer hold (in particular, it seems that $\Theta$
would need to contain all the prefix and suffix products of the linear layers)
Thank you for this question. Indeed, deeper layers cannot be directly written in this multiplicative format. However, if the network has $l$ layers $W_1,\ldots,W_l$, for any adjacent layers $W_{i}$ and $W_{i+1}$, we can form a block matrix, $\theta_{i} = \begin{bmatrix} W_{i}^{\top} & W_{i+1} \end{bmatrix}$ and the multiplicative structure can be seen in the evolution of $\theta_i$. The analysis can then be performed with these block structures. For the stochastic case, however, the noise covariance is not straightforward and needs to be handled carefully.
This approach might present a preliminary way forward for deeper networks. Another approach could be to carefully formulate a tensor structure that extends the spirit of our work beyond two layers.
---
Rebuttal Comment 1.1:
Comment: Thank you for the additional experiments and clarifications; I am definitely convinced by the plots you included in the supplemental figures.
Also, I appreciate the thoughtful response to my question about deeper linear layers, and would love to see any future work in this direction.
Cheers! | Summary: The paper proposes an analysis of the gradient flows of two layer linear neural networks with a squared loss borrowing tools from differential equations. The paper's result establishes that the stochastic gradient method generates solutions (limit of the flow as time goes to infinity) with determinant of the parameters x parameters transpose tending to zero. This implies that the parameters of a model optimized with stochastic gradient converge to a low-rank (simple) solution. While the full gradient method's solutions for the same problem do not exhibit the same simple structure. Authors conclude that this finding gives an explanation to the implicit bias of SGD.
Strengths: The paper is tackling a very interesting problem and in a relatively simple yet representative setup where learnings can be used for deriving broader intuitions on the reasons behind SGD's implicit bias. The paper's methodology looks solid and the conclusions in the proposed context are well presented.
Weaknesses: The paper studies a very simple model. Numerical experiments are also on the same and in very small data. It would benefit the paper to run experiments on larger / more complex models to discuss the limits and validity of this theory. The generalization to other settings section 6 is only changing the loss function. The section's title is perhaps overpromising.
Technical Quality: 3
Clarity: 4
Questions for Authors: Is it possible to verify the breadth of validity of the result (parameters of SGD based solution live in a low-rank manifold, not the FGD) on different neural network architectures and see whether similar conclusions hold? Or which other property (if low rank is too restrictive) would you test for?
Does this theory imply that for the simpler vector problem the solution of SGD is a sparse vector where FGD finds dense solutions?
If we add a ReLU non-linearity between W1 and W2, then in what form the simplicity of the solution by SGD will get modified? locally low-rank?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The papers does not discuss how broadly their findings are valid. Numerical experiments could be used for testing whether findings are still valid in slightly more complex models
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their interesting questions and encouraging comments.
> Numerical experiments are also on the same and in very small data. It would benefit the paper to run experiments on larger / more complex models to discuss the limits and validity of this theory.
Please refer to the general comment.
> Is it possible to verify the breadth of validity of the result (parameters of SGD based solution live in a low-rank manifold, not the FGD) on different neural network architectures and see whether similar conclusions hold? Or which other property (if low rank is too restrictive) would you test for?
The phenomenon that the parameters of solutions found by SGD live in a low-rank manifold is already empirically observed in [1] (note that the noise is more prominent when the step size is large, hence the effects of SGD are prominent with large step size). Given that measuring the rank is too computationally expensive, metrics that measure the similarity between the activations or weights of the neurons would serve as a good alternative. Such metrics are used by [1] and [2].
[1] M. Andruischenko, AV Varre, L. Pillaud-Vivien, N. Flammarion. SGD with large step
size learns sparse features, ICML 2023.
[2] Feng Chen, Daniel Kunin, Atsushi Yamamura, Surya Ganguli. "Stochastic Collapse: How Gradient Noise Attracts SGD Dynamics Towards Simpler Subnetworks". NeurIPS 2023.
> Does this theory imply that for the simpler vector problem the solution of SGD is a sparse vector where FGD finds dense solutions?
No, the theory does not directly imply it.
Consider the vector problem of diagonal networks where $u \odot v$ is the re-parameterization considered.
Let $U = \textrm{diag}(u)$, and $V = \textrm{diag}(v)$ be diagonal matrices with diagonal entries $u$ and $v$, respectively.
The block structure can be recovered with $\theta = [ U ~ ~ V ] $.
However, the dimensions of this block matrix are $p \times 2p $, hence our theory does not have any implications as we need $l > d+k$. However, if a complete characterization can be obtained, then indeed a low rank of $\theta$ would imply a sparse solution. The sparse vector problem with label noise has received much attention [3,4] and the complete characterization in the symmetric case (u=v) is given by [5].
[3]J. Z. HaoChen, C. Wei, J. Lee, and T. Ma. Shape matters: Understanding the implicit bias of the noise covariance. COLT, 2021.
[4] Z. Li, T. Wang, and S. Arora. What happens after sgd reaches zero loss? –a mathematical framework. ICLR 2022.
[5] Pillaud-Vivien, L., Reygner, J., and Flammarion, N. Label noise (stochastic) gradient descent implicitly solves the lasso for quadratic parametrisation. COLT, 2022.
> If we add a ReLU non-linearity between W1 and W2, then in what form the simplicity of the solution by SGD will get modified? locally low-rank?
Please refer to the general comment, particularly the paragraph on ReLU networks, as the experiments indicate the simplicity bias that SGD tends to induce. | Rebuttal 1:
Rebuttal: ## Broader empirical evaluation
We would like to thank the reviewers for their positive assessment of the paper and their appreciation of our work. Below, we address general comments that were raised multiple times. Individual responses can be found following each review. All referenced figures can be found in the attached PDF.
**Higher dimensions.**
First, at the request of reviewer Kp9L, we have conducted experiments in higher dimensions to robustly evaluate our theoretical findings. We choose a scalar linear regression problem on Gaussian data in dimension $p = 100$. We trained a linear network with an inner layer width of $l = 100$ using gradient descent, both with and without label noise. As shown in Figure 1, we observe that when trained with label noise, all singular values of the $W_1$ matrix, except the largest one, diminish to zero. This behavior is in contrast to the deterministic full-batch gradient descent where the singular values do not diminish.
**Non-linear networks.**
Many reviewers have asked for broader empirical validation of the phenomenon we theoretically capture in linear networks.
Indeed, recent efforts have been made to empirically understand the regularization effects induced by stochastic noise across various architectures.
For the sake of completeness, we recall some empirical results here. First, we consider simple non-linear architectures and then discuss general architectures. This discussion will be added to the revised version of the manuscript.
**ReLU non-linearity.** (more relevant to the question of reviewers V8cQ, cAQT).
Following the approach of [1], we consider a one hidden-layer ReLU network in a teacher-student setup for a scalar regression task.
We first present empirical evidence for $p=2$, as this allows for clearer visualizations.
Figure 2 shows how training with label noise regularizes ReLU networks by pushing the neurons to align with a few relevant directions (the directions of the teacher neurons here).
As reviewer V8cQ pointed out, the weight matrix gets locally low rank and the neurons are aligned.
Once they are aligned, they stay aligned, thus inducing alignment globally.
In Figure 3, we show the singular values for a regression problem in dimension $p=5$. The dynamics of the singular values are also similar to the case of linear networks we have theoretically studied.
**General architectures.**
For large-scale neural networks, we refer to the experiments in papers [1] and [2].
[1] studies implicit regularization across various architectures, from single hidden-layer networks to deep networks like DenseNet, while [2] focuses on various deep learning architectures like ResNet and VGG.
The main takeaway from these works is the empirical verification of the low-rank phenomenon, which we aim to support with theoretical backing.
The low-rank phenomenon is verified by the sparsity coefficient in [1] and the fraction of independent neurons in [2], which measures the alignment between the weights or activations of the neurons.
Intuitively, the more aligned they are, the lower the rank of the parameters.
We will appropriately cite these empirical works throughout our paper to broaden the scope of our results.
[1] M. Andruischenko, AV Varre, L. Pillaud-Vivien, N. Flammarion. SGD with large step
size learns sparse features, ICML 2023.
[2] F. Chen, D. Kunin, A. Yamamura, S. Ganguli. "Stochastic Collapse: How Gradient Noise Attracts SGD Dynamics Towards Simpler Subnetworks". NeurIPS 2023.
Pdf: /pdf/749ea00dfe6f5ef18aae8a6f22cfc79bf4247000.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
MetaCURL: Non-stationary Concave Utility Reinforcement Learning | Accept (poster) | Summary: This paper addresses concave utility RL in a non-stationary episodic setting, where the transition probabilities as well as the utility function may change from one episode to the other. The paper proposes an algorithm, dubbed MetaCURL, which dynamically select the best performing "expert" within a set of baseline algorithms with different starting points and hyper-parameters. The paper analyses the dynamic regret of MetaCURL as a function of some characteristics of the instance, such as the number of abrupt changes and the magnitude of changes, under a deterministic dynamics plus action-independent noise assumption.
Strengths: - Novelty of the setting. Concave utility RL has been studied extensively recently, but I am not aware of any work combining concave utilities and non-stationarity;
- Simplicity of the approach. The expert-based solution is neat and easy to follow.
Weaknesses: - Motivation of the setting (1). The paper does not do much to support why studying non-stationarity in CURL is useful/interesting;
- Motivation of the setting (2). The paper only evaluates the performance of the expected realization rather than the expected performance;
- Restrictive assumptions. The setting presents several challenges but does not require exploration due to an assumption that makes the stochasticity of the transitions independent from the actions;
- Analysis. It is unclear whether the analysis brings some interesting techniques beyond what has been done in prior works.
This paper tackles a setting comprising an eye-popping list of challenges, which include: non-stationarity of transitions, adversarial learning, concave cost functions. However, aside from restrictive assumptions that makes strategic exploration unnecessary, I believe the paper somewhat fails in two aspects: First, to motivate the setting properly, not just as a "patchwork" of other settings considered in the literature but as a problem raising from important application; Second, to highlight what is the value of their results in terms of insights and employed techniques.
To these reasons, I am currently providing a slightly negative evaluation. I would like to hear from the authors on these concerns. More details below.
Technical Quality: 3
Clarity: 3
Questions for Authors: MOTIVATION: NON-STATIONARY CURL.
Both CURL and non-stationary MDPs are interesting and worth studying, but why the combination of CURL and non-stationarity is interesting? Does the combination brings more challenges than just the sum of its components? Does the algorithm bring novel ideas? Does the analysis bring novel techniques or valuable insights? I feel that the paper shall do more to support why the reported results matter.
MOTIVATION: OCCUPANCY VS SINGLE-TRIAL.
As it is typical in the original formulation, the paper defines the CURL objective as a concave function of the occupancy measure, i.e., the expected state visitations. More recently (see Mutti et al., Challenging common assumptions in convex reinforcement learning, 2022), it has been shown that there is a crucial difference between optimizing the concave objective over the expected visitation and the expected concave objective. Can the authors comment on why the occupancy formulation is relevant in this setting?
MOTIVATION: NO EXPLORATION.
Studying the regret in a setting in which strategic exploration is unnecessary looks a bit odd. Also, the provided motivation is that applying optimism would induce computational issues, which I do not find fully convincing. There exist tractable no-regret algorithms in RL and it shall be shown why it is not the case for CURL. Moreover, plenty of works in RL studies the regret by relying on planning oracles that are intractable to implement. This is a common choice to isolate statistical complexity from computational considerations. I think that considering $g_n$ unknown but belonging to a known function class would make for a significantly more interesting analysis.
NOVELTY OF THE ANALYSIS.
Can the authors highlight what is novel in the analysis? From reading the paper, the main effort seems to reduce the analysis to a combination of EWA and a known rate for the baseline algorithm.
ADDITIONAL REFERENCES.
While the work does a good job of placing the contribution with respect to previous literature in non-stationary RL, there are some missing works that seem relevant. As mentioned above, the papers *(Mutti et al., Challenging common assumptions in convex reinforcement learning, 2022; Mutti et al., Convex reinforcement learning in finite-trials, 2023)* study a so-called "single-trial" variation of the CURL formulation. The works *(Cheung, Regret minimization for reinforcement learning with vectorial feedback and complex objectives, 2019; Cheung, Exploration-exploitation trade-off in reinforcement learning on online markov decision processes with global concave rewards, 2019)* analyse regret-minimization in a problem formulation akin to an infinite-horizon form of CURL. The papers *(Prajapat et al., Submodular reinforcement learning, 2023; De Santi et al., Global reinforcement learning, 2024)* look also closely related.
OTHER COMMENTS.
- Examples of applications: the paper makes an effort of describing a few applications that fulfil the dynamics assumption, not why they are relevant for CURL.
- Does the analysis need $F_t$ to be fully revealed instead of bandit feedback? If that is the case, can the authors comment on how this assumption may be overcome?
- The considered policies seem to depend only on the current state of the environment (aka Markovian). Can the author discuss why history-based policies are unnecessary in this setting?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The paper is mostly upfront in mentioning the limitations introduced by restrictive assumptions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful comments. Below, we address the main concerns:
- **Motivating the Setting:** In our common response to all reviewers we further elaborate on how the applications presented in the paper align with CURL. We plan to include them earlier in the introduction for the extended version.
- **Novelty of the analysis:** Our approach goes beyond simply combining RL with EWA. The challenge is that, due to the uncertainty and non-stationarity of the environment, the losses of each expert are unknown.
Indeed, to use the learning with experts framework we need to estimate the losses of non-played expert policies based solely on the observations from the played policy, given the incomplete knowledge of dynamics. For that, we must construct an estimate $\hat{p}^t$ of the non-stationary probability transition. Using the empirical estimator for $\hat{p}^t$ and standard RL results for bounding the $L_1$ norm between $\hat{p}^t$ and the true dynamics $p^t$ [Neu et al. 2012; 38] cannot be applied due to non-stationarity. To overcome this, we propose a second sleeping expert scheme to compute $\hat{p}^t$ (Alg. 3), where each expert is an empirical estimation of $p^t$ using values from different intervals. Obtaining the optimal rate for it was challenging and required new technical approaches, including a new specific loss function (see Eq. (12) and Alg. 3), and a new regret analysis (see Prop. 5.2).
Finally, we need to estimate $\hat{p}^t$ because we are dealing with CURL instead of RL. In RL, the linearity allows us to accurately estimate the loss of each expert's policy, even if the policy is not played, by simulating trajectories with $g\_n$ using data from the played policy. However, in the convex case, this approach does not work, so we must estimate $\hat{p}^t$ to determine the losses of non-played policies.
- **No exploration:** We address the concern about the dynamic hypothesis in our common answer to all reviewers. We believe our work could be extended to cases where $g_n$ belongs to a known family of parametric functions. We address below the question regarding the computational complexity:
In tabular RL, there are two approaches for dealing with dynamic’s uncertainty and adversarial losses: policy optimization (PO) and occupancy-measure methods. Occupancy-measure methods use ideas from UCRL2 [4]. They construct a set of plausible MDPs compatible with the observed samples and then play the policy that minimizes the cost over this set of plausible MDPs. PO methods evaluates the value function and uses a mirror descent update directly on the policy, solving the optimization problem through dynamic programming to obtain a closed-form solution. PO is then more computationally efficient than planning in the space of all plausible MDPs [Luo et al. 2021, Efroni et al. 2020a,b].
No efficient method that fully explores in CURL exists. As PO methods for RL rely on the value function, they are unsuitable for CURL, as CURL invalidates the classical Bellman equations. The occupancy-measure methods can be applied but are computationally less efficient. The algorithm we propose as a black-box is a PO method adapted for CURL from [35], but that also assumes that $ g\_n $ is known.
- **Occupancy vs. single-trial:** Thank you for pointing out the work by Mutti et al., 2022. It is an interesting question and we will discuss it in our paper. We chose to work with the expected realization setting to complement existing CURL research, also benefiting from the algorithm from [35] as a black-box. In scenarios with many homogeneous agents (like those in Section 2), a mean-field approach could justify this choice.
- **Bandit feedback of $F^t$:** This is an interesting question for a future work. The CURL bandit framework is significantly more difficult, in the same way that the convex bandit framework is more difficult than the multi-armed bandit one in online learning. In addition, it is also harder for the meta-algorithm to combine results from bandit algorithms while maintaining the optimal regret rates (see Agarwal et al., 2016).
- **Markovian policies:** In a non-stationary environment history-based policies can be detrimental, as they rely on outdated information. What was learned about the environment in previous episodes might no longer be accurate in the current one. Since the learner is unaware of the variation budget of the environment, the best approach is to discard past knowledge and restart learning at every major change in the environment.
Moreover, history-based policies are unnecessary for determining optimal restart times. Our meta-algorithm aggregates active instances of a black-box algorithm, ensuring performance at least as good as the best active instance, likely the longest-running one since the last major change in the environment. Given that the environment is near-stationary since the last major change, any history-based policy can be associated with a Markovian policy with the same state-action distribution [Putterman, 1994]. We can then just work with Markovian policies.
- **Additional references:** Thank you for pointing out these interesting new references. We will add them to our related work.
### References:
- *Neu et. al 2012, The adversarial stochastic shortest path problem with unknown transition probabilities, AISTATS*
- *Luo et al. 2021, Policy Optimization in Adversarial MDPs: Improved Exploration via Dilated Bonuses, NeurIPS*
- *Efroni et al. 2020a, Optimistic Policy Optimization with Bandit Feedback, ICML*
- *Efroni et al. 2020b, Exploration-Exploitation in Constrained MDPs*
- *Agarwal et. al 2016, Corralling a Band of Bandit Algorithms*
- *Jin et al. 2020, Learning Adversarial MDPs with Bandit Feedback and Unknown Transition, ICML*
---
Rebuttal Comment 1.1:
Comment: Dear authors,
Thank you for your thoughtful replies to my comments and the general response above.
I think you are making a compelling enough case for the paper showing why the analysis is interesting and providing some application where the need for models that are non-stationary and concave-utility simultaneously may be reasonable. I am increasing my score towards acceptance. I still believe that sidestepping the exploration problem is not fully motivated and that potential application shall be explored in more details.
Some additional comments:
- The reported application are interesting, but I want to stress that they sometimes fail to answer the question. For instance, you are explaining why the finance domain may be non-stationary (clear), but not why the utility shall be concave (e.g., through risk aversion);
- "No efficient method that fully explores in CURL exists". What about intractable methods that are statistically efficient? Previous work show that CURL can be solved through a sequence of RL problems, so it is not clear what makes CURL inefficient. Is it the instantiation of the RL problem (aka computing the reward function)?
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer for their prompt response and for recognizing the motivations and novelties in our theoretical analysis. We also appreciate the reviewer's insightful comments and suggestions, and we will incorporate the recommended changes into our paper. Below, we address the reviewer's additional comments.
> The reported application are interesting, but I want to stress that they sometimes fail to answer the question. For instance, you are explaining why the finance domain may be non-stationary (clear), but not why the utility shall be concave (e.g., through risk aversion);
We agree that the objective function was not explicitly stated in some cases. In the inventory management domain, for instance, a multi-objective function can be needed, while in finance, the objective might involve minimizing a convex risk measure (e.g., Conditional Value-at-Risk) while maximizing returns. To clarify our motivation, we will carefully explain in the paper how each example provided fully aligns with our framework.
> “No efficient method that fully explores in CURL exists". What about intractable methods that are statistically efficient? Previous work show that CURL can be solved through a sequence of RL problems, so it is not clear what makes CURL inefficient. Is it the instantiation of the RL problem (aka computing the reward function)?
We acknowledge that statistically efficient methods for online CURL with adversarial losses exist, such as occupancy-measure approaches using Mirror Descent combined with UCRL2 techniques. However, these methods are not computationally efficient, either for RL or CURL, as they require solving an optimization problem over the set of all plausible MDPs within a confidence interval at each iteration. On the other hand, policy optimization algorithms, which are both statistically and computationally efficient for RL, rely on value-based techniques that cannot be applied to CURL due to the invalidation of the classical Bellman equations.
The work that demonstrated that CURL can be solved through a sequence of RL problems [Zahavy et al. (2021)], considers a different setting than ours. They consider the CURL problem in Equation (1) with a fixed convex objective function and unknown dynamics. We focus on the online version with adversarial objective functions, i.e. that change with each episode and are unknown to the learner. As a result, their reduction does not apply to our case.
### References
- *Zahavy et al. 2021, Reward is Enough for Convex MDPs, NeurIPS* | Summary: This paper focuses on CURL, i.e., the Concave Utility Reinforcement Learning problem, which can be treated an extension of traditional RL to deal with convex performance criteria. The authors introduce MetaCURL for non-stationary MDPs. This is a theory heavy paper.
Strengths: Proofs are provided. It seems that the proposed theorems and propositions are novel.
Weaknesses: Actually, for readers (like me) are not an expert in CURL, some introductory examples are helpful. Additionall, it seems that this paper does not have any experiment.
Technical Quality: 2
Clarity: 1
Questions for Authors: First of all, I strongly recommend the authors to include at least one or two introductory examples or figures to show the high-level intuition of their settings and motivation. E.g., difference between RL and concave utility RL; a figure showing offline CURL and online CURL; why concave utility RL is important; why do we need to focus on non-stationarity; and so on. Some examples can be within the domains of robotics, medical treatment, and so on.
A good figure example can be found in Fig.1 of [1].
[1] Levine, Sergey, et al. "Offline reinforcement learning: Tutorial, review, and perspectives on open problems." arXiv preprint arXiv:2005.01643 (2020).
A good introductory example can be done by setting some parameters equal to low dimensions.
# Specific Questions about the Settings
1. Can your framework handle cases when the state or the action is continuous?
2. Do we need to assume the occupancy measure is non-zero?
# Inconsistent Notations
1. Authors mention that L247-248, "We assume T experts, with expert s active in interval [s, T]". Actually, such notations are very confusion. $t$ refers to the episode, but here $T$ represents the number of experts. Especially in L209, the authors mention "over a series of T episodes". Again, in L127, the authors mention that "for any interval $I \subseteq [T]$", what interval does $[T]$ represent, $[1:T]$ or $[t:T]$? It is extremely hard for readers to understand their paper with inconsistent notations.
2. In L248, the authors mention that "Expert s at episode t > s outputs". $s$ refers to the expert, how it can be compared with $t$?
3. In L19, authors claim "episodes of length N". Does it mean that each episode has N steps? Do the authors need to assume each episode has the exactly same length?
4. In L248, Why N is treated as a function, $N(x')$? But in L19, authors claim "episodes of length N".
5. In L210, "For each round t". In L225, "In every episode t". Actually, from my perspective, round t is different from episode t.
6. Confusing exponent symbols and notations. In L268, $T^{2/3}$ represents T exponent $2/3$. Is that correct? In your Algorithm 1, symbol $F^{t}$ and $\hat{p}^{t+1}$ represent exponent $t$?
I strongly recommend the authors to introduce a notation table.
# Specific Questions about MetaCURL
As mentioned previously, because the notations are inconsistent, it is really hard for me to fully understand the proposed framework.
1. Learning with Expert Advice (LEA)
1.1 Do you assume these K experts are optimal? What if there is limited expert knowledge?
1.2 What if two experts contradict to each other? Will your algorithm take the average of these two values? E.g., for the state $s_1$, expert 1 takes $a_1$, expert_2 takes $-a_1$, What should be the expert loss in that case? Could you show some numerical intro examples?
2. The authors mention that "This problem can be reduced to the sleeping expert problem". However, it is still unclear how and why this problem can be reduced?
2.1 The authors mention that "experts are not required to provide solutions at every time step". What if at certain step there is not any expert providing solutions? Is there a lower bound of the number of experts that have to be awake?
3. Does the agent have access to the "external noise sequence $\left(\varepsilon_{n}^{t}\right)$"? Because $\epsilon_{n}^{t} \sim h_{n}^{t}(\cdot)$, is there any information leakage?
4. Does the algorithm know which expert is active? Is that signal too strong? Can such information be learned?
5. What is the time complexity of algo. 1?
6. Theorem 5.1
6.1 In Theorem 5.1, why the complexity is determined by $\Delta^{\pi^{*}}$, $\Delta_{\infty}^{p}$ and $\Delta^{p}$ simultanously? What is the intuition of taking the minimum of abrupt and smooth variation? Under the worst case, should we take the maximum of such variations?
6.2 In [2], their regret bound contains the dimenions for the state space, action space, and the length of each episode. Why in Theorem 5.1, they are not included? Does it mean that your algorithm will not change, when their dimensions are increasing? Still, because of the inconsistent $T$ notation, it is unclear such $T$ represents? (L247-248, "We assume T experts, with expert s active in interval [s, T]". In L209, the authors mention "over a series of T episodes".)
[2] Rosenberg, Aviv, and Yishay Mansour. "Online convex optimization in adversarial markov decision processes." International Conference on Machine Learning. PMLR, 2019.
7. Because there is not any experiment in the current version, I have general concerns over the practicality of the proposed method, e.g., infinite horizon or continuous spaces.
Confidence: 2
Soundness: 2
Presentation: 1
Contribution: 3
Limitations: This paper does not have any introductory examples, or figures, or experiments. However, as such a notation dense paper, it seems that the notations are not consistent, making readers extremely hard to understand their major contributions. Additionally, the agents are assumed to have special strong information, e.g., experts signal or external noise sequence, which might be missing in real-world.
-----------------Updates
Thanks for the thoughtful discussions. After careful consideration, I have decided to maintain my current rating of 4.
tl&dr: Recommendation for Rejection: **The majority of reviewers (Reviewer unHu, Reviewer FWrb, Reviewer La9P, Reviewer ZTXg) expressed some concerns regarding the assumptions underlying this paper.** Suggesting inclusion of motivating examples and realistic experiments to enhance community-wide benefits from your insights.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Motivational examples:
We agree with the suggestion to provide examples to motivate our setting. We will include these examples in the introduction on the extended version.
### Questions about the setting:
- **1.** We focus on the model-based RL framework commonly used in theoretical works, which does not address continuous states or actions. However, our work can be extended to handle continuous states and actions by using a function approximation algorithm as a baseline and assuming the transition probabilities belong to a known parametric family.
- **2.** No, there are no restrictions on the occupancy measure.
### No inconsistent notations:
There is no inconsistency in our notations and we respectfully disagree with the reviewer on this point. We address all the specific concerns below:
- **1. and 2.** In Section 4.1, we introduce the general setting of learning with expert advice, involving $K$ experts and $T$ episodes. In our algorithm, the number of experts equals the number of episodes, i.e., $K = T$. At each episode $s$ we activate an expert that remains active until the finite horizon $T$. Thus we index each expert by the episode it has been activated at. We define the notation $[T]$ on line 21 as $[d] := \{1, \ldots, d\}$ for all $d \in \mathbb{N}$. This definition is used consistently throughout the paper. For intervals starting from an episode $t \neq 1$, we explicitly write $[t, T]$.
- **3. Length $N$:** Yes, each episode consists of $N$ steps, a common assumption in episodic MDP. Practical examples include daily tasks where each day is an episode discretised within $N$ time steps.
- **4. Notation $N$:** The notation $ N_{n,x,a}^{s,t}(x') $ represents the number of times an agent transitions from the state-action pair $(x, a)$ to the state $x’$ at time step $n$ between episodes $s$ and $t$. The notation $N$ indicates the length of an episode. To avoid confusion, we will change the letter for one of these notations.
- **5. round $t$:** Round $t$ is the same as episode $t$.
- **6. exponent symbols:** The $2/3$ in the regret expression at Line 268 is an exponent. The $t$ in $F^t$ or $\hat{p}^t$ indexes the objective functions or probability estimation at episode $t$, as defined in Lines 115/116 and Line 118, and is used consistently throughout the paper. This is a common notation in the literature used to avoid having all indexes written as subscripts when many indexes are needed, see for example [Perrin et. al, 2020].
### Questions about MetaCURL
- **1. and 2.** We do not fully understand the question. Could you please provide more details?
- **3.** The distribution $h\_n^t$ is entirely unknown to the learner. However, since $g\_n$ is assumed to be known and each agent observes their own state-action trajectory, they can determine their external noise trajectory by simply inverting $g\_n$, that is commonly an additive or multiplicative function with respect to the noise.
- **4.** There is no need to learn it. If an expert is active at episode $t$, it will output a policy; if not, it will not have an output. This is how the meta-algorithm knows which experts are active.
- **5.** The runtime is determined by the number of experts (i.e., instances of the black-box algorithm) multiplied by the time complexity of the black-box algorithm. Given our choice of naive intervals for running each black-box algorithm, there are $T$ experts, which increases the computational complexity by a factor of $T$. However, as noted in Remark 3.1, there are more sophisticated ways for designing running intervals that can reduce the computational complexity to $\log(T)$ and can be adapted to our case.
- **6.**
- **6.1**
The algorithm's error is measured by the dynamic regret in Eq. (5), which is the difference between the learner's total loss and that of any policy sequence. In environments with changing objective functions, the regret bound depends on the total variation of the objective functions or the policy sequence, $\Delta^{\pi^*}$.
For environments with changing probability transitions, the regret bound depends on either the abrupt variation, $\Delta^p_\infty$, or the smooth variation, $\Delta^p$. A robust algorithm depends on the minimum of these two metrics. If there are few abrupt changes, $\Delta^p\_\infty$ is smaller, and the algorithm performs at $\sqrt{\Delta^p_\infty T}$. If changes are frequent but minor, $\Delta^p$ is smaller, and the algorithm performs at $T^{2/3} (\Delta^p)^{1/3}$. Prior to our work, only [44] had an algorithm demonstrating such robustness.
Our algorithm is the first to achieve optimal performance with respect to both $\Delta^p$ and $\Delta^p\_\infty$ and depends on $\Delta^{\pi^*}$ rather than variations in the objective function. This allows our algorithm to excel even when the objective function changes arbitrarily, a capability not offered by previous work.
- **6.2** Our regret bound also depends on the state space $\mathcal{X}$, the action space $\mathcal{A}$, and the episode length $N$. Due to these dependencies making the regret bound expression cumbersome, we simplify the presentation by expressing the result in the order of the number of episodes $T$ alone. The notation $\tilde{O}$ signifies that the bound is of the same order as the expression in $T$, disregarding constants and logarithmic factors. Also, in Propositions 5.2, 5.3 and Appendix F we explicitly detail these dependencies. This is common practice in the literature, see for example [44].
- **7.**
We recognize that experiments are valuable for assessing an algorithm's practical performance, but our paper focuses on developing an algorithm that theoretically achieves the optimal regret bound within a specific framework.
### References:
- *Perrin et. al 2020, Fictitious Play for Mean Field Games: Continuous Time Analysis and Applications, NeurIPS 2020*
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. While I appreciate the clarifications provided, I remain concerned about certain aspects of your problem settings and their practical applications.
I strongly recommend the authors to add the clarifications above to their paper.
> we introduce the general setting of learning with expert advice, involving $K$ experts and $T$ episodes. In our algorithm, the number of experts equals the number of episodes, i.e., $K = T$. At each episode $s$ we activate an expert that remains active until the finite horizon $T$.
This raises a key question. What if $K \neq T$? Additionally, regarding my earlier inquiries about Learning with Expert Advice (LEA), are all your experts considered optimal? If some experts are not optimal (e.g., expert $k_1$ has a performance of 1, while expert $k_2$ has a performance of 0.9), how does your algorithm accommodate such variability?
Furthermore, let me add more details to my previous questions. If the experts propose different actions under similar circumstances ($s_1$, expert 1 takes $a_1$ but expert 2 takes $-a_1$), how does your algorithm resolve these conflicts? Addressing these scenarios would significantly enhance the robustness and applicability of your approach.
The authors mention that "this problem can be reduced to the sleeping expert problem." Could you provide further explanation on this point? Specifically: In L218, The authors mention that "experts are not required to provide solutions at every time step". What if, at a certain step, no experts provide solutions? Is there a minimum number of experts that must be "awake" or active for the algorithm to function effectively?
Could you clarify what do you mean by "that remains active until the finite horizon $T$"?
> $g_n$ is assumed to be known.
Is this assumption necessary? What if $g_n$ is not known? How could we validate it in real-world scenarios?
> The notation $N(X')$ represents the number of times an agent transitions from the state-action pair. The notation $N$ indicates the length of an episode. To avoid confusion, we will change the letter for one of these notations.
I appreciate your willingness to revise the notation to reduce confusion. This supports my initial observation that your current notation is not very consistent. As such, I strongly recommend including a notation table in the paper to clearly define and distinguish all critical symbols and terms. This would greatly aid in understanding and improve the readability of your work.
> We recognize that experiments are valuable for assessing an algorithm's practical performance, but our paper focuses on developing an algorithm that theoretically achieves the optimal regret bound within a specific framework.
Providing numerical examples or visual figures that illustrate real-world applications would greatly strengthen your paper. Such examples would address concerns raised by other reviewers, including Reviewer unHu, regarding the practicality and applicability of your algorithm. Empirical evidence, even in simplified scenarios, would offer a more comprehensive evaluation of your proposed method. E.g., could you provide an intuitive numerical example based on your global rebuttal (energy grid optimization, or robotics or finance)?
Additionally, in your newly added reference, [Perrin et. al, 2020] have some experiments, and figures clearly showing their results.
> Our regret bound also depends on the state space, the action space, and the episode length.
As such, how your algorithm will perform, when their dimensions are increasing?
---
Reply to Comment 1.1.1:
Comment: Thank you for your quick response. We believe there might unfortunately be some misunderstandings. Below, we offer our point-by-point responses to address and clarify these concerns.
> This raises a key question. What if $K \neq T$?
**Answer:** This scenario is not possible because the core principle of our meta-algorithm (Alg. 1) is specifically to design $K=T$ experts. This approach is a well-established technique in several dynamic online learning studies (see [13, 22, 24] cited in the paper). Additionally, as stated in lines 225-226, an expert is simply an instance of the black-box algorithm launched by the meta-algorithm. Therefore, we can have as many experts as necessary (the number is controlled by the meta-algorithm).
> […] are all your experts considered optimal? If some experts are not optimal (e.g., expert $k_1$ has a performance of 1, while expert $k_2$ has a performance of 0.9), how does your algorithm accommodate such variability?
**Answer:** We don't quite understand your point. Naturally, not all experts are optimal. The purpose of the meta-algorithm is to perform as well as the best expert, which aligns with the standard regret objective in Learning with Expert Advice. Regardless of the experts' performance, our theoretical guarantee in Thm. 5.1 remains valid.
> If the experts propose different actions under similar circumstances ($s_1$, expert 1 takes $a_1$ but expert 2 takes $-a_1$), how does your algorithm resolve these conflicts?
**Answer:** Experts proposing different actions under similar circumstances is not a problem. As is standard in the literature on learning with expert advice (see *Prediction, Learning, and Games* by Cesa-Bianchi and Lugosi, 2006), the experts' advice—represented as different state-action distributions—are combined in line 7 of Alg. 1. The resulting combined state-action distribution is then used to select the action to be taken.
> "this problem can be reduced to the sleeping expert problem." Could you provide further explanation on this point? Specifically: In L218, The authors mention that "experts are not required to provide solutions at every time step". What if, at a certain step, no experts provide solutions? Is there a minimum number of experts that must be "awake" or active for the algorithm to function effectively?
**Answer:** This will never be the case. An expert is simply an instance of the black-box algorithm. By design, we initialize one black-box algorithm at the beginning of each episode, which continues to provide outputs throughout all episodes until $T$. Thus, in every episode, there is always an expert providing solutions. The expert corresponding to the black-box instance initialized in episode $t$ is not active in any earlier episode $t’ < t$. However, all experts initialized in episodes $s < t’$ are active at episode $t$.
> $g_n$ known. Is this assumption necessary? What if $g_n$ is not known? How could we validate it in real-world scenarios?
**Answer:** Yes, this assumption is necessary in our framework. We explain the importance of addressing CURL and this assumption in our general response to all reviewers. If $g_n$ is unknown but belongs to a known family of parametric functions, we could extend our analysis. However, if $g_n$ is entirely unknown, it would be more challenging and would require further work, as already mentioned in the Conclusion (line 319) of our paper. We've already provided several examples of real-world scenarios where the dynamics can be modeled under the assumption of $g_n$ being known, with an unknown and non-stationary external noise distribution $h_n^t$ (see lines 142-154 in the paper and our general response).
> This supports my initial observation that your current notation is not very consistent. As such, I strongly recommend including a notation table in the paper to clearly define and distinguish all critical symbols and terms. This would greatly aid in understanding and improve the readability of your work.
**Answer:** The notation is not inconsistent. The function N can easily be distinguished from index N both by its signature and the presence of indices. We find the statement that: "since we accept to modify this notation, it supports the claim of inconsistency of all our notations" to be unjustified. The purpose of the review process is to improve the paper, and using our willingness to make changes to satisfy the reviewer as an argument against the paper is non-constructive and unfair. Nonetheless, we can easily implement this modification in the final version, which we see as a minor issue, and believe it should not be used as a reason to reject the paper. We will include a notation table in the final version.
---
Reply to Comment 1.1.2:
Comment: We continue below to address each point of the reviewer.
> Providing numerical examples or visual figures that illustrate real-world applications would greatly strengthen your paper. […] in your newly added reference, [Perrin et. al, 2020] have some experiments, and figures clearly showing their results.
**Answer:** We emphasize that this is a paper on theoretical reinforcement learning and online learning. The work by Perrin et al. (2020) addresses an offline scenario in mean field reinforcement learning with fully known dynamics (both $g_n$ and $h_n$ are known) and assumes stationary dynamics and losses, making it relatively straightforward to design experiments for such a setting.
In contrast, our framework is considerably more complex, involving adversarial losses and changing dynamics. Conducting experiments for adversarial and non-stationary MDPs is particularly challenging due to the difficulty of constructing optimal policies across episodes. The existing literature largely lacks experimental validation for such scenarios. Most of the influential papers addressing non-stationarity in reinforcement learning that we cite do not include experiments, such as [38, 25, 15, 40, 27, 12, 11, 36, 16, 44], among others.
> As such, how your algorithm will perform, when their dimensions are increasing?
**Answer:** As previously mentioned, this dependency in each term of the regret is explicitly outlined in Propositions 5.2, 5.3, 5.4 and Appendix F. The final dependency in Theorem 5.1 is influenced by the choice of the black-box algorithm. For the Greedy MD-CURL algorithm we propose, the dependency on the constants is the same as in [38]: $N^2 \mathcal{X} \sqrt{\mathcal{A}}$, multiplied by the dependency on $T$ and the variation budgets as has been outlined in Appendix G.
---
Rebuttal 2:
Title: Recommendation for Rejection: Suggesting Inclusion of Motivating Examples and Realistic Experiments to Enhance Community-Wide Benefits from Your Insights
Comment: Still, from my perspective, a theoretical paper should only be accepted at NeurIPS if it clearly demonstrates its motivation, and its assumptions are valid in practice (as mentioned by Reviewer unHu, Reviewer FWrb, Reviewer La9P, Reviewer ZTXg); and the whole RL community can benefit from its insights.
As the authors mention Robotics and Finance in their global response, here are some open repositories I recommend they test:
[1] https://github.com/openai/gym
[2] https://github.com/google-deepmind/dm_control
[3] https://github.com/AI4Finance-Foundation/FinRL
For example, many existing control and robotics problems involve infinite horizons and continuous actions and states. How can we assume a large number of experts are possible? How can your assumptions make sense in such scenarios? Additionally, in finance, many high-frequency trading problems occur within milliseconds, making it impractical for all your assumptions to be true.
> Moreover, although the reviewer may not fully appreciate it, the CURL framework is significantly more challenging than the classical RL framework, where the loss is linear.
I agree CURL is more challenging. As such, it is more important to clearly on experiments show the proposed method is able to outperform the existing methods.
> This scenario is not possible because the core principle of our meta-algorithm (Alg. 1) is specifically to design $K=T$ experts. Therefore, we can have as many experts as necessary (the number is controlled by the meta-algorithm).
Why is such a scenario considered impossible in your settings? In practice, it's common to encounter extremely long horizons or a high number of episodes, yet with a limited number of experts. Still, your settings appear overly restrictive.
> Including a notation table is a good suggestion that can easily be added to the paper and should not be a reason for rejection.
Actually, I need to ensure your theory is correct. While reading, I still find lots of notations should be better and properly defined. Without clear notation, it is extremely difficult for me to evaluate it. Therefore, I still have some concerns regarding your theoretical contributions.
> The dynamic assumption may appear strong, but it is valid in practical situations. For instance, [34] provides a real-world motivating application that aligns perfectly with our assumption and framework, addressing a significant problem in the context of climate change. We also believe that our algorithm could be extended in a future work to deal with continuous spaces using function approximation approaches. We point out that this extension has nothing to do with the dynamic’s assumption.
I strongly recommend that the authors directly test their framework on such benchmarks, instead of merely discussing it.
This claim remains questionable. | Summary: The paper studied the Concave Utility Reinforcement Learning (CURL) problem in non-stationary environments, which extends classical reinforcement learning (RL) to handle convex performance criteria in state-action distributions induced by agent policies. The paper proposed MetaCURL to address the challenges posed by non-stationary Markov Decision Processes (MDPs) with changing losses and probability transitions by using a meta-algorithm that aggregates multiple black-box algorithm instances over different intervals. The algorithm achieves optimal dynamic regret without requiring prior knowledge of the MDP changes, making it suitable for adversarial losses and providing a significant advancement in the RL community.
Strengths: 1. The designed algorithm is parameter-free, and does not require the knowledge of variation budget of transitions.
2. The results are theoretically solid.
Weaknesses: 1. The model falls into the tabular MDPs, where the state-action pairs are finite. I am wondering if similar technique can be applied to the case of function approximation. In addition, there is a missing related work on the non-stationary RL with function approximation [1].
2. The transition dynamics, while there are many suitable applications, is also restrictive for a theoretical work.
Feng et al., Non-stationary Reinforcement Learning under General Function Approximation, ICML, 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weaknesses part.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No further limitations to be addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and for recognizing the new theoretical insights in our paper. We address the questions below:
- **1. tabular MDP:** Thank you for bringing this related work to our attention, we will include a citation in our paper. Thanks for raising this question, we believe this could be an interesting future work of our paper and we will add it to the conclusions. To answer it, we can break the necessary changes to adapt our result to function approximation in two:
- **The Meta algorithm:** the analysis of the meta-algorithm depends on the method used to estimate $ p^t $. We believe that a parametric estimation could be employed, which would require a new loss function for the second expert algorithm that handles non-stationarity on the estimation of the probability transitions.
- **The Black-Box Algorithm:** We need to select a black-box algorithm for CURL that generalizes well to function approximation scenarios. Just as policy optimization algorithms are known to generalize effectively in such scenarios for RL (e.g., see [Luo et al. 2021]), we believe that the black-box approach we propose from [35] in Appendix G, which adapts policy optimization for CURL, can be extended to this context.
- **2. Transition dynamics:** We address this concern in our general response to all reviewers as well. We open on some details below:
- **New theoretical analysis:** Addressing the non-stationary CURL scenario is theoretically more challenging than RL, even with restrictive assumptions about the dynamics. To derive a policy using the learning with experts advice framework, we must estimate the losses of non-played expert policies based solely on the observations from the played policy.
If $ g\_n $ is known, we can use the external noises observed by the agent and the function $g\_n $ to simulate the trajectory of any non-played policy in the true environment, thereby defining an empirical state-action distribution $\hat{\mu}$ for each expert. In RL, where $ F^t(\mu) := \langle \ell^t, \mu \rangle $, estimating the loss of each expert at episode $ t $ by taking $\langle \ell^t, \hat{\mu} \rangle$ is sufficient to prove meta-regret convergence due to the linearity of the expression, requiring only common model-based RL techniques.
In CURL, this approach fails due to its non-linearity convexity. Therefore, we need to address the more challenging task of estimating the non-stationary probability transition $ p^t $ to estimate the losses. This leads to the main theoretical innovation of the paper: the development of the second expert scheme capable of estimating the transition probabilities even in the presence of non-stationarity. The non-linearity of CURL makes it theoretically more challenging than RL.
- **Existence of black-box algorithm for CURL that explores:** No closed-form algorithm for exploration in CURL exists. Policy optimization (PO) methods for RL rely on the value function, making them unsuitable for CURL, as CURL invalidates the classical Bellman equations. The alternative occupancy-measure methods, while applicable, are computationally less efficient and lack a closed-form solution. The algorithm we propose as a black-box is a PO method adapted for CURL and is derived from [35], but it also assumes that $ g_n $ is known.
### References:
- *Luo et al. 2021, Policy Optimization in Adversarial MDPs: Improved Exploration via Dilated Bonuses, NeurIPS 2021*
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I decide to maintain my score. | Summary: The authors present a policy learning algorithm for non-stationary (+ uncertain) environments and convex utilities.
The proposed algorithm is a meta-algorithm which runs multiple black-box algorithms and aggregates outputs with something they call a sleeping expert framework. The algorithm achieves optimal dynamic regret.
Strengths: - Proposed an algorithm for non-stationary and uncertain convex utility MDP
- Achieves optimal dynamic regret
Weaknesses: - The paper considers uncertain/non-stationary noise distribution. However, until 1/3rd of the paper, I got an impression of uncertain dynamics (without any structure assumption).
- The paper is difficult to follow particularly section 3 from line 175 and section 4
Technical Quality: 3
Clarity: 2
Questions for Authors: - Are baselines the same as the black-box algorithms?
- Line 164, what does parametric algorithm mean in this context
- Line 141 why do policy optimization algorithms have lower complexity?
- Any thoughts on how the algorithm changes if g_n is uncertain?
Comments for improving paper:
- line 13: we achieve ---> the algorithm achieves?
- line 15: "full adversarial losses, not just stochastic ones." is ill-posed
- I am familiar with both non-stationarity and uncertainty. However, since the paper builds on these two challenges, I would suggest explaining them (at least in a high-level) early in the introduction. Similarly, for the terms adversarial objectives and learner (line 50).
- Line 133-135: Please include it in the introduction (could be a bit vague) but it's important to note that the structure of MDP is already known and only the noise is uncertain.
- Maybe write contributions as a paragraph, since points 2,3 are not contributions but explanations of point 1.
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and questions. We agree that the paper is notation-heavy and may be difficult to follow in some parts. We welcome any further suggestions on how to improve the readability of the paper.
### Questions
- **Baselines vs blackbox:** Yes, baselines algorithms are the same as blackbox, we apologize for the confusion caused by using both terms. We will revise this to ensure consistency.
- **Line 164** We apologize for the typo where we incorrectly used the word "parametric." We wanted to say that our approach is applicable to any black-box algorithm that satisfies Equation (10), and specifically it also works if the algorithm depends on a learning rate $\lambda$, without requiring us to specify the optimal $\lambda$ as an input (parameter-free).
- **Line 141** In tabular RL, there are two main approaches for dealing with uncertainty and adversarial losses: policy optimization (PO) and occupancy-measure (or state-action distribution) methods.
Occupancy-measure methods leverage the ideas from UCRL2 [4]. They construct a set of plausible MDPs compatible with the observed samples and then play the policy that minimizes the cost over this set of plausible MDPs.
PO methods evaluates the value function and uses a mirror descent-like updates directly on the policy, solving the optimization problem through dynamic programming to obtain a closed-form solution for the policy. This approach is more computationally efficient than planning in the space of all plausible MDPs.
For a more detailed discussion on both approaches we refer to the works of [Luo et al. 2021, Efroni et al. 2020a,b].
- **How the algorithm changes with $g\_n$ unknown:**
- **MetaCURL analysis:** MetaCURL with a black-box algorithm that explores could be extended to the case where $g_n$ is unknown but belongs to a known parametric family. The case where $g_n$ is completely unknown would be harder and require further work.
- **Existence of black-box algorithm for CURL that explores:** No closed-form algorithm for exploration in CURL exists. PO methods for RL rely on the value function, making them unsuitable for CURL, as CURL invalidates the classical Bellman equations. The alternative occupancy-measure methods, while applicable, are computationally less efficient and lack a closed-form solution. The algorithm we propose as a black-box is a PO method adapted for CURL and is derived from [35], but it also assumes that $ g_n $ is known.
### Comments for improving the paper:
Thank you for pointing out the typos in lines 13 and 15; we will correct them. We also appreciate the feedback on making our paper more accessible to readers outside the online learning and theoretical reinforcement learning communities. We will provide a clearer explanation of the challenges of non-stationarity, uncertainty, and adversarial losses earlier in the introduction.
We will clarify our assumptions on the dynamics by moving Equation (8) to the introduction. We just want to highlight that we fully acknowledge the limitations of our restrictive assumptions in the paper. These are addressed in the abstract, introduction, comparison table 1, Section 2, and the conclusion, where we also point it as an important improvement for future work.
Regarding our contributions, we list them as separate points to highlight the distinct novelties of our method. The third point, in particular, stands as an independent contribution:
- **Point $1$, New algorithm:** MetaCURL is a new-algorithm for online MDPs dealing with non-stationarity, partial uncertainty, adversarial losses, convex losses, achieving optimal dynamic regret, and being efficient to compute.
- **Point $2$, New analysis:** this is the first application of learning with expert advice theory to Markov Decision Processes, resulting in analyses significantly distinct from existing RL approaches and potentially inspiring new algorithms for RL and CURL.
- **Point $3$, New black-box algorithm:** Developing the meta algorithm and proving its regret is one aspect (points $1$ and $2$), while demonstrating the existence of a baseline algorithm that meets the regret bound in Equation (10) is another contribution, detailed in Section 5 and Appendix G.
### References:
- *Luo et al. 2021, Policy Optimization in Adversarial MDPs: Improved Exploration via Dilated Bonuses, NeurIPS 2021*
- *Efroni et al. 2020a, Optimistic Policy Optimization with Bandit Feedback, ICML 2020*
- *Efroni et al. 2020b, Exploration-Exploitation in Constrained MDPs*
---
Rebuttal 2:
Comment: I thank the authors for their replies. I went over them and do not have any further questions. Please include the changes as promised in your revised version, particularly regarding the dynamics assumption and improving readability. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their time and feedback in evaluating our paper. While we agree that the assumption in the dynamics of Equation (8) may seem restrictive, we explain below why studying this case is important for CURL. Additionally, we highlight some of the novel contributions of our work.
- **New algorithm:** This is the first work where the framework of learning with expert advice has been adapted to online MDP to address non-stationarity. Previous RL approaches rely on sliding windows or restarts, which require prior knowledge of the variation budget. The first method to overcome this issue [44] does not handle adversarial losses, where losses can change arbitrarily at each episode and are unknown to the learner. Introducing the use of experts is novel for both RL and CURL, and can open the way for the development of new algorithms.
- **New analysis and need of dynamic assumption (8):**
We emphasize that CURL (which is a convex objective) is significantly harder than RL (which is a linear objective). A notable example in the online learning literature is the comparison between convex bandits [Lattimore 2024] and multi-armed bandits [Lattimore and Szepesvari, 2020], where the former are highly more challenging. Early studies on more complex problems may need stronger assumptions to enable the development of new methods for general scenarios. Addressing online CURL needs a new theoretical analysis distinct from RL, even with the dynamics assumption in Equation (8). Existing work on online CURL, like [35], also requires this assumption and is limited to stationary environments.
Our work adds the challenge of non-stationarity. In the meta-algorithm constructed using the experts framework, the complexity of working with CURL is addressed through our novel construction of $\hat{p}^t$, the estimator of the probability transition. This approach differs significantly from classic RL methods [Neu et al. 2012] and necessitates a new analysis (see Proposition 5.2).
- **Real world applications satisfying CURL with dynamic assumption (8):**
Our setting encompasses many real-world problems outlined in the paper and further detailed below:
- **Energy grid optimization.** To balance the energy production with the consumption, an energy provider may want to control the average consumption of electrical appliances (electric vehicles, water heaters, etc), with known physical dynamics but unpredictable and varying consumer behavior. This is a central problem in the context of climate change to introduce renewable energies into the electrical grid. The application paper Moreno et. al 2023 exactly fits into our CURL framework (see Eq. (1) of Moreno et al. 2023). Other related works are [41, Coffman et. al 2023].
- **Robotics.** Controlling a population of drones often involves environments with non-stationary dynamics, such as changing weather conditions and human interventions. One objective might be to reach a certain target while avoiding specific states. This can be formulated as the following convex objective:
$$F(\mu) := -\langle r, \mu \rangle + (\langle \mu, c \rangle)^2$$
where $r$ is a reward vector and $c$ is a cost vector. Alternatively, the objective might be to distribute the drones uniformly throughout the space, which can be expressed using the entropy function:
$$F(\mu) := \langle \mu, \log(\mu) \rangle.$$
- **Inventory management.** Online resource allocation, where user demand is the unknown external noise independent of stock levels [Lee et al. 2023];
- **Finance.** Trading tasks, assuming market independence from the trajectory of a single agent [Riva et. al 2022]; among others.
Since the external noise distribution and the cost function are unknown, we must still simultaneously learn the model and optimize the policy, which keeps us within the realm of reinforcement learning. These points provide enough motivation for addressing this setting, even though exploration is unnecessary.
### References
- *Lattimore 2024, Bandit Convex Optimisation*
- *Lattimore and Szepesvari 2020, Bandit algorithms*
- *Neu et. al 2012, The adversarial stochastic shortest path problem with unknown transition probabilities, AISTATS*
- *Moreno et. al 2023, (Online) Convex Optimization for Demand-Side Management: Application to Thermostatically Controlled Loads*
- *Coffman et. al 2023, A unified framework for coordination of thermostatically controlled loads, Automatica*
- *Lee et al. 2023, Online Resource Allocation in Episodic Markov Decision Processes*
- *Riva et al. 2022, Addressing Non-Stationarity in FX Trading with Online Model Selection of Offline RL Experts, ICAIF* | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper presents theoretical results on the CURL algorithm for non-stationary MDPs. The proposed meta CURL models non-stationarity factors using external noise and achieves low dynamic regret in near-stationary environments, with regret only related to the frequency and magnitude of changes. Overall, the work provides solid theoretical results for CURL and non-stationary RL. Although I am not an expert on RL theory (my focus is more on algorithms and applications), I would give an acceptance rating for this initial review and will be engaged in the discussion.
Strengths: - [**Motivation and Significance**]: The problem of non-stationary MDPs is critical in CURL, and this paper provides valuable theoretical insights and guarantees for the CURL algorithm. While I am not an expert in RL theory, I think in general this work is critical for both RL theory and empirical algorithms design.
- [**Technical Soundness**]: While I am not an expert in RL theory, the theoretical results and proofs presented in the paper appear to be solid and well-constructed, especially on the regret analysis part (only related to changing frequency and magnitude).
- [**Presentation**]: The paper is generally well-written and the theoretical analysis is clearly presented (accessible even to readers outside the theory domain)
Weaknesses: Since I am not an expert on RL theory, I have listed most of my questions in this section.
Q1: In equation (8), the noise captures the non-stationary factors. Will these factors from different parts of the MDP (such as the reward function or state dynamics) separately influence or impact the major theory and regret bound?
Q2: In some non-stationary or meta RL works, the non-stationary or distribution change factors are not only from external noise but also from the agent’s policy itself. How does this affect the major theoretical results? Any comments on this point would be appreciated.
Technical Quality: 3
Clarity: 3
Questions for Authors: I put the questions in the above section.
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I do not think this theoretical work could pose any potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and for recognizing the new theoretical insights in our paper. We address the questions below:
- **Q1:** We take into account two types of non-stationarity: on the objective functions and on the dynamics from the external noise distribution $h_n^t$. Each impact the regret in a different way in Theorem 5.1:
- **Non-stationarity factors from the objective functions:** Our regret bound works for adversarial objective functions, i.e. that can change arbitrarily at each episode, and are unknown to the learner. This robustness to non-stationarity in losses affects the regret through the term $\Delta^{\pi^*}$. This is a novel aspect of our algorithm. We are the first to present an algorithm that addresses both dynamic non-stationarity and arbitrarily changing losses without requiring prior knowledge of the variation budget.
- **Non-stationarity in dynamics:** In the paper, non-stationarity in the dynamics arises from the unknown distribution of the external noises $ h\_n^t $. This affects the regret through the terms $\Delta^p$ and $\Delta^p\_\infty$, as detailed in Theorem 5.1. However, our bounds would still hold if the state dynamics were also non-stationary (i.e., $ g\_n^t $ in line 128 instead of just $ g\_n $), provided $ g\_n^t $ is known to the learner.
- **Q2:** We also account for the non-stationarity of the policies in our work through the term $\Delta^{\pi^{\*}}$, defined in Equation (7). Our algorithm achieves the optimal regret bound for this term as stated in Theorem 5.1. Could you please provide the references to the works you mentioned so that we can offer more details on this?
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. My concerns have been mostly addressed. As to the Q2, one reference could be [1] where the non-stationary factors could come from different sources in MDPs.
[1] Xie, Annie, James Harrison, and Chelsea Finn. "Deep reinforcement learning amidst continual structured non-stationarity." International Conference on Machine Learning. PMLR, 2021.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response, we are glad to have addressed most of the reviewer concerns. We hope to address Q2 below, based on reference [1].
In [1], non-stationarity is represented by a probabilistic model. Indeed, the Markov decision process (including both the rewards and the probability transition kernels) is indexed by some latent variable $z$ that evolves according to a hidden Markov model. Our paper does not assume any model for how the non-stationarity arises, making it a robust approach.
From the perspective of losses, our algorithm is robust to adversarial losses—losses that can change arbitrarily and are unknown to the learner—affecting our regret bound through $\Delta^{\pi^*}$.
From the perspective of the probability transition kernel, we do not make any model assumptions about the distribution of the non-stationarity factors, $h_n^t$. The only assumption is that $g_n$ from Equation (8) is known. However, we believe our approach could be extended to the case where $g_n$ belongs to a known family of parametric functions with parameters that are unknown and vary across episodes, also without assuming any specific model for the distribution of these parameter changes. | Summary: This paper addresses online learning in non-stationary episodic loop-free Markov decision processes (MDPs) with changing losses and probability transitions. It extends the Concave Utility Reinforcement Learning (CURL) problem to handle convex performance criteria in state-action distributions, overcoming the non-linearity that invalidates traditional Bellman equations. The introduced MetaCURL algorithm, the first for non-stationary MDPs, runs multiple black-box algorithm instances over different intervals, using a sleeping expert framework to aggregate outputs. MetaCURL achieves optimal dynamic regret under partial information without prior knowledge of MDP changes, handling fully adversarial losses. This approach is expected to be of significant interest to the RL community.
Strengths: The novelty is clear. That is expanding CURL in non-stationary MDPs.
Weaknesses: My comments on weaknesses are listed below.
Technical Quality: 3
Clarity: 2
Questions for Authors: Thanks for the opportunity to review this paper. I have some questions as follows.
[Major issues]
1. Novelty
- The statement "non-requiring prior knowledge of the environment's variations" (lines 68-69) needs clarification. In Table 1, the dynamic regret of the proposed algorithms still depends on $\Delta^p_\infty$ and $\Delta^p$, which seem to contain information about environmental changes (lines 123-126). How does this align with the claim of not requiring prior knowledge?
2. Dynamic regret representation
- The use of $\Delta^{\pi^\star}_t$ to represent the upper bound of dynamic regret requires further justification. While it may seem reasonable since the optimal policy is defined given the environment, what happens if the optimal policy is not unique? In such cases, $\Delta^{\pi^\star}_t$ could take multiple values, as there can be multiple optimal policies at times $t$ and $t+1$.
3. Dynamic hypothesis
- The dynamic hypothesis presented in line 128 appears to be a strong assumption. Reinforcement learning (RL) is inherently about learning from data, and assuming prior knowledge of how the environment operates seems misaligned with RL's principles. This assumption aligns more with traditional control theory, where the plant dynamics are known. In RL, even in model-based RL, the model is estimated, and the policy is learned simultaneously.
[Minnor issues]
1. Terminology
- The term "loop-free" used in the abstract needs clarification. This term is not commonly used in the context of learning but is more prevalent in control theory. Is "loop-free learning" akin to continual learning with infinite episodes? More precise terminology would help readers understand your contributions better.
2. Regarding with the related works.
- It seems the paper lacks some recent related works (around 2023,2204) in theoretical non-stationary RL works. Please include the following to keep the track.
[1] Lee, H., Ding, Y., Lee, J., Jin, M., Lavaei, J., & Sojoudi, S. (2024). Tempo adaptation in non-stationary reinforcement learning. Advances in Neural Information Processing Systems, 36.
[2] Feng, S., Yin, M., Huang, R., Wang, Y. X., Yang, J., & Liang, Y. (2023, July). Non-stationary reinforcement learning under general function approximation. In International Conference on Machine Learning (pp. 9976-10007). PMLR.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: There is no limitations on potential negative societal impact .
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Questions
- **Novelty:** Our algorithm runs without needing knowledge of the environment changes as input, but the final error guarantee does depend on these variations, specifically through the non-stationarity measures $\Delta^p$ and $\Delta^p_\infty$. This agrees with the lower bound in [33], which we cite in line 68 of the paper. This lower bound is independent of the algorithm and captures the difficulty of the problem as a function of these measures. Note that all previous algorithms in the literature but the one in [44] require knowledge of the environment changes as input, which in practice is often not feasible. This is an advantage of our algorithm.
- **Dynamic regret representation:** Here, $(\pi^{\*,t})\_{t \in [T]}$ refers to any sequence of policies, not just the optimal ones. The dynamic regret in Equation (5) and the non-stationarity measure $\Delta\_t^{\pi^{*}}$ in Equation (7) are defined for any sequence of policies $(\pi^{\*,t})\_{t \in [T]}$. Therefore, our regret bound is valid for any sequence of policies. Choosing the optimal sequence of policies to evaluate the regret is natural. The fact that this sequence of policies may not be unique poses no problem in which case we would pay $\min\_{\pi^{\*}} \Delta^{\pi^{\*}}$.
- **Dynamic Hypothesis:** We further discuss the importance of the dynamics assumption in Eq. (8) in the common response to all reviewers. We want to emphasize that our dynamics still assume that the noise's distribution $h_n^t$ and the cost functions are completely unknown, requiring simultaneous model learning and optimization. This places our problem outside the scope of control literature and into the realm of reinforcement learning. Many existing RL studies address problems other than the exploration-exploitation dilemma, such as the initial work on concave utility RL mentioned in the paper, which typically consider offline settings with fully known dynamics [23, 47, 48, 5, 46, 20].
- **Terminology:** We appreciate the reviewer's comment and agree that the term may be ambiguous. We will provide additional explanations in the paper to clarify its meaning. However, "loop-free" is also commonly used in the context of learning in model-based episodic Markov Decision Processes (MDP) to describe online MDP problems with fixed episode lengths and a fixed initial state-action distribution per episode. This term was first introduced by [Neu et al. 2010], and has since been used in several influential works, such as: [Neu et al. 2012, Zimin et al. 2013, Rosenberg et al. 2019, Jin et al. 2020, Efroni et al. 2020, Moreno et al. 2024]; among many others.
- **Related work:** We greatly appreciate the reviewer for pointing us to these related works. We will include them in our citations.
### References:
- *Neu et al. 2010, Online Markov Decision Processes under Bandit Feedback, NeurIPS*
- *Neu et al. 2012, The adversarial stochastic shortest path problem with unknown transition probabilities, AISTATS*
- *Zimin et al. 2013, Online Learning in Episodic Markovian Decision Processes by Relative Entropy Policy Search, NeurIPS*
- *Rosenberg et al. 2019, Online Convex Optimization in Adversarial Markov Decision Processes, ICML*
- *Jin et al. 2020, Learning Adversarial Markov Decision Processes with Bandit Feedback and Unknown Transition, ICML*
- *Efroni et al. 2020, Optimistic Policy Optimization with Bandit Feedback, ICML*
- *Moreno et al. 2024, Efficient Model-Based Concave Utility Reinforcement Learning through Greedy Mirror Descent, AISTATS*
---
Rebuttal Comment 1.1:
Comment: Thank you for the update.
The reviewer still has a few questions and requires some clarification.
- **Novelty**: The reviewer acknowledges that the upper bound should be dependent on the environment's changes. However, the statement "Our algorithm runs without needing knowledge of the environment changes as input" seems to imply a rather strong assumption. Does this mean that the algorithm requires *exact* information about the environment changes, or is it based on *estimated* information? If it requires exact information, the algorithm might lose its practicality, as it is generally impossible to know the environment's changes with precision in a time-varying setting. If this is the case, could the authors propose an upper bound that accounts for estimation error? I also recommend referring to recent non-stationary reinforcement learning papers [1,2] that incorporate estimation errors in both model-free and model-based approaches.
[1] Lee, H., Ding, Y., Lee, J., Jin, M., Lavaei, J., & Sojoudi, S. (2024). Tempo adaptation in non-stationary reinforcement learning. Advances in Neural Information Processing Systems, 36. – This work includes estimation error in a model-based approach.
[2] Lee, H., Jin, M., Lavaei, J., & Sojoudi, S. (2024). Pausing Policy Learning in Non-stationary Reinforcement Learning. In Forty-first International Conference on Machine Learning. – This work incorporates estimation error in a model-free manner.
- **Dynamic Regret Representation**: While the author's response wasn't perfectly aligned with the reviewer's initial query, the clarification provided has fully addressed the concern. The reviewer is now persuaded by the approach of taking $ \min_{\pi^*} \Delta^{\pi^*} $ for the constant upper bound. Thank you.
- **Dynamic Hypothesis**: The reviewer still finds the dynamic hypothesis to be a somewhat strong assumption, even though the system includes noise. This is because noise is an inevitable consideration in any system design. However, based on the prior works referenced, this issue is considered fully addressed. Thank you.
- **Terminology**: Thank you for the clarification. Could the authors please be more specific about the term "loop-free"? What exactly does "loop" refer to, and why is the proposed algorithm described as "loop-free"?
- **Related Work**: Thank you for the consideration. Please include these references if the paper is accepted.
---
Reply to Comment 1.1.1:
Comment: Thank you for your prompt response. We are pleased that we have addressed most of your questions. The remaining issue appears to stem from a misunderstanding, which we hope to clarify below. We hope that the reviewer will consider this in their final assessment.
> **Question about novelty:** The reviewer acknowledges that the upper bound should be dependent on the environment's changes. [...] However, the statement "Our algorithm runs without needing knowledge of the environment changes as input" seems to imply a rather strong assumption. Does this mean that the algorithm requires exact information about the environment changes, or is it based on estimated information?
**Answer:** There is still a misunderstanding. Our algorithm **does not** require prior-knowledge of the environment changes contrary to most existing works. The fact that it appears in the performance bounds is a strength, not a weakness of our result, and does not mean at all that the algorithm needs it to satisfy the bound.
As the reviewer acknowledges, it is highly challenging to know the environment's changes with precision in a time-varying setting. Achieving our results without needing this knowledge was thus non-trivial and most existing algorithms loose their practicality contrary to us. This is one of our core contributions.
Thanks again for pointing us to these related works. We will include them in our citations.
> **Loop free terminology:** Could the authors please be more specific about the term "loop-free"?
**Answer:** The loop-free problem refers to a specific case of episodic MDPs, where an agent must transverse episodes of a fixed length (denoted by $N$ in our case), always starting from the same state or from a state-action pair sampled from the same distribution (denoted by $\mu_0$ in our case), with transition probabilities dependent on the agent's time step $n \in [N]$. We will include these details in the main paper.
---
Rebuttal 2:
Comment: Thank you for the update.
- **Novelty:** I apologize for the earlier confusion. I now understand what the authors mean. The algorithm does not require any prior information about the environment, and the performance bound is indeed dependent on the environmental changes. This point has been fully addressed.
- **Loop-free:** Thank you for the explanation. This question is now fully addressed. It seems that "loop-free" refers to resetting the agent and starting from the initial stage multiple times. (Please correct me if I am mistaken.)
I also appreciate how the author have written its. contribution in a explicit way and actively engaging in the discussions. **I have raised my score to 7.** Please revise the paper based on our discussions if the paper get accepted. I will also consider the feedback from other reviewers. Good luck with your paper. Thank you.
---
Rebuttal Comment 2.1:
Comment: Thank you for your quick response and for recognizing the new contributions of our paper. Yes, that is precisely what "loop-free" signifies in this context. We are pleased to have addressed all of the reviewer's questions, and we appreciate you taking the time to review our work. | null | null | null | null |
One-Step Effective Diffusion Network for Real-World Image Super-Resolution | Accept (poster) | Summary: In this paper, authors propose a novel one-step effective diffusion network, termed as OSEDiff, for the Real-ISR. The proposed OSEDiff adopts LQ image as the input and directly output the final output with the help of the decoder of VAE, thus eliminate the uncertainty introduced by random noise sampling and achieve one-step generation of diffusion model. Besides, the variational score distillation is introduced in the latent space to conduct KL-divergence regularization to enhance the quality of final Real-ISR output. In the experimental part, the proposed method outperforms other baselines in both quantitative and qualitative comparisons.
Strengths: There are several strengths here:
1. One-step diffusion model is attractive and intereting, which will significantly reduce the inference time.
2. The paper is easy to read and understand.
3. The proposed method show comparable results in the experimental part.
Weaknesses: There are several weakness here:
1. Experimental part could be further improved, which will be further demonstrated in the following section.
2. The efficient and params of the proposed method is expected to be detailed demonstrated.
Technical Quality: 3
Clarity: 3
Questions for Authors: There are several concerns and suggestions here:
1. I am wondering the comparisons between the proposed method and SinSR. Does the improvement of performance gain from the pre-trained SD?
2. Experiments of the paper is expected to be improved. For instance, total parameters instead of the training parameters are expected to be compared in the Table 2, which is more closely related to the deployment.
3. I am wondering the reason why the setting of "w/o VSD loss" may achieve better PSNR than proposed method. I would be better to provide more visual results of it to demonstrate the effectiveness of the VSD loss.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Could be referred to above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Comparison with SinSR.**
The improvements of OSEDiff over SinSR mainly come from the pre-trained SD model and the VSD loss. The SD model, pre-trained on large-scale data, contains rich prior knowledge of natural images, enabling effective one-step generation. Additionally, we finetune the multi-step SD model into a single-step network using LoRA and VSD loss. It ensures the one step generation capacity of OSEDiff for restoration tasks while avoiding the lengthy inference time of multiple steps.
**Q2. Total params rather than training params.**
Thanks for the suggestion. The table below show the total number of parameters, FLOPs and the inference time of the competing methods. Please refer to our responses to Q2 of Reviewer KDhM for more discussions. We will add this table and the associated discussions in the revision.
**Table: Complexity comparison among different methods. All methods are tested with an input image of size 512×512 on an A100 GPU.**
| | StableSR | DiffBIR | SeeSR | PASD | ResShift | SinSR | OSEDiff |
|-----------------------|:--------:|:-------:|:-----:|:------:|:--------:|:-----:|:-------:|
| **# Total Param (M)** | 1410 | 1717 | 2524 | 1900 | 119 | 119 | 1775 |
| **FLOPs (G)** | 79940 | 24234 | 65857 | 29125 | 5491 | 2649 | 2265 |
| **Inference Step** | 200 | 50 | 50 | 20 | 15 | 1 | 1 |
| **Inference Time (s)**| 11.50 | 2.72 | 4.30 | 2.80 | 0.71 | 0.16 | 0.35 |
**Q3. Why is the PSNR lower when using VSD Loss compared to not using it, and more visualization results.**
In the objective function of a general learning-based image restoration framework (refer to Eq. (1) in the main paper), there are two major components: a fidelity term and a regularization term. The fidelity term is usually measured and constrained by $L_2$ or $L_1$ norms, which is friendly to the PSNR metric, while the regularization term is designed based on the employed prior knowledge of natural images. In our work, the VSD loss, which aligns the distribution of network outputs to that of SD prior distribution (i.e., natural image prior distribution), is used as the regularization term. With VSD, the fidelity will be traded-off a little but the perceptual quality will be much enhanced (due to the alignment with natural image prior distribution). Without VSD, the network will focus on optimizing the fidelity term so that the PSNR metric can be improved; however, the perceptual quality will be decreased. Visual examples can be found in Figure 5 of the enclosed PDF file. Without the VSD loss, the PSNR of the restoration results is higher, but they lack semantic details.
---
Rebuttal Comment 1.1:
Comment: Many thanks for authors' response. Most of the concerns have been satisfied. The only concerns relies on the inference times, I hope the authors could demonstrate more on why the proposed OSEDiff obtains few FLOPs but more inference times than SinSR. It would be helpful for the readers.
---
Rebuttal 2:
Comment: Thank you very much for pointing out this problem. We carefully re-examined OSEDiff's inference code and found that it is a Python decorator function that consumes much time. This function is just to calculate the inference memory usage without affecting anything on the restoration process. We therefore removed this function and reassessed the inference time of different modules for both OSEDiff and SinSR. The results are shown in the table below:
| OSEDiff | DAPE | Text Encoder | VAE Encoder | UNet | VAE Decoder | SUM |
|:------------:|:------:|:------------:|:-----------:|:------:|:-----------:|:------:|
| FLOPs (G) | 104 | 22 | 542 | 355 | 1242 | 2265 |
| Time cost (s)| 0.023 | 0.010 | 0.020 | 0.026 | 0.037 | 0.116 |
| SinSR | VAE Encoder | UNet | VAE Decoder | SUM |
|:------------:|:-----------:|:------:|:-----------:|:------:|
| FLOPs (G) | 898 | 202 | 1549 | 2649 |
| Time cost (s)| 0.034 | 0.045 | 0.051 | 0.130 |
We can see that the overall time consumption of OSEDiff is 0.116s, which is actually less than SinSR (0.130s). Though the DAPE module in OSEDiff costs additional time, the VAE Encoder, UNet, and VAE Decoder modules of OSEDiff consume less time than those of SinSR. Note that although SinSR's UNet has lower FLOPs (202G) than OSEDiff's UNet (355G), its inference time is nearly doubled because SinSR's UNet employs Swin Transformer blocks. The frequent window partitioning operations elevate memory access and data movement costs, resulting in increased latency. Similar observations can be seen for OSEDiff's DAPE module, which also uses a Swin Transformer backbone. Its FLOPs is less than 1/3 of that of UNet, yet its time consumption is comparable to UNet.
The time consumption of SinSR reported in our main paper is 0.16s, which includes the time for converting the model output tensor to an image. We will change it to 0.130s in the revision for fair comparison.
Again, we sincerely thank this reviewer for the careful reading of our paper and indicating this problem. We will correct this issue in the revision.
---
Rebuttal Comment 2.1:
Comment: Thanks for your response. All my concerns have been addressed. I will keep my positive rating. | Summary: This paper introduces a diffusion-based real-world image super-resolution method, OSEDiff, which can efficiently generate high-quality images in just one diffusion step. Firstly, in order to eliminate the uncertainty introduced by random noise sampling, the authors propose to directly feed the low-quality images without any random noise into the pre-trained SD model which integrates trainable LoRA layers. Furthermore, the variational score distillation for KL-divergence regularization is utilized to align the distribution of generated images with natural image prior, ensuring that the one-step diffusion model could still generate high-quality output. OSEDiff achieves comparable performance to state-of-the-art SD-based real-world image super-resolution methods with fewer trainable parameters and inference time.
Strengths: This paper introduces a one-step effective diffusion network for real-world image super-resolution. The low-quality images without any random noise are utilized as the inputs of the pre-trained SD model and the VSD in latent space are utilized to align the distribution of generated images with natural image prior. The proposed method solves two problems in diffusion-based real-world image super-resolution and it achieves comparable performance to state-of-the-art SD-based real-world image super-resolution methods with fewer trainable parameters and inference time. The figures of this paper are simple and clear, with precise formula expressions, demonstrating a certain clarity.
Weaknesses: (1) The motivation is unclear. In the introduction, the authors use a certain amount of space to mention two major issues in training a Real-ISR model, including the problem of how to build the LQ-HQ training image pairs, but this paper does not propose an innovative solution to this problem.
(2) The explanation of the novelty and its effectiveness are insufficient. 1) Contribution 1, "directly feeding the LQ images into the pre-trained SD model without introducing any random noise," claims to eliminate the uncertainty of output but lacks supporting evidence, raising doubts about its validity. It is recommended to provide visual evidence and analyze why this can eliminate the uncertainty and achieve better Real-ISR performance in detail. 2) Contribution 2, “utilizing variational score distillation for KL-divergence regularization,” it is recommended to provide some visualization of the output of diffusion network which are sent to the two regularizer networks to prove that it can align the distribution of generated images with natural image prior.
(3) The metrics used need more explanation. Since CLIPIQA is text-related, although it has been proved to be effective and generalized in image quality assessment, it is not that convincing to directly apply it to evaluate the quality of real-world super-resolution images. More explanation can be provided.
(4) Details of the network structure are lacking in the figure. In Figure 2, the detailed network structure of VAE encoder and the diffusion network with trainable LoRA layers are lacking. It is recommended to provide detailed network structure.
(5) In “Ablation Study”, the super-resolution performance of the method with text prompt extractor is not competitive enough compared to removing text prompt. Although the qualitative comparisons show that the method with text prompt extractor can generate richer image details, the method without text prompt is much better for several common metrics.
Technical Quality: 3
Clarity: 3
Questions for Authors: (1) Could you please provide some visualization of the output of diffusion network which are sent to the two regularizer networks to further prove the effectiveness of the regularization loss in the latent space?
(2) Could you please explain in detail why "directly feeding the LQ images into the pre-trained SD model" can eliminate the uncertainty of output and achieve better Real-ISR performance than the methods which uses inputs with random noise?
(3) Could you please provide detailed network structure of VAE encoder and the diffusion network with trainable LoRA layers?
(4) Could you please provide more qualitative comparisons of the method with/without text prompt extractor to further prove the effectiveness of text prompt or further elaborate on the importance of the text prompt to prove that it is essential?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: (1) Briefly explain why "directly feeding the LQ images into the pre-trained SD model" can eliminate the uncertainty of output, and why it can achieve better Real-ISR performance than the methods which uses inputs with random noise.
(2) Please provide some visualization of the output of diffusion network which are sent to the two regularizer networks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Unclear motivation.**
Thanks for the comments. The goal of this work is to develop an efficient and effective Real-ISR method by using the pre-trained SD prior. In the research and development of Real-ISR methods, how to construct LQ-HQ training pairs is a critical issue. Therefore, we spend a certain amount of space to introduce this problem so that readers can have a more complete understanding of Real-ISR. Based on this reviewer's comments, we will compress this part and make the introduction of LQ-HQ pair more concise.
**Q2. The benefits of reduced uncertainty.**
Thanks for your suggestion. Previous diffusion-based Real-ISR methods apply several denoising steps to generate HQ latents from noisy inputs. However, this process introduces unwanted randomness in the Real-ISR outputs, causing variations with different noise samples. OSEDiff uses LQ image as input to the diffusion network without random noise sampling, enabling a deterministic mapping from LQ to the HQ image. Meanwhile, OSEDiff leverages the VSD loss to enhance the generation capability of the LoRA finetuned network.
As suggested by this reviewer, in Figure 2 of the enclosed PDF file we provide visual comparison of the Real-ISR results of OSEDiff, StableSR and SeeSR. For StableSR and SeeSR, we show their results with two different noise samples. One can see clearly the difference between the two outputs caused by the randomness of noise. For example, the details in StableSR-1 are overly generated, while the result of StableSR-2 is smooth. In contrast, OSEDiff does not involve noise in the input, and it achieves stable Real-ISR result. We will add the visual evidence and analysis in the revision.
**Q3. Visualization of network outputs.**
Thanks for the good suggestion. As suggested by this reviewer, we visualize the distributions of outputs of OSEDiff with and without VSD loss using t-SNE on the RealSR dataset. The distribution of pre-trained SD model is plotted as a reference. We use LLaVa1.5 to generate the captions for the RealSR dataset, and use them as prompts for pre-trained SD, with the inference steps set to 50. As shown in Figure 3 of the enclosed PDF file, the output distribution of OSEDiff with VSD loss is much closer to the pre-trained SD than the OSEDiff without VSD loss. This clearly validates that the VSD loss aligns the distribution of diffusion network outputs with that of pre-trained SD. We will add this part in the revision.
**Q4. Why use CLIPIQA for Real-ISR evaluation.**
We follow previous works (StableSR, DiffBIR, and SeeSR) to use CLIPIQA as an evaluation metric for fair comparison. The visual examples in Figures 1, 4, and 5 the enclosed PDF file demonstrate that higher CLIPIQA scores generally correspond to better visual quality of the images. However, we agree with this reviewer that highly reliable no-reference IQA remains an unsolved problem, especially for real-world images, and it needs more efforts from the community to find a more robust metric.
**Q5. Details about trainable LoRA module.**
Thanks for the suggestion. We finetuned all layers except the normalization layers using LoRA. We empirically found that fine-tuning the normalization layers causes the model to crash, so we frozen them. We will add more detailed descriptions in the revision.
**Q6. Null prompts vs. Tag prompts.**
Though the results with null prompts score higher on some reference metrics (e.g., PSNR and DISTS) than those with tag-style prompts extracted by DAPE, their visual quality is inferior. This is mainly because the commonly used metrics such as PSNR cannot faithfully reflect the visual quality of images. As shown in Figure 4 of the enclosed PDF file, the results using tag-style prompts have richer leaf vein textures, clearer lines, and less noise. Therefore, we choose tag prompts extracted by DAPE. We will further clarify this in the revision.
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer px3F
Comment: Thanks for the authors' rebuttal. Most of my concerns have been addressed. I still have one question about the SR results in Table 1. We can see from this table that 9 metrics are used for performance evaluation, and part of them are better than other methods. In my opinion, the metrics results of different models of the proposed method may vary significantly. Some models can be chosen to perform well on part of the metrics. In fact, the same applies to the comparison methods, i.e., they can also be tailored to specific metrics like this paper. This paper directly uses the default models of comparison methods, and haven’t chosen models for metrics like this paper. Could this approach potentially make some comparison methods perform better than the proposed method? This can be analyzed in the revised version.
---
Reply to Comment 1.1.1:
Comment: We are pleased that this reviewer found our responses helpful, and we thank this reviewer for the further question on evaluation metrics, which is actually an open problem for real-world image super-resolution (Real-ISR) evaluation.
**Q1. Performance evaluation metrics**
The key contribution of our work is the development of a fast one-step diffusion model for Real-ISR. As can be seen in the main paper, our OSEDiff model significantly reduces the inference time of previous SD-based models (e.g., it is 30x faster than StableSR and 10x faster than SeeSR). Apart from complexity and speed, for the performance evaluation metrics shown in Table 1 of our paper, we actually adopted the ones used in existing works in order for fair comparison with them. We didn’t deliberately select metrics which are advantages to our model.
Those metrics can be generally classified into two categories: fidelity-oriented ones (e.g., PSNR, SSIM) and perception-oriented ones (e.g., NIQE, MUSIQ, CLIPIQA). For the Real-ISR task, usually we emphasize a little bit more on the perceptual quality of the restoration output. Unfortunately, by far the accurate perceptual image quality assessment (IQA) remains an open problem, and no one single metric can align with human visual perception very well. On the other hand, there are certain conflicts between fidelity-oriented metrics and perception-oriented ones. This also explains why one algorithm performs well on some metrics but not so well on other metrics. If an algorithm is tailored to some metrics, it may become disadvantageous on other metrics.
Being highly advantageous on inference speed, our approach shows comparable with or even superior results to the competing methods on the nine metrics. The visual comparisons also demonstrate its competitiveness. We will release our codes and trained models so that the peers can test OSEDiff’s performance on different scenarios.
**Q2. Default models of comparison methods**
We follow the convention in the community to use the official models of competing methods for comparison, with the parameter settings recommended by the authors. We believe this is the fairest way to compare the different Real-ISR methods. If we tailor one model or change its parameter setting to improve some metrics, other metrics may be deteriorated (see our responses to Q1), and this is unfair for other competing methods. Nonetheless, we agree that how to better evaluate the performance of Real-ISR models needs the collective efforts from the community. | Summary: This paper presents a novel approach to real-world image super-resolution for a one-step effective diffusion network (OSEDiff). The proposed OSEDiff effectively eliminates the uncertainty introduced by random noise sampling in previous methods, achieving significant performance improvements across multiple benchmarks, demonstrating its potential for practical applications.
Strengths: 1. The focus on one-step diffusion in this paper is particularly noteworthy, as it addresses a critical challenge for the practical application of diffusion models in image super-resolution tasks.
2. The performance gains and visually compelling results suggest the effectiveness of the proposed method for real-world image super-resolution.
3. The paper is well-structured and easy to follow.
Weaknesses: 1. The core concept of this paper shares similarities with "One-step Diffusion with Distribution Matching Distillation". A comprehensive comparison with this work, including performance metrics, training efficiency (parameters and FLOPs), and potential limitations, would strengthen the paper's contribution.
2. While Table 2 provides a comparison of trainable parameters, including the total number of model parameters and FLOPs during inference would offer a more complete picture of computational efficiency.
3. There is a lack of discussion on the degradation in LQ images, such as how to ensure that text prompts remain accurate in low-quality scenarios.
4. In Figure 7 (first and second scenarios), some generated details and textures appear to be creatively added rather than strictly reconstructed from the LQ input. Addressing potential limitations in fidelity due to this creative aspect would be valuable.
Technical Quality: 3
Clarity: 3
Questions for Authors: No more questions.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have discussed about the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Comparison with DMD.**
Thanks for the nice suggestion. While both OSEDiff and DMD draw upon the concept of variational distillation from ProlificDreamer, they differ significantly in several aspects. First, DMD is designed for text-to-image tasks, whereas OSEDiff is tailored for image restoration tasks, aiming to reconstruct an HQ image from its LQ counterpart. To achieve this goal, OSEDiff employs a trainable VAE encoder to remove degradation from LQ images, and it directly takes the LQ as input to the diffusion network to avoid the uncertainty caused by random noise sampling. Trainable LoRA layers are introduced to the diffusion UNet to adapt it to the restoration task. In contrast, DMD lacks these specific designs tailored for image restoration tasks. Furthermore, DMD requires full parameter fine-tuning of two SD models, while OSEDiff only needs to fine-tune a small number of LoRA parameters. This makes OSEDiff more memory-efficient and training-friendly. We will cite DMD and discuss its similarity and differences from OSEDiff in the revision.
**Q2. Comparison of the total number of model parameters and FLOPs.**
Thanks for the suggestion. The table below compares the total number of model parameters and FLOPs of different methods.
We see that those pre-trained SD model based methods (i.e., StableSR, DiffBIR, SeeSR, PASD and OSEDiff) have a similar total number of parameters. SinSR employs the diffusion model trained in ResShift, which is much smaller than the pre-trained SD model. However, the generalization capability of ResShift and SinSR is much lower than that of the SD based methods.
In terms of FLOPs, the multi-step diffusion-based models have significantly higher FLOPs than single-step methods (e.g., SinSR and OSEDiff) because they run the SD UNet for multiple times.
The input resolution of SinSR's VAE decoder is twice of that of OSEDiff's VAE decoder, as SinSR uses an f4 VAE and OSEDiff uses an f8 VAE. Overall, OSEDiff and SinSR have similar FLOPs. We will add these analysis in the revision.
**Table: Complexity comparison among different methods. All methods are tested with an input image of size 512×512 on an A100 GPU.**
| | StableSR | DiffBIR | SeeSR | PASD | ResShift | SinSR | OSEDiff |
|-----------------------|:--------:|:-------:|:-----:|:------:|:--------:|:-----:|:-------:|
| **# Total Param (M)** | 1410 | 1717 | 2524 | 1900 | 119 | 119 | 1775 |
| **FLOPs (G)** | 79940 | 24234 | 65857 | 29125 | 5491 | 2649 | 2265 |
| **Inference Step** | 200 | 50 | 50 | 20 | 15 | 1 | 1 |
| **Inference Time (s)**| 11.50 | 2.72 | 4.30 | 2.80 | 0.71 | 0.16 | 0.35 |
**Q3. Robust text prompts from LQ.**
Sorry that we did not make it clear. We adopt the DAPE module from SeeSR, which has proven to be robust to the degradation of LQ images for extracting tag-style text prompts. The DAPE module is trained to generate correct semantic prompts even when the input images are severely degraded.
**Q4. Limitations in fidelity.**
Compared with traditional image restoration methods, the recent methods that leverage pre-trained SD priors can produce perceptually much more realistic results. As a compromise, some generated details may not be faithful enough to the input LQ image. This is an inherent limitation of such generative prior based methods, as noted by this reviewer.
Compared with other methods along this line, however, the proposed OSEDiff has achieved a much better balance between fidelity and creativity. First, it takes the LQ image as input without any random noise and performs only one-step diffusion. This significantly improves the stability of image synthesis using diffusion priors, reducing much the possibility of generating false details. On the other hand, OSEDiff utilizes VSD loss to regularize the generator network, ensuring that it can preserve the generative capacity of the pre-trained SD model. As can be seen from the experiments in the main paper, OSEDiff achieves state-of-the-art results in both fidelity and perceptual metrics among the SD based methods. In future research, for image scenes or regions that require high-fidelity, such as face and text, we can apply a higher weight of fidelity loss. For scenes like greenery and hair where the perceptual quality is more important, we can adaptively apply a higher weight of VSD loss to enhance expressiveness.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the careful reply. While the implementation draws some insights from ProlificDreamer and DMD, this paper is the first to address super-resolution tasks, and its contribution is valuable to the community. However, I have a further question regarding the VSD loss. Although the ablation experiments demonstrate its effectiveness, I have concerns. In super-resolution, the output must add details and remain consistent with the input. If the noise added in VSD loss is minimal, denoising might yield results similar to the original, making it hard to enhance details. On the other hand, too much noise could increase detail but compromise fidelity. Balancing these factors is challenging, which makes VSD more effective for generation tasks but less theoretically clear for image restoration. Could the authors offer a more illustration about this point? Would using a larger super-resolution model for distillation be a more reasonable approach?
---
Reply to Comment 1.1.1:
Comment: **Q1. How to balance perception and fidelity in OSEDiff?**
We sincerely thank this reviewer for the recognition on our contribution. This reviewer's comments on the role of VSD loss in OSEDiff is correct. VSD was originally proposed for the 3D generation task. We adopt it as a regularization term on image distribution, aiming to generate details and enhance the naturalness of the SR output. Adding heavier noise in VSD loss will improve generative ability, while adding lighter noise will result in better fidelity. In our implementation, we uniformly sampled $t$ in the VSD loss from 1 to 1000 in each iteration. Kindly note that in addition to VSD loss, we also employed the $L_2$ loss and LPIPS loss as fidelity constraints in OSEDiff. We balance perceptual quality and fidelity by adjusting the weights on different loss terms. We empirically set the weights of $L_2$ loss, LPIPS loss, and VSD Loss as 1, 2 and 1, respectively, and found that a good balance on perceptual quality and fidelity can be achieved, as evidenced by the experiments in our manuscript.
**Q2. Would using a larger super-resolution model for distillation be a more reasonable approach?**
Thanks for the insightful comments. Actually, at the very beginning of this work, we indeed have tried to use a large diffusion-based SR model, more specifically SeeSR, as the regularization model to distil OSEDiff. However, the trained model suffered from weak detail generation ability. This is because SeeSR, as an image restoration model but not a generation model, is already a trade-off between fidelity and perception. Its generative ability is reduced to ensure the fidelity of SR outputs. When using SeeSR as the regularization model, it is difficult to endow the OSEDiff model enough capacity to generate image details. Therefore, we directly use a pre-trained SD model as the regularization model. | Summary: The paper introduces OSEDiff, a one-step diffusion network for Real-World Image Super-Resolution. OSEDiff performs variational score distillation in the latent space to ensure predicted scores align with those of multi-step pre-trained models. By fine-tuning the pre-trained diffusion network with trainable LoRA layers, OSEDiff can adapt to real-world image degradations. Experimental results show that OSEDiff achieves comparable or superior Real-ISR outcomes to previous methods. The paper's contributions include the development of OSEDiff, which addresses the limitations of traditional multi-step diffusion models and enables efficient production of high-quality images in one diffusion step.
Strengths: 1. The proposed method outperforms the previous single-step SR method (SinSR) in terms of the most of metrics.
2. It is comparable to previous multi-step diffusion models but greatly reduces inference time.
3. The model has very few training parameters and can be trained with minimal resources, which is advantageous.
4. Figure 3 shows that the qualitative comparison results are very promising.
Weaknesses: 1. How much would the performance of OSEDiff change if multiple denoising steps were performed, such as 20 steps or 50 steps?
2. The design of the key module, the regularizer networks, is not clearly explained.
3. It is unclear how the LR image is used as a condition. Is it concatenated along the channel dimension in the latent space? However, the pre-trained SD’s VAE encoder downscales by a factor of eight, so for a 512x512 HR input, the latent output is only 64x64, which conflicts with the 128*128 LQ setting mentioned in the first paragraph of section 4.1.
4. The reason for fine-tuning the VAE encoder is not very clear. Please explain in the revision why it is necessary to fine-tune the VAE encoder jointly with the diffusion model. If this has already been discussed, please indicate where it is covered.
5. Does the model support input of LR images with different resolutions? For example, if the input is a 32x32 LR image, can feasible results be obtained?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weakness part
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Multiple inference steps for OSEDiff.**
Please kindly note that OSEDiff is specifically designed for one-step diffusion for Real-ISR. Unlike previous multi-step methods (e.g., StableSR, PASD, SeeSR), which are all based on ControlNet by using noise as input and LQ image as control signal, OSEDiff directly uses LQ image as input (without random noise) and apply LoRA to finetune the diffusion network for producing HQ output. In its current network design, OSEDiff cannot be used for multiple denoising steps. Nonetheless, we believe that for the Real-ISR task, OSEDiff is much preferred as it achieves faithful and perceptually realistic results while significantly reducing the complexity. If we wish to extend OSEDiff to a multi-step framework for stronger generative capability, we could model multi-step diffusion in the residual space, similar to ResShift. We will explore this possibility in the future.
**Q2. Details about regularizer networks.**
Thanks for pointing out this problem. We will add more details on the regularizer networks in the revision. To model the distribution of natural and restored images, two diffusion models are needed to compute the KL-divergence via variational score distillation (VSD): one for the natural image distribution and another for the restored image distribution. Since the pre-trained SD model can effectively represent the natural image distribution, one regularizer network can be the frozen SD U-net. On the other hand, the U-net can be fine-tuned to model the distribution of restored images. Therefore, we fine-tune the SD U-net as another regularizer network in a similar way to that we fine-tune the SD as the generator network. However, they are trained differently: the generator is trained on $t=999$ to perform super-resolution, while the regularizer is trained on $t \in (\{0, \dots,999\})$ to align with the restored image distribution via VSD loss.
**Q3. LQ as condition.**
Sorry for the confusion caused. Actually, different from previous methods such as StableSR, PASD and SeeSR, our proposed OSEDiff does not use the LQ image as a condition but directly as the input to the UNet. First, we upsample the LQ image to the target size. Then, we input it into the encoder to obtain the latent feature, which is passed into the UNet to obtain the refined feature. Finally, the refined feature goes through the decoder to output the restored HQ image. For example, for $\times$4 SR on a 128$\times$128 LQ image, we first use a bicubic interpolator to upsample it to 512$\times$512. Then, we pass it to OSEDiff to get a 512$\times$512 HQ image. We will clarify this in the revised manuscript.
**Q4. Finetune VAE encoder.**
We apologize for any lack of clarity in our manuscript. The VAE encoder is set to be trainable to remove degradation. In other words, we fine-tune the encoder with LoRA so that it can also serve as a degradation removal network to pre-process the input LQ image.
To better illustrate the role of fine-tuning the encoder, we compare the performance of OSEDiff on the RealSR dataset with and without fine-tuning the VAE encoder. The results are shown in the table below. We can see that fine-tuning the encoder significantly improves the no-reference metrics. Although fine-tuning the encoder leads to slightly worse full-reference metrics such as PSNR and LPIPS, these metrics do not necessarily indicate better visual quality. In Figure 1 of the enclosed PDF file, we visualize the results of OSEDiff with and without encoder fine-tuning. One can see that fixing the VAE encoder may introduce some artifacts, which can be caused by the severe degradation in the LQ input. We will add the above discussions in the revision.
**Table: Comparison of OSEDiff with and without fine-tuning the VAE encoder on the RealSR dataset.**
| | Finetune VAE Encoder | PSNR↑ | SSIM↑ | LPIPS↓ | CLIPIQA↑ | MUSIQ↑ | NIQE↓ |
|---------------|:-----------------:|:-----:|:------:|:------:|:--------:|:------:|:------:|
| OSEDiff | ✗ | 25.27 | 0.1966 | 0.2656 | 0.5303 | 58.99 | 6.5496 |
| OSEDiff | ✓ | 25.15 | 0.2128 | 0.2921 | 0.6693 | 69.09 | 5.6479 |
**Q5. LR inputs of different resolutions.**
Yes, OSEDiff can handle inputs of different resolutions in a similar way to other diffusion based super-resolution methods. First, the LR image (with the size of H$\times$W) is upsampled to an HR image $Y_{bicubic}$ (with the size of rH$\times$rW) using bicubic interpolation, where $r$ is the SR factor. If the short side of $Y_{bicubic}$ is less than 512, we resize its short side to 512 before feeding it to OSEDiff. Finally, the output is resized back to rH$\times$rW to obtain $Y_{osediff}$. In other cases, $Y_{bicubic}$ (rH$\times$rW) is directly fed into OSEDiff to get $Y_{osediff}$ (rH$\times$rW). For the $\times$4 SR task, if LR is 32$\times$32, we first upsample LR to 512$\times$512 using bicubic interpolation, enhance it to 512$\times$512 by OSEDiff, and then resize the output to 256$\times$256.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. The authors have solved my problems and I am about to maintain my positive recommendation; hope to see the VSD training code soon.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank this reviewer for the positive feedback. For sure we will release the training codes and all the trained models soon.
Authors of paper 4246 | Rebuttal 1:
Rebuttal: Dear Reviewers, Area Chairs, and Program Chairs:
We are grateful for the constructive comments and valuable feedback from the reviewers. We appreciate the reviewers' recognition on the novelty of our method (Reviewers KDhM and 4vqL), its superior performance (Reviewers jXQz and px3F), efficient training (Reviewer jXQz), and its clarity (Reviewers KDhM, px3F, and 4vqL).
To address the reviewers' concerns, we have provided more analysis about inference setting, VAE encoder finetuning, the distribution of output, comparison with DMD and SinSR, and the total parameters and FLOPs. We have also attached a PDF file for more visual comparisons. Please find our itemized responses to all reviewer’s comments below. We will really appreciate it if Reviewer px3F can kindly reconsider the decision, provided that the main concerns are well addressed.
Best regards,
Authors of Paper 4246
Pdf: /pdf/e6b7344d9c31f98d9ad908f68d2570dac74691eb.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Open-Vocabulary Object Detection via Language Hierarchy | Accept (poster) | Summary: The paper proposes a novel method for leveraging image-level annotations for object detection pretraining, specifically for zero-shot and open vocabulary detection settings. The proposed method, DetLH, leverages self-training to generate pseudo-labelled object proposals, and then adjust the pseudo-label for each box according to the image's class and the classes in WordNet that are hierarchically related to it. Furthermore, the authors propose a method for prompt generation that does not require additional training. The proposed method is evaluated extensively and achieves state-of-the-art results in relevant zero-shot and open-vocabulary object detection benchmarks.
Strengths: 1. The paper is well written and its key ideas clearly communicated.
2. The proposed method is well presented and novel, and constitutes an improved way to leverage labelled object centric images to facilitate object detector training.
3. The conducted experiments are extensive and demonstrate that the proposed method consistently achieves state-of-the-art performance in various settings and datasets.
Weaknesses: 1. Unless I am mistaken, there is no ablation study related to the impact of the reliability score. I would be very interested to see how it affects performance.
2. While justifications are provided in the appendix, I believe more comparisons should have been included with relative methods (L661-671). More importantly, I believe that the ommision of evaluations with LVIS/COCO is a significant issue, and I am not convinced by the authors stated reason (L683-687) that they focused on better suited datasets. The paper has extensive experiments with many datasets, and it is confusing to me why they chose to leave out what is arguably the most common dataset for detection tasks.
Technical Quality: 4
Clarity: 4
Questions for Authors: My primary concern regarding the paper is related to the evaluations: a) Whereas the paper is a clear improvement over Detic, it avoids extensive comparisons with more recent methods (Detic being a ECCV 2022 paper) such as the ones included in the paper (L661-671), and b) evaluations with LVIS, a shared benchmark for most relative papers, are not conducted.
Overall, however, I am very satisfied with the paper's contribution and I believe that the conducted experiments adequately demonstrate the proposed method's effectiveness. My main reason for not giving a higher rating is because LVIS/COCO results are not included, which would be informative and are significant to facilitate comparisons with existing and future relevant works.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors have adequately addressed the relevant limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **[Response 1] The affect of the reliability score:**
Thank you for your suggestion. As suggested, we conduct the new ablation study to examine the effect of the reliability score in the proposed DetLH. As the table below shows (on Object365), DetLH effectively mitigate the label noises with the design of reliability score.
| | AP50|
|:----:|:----:|
|DetLH without reliability score | 31.8|
|DetLH | 32.5 |
**[Response 2] More comparisons:**
We would clarify that we did not provide much comparisons with those methods as mentioned in Lines 661-671 because they focus on different tasks with different objectives (also use different training data, backbones and benchmarks). We largely focus on benchmarking DetLH with Detic as both methods work on the same task WSOD and use the same training data (ImageNet-21K and LVIS) and backbone. As suggested, we will provide more comparisons between DetLH and those detectors and include the comparisons in the updated manuscript or appendix.
In addition, we would clarify that we focus on cross-dataset zero-shot evaluations [48, 68] (i.e., evaluation on unseen datasets like CLIP) as discussed in Lines 117-119. We did not benchmark on LVIS/COCO (both use the same data but different annotations) since we used LVIS/COCO in network training.
As suggested, we benchmark DetLH on LVIS and COCO as shown below and will include these results in the updated manuscript or appendix.
|LVIS Dataset | AP50|
|:----:|:----:|
|MosaicOS |28.3 |
|CenterNet |34.9 |
|AsyncSLL |36.0 |
|SeesawLoss |37.3 |
|Baseline |40.7 |
|Detic |41.7 |
|**DetLH** | **42.2** |
|COCO Dataset | AP50|
|:----:|:----:|
|Baseline |39.3 |
|Self-training |39.5 |
|WSDDN |39.9 |
|DLWL |42.9 |
|Detic |44.7 |
|**DetLH** |**45.3** |
---
Rebuttal Comment 1.1:
Title: Reply to authors
Comment: I thank the authors for addressing the issues raised by my review.
I would like to also see what other reviewers think about the responses to the issues they raised, but I am satisfied by the authors’ response to my review and am inclined to recommend acceptance (i.e. raise my score to 7 after the discussion).
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer VMio,
Thank you for your encouraging feedback and positive evaluation for our work. We will include the new ablation studies and the more comparisons in the main paper in the revised version. We sincerely appreciate your constructive comments, which have strengthened our paper.
Best regards,
Authors | Summary: This paper focuses on scaling the detectors’ vocabulary with image-level weak supervision. To better leverage the image-level labels in this task, this paper introduces a language hierarchical self-training (LHST) framework that incorporates language hierarchy (i.e., WordNet) with self-training. In addition, the authors introduce a language hierarchical prompt generation (LHPG) method enabling open-vocabulary object detection. The proposed method outperforms baselines across 14 commonly used benchmarks. Comprehensive ablation experiments validate the effectiveness and generalization of the proposed method.
Strengths: + This paper is well organized and written. The overall paper is easy to follow and technically sound.
+ On extensive benchmarks, the proposed method achieves non-trivial performance improvement and shows strong generalization.
Weaknesses: - The necessity of the investigated problem (i.e., scaling the vocabulary of the detectors with image labels) should be further stated, as the recent MLLM-based detection/grounding methods inheritly have strong zero-shot capabilities.
- The proposed method is somewhat trivial and heuristic, where only the usage of WordNet is new and interesting. Most techniques are from existing WSOD and semi-supervised object detection methods. It would be better to highlight the novelty.
- Several semi-supervised WSOD methods trained with both instance- and image-level annotations are missing. It would be better to review them for completeness.
[a] UniT: Unified Knowledge Transfer for Any-Shot Object Detection and Segmentation, CVPR’21
[b] Cyclic Self-Training with Proposal Weight Modulation for Cross-Supervised Object Detection, TIP’23
[c] H2FA R-CNN: Holistic and Hierarchical Feature Alignment for Cross-Domain Weakly Supervised Object Detection, CVPR’22
[d] DOCK: Detecting Objects by transferring Common-sense Knowledge, ECCV’18
- In Line 149, the authors claimed ‘a single-label annotation could be expanded into a multi-label annotation’. It would be better to give some visualization examples. In Line 164, authors claimed ‘the labels expanded by WordNet are not all reliable’. Giving some examples would be helpful to improve the readability.
- Why only the AP results of oracles are reported on Object 365?
- How about the efficiency of WordNet in training and inference?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer weakness.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have discussed the limitations, and this works does not show potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **[Response 1] The necessity of the investigated problem:**
Thank you for your suggestion. We would clarify that, compared with recent MLLM-based detection/grounding methods, our approach of scaling the vocabulary of detectors using image labels offers significant advantages in training efficiency. As shown in Table 14 (as copied and posted below), DetLH is up to ten times more efficient than current visual grounding/MLLM-based methods such as GLIP and DetCLIP. The superior efficiency is critical to the scalability of detectors in many practical tasks.
| Method | Types | Run time (ms)|
|:----:|:----:|:----:|
GLIP w Swin-T |Visual Grounding-based |8333.3|
DetCLIP w Swin-T |Visual Grounding-based |434.7 |
DetLH w Swin-B (ours) | WSOD |46.0|
**[Response 2] The novelty of the proposed method:**
Thank you for your suggestion! We would share that, different from existing WSOD and semi-supervised WSOD methods, DetLH tackles the image-to-box label mismatch by introducing language hierarchy into self-training: 1) it introduces WordNet's language hierarchy to expand the image labels and accordingly enables co-regularization between the expanded labels and self-training. As shown in Table 10 (in the submitted manuscript), just introducing language hierarchy or simply combing self-training and language hierarchy does not work without our co-regularization design; 2) it introduces language hierarchy into prompt generation to bridge the vocabulary gaps between detector training and testing, leading to better prompt generation and detection performance. Extensive experiments validate the effectiveness of our designs in DetLH. We will highlight the above points in the updated manuscript.
**[Response 3] Comparisons with other semi-supervised WSOD methods:**
Thanks for sharing [a,b,c,d]. We will review them and highlight how DetLH differs in the revised paper. As the table below shows, DetLH clearly outperforms [a,b,c,d] on Object365.
| | Baseline | [a] | [b] | [c] | [d] | DetLH|
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|AP50 |29.4 |29.5 |29.9|29.8|29.6|**32.5**|
**[Response 4] Visualizations of label expanding:**
Thank you for your suggestion! **Figure 1** in the attached PDF shows that the labels expanded by WordNet are not all reliable. We will include the suggested visualization in the updated manuscript or appendix.
**[Response 5] Missing oracle values on Object365 in Table 1:**
We reported AP value only for Object365 as we referenced to the Detic paper which only reports AP value. For Pascal VOC, we trained the model to get the oracles of all AP metrics. We will train models on Object365 to obtain the complete AP values and update them in the updated manuscript.
**[Response 6] The efficiency of WordNet in training and inference:**
| | Training Speed (second/image) | Inference Speed (second/image)|
|:----:|:----:|:----:|
|Detic (w/o WordNet language hierarchy)| 0.7009 | 0.045|
| Our DetLH |0.7110| 0.046|
---
Rebuttal Comment 1.1:
Comment: Thanks for the efforts in rebuttal. Your response has addressed my concerns.
After reading other reviewers' comments and the response, I would like to raise my rating.
---
Rebuttal 2:
Comment: Dear Reviewer PjbE,
Thank you for your insightful feedback. We have carefully considered your questions and suggestions and have addressed them accordingly. We sincerely appreciate your constructive comments, which have helped strengthen our paper. As the discussion phase is nearing its conclusion, we would appreciate it if you could let us know if there are any additional questions or suggestions.
Best regards,
Authors | Summary: * The authors propose to expand the training data for object detection using classification datasets through two main contributions:
1. Combine a language hierarchy with self-training to extend the training dataset, while minimizing label noise by re-weighting categories with reliability scores.
2. Bridge the vocabulary gaps between training and testing using a vocabulary proxy (WordNet) improving zero-shot learning.
* Extensive experiments in multiple datasets are provided showing an improvement over the state-of-the-art.
Strengths: 1.The authors clearly exposed their ideas and the paper is reasonably well written.
2.Even though using a language hierachy to improve object recognition is an old idea, [Marszalek, et al. Semantic Hierarchies for Visual Object Recognition] it's appealing to see it applied to a modern, large-scale, object detection setting. Moreover, the combination with self-training is original and well justified.
3.The results show a significant improvement over the state-of-the-art.
4.The evaluation is very extensive and performed over an impressive quantity of diverse datasets.
Weaknesses: 1. The experimental section and ablation do not clearly show if the authors' assumptions are correct.
* The idea of the paper heavily relies on the assumption that there are discrepancies on the taxonomy levels of the categories of image and detection datasets. This assumption may hold for some datasets but not for others and it should have a significant impact on the method's results. I miss a section presenting an analysis of the taxonomies of image vs box categories, and how it may impact peformance.
2. Discussion on the specific impact of each contribution to the final result is lacking
* The ablation study (Section 4.2) does not seem to add much value to the experimental discussion. First, showing the performance increase over the baseline [68] seems misleading when the authors compare DetLH against only-box supervision, since the work of [68] already proposes to expand detector (boxed) training data with image-classification data. The work of [68] already demonstrated that adding image-based training data improves detection results, and it should not be the goal of the ablation section to show this. A far more interesting baseline could have been to train DetLH with uniform category weights to demonstrate how DetLH deals with noisy labels.
* Another important ablation experiment would be to analize the impact of narrowing the gap between training and testing label distributions. The CLIP embeddings would already help in this regard, but how impactful is the use of a proxy vocabulary on top of it is not clearly quantified.
3. Wording is sometimes ambiguous or could lead to contribution misunderstandings:
* For instance the authors say "These detectors usually yield constrained detection peformance when trained only with detection datasets... [68]", when the work of [68] is actually trained with both detection and classification data. If the authors want to point out that [68] already suggested this, it should be more explicitly stated. Even more so, if it's followed with a claim that the current method overcomes such limitations: "DetLH introduces large-scale image-level datasets to enlarge the data [...] in detector training.", which is misleading as this is not something the present work introduces, but the work of [68].
Technical Quality: 2
Clarity: 3
Questions for Authors: * Futher analysis and validation of main assumption. How do the category taxonomy levels differ across datasets?
* Where does the performance gain come from? Robustness to label noise or vocabulary proxy?
* In general in the experimental section I miss more insight and discussion on results, experimental setup and data setup. I feel that most discussion repeatedly states the obvious (performance increases due to leveraging larger amouts of data). Whenever more specific insight is provided, it is very speculative. For example, authors state that performance gains in Wildlife are due to the "introduction of the language hierarchy", but there are no specific experiments designed to proof such claim.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes, the limitations and potential negative impact are addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **[Response 1] Analysis of discrepancies of the taxonomies of image vs. box categories:**
Thank you for pointing out this issue! As suggested, we analysed the mismatch between image-level and box-level categories and how much it could affect the detection performance. As the table below shows, the mismatch between image-level and box-level categories varies across datasets, and the proposed DetLH improves more with the increase of mismatch levels. We will include a subsection to discuss this issue in the updated paper.
|ImageNet-21K |Mismatch Ratio| Baseline (AP50) | DetLH (AP50) | $\Delta$ |
|:----:|:----:|:----:|:----:|:---:|
v.s. Cityscapes |0.13 |47.1 |50.3 |+3.2|
v.s. DETRAC |0.25 |39.2 |44.0 |+4.8|
v.s. MIO-TCD |0.27 |20.6 |24.5 |+3.9|
v.s. African Wildlife |0.50 |80.9 |87.2 |+6.3|
v.s. Vistas |0.67 |35.6 |44.0 |+8.4|
v.s. Arthropod Detection| 0.86 |36.7 |49.0 |+12.3|
**[Response 2] More ablation studies:**
Thank you for your comment! As suggested, we conduct new ablation studies to analyse how effective DetLH deals with noisy labels. Specifically, we compare DetLH with and without using reliability scores (the latter means uniform category weights) over Object365. As the table below shows, including the adaptive weighting mechanism (i.e., reliability scores) helps mitigate the label noises effectively.
| | AP50|
|:----:|:----:|
DetLH without reliability score | 31.8|
DetLH | 32.5 |
Regarding the impact of using proxy vocabulary, we conduct new experiments on Object365 dataset to compare LHPG with CLIP embeddings only and LHPG with both CLIP embeddings and proxy vocabulary. As the table below shows, using a proxy vocabulary performs clearly better, demonstrating its effectiveness in narrowing the distribution gap between training and test labels.
| | AP50|
|:----:|:----:|
Baseline | 29.4|
LHPG (CLIP embeddings only) | 30.0 |
LHPG (CLIP embeddings + proxy vocabulary) | 31.0 |
**[Response 3] Misleading Claims:**
Thank you for your suggestion! To avoid confusion, we will rephrase the text "These detectors usually yield constrained detection... [68]" to "As reported in [68], detectors trained solely on detection datasets often exhibits constrained performance.", and the claim "DetLH introduces large-scale image-level datasets [...]." to "Similar to [68], DetLH follows semi-supervised WSOD and introduces large-scale image-level datasets for detector training.".
**[Response 4] More insights and analysis should be provided:**
Thank you for pointing out this issue. We would clarify that we provided extensive discussion and related insights in the appendix (e.g., Section B Additional Discussion, and Section C Additional Comparison), including more ablation studies, strategy studies, parameter studies, comparisons, etc. In addition, we benchmark DetLH across multiple datasets and discuss its generalization ability and parameter sensitivity In Section 4.1 Comparison with the SOTA and Section 4.3 Discussion. We will revise the two Sections to remove repetitive text and include some insight analysis from the appendix.
Regarding Wildlife datasets, our statement is largely based on the intuition that these datasets with many fine-grained categories should suffer from clear vocabulary mismatch, and the language hierarchy introduced by DetLH should help mitigate such mismatch and improve the detection. The new experiments in Response 1 show that wildlife datasets do suffer from more severe mismatch and DetLH helps mitigate the mismatch effectively. | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for your positive comments on our work. In particular, it is encouraging that the reviewers acknowledge that 1) our work is novel and appealing [TMzm,VMio]; 2) the proposed method is effective [TMzm,PjbE,VMio]; 3) the evaluation is extensive [TMzm,PjbE,VMio] and 4) the paper is well written [TMzm,PjbE,VMio].
We address the questions and concerns raised by each reviewer point-by-point in the respective threads. Additionally, attached please find a PDF file that contains the figure that is related to the rebuttal.
Pdf: /pdf/a42444945b604a82a5fbc9ca2d84f37ae9172cdd.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Confident Natural Policy Gradient for Local Planning in $q_\pi$-realizable Constrained MDPs | Accept (poster) | Summary: This is the first work which addresses and achieves polynomial sample complexity for the learning problem of CMDPs in the more general setting of linear function approximation with $q_{\pi}$ realizability. The authors propose a primal-dual algorithm and utilize a local access model (can be viewed to in between online learning and a generative model ). The algorithm reliably produces a policy that strictly adheres to the constraints while closely optimizing the reward function's value.
Strengths: - The paper is mostly well written with good summarization of the main ideas and theorem statements.
- It is the first work achieving polynomial sample complexity for CMDPs in the more general $q_{\pi}$ realizability setting.
Weaknesses: - Lack of experimental results.
- The paper seems an extension of earlier work i.e., (Weisz et al., 2022).
- Despite general good writing, the main algorithm description in section 4.2-4.3 is very dense and difficult to parse.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Since the paper claims that $q_{\pi}$ realizability is more general than the linear MDP setting, does it mean that linear MDP setting implies $q_{\pi}$ realizability ? Is this result known or trivial ? Could this point be included in the paper?
- The paper seems an extension of earlier work i.e., (Weisz et al., 2022). Can the authors elaborate on what were the main technical challenges overcome on extending this work to the CMDP setting ?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: There are no potential negative societal impact of their work
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Yes, you are correct that linear MDP implies $q_\pi$ realizability. However, $q_\pi$ realizability DOES NOT imply linear MDP, and one can show this via a counterexample. To name a few references that discuss this, please refer to Proposition 4 of Andrea Zanette et al., Learning near optimal policies with low inherent Bellman error, 2020a. (https://arxiv.org/pdf/2003.00153) and Hao, B et al., Confident least square value iteration with local access to a simulator (2022) (https://proceedings.mlr.press/v151/hao22a/hao22a.pdf). | Summary: This paper studied constrained Markov decision processes (CMDP) and proposed a confident policy gradient algorithm for $q_\pi$ realizable MDPs. Here, a $q_\pi$ realizable MDP assumes that the Q function can be approximated by a linear function w.r.t. some feature of state-action pairs. By using primal-dual methods, the proposed algorithm can find an $\epsilon$-optimal policy satisfying the constraint or with small constraint violation. The proposed approach can also be applied to the misspecification case.
Strengths: 1. The paper is technically sound.
2. The studied problem is important and well-motivated.
Weaknesses: 1. The major weakness is the presentation of contribution. According to the paper's presentation, it feels like the proposed method is a straightforward extension of Weisz et al., 2022 applying to the Lagrangian objective function. It would be better to highlight the difficulties of handling CMDP or this Lagrangian function.
2. The presentation of the two learning goals is also confusing. In my understanding, the proposed algorithm can already achieve the stringent constraint, i.e. zero violation, and the relaxed feasibility is satisfied naturally. It seems redundant to spend a whole section to describe the result of relaxed feasibility. Some papers split the two results (Liu et al., 2021) because they need different assumptions. Perhaps the authors can explain or organize the presentation more clear.
3. Another presentation issue: There is no introduction or justification of the terminology *natural policy gradient* in this paper.
[1]. Weisz et al., Confident Approximate Policy Iteration for Efficient Local Planning in $q_\pi$-realizable MDPs, 2022.
[2]. Liu et al., Learning policies with zero or bounded constraint violation for constrained mdps, 2021
Technical Quality: 3
Clarity: 2
Questions for Authors: Please see the weaknesses part.
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: No further limitations need to be addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Please note that our algorithm is NOT a straightforward extension of CAPI-QPI-Plan. Moreover, our paper distinguishes between the relaxed-feasibility and strict-feasibility problem settings. Treating each setting uniquely is a key strength of our work. In the relaxed-feasibility problem, the returned policy $\bar \pi_K$ is allowed to have some constraint violations, specifically where $v_{\bar \pi_K}^c(s_0) \geq b - \epsilon$. This contrasts with the strict-feasibility setting, where no constraint violations are permitted, meaning $v_{\bar \pi_K}^c(s_0) \geq b$. To address the strict-feasibility problem, the algorithm must solve a more conservative CMDP, as discussed in Section 6 of our paper. However, solving this more conservative CMDP incurs a higher sample complexity cost, necessitating that the relaxed-feasibility setting be treated separately. Additionally, in the presence of a misspecification error $\omega > 0$, the strict-feasibility setting requires additional assumptions on $\omega$, whereas the relaxed-feasibility setting does not. The sample complexity of the relaxed-feasibility setting can be independent of Slater's constant, whereas for strict feasibility, the returned policy must strictly adhere to constraints, and we cannot simply set Slater's constant $\zeta$ to $\epsilon$ and disregard its impact.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response, which largely addresses my concerns such as the separation of relaxed and strict constraint cases and differences from previous works. However, I believe that these discussions are necessary and the current version would benefit from a careful revision. I decide to slightly change my score and lower my confidence.
---
Rebuttal 2:
Comment: We are pleased to have addressed your concerns, and we sincerely thank the reviewer for their response. | Summary: **Problem setup**
The authors consider the task of global planning in large Constrained
Discounted MDPs. They assume local access to a simulator which can be queried at previously
encountered state-action pairs to obtain a next state sample and immediate
reward, and that the $Q$-value of any policy is linearly realizable by a given feature map. Under this more general assumption on the MDP, they aim to develop an efficient method which learns a policy that maximizes return with minimal constraint violations.
**Approach**
In the above context, the authors propose a primal-dual approach to solve a constrained
return optimization problem. Under the weaker assumption of $Q^{\pi}$-realizability and local access to a simulator, their method combines existing techniques on $Q$-value estimation in large MDPs and off-policy evaluation to nicely approximate the gradients of the primal-dual objective while conserving samples. Precisely, they apply the $Q$-values estimation sub-routine of CAPI-QPI-PLAN in [1] to construct least squares estimates which are guaranteed to closely approximate the true value at a subset of important (namely \emph{core}) state-action pairs. Furthermore, contrary to the fully on-policy evaluation routine of CAPI-QPI-PLAN, their method Confident-NPG-CMDP reuses the core set and trajectory data for value estimation and policy improvement across a number of steps in each learning phase.
**Result**
The authors derive performance guarantees for their method in terms of the value error and constraint violation of the output mixture policy. Precisely, they prove that w.h.p their method learns a near-optimal policy with minimal constraint violation with at most $\mathcal{O}(poly(d)\varepsilon^{-3})$ queries to the simulator and iterations.
[1] Weisz, G., György, A., Kozuno, T., & Szepesvári, C. (2022). Confident Approximate Policy Iteration for Efficient Local Planning in $ q^\pi $-realizable MDPs. Advances in Neural Information Processing Systems, 35, 25547-25559.
**Update**: I also updated my score for contribution.
Strengths: 1. This paper addresses the task of planning in large Constrained Discounted MDPs under the weakest possible assumptions in the RL literature. This is achieved with a unique combination of existing techniques. Their value estimation scheme makes use of techniques from CAPI-QPI-PLAN to create a core set in addition to off-policy evaluation via importance sampling for value estimation at core state-action pairs. The authors also adequately cite related works.
2. The submission is well written and, to the best of my knowledge, technically sound.
3. In terms of the target accuracy and feature dimension, the results are significant especially in restrictive the setting of approximate policy completeness and local access to the simulator that they consider.
Weaknesses: 1. The relevance and impact on the theoretical analysis of the off-policy evaluation scheme is not fully accessible. It would help if the authors could clarify (with pointers in the main text), how this routine helps conserve samples in theory. Precisely, does the sample complexity get worse if the authors simply apply CAPI-QPI-PLAN?
2. In Line 268, it should be $\frac{1}{K}$ not $\frac{1}{k}$ in the definition of $\overline{\pi}_{K}$.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The nature of the output policy in Line 268 is a bit ambiguous and I spent a considerable amount of time trying to understand the meaning of the notation $\frac{1}{K}\sum_{k=0}^{K-1}\pi_{k}$ means.
On one hand, the authors refer to the said notation as a mixture policy, which to my knowledge means that at deployment, the agent flips a coin at the beginning of each episode to decide which of the $K$ policies to use. In this case, it is unclear how you define $v_{\overline{\pi}\_{K}}$, and more importantly, such policies are not suitable for constrained MDPs, especially with safety constraints as they are only required to perform well on average.
On the other hand, the notation seems to mean that $\overline{\pi}\_{K}$ is an element-wise average for all state-action pairs. In this case, the policy is stochastic and $v_{\overline{\pi}\_{K}}$ well defined.
2. The complexity guarantees has worse dependence on the effective horizon $(1-\gamma)$ e.g. $(1-\gamma)^{-11}$ in Theorem 1 and $(1-\gamma)^{-14}$ in Theorem 2. Is this a consequence of the method or an artefact of the analysis? Can this be improved?
3. In Remark 3, the authors mention that the Slater's constant, which is required by their algorithm, can be approximated by another algorithm. Is this approximation trivial? How will the approximation error influence the current results?
4. In appendix A, the authors provide Confident-NPG, a version of
Confident-NPG-CMDP for the single-reward (or unconstrained MDP) case.
Furthermore, their performance guarantees for the latter algorithm appear to be
based on first relating the former to CAPI-QPI-PLAN. Therefore, to
better understand the contributions of this paper in the constrained case, the following questions are based on Confident-NPG. Can the authors kindly clarify the following:
- Why is $c > 0$ interesting to study?
- What is a suitable choice of $c$ and how does tuning this parameter show up in the performance guarantees?
- Can the authors comment on the memory requirement of Confident-NPG?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This is a purely theory paper so there are no direct
societal impacts. However, I would recommend that the authors highlight the
dependence on the effective horizon in their discussion. More
importantly, I recommend that the authors elaborate on the memory and
computational complexity as these contribute to the efficiency of their method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: You are correct that the mixture policy randomly selects an index $I \in {0, \ldots, K-1}$ with probability $1/K$ at deployment and then follows the policy $\pi_I$ for all subsequent steps. The value function of the mixture policy, $v_{\bar \pi_K}(s)$, is defined as the expected return when the mixture policy is initiated in state $s$: $v_{\bar \pi_K}(s) = \sum_{i=0}^{K-1} \text{Prob}(I = i) v_{\pi_i}(s) = \frac{1}{K} \sum_{i=0}^{K-1} v_{\pi_i}(s)$. This averaging of the value functions is used in our analysis to demonstrate that the returned mixture policy achieves the two feasibility objectives stated in sections 5 and 6.
It's important to note that the mixture policy is a history-dependent policy. The notation $\frac{1}{K}\sum_{k=0}^{K-1} \pi_k$ is defined with respect to trajectories. Specifically, this means that the probability of a trajectory is the weighted average of the probabilities of that trajectory under each of the component policies. We will also clarify this in the paper.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their clarification on the differences between CAPI-QPI-PLAN and Confident-NPG-CMDP and the nature of $\bar{\pi}_K$. Can the authors kindly comment on questions 2, 3, 4.2 and 4.3 as well?
---
Rebuttal 2:
Comment: Question 2: The complexity guarantees has worse dependence on the effective horizon e.g.
in Theorem 1 and in Theorem 2. Is this a consequence of the method or an artefact of the analysis? Can this be improved?
-----
To address the strict-feasibility problem, the algorithm must solve a more conservative CMDP, as discussed in Section 6 of our paper. However, solving this conservative CMDP incurs a higher sample complexity cost, necessitating a separate treatment for the relaxed-feasibility setting. The worse dependence on the effective horizon is a consequence of the method.
Improving the effective horizon terms in both settings remains an area of ongoing research.
-----
Question 3: In Remark 3, the authors mention that the Slater's constant, which is required by their algorithm, can be approximated by another algorithm. Is this approximation trivial? How will the approximation error influence the current results?
-----
Recall that Slater's constant is defined as $\zeta \doteq \max\_{\pi} v^{c}\_{\pi}(s_0) - b$. To approximate $\zeta$, one can run CAPI-QPI-Plan against the local-access simulator with constraint function $c$, treating it as an unconstrained problem optimizing with respect to only $c$. This algorithm yields an approximation of $\max_{\pi} v^{c}_{\pi}(s_0)$, which can then be used to calculate $\zeta$. We can run CAPI-QPI-Plan to obtain an approximate $\zeta$ before executing Confident-NPG-CMDP, and the sample complexity of CAPI-QPI-Plan is only additive to that of Confident-NPG-CMDP.
We will include the aforementioned elaboration on this remark in the paper.
-----
Question 4.2: What is a suitable choice of $c$ and how does tuning this parameter show up in the performance guarantees?
-----
We realized that the term $c$ was used in two different contexts, which may have caused some confusion. We used $c$ to denote both the constraint function and a user-defined parameter that bounds the importance sampling ratio in off-policy estimation, thereby determining the value of $m$. This overlap was an oversight on our part, and we will use two different notations to make the distinction in our paper. For the explanations that follow, we will refer to $c$ as the user-defined parameter that sets the value of $m$.
By setting $c$ to a value greater than 0 adjusts the resampling window, controlled by the quantity $m = O(\ln(1+c) \cdot \text{poly}(\epsilon^{-1} (1-\gamma)^{-1}))$, to ensure that the off-policy value estimators are well-controlled. As a result, $c$ appears in $L = \lfloor K / (\lfloor m \rfloor + 1) \rfloor$, where $L$ is the total number of data sampling phases. Additionally, $c$ appears in $n$, the number of roll-outs for each state-action pair in a core set. In the proof of Theorem 1 in Appendix C (page 29), we show that the algorithm will make $nH(L+1)|\mathcal{C}_0|$ queries to the simulator.
If $c > 0$, we see $L = \lfloor K / (\lfloor m \rfloor + 1) \rfloor \leq K/m \propto (1-\gamma)^{-3} \epsilon^{-1} (\ln(1+c))^{-1}$. All in all, we have $(1+c)^2/ \ln(1+c)$ showing up in the sample complexity. Since $c$ is a constant, we have omitted it from the tilde-big-O notation in the final sample complexity. In terms of $\epsilon$, with a factor of $\epsilon^{-2}$ contributed by $n$ and a factor of $\epsilon^{-1}$ contributed by $L$, we see that the total sample complexity is $\propto \epsilon^{-3}$.
If $c$ is set to 0, then $\ln(1+c) = 0$ and $m = 0$, reducing $L$ to $\lfloor K \rfloor$. In this case, the algorithm reverts to a purely on-policy approach and results in a sample complexity $\propto \epsilon^{-4}$. This motivated us to adopt the natural policy gradient and the off-policy approach to reduce the sample complexity from $\epsilon^{-4}$ to $\epsilon^{-3}$.
A similar analysis for the strict-feasibility setting can be found in Theorem 2 in Appendix D (page 31).
---
Rebuttal 3:
Title: Question 4.3: Can the authors comment on the memory requirement of Confident-NPG?
Comment: The overall memory requirement is $\tilde{d} n H L + \tilde{d} + L (m+1) \tilde{d} d$. The term $\tilde{d} n H L$ comes from maintaining $L+1$ copies of the core set $\mathcal{C}\_{l \in \\{0,...,L+1\\}}$, each containing no more than $\tilde{d}$ state-action pairs. For each state-action pair in $\mathcal{C}\_l$ for $l = 0,...,L$, the algorithm stores $n$ trajectories consisting of $H$ tuples $(s,a,r,c)$, requiring $\tilde{d} n H L$ memory. The additional $\tilde{d}$ accounts for the elements stored in $\mathcal{C}\_{L+1}$, which has no more than $\tilde{d}$ elements. In phase $L+1$, the algorithm terminates, so no roll-outs are required.
The term $L (m+1) \tilde{d} d$ arises from storing the least-squares weights of the estimator when a core set is extended. When a core set is extended (i.e., when discovered = true), the least-squares weights for the corresponding $m+1$ iterations are recalculated for the extended set on line 20 and stored for that extension. With a maximum of $\tilde{d}$ extensions per core set and $m+1$ corresponding iterations for each core set, and $L$ core sets in total, each weight vector of dimension $d$, this results in the $L (m+1) \tilde{d} d$ memory bound for storing the least-squares weights.
By substituting $n = \tilde{O}(\epsilon^{-2} (1-\gamma)^{-2})$, $H = \tilde{O}((1-\gamma)^{-1})$, $L = \tilde{O}((1-\gamma)^{-3} \epsilon^{-1})$, and $\tilde{d} = \tilde{O}(d)$, we obtain a total memory requirement of $\tilde{O}(d^2 \epsilon^{-3} (1-\gamma)^{-6})$.
Below, we provide a formal discussion on why and how we store the least-squares weights.
To return a mixture policy, the algorithm must access all policies $\pi_0,...,\pi_{K-1}$. Instead of storing the $\pi_k$ values for $k = 0,...,K-1$ across the entire state-action space, we store only the necessary information to reconstruct the policies when needed.
For a phase $l$, the policies $\pi_k$ for $k \in \\{ k_l+1, \dots, k_{l+1} -1 \\}$ depend on the core set $\mathcal{C}_l$. Since $\mathcal{C}_l$ can be extended multiple times during the algorithm, we track newly added states in each extension and store the corresponding least-squares weights. $\mathcal{C}_0$ can only be extended via line 12, and any other $\mathcal{C}_l$ (for $l = 1,...,L+1$) only via line 27, when the running phase $\ell = l-1$. Newly added elements are marked at lines 12 and 27. After lines 17-23 are executed, we store the least-squares weights associated with these newly added state-action pairs.
Using the tuples stored in $\mathcal{C}\_l$, along with the marking of which state-action pairs are newly added in each extension and their associated least-squares weights, we can reconstruct the policy $\pi_{k+1}$. For example, let $\mathcal{C}_l^1 = \emptyset$, and $\mathcal{C}_l^i$ denote all state-action pairs added to $\mathcal{C}_l$ in extension $i$. Let $w^i_k$ represent the least-squares weights computed using $\mathcal{C}_l^1 \cup \mathcal{C}_l^2 \cup \dots \cup \mathcal{C}_l^i$ for the $k$-th iteration. When $\mathcal{C}_l$ is extended for the $(i+1)$-th time, let $\mathcal{C}_l^{i+1}$ be the newly added pairs, making the latest $\mathcal{C}\_l = \mathcal{C}\_l^1 \cup \mathcal{C}\_l^2 \cup \dots \cup \mathcal{C}\_l^{i+1}$. The least-squares weight $w^{i+1}\_k$ is then computed using $\mathcal{C}\_l$. When line 25 of algorithm 2 (pg 12) is executed, $\pi\_{k+1}$ remains unchanged for states already in $\text{Cov}(\mathcal{C}\_l^1 \cup \dots \cup \mathcal{C}\_l^i)$, equivalent to $\text{Cov}(\mathcal{C}\_{l+1})$ in line 25, because line 27 would have been executed in the previous extension $i$ making $\mathcal{C}\_{l+1} = \mathcal{C}\_l^1 \cup \dots \cup \mathcal{C}\_l^i$. For states in $\text{Cov}(\mathcal{C}\_l^1 \cup \dots \cup \mathcal{C}\_l^{i+1}) \setminus \text{Cov}(\mathcal{C}\_l^1 \cup \dots \cup \mathcal{C}\_l^i)$ (equivalent to $\text{Cov}(\mathcal{C}\_l) \setminus \text{Cov}(\mathcal{C}\_{l+1})$ in line 25), $\pi\_{k+1}$ makes an NPG update using $w^{i+1}\_k$. For all other states not in $\text{Cov}(\mathcal{C}\_l)$, the policy remains as $\pi_k$.
A subroutine can perform these computations and return $\pi_{k}(\cdot | s)$ for any $s$ and $k$. By tracking newly added elements and the corresponding least-squares weights, the algorithm can reconstruct policies throughout its execution. From the stored data, the algorithm will have access to the constructed policies $\pi_0, \dots, \pi_{K-1}$ and return the value of a mixture policy when required.
We plan to add this discussion to the appendix in the final version of our paper.
---
Rebuttal Comment 3.1:
Comment: Dear Authors,
I appreciate the detailed responses to my questions, particularly those addressing the nature of $\pi_K$, the relevance of the off-policy evaluation scheme, the choice of $c$ (the user-defined parameter that sets the value of $m$), and the memory requirements of CAPI-NPG-CMDP. After carefully considering your response, I am convinced that CAPI-NPG-CMDP is a non-trivial extension of CAPI-QPI-PLAN, which to my knowledge addresses the task of planning in Constrained Markov Decision Processes (CMDPs) under the weakest possible assumptions so far. On this note, I will revise my evaluation of your work.
I strongly agree with including this discussion into the final version of the paper. I believe that doing so will enhance the readability of the paper and enable readers to better understand and appreciate the contribution of CAPI-NPG-CMDP in relation to CAPI-QPI-PLAN.
Regarding future directions, while mixture policies have been generally accepted in the CMDP literature, I still believe they are not well-suited for the constrained MDP setting due to their nature of only ensuring good performance on average. On this note, I propose that the authors highlight this limitation in the discussion.
Thank you again for your efforts in addressing my concerns.
---
Rebuttal 4:
Comment: We are pleased to have addressed your concerns, and we sincerely thank the reviewer for their thoughtful questions. | null | null | Rebuttal 1:
Rebuttal: We would like to begin by thanking all the reviewers for their time and dedication in evaluating our work. We start our rebuttal by emphasizing that our algorithm is not a straightforward extension of the work by Weisz et al. (2022).
First, we want to point out that the algorithm CAPI-QPI-Plan by Weisz et al. (2022) is designed for the unconstrained MDP setting, where it returns a deterministic policy. However, in the constrained MDP setting, an optimal deterministic policy for the unconstrained MDP may not be feasible. Thus, CAPI-QPI-Plan is not applicable to the constrained MDP setting. In contrast, our algorithm, Confident-NPG-CMDP, returns a soft mixture policy $\bar \pi_{K}$, ensuring that $\bar \pi_K(a | s) > 0$ for all $(s, a) \in \mathcal{S} \times \mathcal{A}$. The soft policy returned by Confident-NPG-CMDP can solve the relaxed-feasibility problem in section 5 and the strict-feasibility problem in section 6.
The main algorithmic differences between Confident-NPG-CMDP and CAPI-QPI-Plan are as follows:
1) Data Sampling: Confident-NPG-CMDP does not sample data in every iteration, unlike CAPI-QPI-Plan.
2) Dual Variable Computation: Confident-NPG-CMDP requires computing the dual variable present in a primal-dual algorithm.
3) Policy Improvement Step: Confident-NPG-CMDP utilizes a softmax over the estimated action-values, whereas CAPI-QPI-Plan is greedy with respect to the estimated action-values.
These are critical changes to ensure a feasible mixture policy for the CMDP. Moreover, it also makes the analysis considerably more challenging.
We will now explain the motivation behind the changes to the policy improvement step and why we do not sample data in every iteration. In the constrained MDP setting using a primal-dual approach, it is also crucial to control the dual variable. The dual variable is obtained using a mirror descent algorithm, introducing an additional $\epsilon^{-2}$ factor to the sample complexity. Simply applying CAPI-QPI-Plan and returning a mixture policy at the end would yield an overall sample complexity $\propto \epsilon^{-4}$, with the additional $\epsilon^{-2}$ factor arising from controlling the dual variable in addition to having to control for the estimation error. This complexity led us to adopt the natural policy gradient approach.
By using the natural policy gradient as a policy improvement step instead of CAPI-QPI-Plan's greedy policy improvement step, Confident-NPG-CMDP reduces the sample complexity from $\tilde O(\epsilon^{-4})$ to $\tilde O(\epsilon^{-3})$. This reduction is achieved by leveraging the softmax policy structure within the natural policy gradient, enabling effective use of off-policy estimation to conserve data. By employing a per-trajectory importance sampling ratio, we can weigh the Monte Carlo returns generated from data collected in an earlier on-policy phase, resulting in unbiased estimates of action values with respect to the target policy. However, this ratio can become large if there is a substantial difference between the on-policy and target policy. To address this, the algorithm collects data at intervals of $m$, effectively determining when to collect new data as the policy significantly diverges from the earlier on-policy data. By setting $c > 0$, we can bound the per-trajectory importance sampling ratio, thus controlling the interval $m = O(\ln(1+c) \text{poly}(\epsilon^{-1} (1-\gamma)^{-1}))$ for resampling on-policy data to produce well-controlled estimators. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Markov Equivalence and Consistency in Differentiable Structure Learning | Accept (poster) | Summary: This paper proposed a differentiable DAG learning method using the log-likelihood loss and the minimax concave penalty (MCP). The author proved that under such construction, the minimizer of the loss identifies the sparsest graph (i.e. it has the minimal number of edges) which can generate the observational distribution. When faithfulness is further assumed, the identified graph is equivalent to the ground truth graph. The theorems are valid for linear Gaussian DAGs and general nonlinear SEMs where the induced distribution is parametric. The proposed method is validated by extensive experiments involving various dimensions and graph structures.
Strengths: The paper is well-written and easy to follow.
The theoretical contribution is solid, demonstrating that a broad class of penalties (SCAD, MCD, quasi-MCD), along with the log-likelihood loss, identify the desired graph structure.
The simulations are well-designed and illustrate the outperformance of the proposed method.
Weaknesses: The method can be regarded as a direct combination of several works. For example, the log-likelihood loss was also proposed in GOLEM [43]. The MCD penalty was first proposed in [72].
When applied to nonlinear models, the author used $\frac{1}{2n}\sum_{i=1}^d\log\left(||x_i-\hat f_i(X)||^2\right)$ as the log-likelihood loss. This seems only valid for homogeneous Gaussian noise where the variance in the denominator can be canceled out. However, in the experimental setting, the noise variance is heteroscedastic, making the likelihood improper.
Real-world datasets can be added to validate the proposed method.
Code is not available.
Some minor flaws in the presentation:
+ The conclusion section is missing.
+ Section D.3.2, time spent on the simulation. The plots are about SHD rather than about the running time.
+ [72] in the reference should be cited correctly.
Technical Quality: 3
Clarity: 3
Questions for Authors: Can you provide further examples/illustrations on whether BIC or $L_1$ penalty can recover minimal models? I think this will strengthen the proposed method when faithfulness fails.
The log-likelihood for nonlinear models seems improper due to the heteroscedastic noise. Either the setting or the likelihood should be corrected before rebooting the experiments.
It would be interesting to compare the proposed method with the existing ones on real datasets.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The method assumes a known likelihood function, which can be impractical when the noise distribution or the functional structure is unknown.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our gratitude to the reviewer for their time, effort, and valuable suggestions. We would like to take this opportunity to address all the concerns raised in the reviews.
> The method can be regarded as a direct combination of several works. For example, the log-likelihood loss was also proposed in GOLEM [43]. The MCD penalty was first proposed in [72].
>
Thanks for reviewer’s questions on our contribution. We would like to emphasize that our paper is not merely a combination of these previous works.
(1) **New Identifiability Result:** We utilize our log-likelihood loss with a sparsity regularizer (MCP) to prove an identifiability result under common assumptions from the optimization literature. Unlike GOLEM [1], which has specific assumptions (such as the Triangle Assumption [1]) and relies on a linear model, our method does not require these conditions. Moreover, our proof is purely based on the properties of the score function and does not rely on any assumptions about the underlying graph structure, unlike GOLEM. Additionally, as far as we know, this is the first time MCP has been introduced in the continuous DAG learning literature. Previously, $\ell_1$ was the commonly used sparsity regularizer, which led to biased estimation of the underlying parameters and could negatively affect the recovered structure. By utilizing quasi-MCP, we overcome these drawbacks of $\ell_1$ and achieve unbiased estimation. Moreover, quasi-MCP also introduces significant technical challenges that our analysis handles (see also **Common Concern (3)**).
(2) **Scale-invariant property:** We address a common concern for many DAG learning algorithms regarding scale invariance, wherein the algorithm may output different DAGs if the data is rescaled. To address this, we demonstrate that the log-likelihood score is scale-invariant under Gaussian noise. By showing this result, we aim to correct the common misunderstanding that the algorithm itself is affected by the scale of the data. Instead, it is the use of an incorrect score function that is sensitive to the scale.
We believe these points distinguish our contributions from previous works, and are significant.
> When applied to nonlinear models, the author used $\frac{1}{2n}\sum_{i=1}^d \log (\\|x_i -\hat{f}_i(x)\\|^2)$ as the log-likelihood loss. This seems only valid for homogeneous Gaussian noise where the variance in the denominator can be canceled out. However, in the experimental setting, the noise variance is heteroscedastic, making the likelihood improper.
> The log-likelihood for nonlinear models seems improper due to the heteroscedastic noise. Either the setting or the likelihood should be corrected before rebooting the experiments.
Thank you for this important question! We are happy to clarify the misunderstanding and will include the full derivations in the paper for clarity.
When applied to a nonlinear model with heteroscedastic noise, the negative log-likelihood is:
$$
\frac{1}{2n}\sum\_{i=1}^d \log (\\|x_i -\hat{f}_i(x)\\|^2)
$$
However, when the noise is homogeneous, the negative log-likelihood becomes:
$$
\frac{d}{2}\log \left(\sum\_{i=1}^d \\|x_i - \hat{f}_i(x)\\|^2\right)
$$
Although these formulations appear similar, they are fundamentally different. With heteroscedastic noise, the loss is the log of the sum of least squares. In contrast, with homogeneous noise, the loss is the sum of the logs of least squares. The full derivation can be found in [1] (which covers both heteroscedastic and homogeneous noise in the linear case, with $\hat{f}_i(x) = B_i^\top x^{(i)}$) and [2] (which uses heteroscedastic noise in their setting).
Thanks for this question! We address this point later in the Question “The log-likelihood….before rebooting the experiments.”
> Real-world datasets can be added to validate the proposed method.
>
> It would be interesting to compare the proposed method with the existing ones on real datasets.
>
We include the real-data application in attached pdf (See **Common Concern (2)**). These results will be updated into our paper.
> Code is not available.
>
Thank you for bringing this up, the code will be released in the camera ready.
> The conclusion section is missing.
>
Thanks for pointing it out. Please refer to **Common Concern (4)** for conclusion and limitation.
> Section D.3.2, time spent on the simulation. The plots are about SHD rather than about the running time.
>
Thanks for bring up this typo. We put the wrong name in the title and y-axis, it should be title: “$d$ vs Time for methods”, y-axis: "Time (second)”. The value in the plot is correct. This will be update in the paper correspondingly.
> [72] in the reference should be cited correctly.
>
Thanks; this reference will be updated in the final version.
> Can you provide further examples/illustrations on whether BIC or $\ell_1$ penalty can recover minimal models? I think this will strengthen the proposed method when faithfulness fails.
>
Thanks for the suggestion. Please refer to **Common Concern (2)** for more examples.
[1] Ng, Ignavier, AmirEmad Ghassami, and Kun Zhang. "On the role of sparsity and dag constraints for learning linear dags." *Advances in Neural Information Processing Systems* 33 (2020): 17943-17954.
[2] Bühlmann, Peter, Jonas Peters, and Jan Ernest. "CAM: Causal additive models, high-dimensional order search and penalized regression." (2014): 2526-2556.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. Most of my concerns are addressed. At this moment, I would like to retain my score before further discussions with other reviewers.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Thank you for the reviewer’s comment. We hope you can have a more comprehensive assessment after discussing with the other reviewers. We appreciate your time and effort. | Summary: This paper introduces new identifiability results (MEC) based on maximum likelihood estimation complemented with sparsity regularization (quasi-MCP) for both a Gaussian linear model and a more general, potentially nonlinear models, under the very standard faithfulness assumption. The paper contains a theoretical analysis showing the score function is invariant to rescaling (in the Gaussian linear case only) and provide experiments showing the approach compares favorably to other baselines (in the Gaussian linear case only).
**Review summary:**
This paper reads very nicely and some of its proposals are new and interesting (scale-invariant result and the usage of the quasi-MCP to get MEC identifiability), but it missed a very relevant work [a] with very similar contributions (which significantly reduce the novelty factor of this manuscript) and a more complete set of experiments (see below). For these reasons, I believe this work is not ready yet for publication. That being said, I think it could get accepted at another conference by
- rewriting the paper in a more transparent way, especially by contrasting with the contributions of [b] (and changing the title);
- highlighting what are the actual contributions (scale-invariant result + MEC identifiability via quasi-MCP regularizer); and
- providing convincing experiments with nonlinear models.
Strengths: - This paper is extremely well written and easy to follow. The notation is always well introduced with kind reminders when less standard notation is used.
- Overall the paper feels quite pedagogical.
- I believe the theoretical analysis of scale-invariance, although simple, is novel and very interesting, as it addresses problems raised with existing approaches regarding standardization.
- I think some aspect of the theoretical results is interesting, like the proof that the quasi-MCP regularizer can be used to get identifiability of the MEC. However, I believe very similar results have been shown previously in the literature (omitted from the related work) which seriously limit the novelty factor of this work.
Weaknesses: **Important prior work omitted which seriously limits novelty**
The contribution is motivated by the need to consider more general score functions beyond the mean-squared error used in NOTEARS. The authors missed the work of [a] which proves identifiability for very general likelihood functions, including universal approximators (thanks to normalizing flows). The result of [a] applies to cases where interventions are observed, but covers also the special case without interventions, in which case the result guarantees identifiability of the MEC under the faithfulness assumption. Interestingly, [a] also requires the coefficient of the regularizer to be “small enough”, as is also the case in this manuscript. A key distinction with [a] is that this manuscript is analyzing a differentiable nonconvex sparsity regularizer as opposed to an L0 regularizer, which is not suitable for gradient-descent (in practice, [a] uses gumbel-sigmoid masks and regularizes their probability of being one). Am I missing other key distinctions? Showing that the quasi-MCP regularizer can yield MEC identification is new AFAIK and interesting IMO, but this combined with the simple scale-invariance results makes for a rather limited contribution...
I believe the title reflects the lack of knowledge of the existing literature on continuous structure learning. The title is too general. It sounds like the paper introduces likelihood-based differentiable structure learning, but that’s clearly not the case. [a] is a clear example with very similar theory (although not exactly encompassing this manuscript) as well as [b] and [c] (although without identifiability theory).
**Experiments investigate only the linear case**
The paper is largely motivated by the need to go beyond linear Gaussian score functions, but the experiments fall completely short of that promise, as only the Gaussian linear case is investigated. In contrast, [a] train neural network based architecture, with similar theoretical guarantees. More experiments are needed to confirm that the quasi-MPC regularizer transfers to nonlinear setting.
**Limitations of the theory:**
Assumption A: Assuming that the cardinality of equivalence of parameters is finite feels like a strong assumption. For instance, if you’d like to parameterize the conditional using neural networks, this assumption wouldn’t be satisfied since most architecture have infinitely many equivalent parameterization (think of ReLU activation where you can rescale the output of a neuron and undo this rescaling in the following layer). The result of [a] does not make this assumption despite their very similar setup. Why is it needed here? Also, can you give a concrete example of function B(\psi) that is Lipschitz.
Assumption B: The same example with ReLU neural networks is a counterexample to this assumption. Indeed, because of this rescaling, the set of parameters yielding the same network is unbounded. Excluding NN feels limiting, especially since existing results can cover this case.
**Unclear limit statements in Theorem 1 to 4**
The usage of the limit symbol “->” is a bit strange. Usually, we say that a_n -> b as n -> \infty, but here the result says a_n = b as n -> \infty. What does the latter even mean? Does that mean there exists a large enough N such that a_n = b for all n > N? I don’t think it’s true anyway. Why not presenting the result directly for the population likelihood (i.e. with \ell instead of \ell_n), as was done in [a]? Skimming the proof very quickly suggests that’s what was proved anyway. Note that this is different from showing consistency of the procedure, i.e. that your estimator converges to the right MEC as the number of samples grows (I suspect the latter will be more difficult to show)
Minor:
- It would be nice to have a small plot of the quasi-MCP regularizer.
- Typo in Lemma 3 in appendix? I think an \mathcal{M} should be replaced by a \mathcal{G}, no?
- Text is absurdly small in Figure 1 e.g.. Please fix this.
- Is the sparsest Markov representation always unique? The phrasing used suggests it is, but it might be worth expliciting.
[a] P. Brouillard, S. Lachapelle, A. Lacoste, S. Lacoste-Julien, and A. Drouin. Differentiable causal discovery from interventional data. In Advances in Neural Information Processing Systems, 2020.
[b] X. Zheng, C. Dan, B. Aragam, P. Ravikumar, and E. Xing. Learning sparse nonparametric dags. In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, 2020.
[c] S. Lachapelle, P. Brouillard, T. Deleu, and S. Lacoste-Julien. Gradient-based neural DAG
learning. In Proceedings of the 8th International Conference on Learning Representations,
2020.
Technical Quality: 4
Clarity: 4
Questions for Authors: See above.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 2
Limitations: This paper lacks a conclusion/discussion, so the limitations of the approach are not addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the insightful critiques and comprehensive understanding of our work, and for providing such useful feedback. We will try our best to address the reviewer’s concern.
> **Comparison to previous work**
>
We thank the reviewer for highlighting this related work; we will certainly update our paper to cite it and provide a discussion. Although our work and [1] prove similar results on identifiability through optimizing the log-likelihood with sparsity regularization, there are important differences. Most importantly, the use of $\ell_0$ significantly simplifies the analysis (similar identifiability results as Theorem 1, 2, and 4 are easily obtained, see **Common Concern (3)** for details) and ultimately does *not* lead to a differentiable program. Our focus is on a fully differentiable formulation. In fact, this is precisely what leads to Assumptions A and B, which are *not* needed when $\ell_0$-regularization is used. This is not surprising since $\ell_0$ leads to a combinatorial optimization problem which we are trying to avoid. Again, see **Common Concern (3)** for details.
Moreover, our results rely on slightly weaker assumptions to get identifiability results, i.e., Sparsest Markov Representation (SMR) assumption, which has been shown to be weaker than faithfulness (with only observational data, $\mathcal{I}^*$-faithfulness assumption is regular faithfulness assumption [1]). Our proof is entirely different and more straightforward, relying solely on the score function, without the consideration of different skeletons and immoralities [1]. Our paper also includes new results on scale-invariance to address current concerns with many DAG learning algorithms.
> **Experiments investigate only the linear case**
>
Originally, due to space limits, we put the nonlinear experiments results in Appendix D.3.3. (Page 31), which may have been easy to miss. Nonetheless, we have added more nonlinear experiments; please see **Common Concern (2)**.
> **Limitations of the theory**
>
Thanks for these useful comments! Assumptions are needed because the our penalty term is quasi-MCP, introducing fundamental difficulties compared to previous work [1] using $\ell_0$. If penalty term is $\ell_0$, all these assumptions can be removed, and similar results hold. Please refer to **Common Concern (3)** for more details.
> **Also, can you give a concrete example of function B(\psi) that is Lipschitz.**
>
First, in a (general) linear model, $\psi = B$. In this sense, $B(\psi) = \psi = B$, and $B(\psi)$ is 1-Lipschitz. Another example can be found in **Common Concern (1).** In this case, $\psi = \\{\beta^S\\}_{S\subseteq [p]}$, $[B(\psi)]\_{ji} = \sum\_{\{S:j\in S, i\in S, S\subseteq A\}}(\beta^{S})^2$, which is Lipschitz.
Moreover, [2] uses a neural network to model $f_j$ in Equation (2). Let $\psi_j$ be the weights that connect the input layer and the first hidden layer in the $j$-th neural network. In this case, $\psi = (\psi_1, \ldots, \psi_p)$. It turns out that $[B(\psi)]\_{ij} = \\|\text{i-th-column}(\psi_j)\\|_2$. As a consequence, $\\|B(\psi)\\|\_2 = \sum\_{i,j} \\|\text{i-th-column}(\psi_j)\\|_2 \leq p \sum_j \\|\psi_j\\|_2 \leq p^2 \\|\psi\\|_2$. Therefore, it is $p^2$-Lipschitz. In general, $B(\psi)$ is a function of $\psi$, and usually, the nonzero part of $\psi$ indicates that a certain entry of $B(\psi)$ is nonzero. $\ell_1$ or $\ell_2$ norms are used to ensure nonzeros are mapped to nonzeros and zeros are mapped to zeros, which makes $B(\psi)$ inherit the Lipschitz property of $\ell_1$ or $\ell_2$.
> **Unclear limit statements in Theorem 1 to 4**
>
Thank you for the suggestion, and we apologize for any confusion. Indeed, our results apply to the population case, which we indicate by using $n \rightarrow \infty$. For Theorems 1 to 4, we are stating that when considering the population case (infinite number of samples), certain equality relationships hold. We will revise these statement to make this more clear.
> **Minor**
>
(1) Originally, we include the plot of quasi-MCP, but it is removed due to space limit. But it seems it will be better idea to get it back again.
(2) Yes, this is a typo, it should be $\mathcal{M}(G^0) = \mathcal{G}(\mathcal{E}_{\min}(\psi^0,\xi^0))$
(3) Good suggestion! it will be fixed.
(4) Sparsest Markov representation(SMR) is not unique itself in general, but it is unique up to Markovian class.
> Limitation
>
Thanks for bring it up. We provide conclusions and limitations in **Common Concern (4).**
[1] Brouillard, Philippe, et al. "Differentiable causal discovery from interventional data." *Advances in Neural Information Processing Systems* 33 (2020): 21865-21877.
[2] Zheng, Xun, et al. "Learning sparse nonparametric dags." *International Conference on Artificial Intelligence and Statistics*. Pmlr, 2020.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for engaging with my review and providing a convincing rebuttal. The additional discussion contrasting their theoretical result with that of [1] was enlightening and very clear. I urge the authors to integrate these discussions in the next version of their work to better contextualize their contribution within existing literature. The authors should also consider changing the title to something less general, one way to do that would be to mention the quasi-MCP regularizer somehow (this paper is not the first differentiable likelihood based structure learning approach proposed!). Also, please remove the limit statement as these are inaccurate and should be replaced by "population results" (\ell instead of \ell_n).
Assuming these changes will be properly addressed in the next revision, I raise my score from 3 to 5. I do not raise further as I still believe the novelty factor fairly low.
---
Reply to Comment 1.1.1:
Title: Thanks!
Comment: Dear Reviewer,
We sincerely appreciate your positive feedback on our rebuttal. We will incorporate [1] and related works into the paper and expand the related work section to discuss the similarities and differences between our work and [1].
We acknowledge that the current title is too general, and we will refine it to better reflect the paper’s content. Additionally, all statements involving $n\rightarrow \infty$ will be revised to refer to "population" results.
Thank you once again for your invaluable suggestions and the time you’ve dedicated to reviewing our work. | Summary: The authors analyze a framework of sparsity-regularized maximum likelihood learning under the NOTEARS constraint for score-based causal discovery. Drawing from the sparsest permutations principle (a hybrid method), they show that a sparsity regularized likelihood objective is able to recover an element of the MEC even under structural non-identifiability of the SCM. The (non-identifiable) linear Gaussian setting is analyzed in detail, where it is shown that the MCP and SCAD penalties are able to recover the sparsest graph with appropriate hyperparameter settings. The authors then extend this to general likelihoods under the assumption that the (parameter) non-identifiability class (at the ground truth) is finite and other regularity conditions. For the linear-Gaussian case, the authors also prove scale invariance of the structure obtained from the method, addressing concerns of varsortability that similar methods are susceptible to. Finally, an experimental study is conducted on simulated linear and nonlinear ground truths.
Strengths: I advocate for the acceptance of this paper based on two simple dimensions:
- (Significance and Motivation): Structural identifiability is a near-universal, yet entirely convenience-driven assumption when using likelihood-based scores for causal discovery. This work theoretically justifies NOTEARS-type approaches under structural non-identifiability, which broadens the class of applicable likelihood-based scores.
- (Clarity and Execution): I found the paper easy to follow, with precise mathematical notation, careful proofs and comprehensive analysis of the proposed framework.
Weaknesses: However, I do think the paper falls somewhat short in a few (fixable) areas. Actionable questions are __bolded__, and I may raise my score if these points are clarified.
- The section on scale invariance seems out of place and somewhat weaker than the rest of the paper. It's clear that the Gaussian likelihood is scale invariant, and structure invariance is not a terribly surprising conclusion of the regularized objective also. __Unless the authors can clarify the contribution of this section, I feel like this is better off stated as a short paragraph or remark.__
- The authors state that the finite parameter equivalence class assumption in the general case is "relatively mild", and that it is satisfied by a range of models ("most exponential families"). I'm not convinced that this is the case. First, typically when statistical models with continuous parameter spaces are non-identifiable, the equivalence class is usually infinite. For example, I believe exponential families in canonical parametrization are either full-rank, where the parameter is identifiable (and hence not relevant to the motivation of the paper), or non-identifiable up to entire subspaces of the parameter space (Theorem 1 of [1]), which are infinite. __Could the authors clarify what they mean on l297-298?__
- The authors do not provide any examples, or references, of non-Gaussian log-likelihoods scores that would be useful for causal discovery. The non-linear example in the experiments seems to use a MSE loss which is the same as non-linear NOTEARS [2], and the only improvement shown in the experiments is due to changing the optimization scheme to Adam (and adding the quasi-MCD regularizer?). __Could you provide (practical) examples of likelihoods where the theory in Section 5 hold? I.e., other non-identifiable likelihood scores that are useful for causal discovery?__.
[1] Notes by Charles Geyer: https://www.stat.umn.edu/geyer/8053/notes/expfam.pdf
[2] "Learning Sparse Nonparametric DAGs", Zheng et al., AISTATS 2020.
Technical Quality: 4
Clarity: 3
Questions for Authors: ### Other questions
- You mention that $\ell_1$ loss is "not effective" compared to MCP, SCAD, is this a computational or theoretical issue? In other words, do you have a conjecture/result on whether the theoretical claims no longer hold when replaced with $\ell_1$?
- Although the likelihood is "structurally" scale-invariant (Thm. 3), the scale presumably can still affect (uniform over d) thresholding in finite samples by changing the speed at which structural zeros converge. This has always been my intuition on how var-sortability works, and not that the MSE in NOTEARS is actually "structurally" affected by scaling. Is there a reference to show that the original MSE loss is not also structurally scale-invariant? If the MSE is also structurally scale-invariant, do you conjecture that the likelihood has desirable properties in terms of thresholding?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors do not adequately discuss the limitations of the paper in my opinion (in fact, the paper abruptly ends without even a conclusion), and this is also seen in in the checklist where the justification for this section was left blank.
I think the paper deserves a proper conclusion and discussion of the (potential) limitations of Assumption A1 in Section 5. __I do not think this is a tough ask, so I may lower my score if this remains unaddressed in the rebuttal.__
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our gratitude to the reviewer for acknowledging the value of our contributions and clarity of presentation. All the points addressed below will be updated in our paper.
> The section on scale invariance …. or remark.
We will move these details to the appendix, and shorten the discussion in the main paper.
> The authors state that the finite parameter…..mean on l297-298?
First, consider simple case of Gaussian models, and then below we discuss how to generalize. Let $X \sim \mathcal{N}(0, \Sigma)$ and $\Sigma\succ0$. It belongs to the exponential family and is full-rank. The covariance $\Sigma$ is identifiable, however, we are interested in the matrix $B$ in the SEM $X = B^\top X + N$. As in Section 4.1, we know that $\Sigma = (I - B)^{-\top} \Omega (I - B)^{-1}$. As we discuss in Section 4.2, there are at most $p!$ different $B$ that satisfy this condition, making our problem **unidentifiable with respect to $B$, but with a finite number of equivalent parameters**.
**As for unidentifiable exponential families in [4],** we can always reduce it to an identifiable model by projecting canonical statistics to a subspace. For the same example, if $\Sigma$ is singular, we can decompose $X = (X_a, X_b)$ such that $\text{Cov}(X_a)\succ 0$ and has the same rank as $\Sigma$, and $X_b$ is a linear combination of $X_a$. As discussed before, there is a finite equivalence class for $X_a$, and $X_b$ has a deterministic relationship to $X_a$.
Another explicit example (GLM with binary data) can be found in the **Common Concern (1)**. This also extends to more general exponential families. When exponential families are full-rank, although their parameters are identifiable, the parameters $(\psi, \xi)$ as well as the SEM $B(\psi)$ may not be identifiable, which are our interest. As before, parameter is $\Sigma$, $\psi = B, \xi = \Omega$, and $\Sigma$ is function of $(B,\Omega)$. Moreover, in many cases, the size of $\mathcal{E}(\psi^0, \xi^0)$ is closely related to $p!$, which is finite since there are $p!$ topological sorts $\pi$. For each $\pi$, it is usually associated with a pair $(\psi(\pi), \xi(\pi))$ in a certain way such that $P(x; \psi(\pi), \xi(\pi)) = P(x; \psi^0, \xi^0)$. This is why we claim Assumption A(1) is reasonable.
Regarding Assumption A and B, more discussion can be found in the **Common Concern (3).**
> The authors do not provide…scores that are useful for causal discovery?.
The point is that *any* likelihood can be used in conjunction with the quasi-MCP. Two examples of non-Gaussian likelihoods are logistic (binary) models and additive noise models. A third example is any model for the joint distribution with GLMs for CPDs. All told, these comprise a very rich class of models with explicitly derivable score functions. Unfortunately, we have not made this point explicit and will certainly do so in the camera ready.
For additive noise models, the log-likelihood function is $\frac{1}{2n}\sum_{i=1}^d \log (\\|x_i -\hat{f}\_i(x)\\|^2)$[1], which is used in our experiments (see Line 855 in Appendix). It is different from MSE loss $\frac{1}{2n}\sum_{i=1}^d\\|x_i - \hat{f}_i(x)\\|_2^2$ used in NOTEARS [2]. For the improvement of the experiments, it is based on the correct use of loss function and penalty, thus it leads to better performance. For other examples, please refer to **Common Concern (1).**
> You mention that $\ell_1$ loss ... when replaced with $\ell_1$?
Using $\ell_1$ as a penalty can lead to biased parameter estimates because it generally shrinks coefficients to zero. In contrast, MCP provides unbiased estimates. Specifically, with MCP, $\mathcal{O}_{n,\lambda,\delta} = \mathcal{E}\_{\min}(\Theta)$ when population case is considered. However, this is not true for $\ell_1$.
Consider $X_1\sim \mathcal{N}(0,1)$, $X_2\sim X_1+\mathcal{N}(0,1)$, with the known order ($X_1\rightarrow X_2$), so that the log-likelihood with $\ell_1$ penalty is
$$
\log((1-a)^2+1)+\lambda|a|
$$
It is easy to see $a = 1$ is not minimizer. This indicates that $\ell_1$ leads to biased estimates.
> Although the likelihood is "structurally"….. in terms of thresholding?
Good point! Although scaling does not affect which entries in the adjacency matrix are zero, it changes the magnitude of the entries. With a finite number of samples, this alters the thresholding we should use, which significantly affect the recovered structure in practice.
With respect to MSE loss, it depends: It is scale-invariant for a fixed topological sort, but this is not true in general when the sort is unknown. Let us clarify:
1. For fixed topo sort, the MSE in NOTEARS is scale-invariant. Since the matrix $B$ can be recovered by linear regression, and changing the scale of the data $X$ does not change the positions of the nonzero entries in $B$, leading to invariance. This is similar to the intuition of Theorem 3.
2. However, the MSE loss in NOTEARS is not scale-invariant when the sort is unknown. For instance, consider $X = (X_1,X_2)$ where $X_1 = N_1, X_2 =aX_1+ N_2, N_1, N_2\overset{i.i.d}{\sim}\mathcal{N}(0,1)$. Then the optimal for MSE loss is
$$
B^* = \begin{bmatrix} 0 & a \\\\ 0 & 0 \end{bmatrix}
$$
However, if you standardize $X$ to get $Z =(X_1,X_2/\sqrt{1+a^2})$ and optimal are
$$
B^*_1 = \begin{bmatrix}0&a/\sqrt{a^2+1} \\\\ 0&0\end{bmatrix}\qquad B^*_2 = \begin{bmatrix}0&0 \\\\ a/\sqrt{a^2+1}&0\end{bmatrix}
$$
So MSE is not scale-invariant in the sense that global optimal solution can not have the same structure for different scale of data. However, this is not the case for likelihood (Theorem 3).
> Limitation and conclusion
Please refer to **Common Concern (4)**. More discussions on Assumption A1 can be found in previous response. We will add this to the paper with the extra page in the camera ready.
[1] Bühlmann, et al "CAM: Causal additive models, high-dimensional order search and penalized regression" (2014)
[2] Zheng, et al. "Learning sparse nonparametric dags" (2020)
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I will retain my score for now, to discuss with other reviewers, who also found the equivalence classes confusing. Based on your $p!$ characterization, it seems equivalence classes contain either DAGs, or parameter settings in 1-to-1 correspondence with DAGs. Upon reading C.2., is it the case that for a given sparsity pattern (i.e., DAG), only one B setting in the Gaussian example will yield the same distribution? I was originally thinking that rotations gave infinitely many equivalent parameters (and this generalizes in EFs) but this would break the sparsity. Could the authors comment on whether this understanding is correct?
Note this was not at all clear to me on the first few passes so I would appreciate if you could put a version of C.2 in the main text.
> As we discuss in Section 4.2, there are at most $p!$ different $B$ that satisfy this condition
I'm wondering if you mean C.2? I can't actually find this statement anywhere.
---
Rebuttal 2:
Title: Thank you for valuable comment!
Comment: Thank you for the reviewer’s further comments, and we apologize for the unclear part. We are happy to address any concerns you have and ensure a clear understanding of our work. Please rest assured, we will incorporate all the suggestions into the paper, including moving Section C.2 to the main paper to provide readers with more background knowledge.
> Is it the case that for a given sparsity pattern (i.e., DAG), only one $B$ setting in the Gaussian example will yield the same distribution?
>
You are correct! From my understanding, if a sparsity pattern here refers to a topological sort $\pi$. For each $\pi$, there will be **only one** $B$ that is consistent with $\pi$ and can generate the distribution.
Consider a simple three-node example. Let $X \sim \mathcal{N}(0, \Sigma)$, where $\Sigma \succ 0$:
$$
\Sigma = \begin{bmatrix}
1/2&1&1\\\\1&3&3\\\\1&3&4
\end{bmatrix}
$$
We would like to recover $B$ (DAG) such that $X = B^\top X + N$ and $X \sim \mathcal{N}(0, \Sigma)$. In this case, there are $p! = 3! = 6$ different topological sorts, and at most six different $B$s can generate $X$. Based on Section C2, **it is worth noting that aside from these different $B$s, no other $B$ can generate the distribution $X$ [1]**. These $B$s and $\pi$s are:
$$
B(\pi_1) = \begin{bmatrix}
0&0&0\\\\1/3&0&0\\\\0&3/4&0
\end{bmatrix}\qquad \pi_1 = [2,1,0]
$$
$$
B(\pi_2) = \begin{bmatrix}
0&0&0\\\\1/3&0&1\\\\0&0&0
\end{bmatrix}\qquad \pi_2 = [1,2,0]
$$
$$
B(\pi_3) = \begin{bmatrix}
0&1&0\\\\0&0&0\\\\1/4&1/2&0
\end{bmatrix}\qquad \pi_3 = [2,0,1]
$$
$$
B(\pi_4) = \begin{bmatrix}
0&1&2\\\\0&0&0\\\\0&1/2&0
\end{bmatrix}\qquad \pi_4 = [0,2,1]
$$
$$
B(\pi_5) = \begin{bmatrix}
0&0&0\\\\1/3&0&1\\\\0&0&0
\end{bmatrix}\qquad \pi_5 = [1,0,2]
$$
$$
B(\pi_6) = \begin{bmatrix}
0&2&0\\\\0&0&1\\\\0&0&0
\end{bmatrix}\qquad \pi_6 = [0,1,2]
$$
Here $B(\pi_5) = B(\pi_2)$, although $\pi_2\ne \pi_5$. This is why we say it at most $p!$ different $B$.
For unidentifiable exponential families, consider the case where $X \sim \mathcal{N}(0,\Sigma)$ and $\Sigma$ is singular. For example,
$$
\Sigma = \begin{bmatrix}
1&1&1\\\\1&1&1\\\\1&1&2
\end{bmatrix}
$$
We can reduce this to a lower dimension. Note that here $X_1 = X_2$, so we only need to consider $(X_2, X_3)$:
$$
\text{Cov}(X_2,X_3) = \begin{bmatrix}
1&1\\\\1&2
\end{bmatrix}\succ0
$$
Then it reduces to previous case.
As for more general model, for each $\pi$, it is usually associated with **only one** pair $(\psi(\pi), \xi(\pi))$ in a certain way such that $P(x; \psi(\pi), \xi(\pi)) = P(x; \psi^0, \xi^0)$, similar to Linear Guassian case. Another example in **Common concern (1)** follows such property.
> I'm wondering if you mean C.2? I can't actually find this statement anywhere.
>
Apologies for the misunderstanding. You are correct; it should be Section C.2. We will include more details in Section 4.2 for clarity!
As stated in Section C.2, for any topological sort $\pi$, it corresponds to a pair $(\tilde{B}(\pi), \tilde{\Omega}(\pi)) \in \mathcal{E}(\Theta)$. For all DAGs with $p$ nodes, there are at most $p!$ different topological sorts. This explains why there are at most $p!$ different $B$ matrices that satisfy this condition.
[1] Aragam, Bryon, and Qing Zhou. "Concave penalized estimation of sparse Gaussian Bayesian networks." *The Journal of Machine Learning Research* 16.1 (2015): 2273-2328.
---
Rebuttal Comment 2.1:
Title: Thank you!
Comment: Dear Authors, thank you for the clarification. This addresses my main concern, so I will raise the score to 7 assuming it is clarified in the paper.
---
Reply to Comment 2.1.1:
Title: Thank you so much!!!
Comment: We sincerely appreciate the time and effort you put into reviewing our paper and providing valuable feedback. We will incorporate your suggestions to enhance the clarity and strength of our work for our readers! | null | null | Rebuttal 1:
Rebuttal: **(1) Another example beyond the linear gaussian model.**
Note that Assumption A holds as long as $P(X_i|X_A)$ has a unique SEM parametrization for any $i$ and $A$, where $A\subseteq[p]\backslash i$. Because for any fixed topological sort $\pi$, we could let $A = \\{\text{parents node of }i \text{ in }\pi \\}$, and this results in at most $p!$ equivalence class. This is for example true in the Gaussian model.
Another example is a generalized linear model with binary output, i.e., $X = (X_1,\ldots,X_p)$ and $X_i\in\\{0,1\\}$ for $i = 1,\ldots,p$. Let $B = (B_1,\ldots,B_p)$, then $\mathbb{E}[X_i\mid X_{pa(i)}] = g(B_i^\top X)$ where $g(s) = e^s/(1+e^s)$ which is equivalent to the following SEM
$$
X_i = \text{Bernoulli}(\exp(B_i^\top X)/(1+\exp(B_i^\top X)))\qquad i = 1,\ldots,p
$$
In this case, it can be shown that
$$
X_i\mid X_A \sim \text{Bernoulli}\Big(\frac{1}{1+\exp(- \sum_{r =1}^{|A|+1}(\sum_{j_1 = i,j_2,\ldots,j_r\in A}\beta^{ij_2\ldots j_r}x_{j_2}\ldots x_{j_r}))}\Big)
$$
is the unique parametrization of the conditional distribution (in terms of the coefficients $\beta^{ij_2\ldots j_r}$, which are a function of $B$). Thus, binary models also satisfy Assumption A. We will add this example to the paper.
**(2) More experiment on Nonlinear Neural Network, and General Linear model with binary output**
We conducted more experiments on Nonlinear Neural Networks (see Appendix D for details), and generalized linear models with binary output. The results are attached in the pdf.
**(3) Assumption A, B for Theorem 4**
First of all, we would like to emphasize that when using $\ell_0$ as the penalty term, Assumptions A and B are not needed at all. The proof for this case would be significantly simplified (see below for a proof sketch). Replacing $\ell_0$ with the differentiable quasi-MCP introduces significant complications, requiring some additional assumptions that distinguish our work from [1]. In fact, our assumptions are exactly what is needed to make the problem amenable to gradient-based optimization.
Considering Assumption A, the finiteness can be relaxed. What is really needed is that the minimal nonzero edge has enough “signal,” i.e.,
$$
\min\_{(\psi,\xi)\in \mathcal{E}(\psi^0,\xi^0)}\min\_{\\{(i,j):B(\psi)_{ij}\ne 0\\}}|B(\psi)|\_{ij}>0
$$
This is trivially true when $|\mathcal{E}(\psi^0,\xi^0)|$ is finite. When it is infinite, each $|B(\psi)|\_{ij}$ could be positive, but it is possible $\lim\inf_{(\psi,\xi)\in \mathcal{E}(\psi^0,\xi^0)}\min\_{\\{(i,j):B(\psi)_{ij}\ne 0\\}}|B(\psi)|\_{ij}=0$, because $|B(\psi)|\_{ij}$ can be arbitrarily small. The $\ell_0$ penalty deals with this with its discontinuity at zero, whereas the continuity of quasi-MPC makes this more challenging. This is the cost of differentiability, which we argue is worthwhile.
As for Assumption B, this is a common assumption in the optimization literature and quite weak in general. Moreover, this is nearly necessary because quasi-MCP cannot count the number of edges in $B(\psi)$ exactly: The magnitude of the quasi-MCP penalty does not reveal the number of edges. This is the price to pay when we replace $\ell_0$ with a fully differentiable sparsity penalty. Finally, we can point out that this can also be relaxed: What is needed is that for any $\epsilon>0,$ there exists $\delta>0$
$$
\ell(\psi,\xi)-\ell(\psi^0,\xi^0)>\delta\quad \text{for }\\{(\psi,\xi)\mid \text{dist}((\psi,\xi),\mathcal{E}(\psi^0,\xi^0))>\epsilon\\}
$$
In other words, it requires a loss gap when $(\psi,\xi)$ is not in $\mathcal{E}(\psi^0,\xi^0)$.
Finally, we include a brief proof sketch of a similar result to Theorem 4 when $\ell_0$ is used. This illustrates that the quasi-MCP indeed introduces fundamental difficulties. Moreover, the reason the proof simplifies compared to [1] is a) The proof below does not consider interventions (which is a major innovation of [1]), and b) The use of the weaker SMR assumption (vs faithfulness) simplifies the analysis.
**Proof sketch:** We can also assume $|\mathcal{E}\_{\min}(\psi^0,\xi^0)|=1$, following the same reason as for Theorem 4.
When $s_{B(\psi^0)} = 0,$ the result is obvious. When $s_{B(\psi^0)}>0$ , divide the parameter space into $A_1 = \\{(\psi,\xi)\mid s_{B(\psi)} > s_{B(\psi^0)}\\}, A_2 = \\{(\psi,\xi)\mid s_{B(\psi)} = s_{B(\psi^0)}\\}, A_3 = \\{(\psi,\xi)\mid s_{B(\psi)} < s_{B(\psi^0)}\\}$. It is straightforward to verify each case. For example, for $A_2$, since $|\mathcal{E}\_{\min}(\psi^0,\xi^0)|=1$, that implies $\forall(\psi,\xi)\in\mathcal{E}(\psi^0,\xi^0)$ and $(\psi,\xi)\ne (\psi^0,\xi^0)$, it holds $s_{B(\psi)}>s_{B(\psi^0)}$. Therefore, $\forall (\psi,\xi)\in A_2$, it holds $\ell(\psi^0,\xi^0)<\ell(\psi,\xi)$. As consequence, for any $\lambda>0$, result holds.
The other cases are similar with slight modifications.
(4) **Missing conclusion and limitation:** We proposed a fully differentiable score function for causal discovery, composed of log-likelihood and quasi-MCP. We demonstrated that the global solution corresponds to the sparsest DAG structure that is Markov to the data distribution. Under mild assumptions, we conclude that all optimal solutions are the sparsest within the same Markov Equivalence Class. Additionally, the proposed score is scale-invariant, producing the same structure regardless of the data scale under the linear Gaussian model. Experimental results validate our theory, showing that our score provides better and more robust structure recovery compared to other scores.
However, there are limitations to our work. We focus on parametric models and rely on assumptions such as the finiteness of the equivalence class and the boundedness of the level set of the log-likelihood. These assumptions limit the applicability of our theorem. Future work should explore ways to relax these assumptions to extend our method’s applicability to broader scenarios.
[1] Brouillard, et al. "Differentiable causal discovery from interventional data." (2020)
Pdf: /pdf/c13330aa474673af1bdd7cda8bd32b7fc399d7bc.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Multi-times Monte Carlo Rendering for Inter-reflection Reconstruction | Accept (poster) | Summary: This paper proposes a comprehensive inverse rendering method that incorporates indirect illumination to enhance the decomposition quality of environment lighting and BRDF materials. Specifically, this work uses multi-time Monte Carlo integration to model light transport and devises algorithms to accelerate computation by precomputing the diffuse term and leveraging an SDF-based geometry for initialization. Both qualitative and quantitative results are promising, demonstrating realistic shadows on glossy surfaces and higher PSNR values.
Strengths: 1. The results exhibit a decent quality. By incorporating indirect illumination, the modeling of shadows and glossy surfaces achieves a remarkable degree of photorealism.
2. The underlying theory is sound to me. The GGX appearance model is de facto in industry, and the approximation applied for the diffuse term is standard and commonly used in real-time rendering.
3. As mentioned, I believe that some insights are derived from real-time rendering techniques and I personally appreciate it as accurately modeling indirect illumination is also important for effective inverse rendering.
Weaknesses: 1. The optimization time is relatively long compared to other methods.
2. Due to computational limitations, the number of bounces is restricted, thus limiting the ability to model shiny surfaces and confining the results to glossy surfaces.
Technical Quality: 4
Clarity: 3
Questions for Authors: In general I enjoyed reading this paper. The content is well written and clear to me. Therefore, I’m on the positive side of this submission. A few questions regarding the technical details:
1. Will the geometry obtained from Sec. 3.3 be optimized in inverse rendering? The writeup suggests this (Lines 189-190), but I didn't find any experiments addressing it. I am curious about potential geometry improvements.
2. Regarding the material editing mentioned in Section 4.5, I have some questions about its implementation. It seems that material properties are represented as MLP. How do you intuitively edit different material properties, such as roughness? Or are these properties explicitly exported and stored as UV maps?
3. What does “orm” stand for in k_orm?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Limitations and potential negative social impact are well discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. This is indeed a current limitation of our approach. Using Monte Carlo integration to calculate the reflection equation itself is more computationally intensive than the real-time rendering split sum method used by nvdiffrec, because we have to use ray tracing and integration. To make matters worse, as the tracking depth increases, the number of rays we need to emit will increase exponentially, but fortunately, as we show in Figure 4 in the paper, a tracking depth of 2 is enough to handle most In the scene, the tracking depth of 3 will greatly increase the amount of calculation, but it cannot significantly improve the final effect.
2. We will optimize the geometry in subsequent calculations. But unfortunately only PSNR can be improved. Compared with non-optimized geometry, optimized geometry generally improves PSNR by 1 to 2 dB in the new perspective verification set, but the geometry cannot be improved. We speculate that this is the process of using the implicit method to learn geometry, and there is no rasterization step. Our differentiable rendering requires rasterization, so it should be the direct application of geometric PSNR caused by the difference in the forward process. lower. Therefore, we can overcome this shortcoming and obtain higher PSNR by optimizing the geometry in subsequent inverse rendering.
3. This is a map, where R, G, and B channels represent alpha, roughness, and metallic respectively.
---
Rebuttal 2:
Title: Details about the representation of the material
Comment: Inspired by nvdiffrec's excellent work, our material learning is divided into two stage. The first stage, we use an MLP to represent the material, and the second stage, we use xatlas to generate texture coordinates automaticly. Then we sample the MLP to initialize 2D textures. Subsequently, the textures are optimized based on the gradients provided by the nvdiffrast. We experimentally compared two materials representations. Using mlp to represent materials can quickly converge,and the subsequent optimization with 2D textures can continue to improve the rendering and relighting results.
---
Rebuttal 3:
Comment: Dear authors,
Thanks for clarifying my questions! I have no further questions and overall I really enjoyed reading this paper! I will keep my score as a Weak accept.
---
Rebuttal Comment 3.1:
Comment: Thank you for your reply and appreciation. We will include the disscussion in the final paper. | Summary: This paper proposes an inverse rendering method that handles multi-time Mante Carlo integration which models indirect illumination. It reduces the computational cost by pre-computing the diffuse map based on a Lambertian model. It also proposes to use spherical Gaussian encoding to improve the initial SDF reconstruction of scenes.
Strengths: 1. The presentation is clear and easy to follow.
2. Modeling indirect illumination is so far under-explored in inverse rendering
Weaknesses: 1. Overclaim of novelty: multi-bounce MC integration has already been employed in earlier inverse rendering work [1]. On the other hand, the diffuse Fresnel term (Eq. 7) is not commonly used in recent inverse rendering works. Most related works (TensoIR, nvdiffrec/nvdiffrecmc) use exactly Eq 9, so the claim of transferring from Eq. 7 to Eq. 9 should not be counted as a contribution.
2. Continuing from (1), using a pre-computed diffuse map also prevents correct modeling of diffuse shadows, this is not discussed nor mentioned in the paper.
3. Inaccurate/incomplete description of the geometry module: IPE is used for encoding input positions to MLPs, while judging by the text (line 181) spherical Gaussian should be used for encoding viewing/reflecting directions. However, in this work, spherical Gaussian is actually used to replace IPE, which is very confusing. Also in Figure (2b) the entire Diffuse MLP is replaced with "SG" and there is no explanation of what this stands for. Also in the same figure, the term "SGE" is not explained (maybe spherical Gaussian encoding? Then what is the difference between SGE and SG?)
4. Weak evaluation: there is only a single table in the entire paper. Only PSNR and training time are reported. It is not clear how the PNSR metric is computed (is it novel-view synthesis? Is it relighting?). There is no evaluation of other intrinsic properties that are commonly evaluated in other inverse render papers, e.g. albedo, normal error, etc. The proposed spherical Gaussian encoding is not properly ablated - only qualitative comparison is presented.
5. Missing datasets: it would make the evaluation more complete if the method could evaluate more standard datasets (NeRFactor dataset, TensoIR dataset) where many other works have been evaluated. Also, there is no real-world dataset tested, making it questionable whether the proposed algorithm can work in real-world scenarios.
Factual errors: GGX has nothing to do with the diffuse term used in Disney BRDF. GGX is a microfacet normal distribution function that models specifically the specular part of the Disney BRDF. The diffuse Fresnel term in Disney BRDF is invented by the original Disney BRDF paper and is often omitted in real-time rendering engines and replaced with the Lambertian model.
[1] Mai et. al. Neural Microfacet Fields for Inverse Rendering, ICCV 2023
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 1
Limitations: A crucial limitation, i.e. diffuse shadows are not handled by the proposed method, are omitted and not properly discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***Overclaim of novelty***
The paper Neural Microfacet Fields for Inverse Rendering is indeed an excellent work that considers indirect lighting, but their method for computing indirect lighting is significantly different from ours. As stated in their paper, they approximate the rendering equation into form Equ(15), but a key variable in this equation is obtained by calculating the environment light map. This is fundamentally different from our approach. Our method involves performing Monte Carlo sampling again, essentially repeating the same method to compute it anew. Their method essentially belongs to the implicit learning of indirect lighting, which I believe does not involve computing indirect lighting using the multi-bounce MC integration method.
Furthermore, in our paper, we list Equations 7 to 9 to explain the principle of approximating the diffuse part of our indirect lighting using the diffuse map. Although real-time rendering
***Diffuse map***
Our precomputed diffuse map is used only to approximate the diffuse part of the indirect lighting. Although the diffuse map does influence the modeling of shadows, it affects the modeling of light within the shadows rather than the light directly entering the viewer's eye. In our Table 1, we also demonstrate the impact of using the diffuse map on the results. The results show that using our acceleration structure has almost no impact on the accuracy of the final results, while significantly saving computation time.
***Weak evaluation and Missing dataset:***
Thank you for pointing out the shortcomings of our quantitative comparative experiments. The PNSR metric is computed in novel-view synthesis. Indeed, our current judgment on the quality of target material decoupling is indeed at the level of subjective judgment based on life experience, so I use the table-horse data set in our data set , generated the albedo and roughness of his GT, and used this data set to quantitatively measure the decoupling effect of our work. The influence of geometry on the final effect does require quantitative experiments to prove the effectiveness of the method we use. Therefore, we have made some supplements to the experiments in the geometry part. We have put the quantitative comparison of geometry on the final effect in the PDF. Figure 5.
Otherwise, thank you for pointing out the inaccuracies in our paper. According to Disney's article and the book Real time rendering, GGX is indeed described as a function of the surface normal vector distribution, and it does not represent the BRDF proposed by Disney. Thanks again for finding our error and correcting it.
---
Rebuttal 2:
Title: Evaluation of the albedo and the k_orm
Comment: We computed the PSNR metric on our __table horse dataset__. The results are shown in the table below:
| PSNR | albedo | $k_{orm}$ | normal | |
|---|---|---|---|---|
| Ours | __21.50__ | 18.56 | __28.58__ | |
| nvdiffrecmc| 20.04 | 18.67 | 26.09 | |
| nvdiffrec | 17.58 | 18.64 | 18.31 | |
---
Rebuttal 3:
Title: Description of The Geometry
Comment: Our SG(spherical Gaussian) lobes were not used for encoding viewing/reflecting directions. Similar to the method in [1], the feature output by the sdf network is combined with a set of SG lobes to compute the color values. [2] also demonstrated the effectiveness of this approach.
However, since we are dealing with more complex reflective objects, we adopted two different structures to better distinguish between diffuse light and specular light. The SGE used for specular light is similar to SG but operates in a higher dimension. We used higher dimensions to represent specular reflection because specular reflections are view-dependent.
[1] Lior et. al. Bakedsdf: Meshing neural sdfs for real-time view synthesis, SIGGRAPH 2023
[2] Christian et. al. Binary opacity grids: Capturing fine geometric detail for mesh-based view synthesis, TOG 2024
---
Rebuttal 4:
Comment: Thanks for the reply. I have additional questions regarding the author's arguments:
1. The author argues that in Neural Microfacet Fields (NMF for short), Eq. 15 uses the environment light map which is different from your design. I believe that here you mean $E(\mathbf{p})$ which is termed **irradiance environment map**. The term might be misleading but it essentially records pre-integrated cosine-weighted incoming light over the hemisphere for each possible normal direction. This is (almost) equivalent to the pre-convolved diffuse light map in nvdiffrec (as well as this work if I understand correctly). The only difference would be that they further compress the irradiance map into SH representation, while in nvdiffrec the pre-convolved diffuse light map is stored in a 6x512x512 cubemap. On the other hand, NMF uses MC sampling to compute the rendering equation, and they trace 2 bounces for indirect illumination. Please check Sec. 4.5 and Tab. 1 of the paper. Fundamentally it uses the same MC integration with mult-bounce (2) sampling to model indirect illumination.
2. Regarding the diffuse map, the paper states that "we can precompute it via an MLP" (line 165), but how? What is the difference between your diffuse map and that of nvdiffrec, except for MLP vs. explicit tensor?
3. Ignoring diffuse shadow would not impact the final results, most likely because the tested scenes are specular. This is why I asked the author to test more datasets, especially the TensoIR dataset where objects are not that specular (but also not fully diffuse)
4. Also, NeRO's glossy-real dataset contains ground-truth geometry. It would make the evaluation more complete if the authors can provide a quantitative comparison on this glossy-real dataset against NeRO.
5. I do not find anything related to diffuse map ablation in Tab. 1, can you elaborate on which entry in the table specifically reflects the effectiveness of diffuse map? Also, what does the "w.o Acc." entry do?
---
Rebuttal 5:
Comment: Thank you very much for your comments.
1.
Based on Section 4.5 of NMF, it is indeed that our rendering method is quite similar when our depth is set to 2. However, there are still significant differences in our work:
+ NMF uses a density field to represent geometry, whereas we use a triangle surface mesh. Since our representation method is consistent with the widely adopted approach in the industry, it allows us to easily leverage hardware-accelerated ray tracing.
+ NMF mentions in Section 6, "It also does not handle interreflections very well, since the number of secondary bounces is limited." However, continuing from (1), the number of secondary bounces in our approach is not limited.
+ Additionally, our method allows for easily increasing the ray tracing depth. A higher tracing depth is more effective in scenes where light repeatedly bounces between two shiny objects.
Thank you very much for pointing out that we overlooked this excellent work. We will conduct more experiments in the future to compare the effectiveness of our approaches. Additionally, we will validate our method in challenging scenes where light repeatedly bounces between two shiny objects.
2.
We use the term *secondary* to denote a ray bounced from a surface position $p$, similar to NMF. For the secondary color, we use equation 10 to calculate the diffuse color. Furthermore, the diffuse light at point $p$ can be expressed using an equation $f(p)$, which is why we can represent it using an MLP. However, for light directly entering the eye, we still use Monte Carlo sampling to compute the diffuse light rather than relying on this MLP.
3.
Our directly viewed diffuse light includes shadows, and we use it to supervise the diffuse light of our secondary rays. This essentially means that we have baked the shadows into the diffuse light, which is a key difference from the diffuse light used in nvdiffrec. Although our focus is on reflective objects, we will also conduct additional experiments on non-reflective datasets, similar to TensoIR, in the future. We greatly appreciate your feedback regarding the datasets.
4.
We evaluated the Chamfer distance, and the results are shown in the table below:
| Chamfer Distance ($\downarrow$) | coral | bear | materials | |
|---|---|---|---|---|
| NeRO | 0.13 | 0.11 | 0.0057 | |
| Ours | 0.13 | 0.10 | 0.0030 | |
It can be seen that on the bear and coral datasets, our geometry performs just as well as NeRO's, but on the materials dataset, our results are better.
5.
Our "acc" means the diffuse map, and "w/o acc" indicates that for the diffuse color of the *secondary* rays, we still use ray tracing directly, similar to how it's done for the *primary* rays.
---
Rebuttal 6:
Comment: Thanks for the prompt reply.
1. I am convinced by the argument on the differences between NMF and the proposed method. Please include such discussion in the final version of the paper.
2-3. Regarding diffuse light, now I see there was a misunderstanding on my side. I thought the diffuse light (Eq. 7-10) refers to the outgoing radiance towards the camera/eye, but it actually describes the incoming radiance towards the intersection between the primary (camera) ray and the object surface. Eq. 7-10 does not consider the shadowing effect, but Eq. 6 considers the shadowing effect for the diffuse part of the primary ray via ray tracing. So for diffuse color, you only trace one bounce, while for specular color, you trace more than one bounce (up to 3 in the experiments). Please correct me if I am wrong in any of the above statements.
4. The results look good, I am also convinced of this part
I am now inclined to accept the paper, though I still have some additional questions regarding the diffuse light:
1. How is the environment map represented? Is it the same as nvdiffrec/nvdiffrecmc, i.e. environment map is stored as a mipmap/tensor?
2. How do you "precompute" Eq. 10 exactly? You mentioned that the whole term (or just the light integration part?) is represented as an MLP. However, the environment light is not known prior. You need to optimize the environment map during the training process as I understand. Then how do you precompute Eq. 10 and store it in an MLP, as the environment map is changing during the training process?
---
Rebuttal 7:
Comment: 1.
Thank you very much for your positive reply. We sincerely appreciate you pointing out that we overlooked an excellent work, NMF. In the final version of our paper, we will include the comparison between our work and NMF.
2.
Your statement is correct. Indeed, for diffuse color, we only trace one bounce, while for specular color, we trace more than one bounce. The light that reaches the eyes is accounted for by ray tracing, including shadows.
3.
Similar to what is described in equation 10, the diffuse light at point $p$ can be represented as $f(p)$, which is a function dependent solely on the position of the object. Empirically, we can represent it in the **same** form as $k_{d}$ or $k_{orm}$. It is the **same** as in nvdiffrec/nvdiffrecmc and is stored as a **mipmap**.
4. As for the precomputation in Eq.10, we initially had similar concerns, which led us to conduct the following three sets of experiments:
+ At the beginning of the optimization, we set depth=1 to quickly learn a rough diffuse light map.
+ Set depth=2, and at the beginning of the experiment, do not use the diffuse light map structure. Once the optimization reaches a certain level, then applying the diffuse light map.
+ Directly optimize using an unknown diffuse light.
Our results indicate that directly using the diffuse map can achieve the same effectiveness as the previous two approaches, and the unknown diffuse light map can be optimized quickly. We would include this in our final paper.
Once again, thank you for your feedback and valuable comments.
---
Rebuttal Comment 7.1:
Comment: I am mainly confused about the statement "we can pre-compute it via an MLP" (line 165). Judging by lines 199-200, it seems that this diffuse light MLP is supervised solely by the diffuse color prediction from the model. So it seems that it has nothing to do with "pre-computation". If this is the case, then the MLP is just self-supervised and the term "pre-compute" is causing unnecessary confusion.
On the other hand, precomputation of the light integral in Eq. 10 can be done by convolving the stored environment map tensor with the clamped cosine lobe - this operation is quite efficient during training. I was thinking along this way as this is also used in nvdiffrec.
Overall, I would recommend the authors to make the following changes in the final version:
1. The diffuse light MLP is used to predict only light beyond one bounce. This is hinted in Fig. 1, however, the transition around lines 153-154 is a bit hard to follow. It should be stated more clearly by changing the notation in equations to denote that diffuse light in this section is for indirect illumination only. Lines 154-162 are also unnecessary as the diffuse term is not really a part of GGX model, and most existing works (nvdiffrecmc, for example) already use the Lambertian model for the diffuse appearance
2. Describe accurately that the diffuse light MLP is self-supervised by the diffuse appearance prediction of primary rays, and remove the "pre-compute" term that might causes confusion for readers.
---
Reply to Comment 7.1.1:
Comment: Thank you for the instant reply. The "precompute" term here indeed may cause confusion and we will consider removing it in the final version. We will modify the corresponding part in the final version based on the discussion to make the sentences clearer. Thanks again for the instructive comments. | Summary: The authors propose an inverse rendering method that reconstructs the geometry, materials, and lighting of 3D objects from 2D images, effectively handling scenes with multiple reflections and inter-reflections. To address the high computation cost of Monte Carlo sampling, the authors propose a specularity-adaptive sampling strategy, significantly reducing the computational complexity. The authors also introduce a reflection-aware surface model to initialize the geometry and refine it during inverse rendering, to improve the inverse rendering reconstruction quality. Experiments show the proposed method outperforms other inverse rendering methods.
Strengths: 1. The paper is well-written and easy to understand.
2. The proposed methods are intuitive and the results look nice.
Weaknesses: 1. The validation on real data is missing. The paper only shows its results on synthetic data. It would be better to test this work on challenging real data, such as the specular real data captured by NeRO.
2. The selected baselines are incomplete. There exists a strong baseline candidate, NeRO, which can also reconstruct the geometry and materials of specular objects. (Although its main focus is geometry reconstruction). Since it has released all of its code, it would be better if the authors could also compare this work with NeRO.
3. Need more analysis on the SG encoding. SG encoding looks like a very important part of the final quality(because it plays an important role in the geometry reconstruction). However, the authors only use a small part of Fig. 2 to showcase its effectiveness. More visual comparison and quantitative comparison would be appreciated. In the meanwhile, SG encoding seems to be very similar to [1], which was released on Arxiv last year. The authors should explain the difference between its SG encoding and [1]'s encoding. Since the authors didn't cite [1], I assume that SG encoding should be the author's original contribution.
4. The quantitative comparison is insufficient. More quantitative comparisons on the relighting results and reconstruction of BRDF and geometry should be included.
[1] SpecNeRF: Gaussian Directional Encoding for Specular Reflections
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Please show some results on real data with some baseline comparison.
2. Please provide more baseline comparisons and quantitative comparisons as discussed in the weakness section.
3. Please answer the question related to SG encoding.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The limitations have been discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***Missing validation on real data:***
Thank you very much for your instructive feedback, which pointed out the oversight in our experiments. The NeRO real-world dataset is indeed a challenging dataset because the color of the object's surface can change significantly with different viewing angles, causing ambiguities in both material and geometry. We have supplemented the results on the two NeRO real-world datasets, bear and coral. On the bear dataset, we detailed our rendering results, PBR materials, and reconstructed geometry, as shown in Figure 1. Figure 2 shows our relighting results on the bear and coral datasets. From the relighting results, it can be seen that the disentangled PBR materials are reasonable, and our method effectively distinguishes between the object's material and the reflections on its surface. Figure 3 shows our geometric reconstruction results on the coral dataset. Our method achieves results very close to the ground truth, similar to NeRO, and shows significant improvements over inverse rendering methods like nvdiffrec and nvdiffrecmc.
***Comparisons with NeRO:***
NeRO is a recently outstanding work for object reconstruction, focusing on the challenging scenario of high-reflection objects and achieving excellent results. The difficulty of high-reflection objects lies mainly in two aspects: First, due to the nature of specular reflection, the observed color of an object changes significantly with the viewing angle, leading to view inconsistency. Second, for smooth objects, especially those with low roughness, indirect lighting has a greater impact and can even play a major role from certain angles. However, the calculation of indirect lighting is closely related to the object's geometry, the material properties of other regions of the object, and ambient lighting, making it a relatively difficult value to model and compute.
In the rebuttal pdf, we compared our work with NeRO in terms of rendering quality, PBR materials, geometric reconstruction, and relighting. In Figure 1, we provide a detailed comparison on the NeRF synthetic dataset materials and the NeRO real-world dataset bear. Our method achieved excellent results on both materials and bear. While NeRO performed well on bear, it encountered issues on materials. It can be seen that indirect lighting caused significant ambiguities in their geometry. Both our method and NeRO's method consider indirect lighting, but their indirect lighting is implicitly learned through neural networks, whereas ours is explicitly calculated using ray tracing and Monte Carlo sampling, which may explain the differences in our results.
Figures 2 and 3 show the geometric reconstruction results on coral and our relighting results on bear and coral, respectively. These results demonstrate that our method is not inferior to NeRO in terms of geometric and material reconstruction of high-reflection objects, and in scenarios where light continuously reflects between multiple objects, we can achieve even better results.
---
Rebuttal 2:
Title: Analysis on the SG encoding
Comment: Inspired by bakedSDF[1], we replaced the MLP with a set of SGs (spherical Gaussians) to directly obtain diffuse light. [2] also demonstrated the effectiveness of this approach. Given our focus on reflective objects, we employed different structures to more effectively distinguish between diffuse light and specular light. The SGE used for specular light is similar to SG but operates in a higher dimensionality. Since specular light is significantly influenced by the viewing angle and geometry, it is more complex than diffuse light, requiring more information for its representation.
[1] Lior et. al. Bakedsdf: Meshing neural sdfs for real-time view synthesis, SIGGRAPH 2023
[2] Christian et. al. Binary opacity grids: Capturing fine geometric detail for mesh-based view synthesis, TOG 2024
---
Rebuttal 3:
Comment: Thanks for the authors' replies, which solve some of my questions, but some questions still remain:
1. As I said in the w3, it seems that SG encoding is very important to the result quality. Fig.5 of the rebuttal file indicates the same thing. I said in my review that I wanted to know the difference between the SG encoding used in the [1] and the authors' methods. I am not sure if the SG encoding can be considered as the original contribution of this work. But the authors did not fully address this in their response.
[1] SpecNeRF: Gaussian Directional Encoding for Specular Reflections
2. I mentioned in the weakness section that the quantitative comparison of this paper is insufficient. I am not sure why the author didn't respond to this concern in the rebuttal reply for me. I noticed that Reviewer UfeL had similar concerns, which the authors addressed in their rebuttal. However, it seems that a quantitative comparison for relighting is still missing.
Given these unresolved issues, I don't plan to raise my score.
---
Rebuttal 4:
Comment: Thanks for your reply and valuable comments.
1. To better capture the geometry of reflective objects, we employed the structure shown in Fig. 2(b), which differs from the **representation** used by SpecNeRF. The Gaussian Directional Encoding in SpecNeRF utilizes 3D Gaussians, and its representation is as follows:
$$\mathcal{G}_i(\mathbf{x})=\exp \left(-\left\|\mathcal{Q}\left(\mathbf{x}-\boldsymbol{\mu}_i ; \mathbf{q}_i\right) \odot \boldsymbol{\sigma}_i^{-1}\right\|_2^2\right),$$
where $\boldsymbol{\mu}_i $ represents position, $\boldsymbol{\sigma}_i $ represents scale, $\mathbf{q}_i $ represents quaternion rotation.
In contrast, our SG encoding uses spherical Gaussians, and its representation is as follows:
$$G(\boldsymbol{\nu} ; \boldsymbol{\xi}, \lambda, \boldsymbol{\mu})=\boldsymbol{\mu} e^{\lambda(\boldsymbol{\nu} \cdot \boldsymbol{\xi}-1)},$$
where $\boldsymbol{\xi}$ is the lobe axis, $ \lambda$ is the lobe sharpness, and $\boldsymbol{\mu}$ is the lobe amplitude.
2. We used two environment lights Golden Bay and Limpopo Golf Course from Poly Haven [1] and Materials as ground truth to generate the relighting dataset. We report the PSNR results as follows:
| | Golden Bay | Limpopo Golf Poly | |
|---|---|---|---|
| Ours | __19.38__ | __19.72__ | |
| nvdiffrecmc| 16.12 | 16.47 | |
| nvdiffrec | 15.90 | 16.04 | |
We apologize for overlooking your comments on quantitative results. We present the quantitative metrics results (discussion with reviewer UfeL) as follows for you convenience:
**Evaluation of the albedo and the $k_{orm}$**
| PSNR ($\uparrow$) | albedo | $k_{orm}$ | normal | |
|---|---|---|---|---|
| Ours | __21.50__ | 18.56 | __28.58__ | |
| nvdiffrecmc| 20.04 | **18.67** | 26.09 | |
| nvdiffrec | 17.58 | 18.64 | 18.31 | |
**Evaluation of the reconstructed geometry**
| Chamfer Distance ($\downarrow$) | coral | bear | materials | |
|---|---|---|---|---|
| NeRO | 0.13 | 0.11 | 0.0057 | |
| Ours | **0.13** | **0.10** | **0.0030** | |
[1] Poly Haven. https://polyhaven.com/ | Summary: This paper presents a method for learning disentangled scene representations from images. The proposed pipeline has two stages: the first recovers geometry using an SDF-based model to learn geometry from the images through differentiable rendering. The second applies differentiable ray tracing to predict material parameters and environment lighting. The proposed method differs from previous approaches in that it renders inter-reflections between objects which create indirect lighting rather than only reflections of the environment and shadows. This is shown to produce better results than existing methods in scenes where these inter-reflections are present.
Strengths: The core idea is solid, and seems to have the expected effect on improving both recovery of environment lighting and rendering quality. I think it would also be fairly applicable to other pipelines which use ray tracing to recover lighting, most likely with similar benefits.
The explanation is clear, and I think the method should be reproducible from the paper, and the authors have also said they will release code.
Generally, I think this is a good contribution tackling a quite difficult problem. The strategies of diffuse pre-computation and spherical Gaussian radiance models will likely be useful to future efforts in this direction.
Weaknesses: It would have been nice to see some results on real data. The synthetic examples seem to show a clear improvement, but it would help to see the difference in a more practical setting.
Technical Quality: 3
Clarity: 3
Questions for Authors: Would it be possible to run on the real capture dataset from NeRD, similar to nvdiffrec?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors are clear about limitations, and I think they cover them well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***More experiments on real-world datasets***
Thank you for your valuable suggestions. Experiments on real-world datasets are important for assessing the practicality of inverse rendering tasks. Therefore, we have supplemented the experiments on two real-world datasets, i.e., NeRD and NeRO. On the NeRD dataset, we compared our method with nvdiffrec and nvdiffrecmc. Additionally, the NeRO dataset includes several high-reflection objects, making it a challenging dataset. Conducting experiments on this dataset allows for a convenient comparison with the strong baseline NeRO. Thus, we also performed experiments on this dataset, with results shown in Figures 1, 2, and 3 in the rebuttal pdf.
These results demonstrate that our method can generate detailed geometry and accurate BRDF materials from multi-view images, even when dealing with challenging real-world objects. This capability results in impressive relighting outcomes. However, our results on the NeRD dataset did not show significant improvements over nvdiffrec and nvdiffrecmc, which could be due to two main reasons: first, the object's base color is very strong, making it difficult for indirect lighting to have a significant impact. Second, the object has a predominantly convex shape, making it challenging to receive indirect light reflected from other areas. Therefore, our method's strengths are not fully demonstrated on this particular object. | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their constructive comments. Your suggestions have been invaluable in refining and strengthening our work. In this general response, we will address the three important parts that were commonly mentioned in the discussions, namely the experiment on the real-world dataset, the comparison experiment with a strong baseline candidate, and the experiment on the geometry module.
***Experiments on real-world dataset***
We acknowledge the importance of evaluating our proposed method on the real-world dataset to demonstrate its applicability and generalization. We have taken your advice into consideration and conducted extensive experiments on a diverse and challenging real-world dataset. For ease of comparison with the current strong baseline NeRO, we conducted experiments on the high-reflection real-world datasets bear and coral captured by them. The results are shown in Figures 1, 2, and 3. Additionally, we conducted experiments on the NeRD real-world dataset, with results shown in Figure 4. Through these experiments, we aim to demonstrate the effectiveness and generalizability of our approach in handling challenging real-world datasets.
***Comparison experiment with a strong baseline candidate***
We appreciate your suggestion to compare our work with the latest outstanding work. NeRO is an excellent contribution to the field of object reconstruction, focusing on high-reflection objects, much like our work. To demonstrate the effectiveness of our method, we conducted a comprehensive comparison with NeRO from three aspects. Figure 1 presents a thorough comparison on the NeRF synthetic dataset materials and NeRO’s real-world dataset bear. Figure 2 shows our relighting results on real-world datasets. Figure 3 displays our geometric reconstruction results on the NeRO real-world dataset coral. These results clearly demonstrate that our method performs on par with NeRO on their real-world datasets, and we achieve even better results in scenarios like the materials dataset, where light continuously reflects between objects.
***Experiment on the geometry module***
Thank you very much for pointing out the shortcomings in our geometric experiments. In our method, high-quality geometry is indeed a critical factor affecting the final results, as our approach to calculating indirect lighting relies on accurate geometry. Ambiguities in geometry can also lead to ambiguities in the object's material and surface color. Therefore, in Figure 5, we present the direct impact of learning geometry with our improved method compared to the previous method on subsequent results, as well as the quantitative comparison in terms of PSNR.
We believe that these revisions and additional experiments will strengthen our paper and contribute to the advancement of the field. We are grateful for your valuable comments and appreciate the opportunity to enhance the quality and impact of our work.
Pdf: /pdf/7fd8fd69ea8c57fda3e2e321e225686a2fcc1044.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Interpretable Decision Tree Search as a Markov Decision Process | Reject | Summary: This paper formulated the problem of finding an optimal decision tree as Markov Decision Problem and solve the scalability problem using an information-theoretic test generation function. This method provides a trade-off between the train accuracy and tree sizes, the decision tree naturally offers interpretability over ML algorithms.
Strengths: This paper is well-written and well-organized, it combines RL and decision tree generation, and building MDP before constructing the decision tree.
Weaknesses: 1. The paper lacks sufficient novelty. The approach of constructing a Markov Decision Process (MDP) and using Decision Trees (DT) to generate actions is not new. Specifically, Algorithm 1 appears to still rely on Classification and Regression Trees (CART) for splitting criteria, which diminishes the originality of the proposed method.
2. The evaluated scenarios in the paper are not clearly articulated. The algorithm has not been tested against well-known benchmarks, unlike other optimal DT algorithms. This makes it difficult to assess the comparative performance and robustness of the proposed approach.
3. The advantages of using this algorithm instead of CART are not clearly demonstrated. Both algorithms control tree size and depth. However, CART is known to converge faster and offers a simpler implementation. Without clear evidence of the benefits, it is hard to justify the use of the proposed method over established techniques like CART.
4. The definition of actions generated by the tree is ambiguous. It is not clear whether the actions are discrete or continuous. If the algorithm is designed to build an MDP, it should be tested on general reinforcement learning (RL) tasks to validate its effectiveness and applicability in broader contexts.
Technical Quality: 2
Clarity: 2
Questions for Authors: See weakness section above.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer,
Thank you in advance for engaging in a discussion with us. We appreciate your remarks that show you have studied our work well; thank you!
$\textbf{Novelty of DPDT}$
Our MDP formulation of decision tree learning is the first to be applicable to both continuous and categorical data attributes. We compare existing MDP formulations to our approach (DPDT) in Appendix G. Although Algorithm 1 still relies on CART, we believe this is where the beauty and simplicity of our work is best demonstrated. Let us explain.
The spectrum of decision tree algorithms has two ends. On one end, there is CART the well-known heuristic that chooses how to partition the training data by only looking at the current entropy maxmization in the targets. CART does not consider how the first split chosen will affect the overall performance of the future splits. On the other end optimal decision tree algorithms such as Quant-BnB to which we compare ourselves computes all possible trees (combination of splits) and returns the one with highest accuracy.
DPDT can be anywhere on that spectrum. DPDT can return the same tree than CART by setting $K=1$ in algorithm 1. DPDT can also return the same tree than optimal algorithm such as Quant-BnB By replacing splits returned by CART in algorithm 1 by the set of all possible splits. The key thing is that considering all splits is costly (combinatorial optimization problem). So by not considering all splits (optimal algorithms) and by not considering only single splits (CART) but by considering some small set of splits such as the ones from a heuristic tree obtained with CART; DPDT is somewhere in the middle of the spectrum of algorithms with accuracy close to the optimal trees and has runtimes order of magnitudes shorter than optimal algorithms. Surprisingly, we showed that DPDT can approach QuantBnB performance (table 1) despite exploring a significantly smaller susbet of splits (figure 1).
The other novely of our MDP formulation is that it allows for the computation of many trees at the same time for the training data but different regularization values (see section 5.2 and figure 3)!
$\textbf{Advantages of DPDT over CART}$
Indeed, CART is a simple heuristic that works fine in practice for generalization tasks. Our work, DPDT, offers advantages over CART when a user has interpretability or cost constraints. By definition of algorithm 1, DPDT trees will always have shorter decision path in average than CART trees for equal accuracies. In parctice we show this in Fig 2 right plot where we bar plot the gain over CART. Similarly, it is possible to give feature costs as input for DPDT, then the resulting trees will trade-off automatically between feature cost and accuracy. We show in a new experiment that DPDT pareto dominates CART trees in practice for this application too: see global rebuttal (1-page pdf at top of open review page). A real-life use of feature cost could be when testing a feature is more expensive than others, e.g, testing with an M.R.I is more expensive than testing the size of a patient. DPDT optimizes cost-accuracy or decision length-accuracy naturally trought the MDP reward.
$\textbf{Optimal decision trees benchmarks}$
The datasets on which we test DPDT are common to recent optimal tree litterature such as [1, table 1], [2, table 4] and [2, table 2]. Please let us know if there are specifics dataset on which you like us to test DPDT.
We do not understant the 4 th weakness mentioned by the reviewer but we are curious to hear more.
$\textbf{Conclusion}$
In conclusion we believe that figure 2 and 3 already demonstrates the advantage of DPDT over CART in a context of interpretability. Would the reviewer consider raising their score if we also include an experiment that demonstrates DPDT superiority over CART for a task with feature costs?
Thank you in advance.
[1] MurTree: Optimal Decision Trees via Dynamic Programming and Search, Demirovic et. al., JMLR 2022
[2] Blossom: an Anytime Algorithm for Computing Optimal Decision Trees, Demirovic et. al., ICML 2023
[3] Quant-BnB: Scalable Branch-and-Bound Method for Optimal Decision Trees with Continuous Features, Rahul Mazumder et.al., ICML 2022
---
Rebuttal Comment 1.1:
Comment: Thank you for answering my questions, I have read the authors' submitted rebuttal, and I would like to maintain my score.
---
Reply to Comment 1.1.1:
Title: Question
Comment: Dear reviewer,
What are the benchmarks you had in mind when you said "The evaluated scenarios in the paper are not clearly articulated. The algorithm has not been tested against well-known benchmarks, unlike other optimal DT algorithms. This makes it difficult to assess the comparative performance and robustness of the proposed approach." ?
Furthermore, how can we convince you that our approach has many advantages over CART in addition to the what we wrote in our rebuttal?
Thank you
---
Rebuttal 2:
Title: Rebuttal
Comment: Dear reviewer,
Please take the time to read our rebuttal, we would really appreciate it.
Thank you in advance | Summary: The authors pose binary decision tree construction within the framework of Markov Decision Processes. They first propose methods for constructing an MDP from a decision tree construction problem, exploring varying test generating functions that trade off the coverage of the search space vs the size of the search space. They then apply Dynamic programming to solve the resulting MDP and show this learnt method can both create binary trees that minimise the loss over a dataset but that it can also be used to add additional losses a user may have over decision trees such as the prior that trees should be small, making them interpretable. They evaluate their proposed method, comparing with other high performing methods such as Quant-BnB, MurTree and a DeepRL method.
Strengths: - Well written paper and easy to understand the method
- Clearly an important direction of research
- Thorough experiments with appropriate baselines and good range of datasets to ensure the conclusions generalise to a wide range of datasets
- Code fully provided, along with implementations of baselines used
Weaknesses: - Various versions of Reinforcement Learning for binary tree construction have previously been explored. While the implementation in this paper is ultimately different and appears to significantly improve performance, there is limited novelty of the approach. Novelty largely comes down to the test generating functions explored and the addition of extra losses (interpretability) in addition to just the dataset accuracy.
- Small formatting issue
- Table 1 is too small
Technical Quality: 3
Clarity: 3
Questions for Authors: - Do you have any intuition as to why Deep Reinforcement Learning methods fail in this domain, compared to non Deep RL approaches. And do these reasons present any problems when scaling this method to larger datasets where neural learning would perhaps be introduced.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: - Limitations are appropriate addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer,
We thank you so much for your review. Your commments really reflect your inverstment in reading and understanding our work and we are very excited to engage in a discussion with you!
$\textbf{DPDT has many advantages over other MDP formulations}$
It is true that various RL approaches already exist for constructing decision trees. As the reviewer pointed out novel MDP formulation allows for user-defined costs such as costs of adding nodes to control the complexity (interpretability) of the tree. By definition, other MDP formulations of decision tree construction might already control the tree complexity with the MDP discount factor. In the following we highlight how our MDP formulation and solver DPDT are unique and promising for constructing decision trees in real use cases.
- $\textit{Interpretabillity is not the only cost that DPDT can deal with}$. Indeed, other costs such as feature costs can be used as the reward in the MDP construction. We added an example in the global 1 page pdf rebuttal on top of the Open Review page where some features of the training data are more expensive than other to test in the decsision process. We show that compared to the widely used in practice CART, DPDT can construct trees that better trade-off between feature costs and accuracy. Other MDP formulations and RL work tackled this problem too [1] but cannot be used in practice the same way as DPDT. We explain in the second point why and highlight another promising novelty of DPDT.
- $\textit{Unlike any other MDP forumlation, DPDT computes a whole pareto front of trees at once.}$ Other MDP formulations [1, 2] that include a cost for interpretability or features in the MDP reward, DPDT solves the Bellman optimality equation for multiple costs at once (see section 5.2 of our work). This is a strong feature of DPDT as a user would not have to worry about what cost to choose for interpretability and re-run algorithms [1] or [2] if they are not satisfied with the resulting tree cost-accuracy trade-off: DPDT returns all the trees on the cost-accuracy pareto front and the user can the choose the one that best suit their need! We believe this is key novelty compared to other MDP formulations. However we aknowledge that indeed solving a set of Bellman optimality equations at once is possible for DPDT because there is no neural learning unlike [1] and [2]. This leads us to the other question of the reviewer regarding scaling with Deep RL.
- $\textit{DPDT has guarantees on accuracy thanks to our test generating functions}$. Unlike other MDP formulations that consider naïve test generating functions [2, sec 4.1, paragraph "Action Space"], we use the CART heuristic to generate tests. Those test are still heuristic but allow for DPDT to have a lower bound on train accuracy that is the train accuracy of CART (see our propostion 2).
$\textbf{Failure modes of Deep RL and scaling DPDT with neural learning}$
We believe scaling DPDT to bigger datasets with neural learning is a promising research avenue aspointed out by the reviewer. For that, existing failure modes of Deep RL should be overcome. There exists a study of failure mode of the Deep RL baseline [1] we use in our work, it is [3]. The main failure mode is as follows. Deep RL of decision trees rely on a neural network learning to predict a feature to test against a value given as input dataset features bounds. Deep learning is used for tasks where the predictions of the neural network should be "similar" for neural net inputs that are "close" from a metric perspective. In our case, the neural net might have to learn to predict a test $f_1$ for data that have feature $x_i \leq 0.5$ and an other test $f_2$ for data that have feature $x_i > 0.5$. What [3] suggests is that it is too hard for Deep RL to learn different predictions for e.g. point (0.2, 0.2, ... 0.2, 0.49, 0.2, ..., 0.2) and point (0.2, 0.2, ... 0.2, 0.51, 0.2, ..., 0.2). So essentially Deep RL cannot fit non-continuous decisions in that context, and the neural architectures used in Deep RL do not generalize properly between data feature bounds.
More specifically; the representation of the state (feature bounds) limits the generalization of the neural architecture: we want to learn a neural network that returns an optimal data split given a dataset as input but all that the neural network receives as input is a bounding box around the dataset. If we want to improve generalization, we need to take into account specificities of the problem, e.g. that an optimal split, up to translation of the threshold, remains optimal if all the data is translated. This requires receiving the whole dataset as input and not only a bounding box of where the data is located
A promising research avenue that would be worth discussing at a top ML conference would be to design neural architecture tailored for test predictions given a whole dataset as input such as [4].
$\textbf{Conclusion}$
We highlighted the advantages of DPDT compared to other MDP formulation, in particular outputing a whole pareto front of trees. The reviwer question also raised a promising research avenue that would be worht discussing in NeurIPS.
If the esteemed reviewer appreciate our effort to address its concern, we encoure them to raise their score. We are happy to engage in further discussion.
[1] Janisch, J., Pevný, T., & Lisý, V. (2019). Classification with Costly Features Using Deep Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence.
[2] Topin, Nicholay, et al. (2021). Iterative bounding mdps: Learning interpretable policies via non-interpretable methods. Proceedings of the AAAI Conference on Artificial Intelligence.
[3] Kohler, Hector, et al. (2023). Limits of Actor-Critic Algorithms for Decision Tree Policies Learning in IBMDPs, arXiv 2309.13365.
[4] Kossen et. al. Self-Attention Between Datapoints: Going Beyond Individual Input-Output Pairs in Deep Learning NeurIPS 2021
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their thorough rebuttal and am satisfied with the response. I have read the other reviews and responses. I have no further questions and would like to maintain my score.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you
---
Rebuttal 2:
Title: Rebuttal
Comment: Dear reviewer,
Please take the time to read our rebuttal, we would really appreciate it.
Thank you in advance | Summary: This paper models the construction of decision trees as a reinforcement learning problem. Currently SOTA algorithms for constructing decision trees have the drawbacks that 1) they take long to compute at depths > 3, and 2) the trees constructed are complex and difficult to interpret. By modelling the problem as an RL task, the authors hope to make the construction of decision trees scale to larger sizes. They present Dynamic Programming Decision Trees which models tree construction as a MDP solved using dynamic programming. They evaluate the accuracy of trees produced by their approach empirically against other commonly used approaches.
Strengths: **Originality:** The approach presented is a novel approach to constructing dynamic trees.
**Significance:** As datasets become larger and interpretability becomes more important, having an approach that scales DT construction to larger trees is needed now. This makes this work rather significant.
**Clarity:** The first half of the paper (up to Sec. 4) was clear. It becomes harder to understand after. Providing an intuitive explanation certain equations would be helpful. E.g., why is probability $p_l = |X_l| / |X|$?
**Quality:** The technique designed is sound and the experiments chosen were the correct ones to demonstrate their claims.
Weaknesses: The experimental evaluation is weak. From what I understood, the algorithms being evaluated were run only once and evaluated once. The results do not statistically back up the authors' claims. Multiple runs with statistical significance testing is needed.
There is no actual analysis on the interpretability of the trees produced, only the complexity of the trees.
Technical Quality: 2
Clarity: 2
Questions for Authors: My main concern is the experimental evaluation.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors mention one limitation (test generation) as a problem. It seems that that would make it difficult for DPDT to actually scale to larger trees. Is that not so? Is scalability not a limitation then?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer, thank you for your comments. We are surprised by the low rating given your positive feedbacks. Please engage with us in the following discussion so that we can convince you to raise your score and accept our work.
$\textbf{Clarifying section 4: formulating decision tree learning as a Markov decision process}$
If the reviewer is not familier with MDPs we recommend Sutton and Barto 2018 as an introduction. Now, constructing a decsion tree can be seen as sequentially splitting a dataset $X$ with $n$ samples and $p$ features. At time $t$ an action to split is taken e.g. if the $i$-th sample has feature $j$ less than some value $v$ (a node in a decision tree does this), then at time $t+1$ the dataest to be considered is made of all the samples from $X$ that verify this condition and is noted $X_l$ else, at time $t+1$ the dataest to be considered is made of all the samples from $X$ that do not verify this condition and is noted $X_r$. So when applying the test is feature $j$ less than some value $v$ on a random datum from $X$, the probability to be in $X_l$ is indeed $p_l=|X_l|/|X|$.
We also refer the reviewer to our appendix D for explanatory schematics.
$\textbf{Statistical significance of experimental results}$
We confirm to the reviewer that table 1 presents results obtained from a single seed. This is because we do not consider a train-test generalization task in this table but only compare ourselves to a deterministic baseline Quant-BnB [1] which itselfs present results on a single seed (see [1, table 2]). Furthermore, our algorithm DPDT, Quant-BnB, and CART are deterministic. The only stochastic baselines are Custard which we run on multiple seeds (see table 1).
To showcase our goodwill and for the reviewer to raise its score, we re-do both plots from figure 3 on 5 different train/test splits chosen at random. We present the results in the 1-page general rebuttal pdf (top of the page). Adding multiple seeds does not change our results and adds significance. Thanks to the reviewer for the recommendation.
$\textbf{Interpretability analysis}$
Interpretability of decision trees can have different meanings. We confirm to the reviewer that the deifinition we use in our work is closely related to the complexity of the learned decision trees, i.e. how many operations to go from input to decision. This notion is called simulatability in the explainable machine learning litterature [1] and draws a parallel between computation complexity of model inference and human user understanding of the tree decisions.
What we show in our work is that in average the trees returned by our method DPDT have shorter decision path in average than trees returned by CART (the heursitic baseline widely used in practice). It means that to make a decision given a datum, trees returned by DPDT will perform less operation in average! This what is shown in figure 2 (right plot) and figure 3. In particular, in figure 2 on the right we show that on only 1 out of 16 dataset DPDT trees are less interpretable than CART.
$\textbf{Scalability}$
We already showed that DPDT is able to return deep trees (see table 6 appendix I). DPDT deep trees have better test accuracies and shorter average decision paths than CART deep trees on most of the benchmarked datasets! In practice, since DPDT cannot explore as many splits per state when building the MDP (see our algo 1), if one needs to learn deep trees, we recommend exploring less splits closer to the root with the rationale that it is better to be greedy closer to the leafs than the root: the provided code already supports this feature and was used to get table 6.
$\textbf{Conclusion}$
Having addresssed the concerns of the reviewer and having performed the multiple seeds additional experiments, we kindly ask the reviewer to accept our paper or to engage in the rebuttal.
Respectfully.
[1] The Mythos of Model Interpretability, Zachary C. Lipton ICML 2016
[2] See 1-page pdf for the global rebuttal available at the top of the open review page.
---
Rebuttal 2:
Title: Rebuttal
Comment: Dear reviewer,
Please take the time to read our rebuttal, we would really appreciate it.
Thank you in advance
---
Rebuttal 3:
Title: Discussion
Comment: Dear reviewer we would to discuss your review. Did you have time to read our rebuttal?
Thank you in advance!
---
Rebuttal 4:
Title: Discussion period is almost over
Comment: Dear reviewer, please read our rebuttal. We really enjoyed your feedback! Thank you in advance.
---
Rebuttal Comment 4.1:
Title: Response to rebuttal
Comment: The authors do address my main concern about statistical significance. I appreciate the plots with different dataset splits and this helps to strengthen their results.
However, a couple things of note:
1. By clarifying Section 4, I think the authors should rewrite or restructure the section so that it is clearer.
2. I do not believe that the authors can claim that the trees produced are more interpretable. The most they can claim from the results is that trees are less complex.
Given the response and new results, I will increase my score.
---
Reply to Comment 4.1.1:
Title: Response to response
Comment: Thank you for raising your score! Indeed in an ideal world we would do a user study to evaluate the interpretability of our trees; however the latter being costly, complexity is the best and most justified proxy we found.
We will clarify section 4.
Thank you for engaging in the discussion. | Summary: The paper proposes to use an approach for learning interpretable
decision trees using markov decision processes. The results are shown
to be competitive with branch and bound methods.
Strengths: None of notice, given the listed weaknesses.
Weaknesses: There exists extensive experimental evidence challenging the claims
about the interpretability of decision trees, while simultaneously
demonstrating the need for decision trees to be explained, since these
can otherwise exhibit arbitrary explanation redundancy.
As a result, and at present, there is no practical justification
whatsoever to learn so-called interpretable optimal decision trees.
It is absolutely unclear that optimal decision trees will provide any
advantage, regarding computed explanations, over decision trees
induced with heuristic algorithms.
Given the above, I cannot recommend acceptance.
Some references on the necessity of explaining decision trees.
Xuanxiang Huang, Yacine Izza, Alexey Ignatiev, João Marques-Silva: On
Efficiently Explaining Graph-Based Classifiers. KR 2021: 356-367
Gilles Audemard, Steve Bellart, Louenas Bounia, Frédéric Koriche,
Jean-Marie Lagniez, Pierre Marquis: On the Computational
Intelligibility of Boolean Classifiers. KR 2021: 74-86
Yacine Izza, Alexey Ignatiev, João Marques-Silva: On Tackling
Explanation Redundancy in Decision Trees. J. Artif. Intell. Res. 75:
261-321 (2022)
Gilles Audemard, Steve Bellart, Louenas Bounia, Frédéric Koriche,
Jean-Marie Lagniez, Pierre Marquis: On the explanatory power of
Boolean decision trees. Data Knowl. Eng. 142: 102088 (2022)
Gilles Audemard, Steve Bellart, Louenas Bounia, Frédéric Koriche,
Jean-Marie Lagniez, Pierre Marquis: On Preferred Abductive
Explanations for Decision Trees and Random Forests. IJCAI 2022:
643-650
João Marques-Silva, Alexey Ignatiev: No silver bullet: interpretable
ML models must be explained. Frontiers Artif. Intell. 6 (2023)
Technical Quality: 3
Clarity: 3
Questions for Authors: NOne.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 1
Limitations: These were listed above. I believe the paper is solving a non-relevant
problem given practical and theoretical evidence regarding the
non-interpretability of decision trees, be these optimal or not.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer, we argue that learning decision trees that are interpretable in the simulatability sense (the ability for a human to read the decision path of a model from input to decision) [1, sec. 3.1.1], is a very relevant problem in machine learning. Indeed many recent works present decision tree algorithms as interpretable machine learning solutions whereas in reality their classifiers or policies [2,3] have many decision nodes hindering human readability [1, sec. 3.1.1].
Now assuming the problem we tackle is relevant, please note that in our work we do provide evidence that our proposed method learns trees that have shorter decision paths in average than trees returned by purely heuristic methods for the same accuracy. See figures 2 and 3 from our work.
If you are open to discussion, please let us know how we could convince you that are work offers an original competitve solution to the problem of learning interpretable decision trees.
[1] The Mythos of Model Interpretability, Zachary C. Lipton ICML 2016
[2] Oblique Decision Trees from Derivatives of ReLU Networks, Guang-He Lee, Tommi S. Jaakkola ICLR 2020
[3] Verifiable Reinforcement Learning via Policy Extraction, Osbert Bastani, Yewen Pu, Armando Solar-Lezama NeurIPS 2018
---
Rebuttal 2:
Title: Rebuttal
Comment: Dear reviewer,
Please take the time to read our rebuttal, we would really appreciate it.
Thank you in advance
---
Rebuttal 3:
Title: Discussion
Comment: Dear reviewer, we would liove to discuss. Did you have time to read our rebuttal?
Thank you in advance!
---
Rebuttal Comment 3.1:
Title: Reply to authors
Comment: Thank you for the constructive review.
I maintain my assessment. Interpretability is widely accepted to be a subjective topic. Arguing for the interpretability of decision trees would always be open to debate. Existing evidence suggests that attempts to learning interpretable decision trees should be expected to unproductive efforts. Assessment of the redundancy of explanations in the computed decision trees should be discussed, and it is not.
---
Rebuttal 4:
Title: Interpretability is not our only contribution
Comment: Dear reviewer, hello again and thank for pointing out references for the inspection of trees explanations.
We will highlight a discussion about the subjectivity of interpretability with respect to some existing definitions [1,2].
We will also include the pointed out references [3,4,5,6,7,8] and highlight that tree models might need further inspections to be considered interpretable.
However we would really appreciate if the reviewer could give feedback about the other contributions of our work:
- What do you think about our MDP formulation of decision tree learning? You can check out this discussion with reviewer pDp5 https://openreview.net/forum?id=TKozKEMKiw¬eId=EslLWlsjtK
- What do you think of our solver DPDT that can return a whole set of trees way faster than other optimal tree baselines?
Thank you in advance
[1] Lipton. "The mythos of model interpretability." In ICML Workshop on Human Interpretability in Machine Learning, 2016.
[2] Shen. "Interpretability in ml: A broad overview." 2020
[3] Xuanxiang Huang, Yacine Izza, Alexey Ignatiev, João Marques-Silva: On Efficiently Explaining Graph-Based Classifiers. KR 2021: 356-367
[4] Gilles Audemard, Steve Bellart, Louenas Bounia, Frédéric Koriche, Jean-Marie Lagniez, Pierre Marquis: On the Computational Intelligibility of Boolean Classifiers. KR 2021: 74-86
[5] Yacine Izza, Alexey Ignatiev, João Marques-Silva: On Tackling Explanation Redundancy in Decision Trees. J. Artif. Intell. Res. 75: 261-321 (2022)
[6] Gilles Audemard, Steve Bellart, Louenas Bounia, Frédéric Koriche, Jean-Marie Lagniez, Pierre Marquis: On the explanatory power of Boolean decision trees. Data Knowl. Eng. 142: 102088 (2022)
[7] Gilles Audemard, Steve Bellart, Louenas Bounia, Frédéric Koriche, Jean-Marie Lagniez, Pierre Marquis: On Preferred Abductive Explanations for Decision Trees and Random Forests. IJCAI 2022: 643-650
[8] João Marques-Silva, Alexey Ignatiev: No silver bullet: interpretable ML models must be explained. Frontiers Artif. Intell. 6 (2023)
---
Rebuttal Comment 4.1:
Title: Reply to Authors
Comment: I believe the questions posed by the authors miss the whole point of my review and subsequent comments. One can always envision great solutions for problems of arguable relevancy. However, and in the reviewer's opinion, those should not be the focus on papers presented at top-tier conferences.
The main issue raised by my comment is not about interpretability, but the fact that the best possible decision trees that one can construct will almost surely require being explained. Also, the size of the tree may matter very little to the computed explanations, and that is a topic of research per se.
---
Reply to Comment 4.1.1:
Title: Recent work contradicts the reviewer.
Comment: Dear reviewer,
In hope to convince you of the relevance of the problem we solve, namely learning decision trees, we provide additional references supporting interpretability of decision trees as machine learning models.
[1,2] describe decision trees as the "quintesential" interpretable model class. [1] develops the first theoretical framework for the approximation capabilities of trees with respect to their interpretability. This theory supports our contributions and especially section 6.2 and 6.3 where we study the interprerability performance trade-off of trees.
Please, could you either raise your score or explain why think learning decision tree and claiming interpretability is false?
Thank you in advance
[1] A Theory of Interpretable Approximations
Marco Bressan, Nicolò Cesa-Bianchi COLT2024
[2] Christoph Molnar. Interpretable Machine Learning. 2 edition, 2022. URL
https://christophm.github.io/interpretable-ml-book. | Rebuttal 1:
Rebuttal: Dear all, in addition to the attached 1-page pdf containing additional plots to showcase the superiority of DPDT over CART in terms of tree interpretability, with multiple seeds, we would like to share with you an open source implementation of DPDT that fits the scikit-learn framework in an effort of reproducibility and open science: https://anonymous.4open.science/r/dpdt-py-DB89/README.md
Pdf: /pdf/eaea868489a9b28ed9caadd73c02f89b79f3ebee.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Robust Conformal Prediction under Joint Distribution Shift | Reject | Summary: This paper adresses the issue of conformal prediction under distribution shift with multiple test domains. The goal is to reduce the deviation in coverage caused by different potential distribution shifts across these domains. The paper firstly proposes a way of disentangling a joint distribution shift (shift effects both in covariate and label distributions, which they term "concept") by employing weighted conformal prediction (Tibshirani et al, 2019) to address covariate shift, and then quantify the remaining shift in terms of a truncated normalized Wasserstein distance (D-NTW) between the original and weighted conformal score distributions (empirical CDFs). This D-NTW is then used as a regularizer term in a training algorithm to explicitly ensure that coverage deviations are minimized across test domains. Experiments include assessing the correlation between D-NTW and the actual expected coverage difference, which is shown to be high (vs. other distributional distance metrics), and comparing to two multi-test-domain optimization methods on a variety of datasets to show that the coverage difference is lower while not compromising on prediction accuracy.
Strengths: - Addressing the issue of distribution shift in conformal prediction is a relevant problem, particularly for the less explored label shift setting
- Attempts are made to disentangle covariate and label (concept) shifts, which can provide insights into the model's adaptation abilities
- The suggested D-NTW distance metric is well motivated and benchmarked against multiple sensible alternative distance metrics
- An interesting range of datasets from different domains is considered
- The paper is clearly written and good to follow, including Fig I visualizing the procedure (albeit it is somewhat hard to read, especially part (c))
Weaknesses: My main concerns are w.r.t. practicality and evaluation. A fundamental requirement of the proposed algorithm and D-NTW is the availability of labelled samples from every test domain $Q_{XY}^{(e)}$ in order to obtain the conformal test score distributions $F_{Q^{(e)}}$, from which then both the likelihood ratios for covariate shifts and D-NTW can be explicitly computed. Beyond existing concerns about the practicality of estimating a likelihood $\textit{ratio}$, we now require actually explicitly estimating every test domain distribution, which is prone to much error. Regardless, if we can now explicitly obtain $F_{Q^{(e)}}$ for every test domain, I am wondering why the direct solution is not to just compute a conformal quantile $q^{(e)}$ on the basis of this for every test domain and thus optimally adapt to the existing joint shifts per domain. Perhaps the loss of the disentanglement of shift contributions is the motivation? In general, I find this requirement to require knowing or estimating the test domain distributions on the basis of labelled test domain data and thus the use of extensive density estimation quite limiting, especially if we consider high-dimensional settings. Perhaps this is why the considered experiments are for 1-D regression data only. I was also wondering if the authors were able to make any connections between their proposals and obtainable coverage guarantees on the test domains. They propose a bound on the distance of D-NTW from the expected coverage difference, but perhaps more explicit connections to conformal coverage guarantees of the form in Eq. 3, e.g. by leveraging guarantees from (Tibshirani et al, 2019) and their linear decomposition of shift effects are worth investigating.
In regards to evaluation, I was missing a closer connection to existing conformal methods under shift, and the actual goals of these methods in terms of coverage guarantees on test data. In their comparison to test-domain optimization algorithms I was not surprised that their algorithm performs better on expected coverage difference, since it explicitly $\textit{optimizes}$ for this goal, while the baselines target e.g. variance minimization. It would be more interesting to compare to conformal algorithms for shift such as the mentioned [2,3], showing e.g. that those are not able to fully capture the joint shift or are overly conservative, thus compromising on the metric. For example, I was surprised that [1] was not mentioned, since it explicitly targets label shifts. Similarly, while it is nice that the relative coverage difference is minimized or correlates well with D-NTW, this does not tell explicitly how my conformal methods will now perform on the test domains. It would be nice to also obtain an explicit assessment of the coverage $(1-\alpha)$ on test domains, and the obtained prediction set sizes. Even if target coverage is not satisfied, it would already be a contribution to show that the proposed algorithm achieves better robustness by being closer to target coverage, or smaller set sizes at the same level.
Minor: multiple typos e.g. L75, Fig I caption, L89, Eq. 2
References
- [1] Podkopaev, Aleksandr, and Aaditya Ramdas. "Distribution-free uncertainty quantification for classification under label shift." Uncertainty in artificial intelligence. PMLR, 2021.
- [2] Cauchois, Maxime, et al. "Robust validation: Confident predictions even when distributions shift." Journal of the American Statistical Association (2024): 1-66.
- [3] Zou, Xin, and Weiwei Liu. "Coverage-Guaranteed Prediction Sets for Out-of-Distribution Data." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 15. 2024.
Technical Quality: 2
Clarity: 3
Questions for Authors: - Could you comment on some of the concerns raised above, e.g. on the practicality of requiring multiple density estimations, simply computing optimal quantiles directly on each test domain, or evaluating the methods w.r.t. test coverage and conformal set size.
- What methods are used to estimate the test domain distributions $Q_{XY}^{(e)}$?
- Have you considered higher-dimensional experimental settings and/or using models that are more complex than an MLP? If not, could you explain if this is related to the limitations of your method in terms of higher dimensions?
- In the experiments, all the generated test domain shifts are created manually. Have you considered datasets with unknown test shifts, e.g. the Yearbook dataset?
- In sec 5.2. the correlation is assessed via Pearson coefficient. Could you provide a reasoning on why you believe a linear relationship exists, and/or if you have considered more robust measures such as rank correlation? Given that the estimated score distributions are empirical CDFs, this seems perhaps more intuitive.
- Could you comment on the observed discrepancies in some of the results in Table 1? For example, For US-States and MLP, all distance metrics except D-NTW show negative correlation.
- Could you comment on the limitations of the made assumption in Eq. 17 that the calibration domain $P_{XY}$ is considered a linear mixture of the "unknown" test domain distributions? This seems like a restriction that is not explicitly mentioned.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The requirements of the algorithm and density estimations are mentioned, but the limitations of the approach are not explicitly discussed in terms of imposed assumptions on the problem setting. A more through discussion of the practical limitations would be helpful (some of which are inherited e.g. from (Tibshirani et al, 2019) simply by their use of likelihood ratio weights). A small subsection in sec 6 mentions difficulties of optimization algorithms.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your questions.
>My main concerns are w.r.t. practicality and evaluation. A fundamental requirement of the proposed algorithm and D-NTW is the availability of labelled samples from every test domain $Q _{XY}^{(e)}$ in order to obtain the conformal test score distributions, from which then both the likelihood ratios for covariate shifts and D-NTW can be explicitly computed. Beyond existing concerns about the practicality of estimating a likelihood ratio, we now require actually explicitly estimating every test domain distribution, which is prone to much error.
We introduce how we applied kernel density estimation (KDE) to estimate the likelihood ratio in the author rebuttal. Indeed, the process of KDE is prone to error with a limited sample size from each test domain, especially for high-dimension regression. The experiment of the Airfoil dataset has the smallest sample size, 160, of one test domain, and the highest regression dimension, 5. Figure 2 (a) shows the experiment result of that dataset, and the performance is acceptable. Nevertheless, it is still challenging for KDE with higher dimensions and fewer samples.
> Regardless, if we can now explicitly obtain $\hat{F}_Q^{(e)}$ for every test domain, I am wondering why the direct solution is not to just compute a conformal quantile q(e) on the basis of this for every test domain and thus optimally adapt to the existing joint shifts per domain. Perhaps the loss of the disentanglement of shift contributions is the motivation?
The reason why we do not optimize based on joint distribution shift but on concept shift is because covariate shift can be addressed by importance weighting, even if estimation error can be introduced by KDE. Intuitively, since a model has limited fitting ability, we want it to focus on the issue (concept shift) that cannot be addressed by importance weighting to avoid wasting its representativeness.
We do not compute a quantile for each $\hat{F}_Q^{(e)}$ and adapt to them, because a quantile value is $(1-\alpha)$-specific, but we want the trained model to be comprehensive, so test coverage approaches $1-\alpha$ no matter how $\alpha$ changes.
>In general, I find this requirement to require knowing or estimating the test domain distributions on the basis of labelled test domain data and thus the use of extensive density estimation quite limiting, especially if we consider high-dimensional settings.
For your concerns about high-dimensional settings, higher dimensions will make the KDE less reliable. However, we do not only consider 1-d regression tasks. Line 60 may cause some confusion. In fact, $\mathcal{X}\subset\mathbb{R}^d$. For the airfoil dataset, d=5 in line 463. For three traffic datasets, d=4. For epidemic datasets, d=2.
> It would be more interesting to compare to conformal algorithms for shift such as the mentioned [2,3], showing e.g. that those are not able to fully capture the joint shift or are overly conservative, thus compromising on the metric.
To illustrate the benefits of mRCP, we provide experimental results about coverage and prediction intervals in comparison with the WC method [R1] in the author rebuttal.
>Have you considered higher-dimensional experimental settings and/or using models that are more complex than an MLP?
High-dimension regression tasks are included in experiments. As we mentioned, for the airfoil dataset, d=5. For three traffic datasets, d=4. For epidemic datasets, d=2. Applying mRCP to models with higher fitting ability may obtain better performance as they can have a better trade-off between residual loss and distribution discrepancy loss.
>In the experiments, all the generated test domain shifts are created manually. Have you considered datasets with unknown test shifts?
Except for the airfoil dataset, all three traffic datasets and three epidemic datasets have natural joint distribution shifts without any manual modification. Their joint shifts are unknown, and we can only estimate their covariate shift by KDE as mentioned above.
>In sec 5.2. the correlation is assessed via the Pearson coefficient. Could you provide a reasoning on why you believe a linear relationship exists, and/or if you have considered more robust measures such as rank correlation? Given that the estimated score distributions are empirical CDFs, this seems perhaps more intuitive.
We actually only expect a monotonic positive correlation because $\mathbb{E} _{\alpha}[D^{(e)}]$
is the average of discrete $D^{(e)}$ values for $\alpha = 0.1,0.2,...,0.9$ and $d _{NTW}^{(e)}$ is the distributional distance. However, experimental results also show a strong positive linear correlation. That is because both $d _{NTW}^{(e)}$ and $\mathbb{E} _{\alpha}[D^{(e)}]$ approximate Eq. (10) well, so as Eq. (10) increases, $d _{NTW}^{(e)}$ and $\mathbb{E} _{\alpha}[D^{(e)}]$ increases proportionally.
We present the result of **Spearman's rank correlation coefficient** in the author rebuttal.
> For US-States and MLP, all distance metrics except D-NTW show negative correlation.
Please take a look at Figure 3. US-States only have four test domains; thus, the coefficient can be negative if the four points do not show a positive correlation.
>Could you comment on the limitations of the made assumption in Eq. 17 that the calibration domain $P_{XY}$ is considered a linear mixture of the "unknown" test domain distributions? This seems like a restriction that is not explicitly mentioned.
mRCP is also applicable when $P_{XY}$ is not a linear mixture of the test domains. The role of $P_{XY}$ is to provide a ‘target’ conformal score distribution for all test score distributions so that they can approach. It is worth in study the performance of mRCP if $P_{XY}$ is a more complex combination of $Q_{XY}^{(e)}$
**Reference**
[R1] Cauchois M, Gupta S, Ali A, et al. Robust validation: Confident predictions even when distributions shift[J]. Journal of the American Statistical Association, 2024: 1-66.
---
Rebuttal Comment 1.1:
Comment: Thank you for your comments and clarifications.
I believe conformal comparisons such as the additional experiments against Cauchois et al. are a stronger approach to validating the performance of your method rather than optimization baselines, and should be actively pursued. Similarly, it is good to see that the rank correlation relationships also hold, showing promise for some of your design choices such as the D-NTW distance.
That being said, after having read some of the other rebuttals and acknowledging raised concerns with the theoretical positioning and motivation of the work (e.g., its empirical vs. theoretical validity), I prefer to keep my current score at this moment. | Summary: This paper studies the coverage difference caused by covariate and concept shifts. Authors introduce the Normalized Truncated Wasserstein distance (NTW) as a metric for capturing coverage difference expectation under concept shift by comparing the test and weighted calibration conformal score CDFs. They also develop an end-to-end algorithm called Multi-domain Robust Conformal Prediction (mRCP) to incorporate NTW during training, allowing coverage to approach confidence in all test domains.
Strengths: 1. introduced NTW as a metric to capture the coverage gap
2. high correlation between NTW and coverage difference expectation; mRCP can balance residual size and coverage gap
Weaknesses: 1. section 3.1 and 3.2 introduce some important definitions: while authors provide some explanation, the theoretical understanding of them are very limited
2. simulation: authors mention the mRCP can achieve a balance between coverage gap and size of residual, further simulations need to be carried out ( I would be interested to see a plot including the avg. coverage vs avg. residual size
Technical Quality: 3
Clarity: 3
Questions for Authors: 3.1 the coverage difference works with the empirical distribution function, would it be possible to study the coverage gap in the form of P(...) - P(...)
3.1 in the paper, a likelihood ratio dQ/dP is assumed to be known, how would you estimate it in practice? The definition of $q^{*}$ in the paper seems to be inconsistent with the conformal prediction literature. The factor (n+1)/n, I believe is typically added in the i.i.d case while for covariate shift, the form is not like this if you take a look at the paper "conformal prediction under covariate shift".
4 In practice, how would you select the tuning parameter beta? Better theoretical guarantees for section 4 need to be developed.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Authors mentioned the limitation in the discussion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your questions.
>section 3.1 and 3.2 introduce some important definitions: while authors provide some explanation, the theoretical understanding of them are very limited.
Section 3.1 and 3.2 introduce the process of quantifying the empirical coverage gap caused by concept shift, and connecting it to a distributional discrepancy metric.
**The theoretical analysis of bounding the coverage gap by population distributions with a probability related to the number of calibration and test samples is discussed in Appendix C**.
>simulation: authors mention the mRCP can achieve a balance between coverage gap and size of residual, further simulations need to be carried out ( I would be interested to see a plot including the avg. coverage vs avg. residual size
To further illustrate the **coverages obtained by mRCP**, we provide experimental results about coverage and prediction intervals in comparison with the WC method [R1] in the author rebuttal.
> the coverage difference works with the empirical distribution function, would it be possible to study the coverage gap in the form of P(...) - P(...)
We developed the upper bound of the empirical coverage gap in **population form** in Appendix C.
>in the paper, a likelihood ratio dQ/dP is assumed to be known, how would you estimate it in practice?
**The likelihood ratio is not assumed known** and it is estimated by kernel density estimation. Please read the author rebuttal for details.
>The factor (n+1)/n, I believe is typically added in the i.i.d case while for covariate shift, the form is not like this if you take a look at the paper "conformal prediction under covariate shift".
Using the factor $(n+1)/n$, the quantile under exchangeability assumption is given by
$$
q=\text{Quantile}\left(\frac{\lceil(1-\alpha)(n+1)\rceil}{n},\frac{1}{n}\sum_{v_i\in V_c}\delta _{v_i}\right).
$$
In Lemma 1 of [R2], the quantile is calculated by
$$
q=\text{Quantile}\left(1-\alpha,\frac{1}{n+1}\sum_{v_i\in V_c}\delta _{v_i}+\delta _{\infty}\right).
$$
Both forms produce the same quantile value under the i.i.d. case.
[R2] calculates the importance-weighted quantile based on the latter one, whereas we present Eq. (6) based on the former one. However, there is a minor difference when applying these two forms to importance weighting, regarding how $p _i^w$ and $p _{n+1}^w$ are calculated in Eq. (7) of [R2].
In Eq. (7) of [R2], the $p _i^w$ and $p _{n+1}^w$ are functions of the test input $x$, as $\delta _{\infty}$ is included its Lemma 1. However, in our work, the weights of conformal scores are not functions of $x$.
In Section 4.5 of [R3], another form is provided to get rid of the factor $(n+1)/n$ and $\delta _{\infty}$. We think that is a good example to represent the importance-weighted quantile and would like to take it in the later version.
>In practice, how would you select the tuning parameter beta?
$\beta$ is the hyperparameter for our proposed method in Eq (19). Therefore, we need to try different $\beta$ values to draw a Pareto front of the prediction residuals (horizontal axis) and coverage gap (vertical axis) in Figure 2. For each $\beta$ value, we train a model and obtain an optimal Pareto solution of (residual, coverage gap) and finally, we get curves in Figure 2. The baseline V-REx is also tuned by a hyperparameter, so we tried different $\beta$ for V-REx as well in Figure 2. We show our selected $\beta$ values in Table 3. The basic goal of selecting $\beta$ values is to make the Pareto solutions more uniformly distributed and dense enough to obtain reliable Pareto curves.
>Better theoretical guarantees for section 4 need to be developed.
When a theoretical coverage guarantee under distribution shift is obtained, it is very hard to keep prediction sets small at the same time. For instance, the worst-case (WC) method [R1] holds the guarantee under distribution shift at the cost of inefficient large prediction sets.
The purpose of Section 4 is to get **a good trade-off between the coverage and prediction interval size**. This good trade-off can be obtained because of two reasons. First, we do not follow conservative methods as mentioned above. Secondly, prediction interval size is highly related to the magnitude of conformal scores which is absolute residual for regression tasks, and Eq. (19) actually balances the residual loss and the NTW loss, which represents the gap to $1-\alpha$.
In Figure 5 of the author rebuttal attachment, even if mRCP coverage is not at least $1-\alpha$, it approaches $1-\alpha$ very close.
**Reference**
[R1] Cauchois M, Gupta S, Ali A, et al. Robust validation: Confident predictions even when distributions shift[J]. Journal of the American Statistical Association, 2024: 1-66.
[R2] Tibshirani R J, Foygel Barber R, Candes E, et al. Conformal prediction under covariate shift[J]. Advances in neural information processing systems, 2019, 32.
[R3] Angelopoulos A N, Bates S. Conformal prediction: A gentle introduction[J]. Foundations and Trends® in Machine Learning, 2023, 16(4): 494-591. | Summary: The paper "Robust Conformal Prediction under Joint Distribution Shift" investigates the problem of predictive inference under the setting where we have both covariate shift and concept shift. The authors propose a conformal prediction-based procedure that accounts for such distribution shifts and illustrate the performance through experiments.
Strengths: This work provides extensive and thorough experimental results.
Weaknesses: The paper doesn't read very well as some notations appear without definitions and relevant assumptions are not written clearly. For example:
- It is assumed that the likelihood ratio $dQ/dP$ is known, but this was not clearly stated in the problem setting.
- Some assumptions are not written in advance but are rather introduced when they are needed.
It would be better if the authors could provide sufficient intuition and motivation for their methodology.
Technical Quality: 1
Clarity: 2
Questions for Authors: 1. First of all, as a theoretical statistician who is not very familiar with the standards in the machine learning community, I wonder what the motivation of this work is. Specifically, the main advantage of the conformal prediction framework is that it theoretically provides an exact finite sample guarantee without distributional assumptions. In this work, the proposed methods have no theoretical guarantee, and the focus is more on empirical illustrations. While I understand that machine learning emphasizes practical applications and performance on real data, why should one apply conformal prediction here if we cannot exploit the advantage of the conformal inference framework? In other words, If we are not aiming for a distribution-free guarantee (or any other theoretical guarantee), aren't there methods that work better for specific datasets?
2. Evaluating the difference under distribution shift using (5) doesn't seem very reasonable to me. The prediction set is constructed using the calibration set, and the authors estimate the 'CDF of the score under no distribution shift' by $\hat{F}_P$, which is also a function of the calibration set (though estimating the 'CDF under shift' with $\hat{F}_Q$ sounds reasonable).
3. In fact, $\hat{F}_P(q)$
seems to be just $1-\alpha$ (more precisely, $\lceil(n+1)(1-\alpha)\rceil / n$)? According to the definition, $q$ is the quantile of the distribution $\frac{1}{n}\sum_{i=1}^n \delta_{v_i}$, while $\hat{F}_P$
is the CDF of the same distribution. So it seems to me that $\hat{F}_P(q) = \lceil(n+1)(1-\alpha)\rceil / n$ holds exactly.
Then it is quite strange why they replace this known value with $\hat{F}_{Q/P}(q^*)$, generating an additional error.
4. Additionally, I'm a bit confused about equation (13), where again the exact equality seems to hold. The two empirical CDFs $\hat{F}_Q$
and $\hat{F}_{Q/P}$ are step functions where the jumps occur at the same values $v_1, \cdots, v_n$, so it seems to me that exact equality holds. In that sense, the remaining discussion in section 3.2 sounds a bit odd. For example, the empirical CDFs have values exactly 1 for $v \geq \max v_i$. What does the 'long tail' mean in this case?
5. Could the authors clarify the motivation behind the need for 'multi-domain' inference? In the beginning of Section 4, it just says, "The domain $P_{XY}$ can be decomposed into $M$ multiple domains," so it's a bit unclear what settings are under consideration. For example, are they considering a setting where there is a natural partition of the domain, and we aim for a good quality of inference conditional on each element of the partition? Or are they considering a setting where we artificially construct a partition (if so, what is the reason for that?), etc.? This might be provided in previous literature, but it would be better if relevant discussions are also provided in this work.
6. The notation in equation (17) is hard to understand. The notation $P_{XY}$ was originally used to denote the distribution of $(X,Y)$, but the authors use it to denote the domain (in the line above equation (17)), and in equation (17) it is written as if the distribution is a mixture distribution of $M$ distributions $(Q_{XY}^{(e)})$ with mixture probability $(\frac{1}{M}, \cdots, \frac{1}{M})$, rather than the domain being partitioned into $M$ bins. Does $Q_{XY}^{(e)}$ denote the conditional distribution of $(X,Y)$ under $P_{XY}$, given $(X,Y) \in e$? If that's the case, how do we know that the probability $P((X,Y) \in e)$ is $\frac{1}{M}$ for every $e$ in the partition?
7. I wish I could come up with more constructive questions and suggestions, but it was a bit hard for me to do so unless the above points (which are related to the technical foundation of the main results) are resolved. I understand that some of the points above might be due to my misunderstanding or confusion, and I'd be happy to have a further discussion or update the scores once I receive a reply from the authors.
* Minor comments:
In equation (10), the notation `E' doesn't seem appropriate as it's a sample mean rather than the population mean.
I believe It is more standard to denote $v_i$ as 'nonconformity score' rather than 'conformal score.'
* Typos:
above equation (3): 'coverage guarantee'
Equation (4): $v$ -> $v_i$
Equation (11): $dx$ -> $dv$
Confidence: 4
Soundness: 1
Presentation: 2
Contribution: 2
Limitations: I don't think this work has negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your questions.
>It is assumed that the likelihood ratio dQ/dP is known...
**The likelihood ratio is not assumed known and it is estimated by kernel density estimation (KDE).** Please read the author rebuttal for details.
> In this work, the proposed methods have no theoretical guarantee, and the focus is more on empirical illustrations. ... Why should one apply conformal prediction here if we cannot exploit the advantage of the conformal inference framework?
Standard CP can provide coverage guarantee under exchangeability assumption for distribution-free cases. However, **prior knowledge** about the extent of the shift is necessary to maintain the coverage guarantee when a distribution shift happens. For instance, [R1] holds the coverage guarantee by constraining the test distribution within an $f$-divergence ball centered at a calibration distribution. Similarly, [R3] also develops the guarantee based on the knowledge of how distributions are contaminated.
Furthermore, even if the theoretical coverage guarantee based on prior knowledge of distribution shift is obtained, it is **hard to keep prediction sets small** at the same time. For instance, the worst-case (WC) method [R1] holds the guarantee under distribution shift at the cost of inefficient large prediction sets.
Experiments in the author rebuttal prove that our method can get **a better trade-off between the coverage and prediction interval size.**
>Evaluating the difference under distribution shift using (5) doesn't seem very reasonable ...
Eq. (5) characterizes the coverage difference under distribution shift. We minimize this coverage gap by Wasserstein distance regularization in Eq. (19) so we can use the quantile of calibration conformal scores to construct prediction sets.
Specifically, we first use importance weighting [R2] with KDE to address covariate shift and approximate a weighted version of calibration conformal scores. Secondly, Wasserstein distance is reduced between test and the weighted calibration conformal score distributions to reduce the coverage gap caused by concept shift, so the coverage on test data approaches $1 –\alpha$.
> it is quite strange why they replace this known value with $\hat{F}_{Q/P}(q^∗)$, generating an additional error.
Appendix B justifies the error between $\hat{F} _{Q/P} (q^*)$ and $\hat{F} _P (q)$ can be controlled by the increasing size of calibration data $n$. Secondly, $\hat{F} _{Q/P} (q^*)$ in Eq. (9) makes $D _{concept}$ a function of only one variable $q^*$, thus facilitating the introduction of NTW in the following sections. Please check Figure 1 (b) for the relationship between $\hat{F} _P (q)$ and $\hat{F} _{Q/P} (q^*)$ with given $\lceil{(1-\alpha)(n+1)} \rceil/n$.
> I'm a bit confused about equation (13), where again the exact equality seems to hold. The two empirical CDFs $\hat{F} _Q$ and $\hat{F} _{Q/P}$ are step functions where the jumps occur at the same values $v_1$,⋯,$v_n$, so it seems to me that exact equality holds.
Eq. (13) does not hold exact equality. Because $\hat{F} _Q$ is the test conformal score CDF, it jumps over the test conformal scores $V_t$ as defined in Eq.(4), not calibration conformal scores $V_c$.
>In that sense, the remaining discussion in section 3.2 sounds a bit odd. For example, the empirical CDFs have values exactly 1 for $v≥\max v_i$. What does the 'long tail' mean in this case?
The purpose of truncation in Section 3.2 is to estimate Eq. (10) by modifying the Wasserstein distance in Eq. (13). In Eq. (10), each pair of $|\hat{F} _Q(v _i)-\hat{F} _{Q/P}(v _i)|$ has the same weight. In Eq. (13), the weight of $|\hat{F} _Q(v _i)-\hat{F} _{Q/P}(v _i)|$ is $v _{i+1}-v _i$, so we want $v _{i+1}-v _i$ is similar for different $i$ as well to approximate Eq. (10). However, the slop of conformal score CDFs tend to be flatter when the CDFs converge close to 1 but not reaching 1, which means $v _{i+1}-v _i$ there are quite large, so we need to truncate that part (long tail) in Eq. (15) to estimate Eq. (10). Table 1 justifies the necessity of truncation and normalization.
> Could the authors clarify the motivation behind the need for 'multi-domain' inference?
$P _{XY}$ is a mixture distribution of $Q _{XY}^{(e)}$ for $e\in\mathcal{E}$ as shown in Eq. (17). This can be a natural partition. For instance, $Q _{XY}^{(e)}$ can be the patient data of a hospital $e$, and $P _{XY}$ is the patient distribution of multiple hospitals. $1 – \alpha$ coverage on $P _{XY}$ can not ensure the coverage on $Q _{XY}^{(e)}$, so we expect $1 − \alpha$ coverage on $Q _{XY}^{(e)}$ as well. Or $Q _{XY}^{(e)}$ can be traffic data from a single hour $e$, and $P _{XY}$ is the data collected within a day. Please check Appendix D for the experiment setups and dataset preprocessing details.
> how do we know that the probability P((X,Y)∈e) is 1/M for every e in the partition?
$P _{XY}$ is denoted as calibration domain and distribution at the same time. $Q _{XY}^{(e)}$ denotes the conditional distribution of $(x,y)$ given $(x,y)$ is from domain $e$.
For the weight $1/M$, $P_{XY}$ can be the distribution of daily traffic sensor data, and $Q _{XY}^{(e)}$ is the data of an hour $e$, so the weight is $1/24$. Or taking the hospital example, we can fairly give $Q _{XY}^{(e)}$ of each hospital $e$ the same weight if their patient amounts are similar.
**Reference**
[R1] Cauchois M, Gupta S, Ali A, et al. Robust validation: Confident predictions even when distributions shift[J]. Journal of the American Statistical Association, 2024: 1-66.
[R2] Tibshirani R J, Foygel Barber R, Candes E, et al. Conformal prediction under covariate shift[J]. Advances in neural information processing systems, 2019, 32.
[R3] Sesia M, Wang Y X, Tong X. Adaptive conformal classification with noisy labels[J]. arXiv preprint arXiv:2309.05092, 2023.
---
Rebuttal 2:
Title: Follow-up discussion
Comment: Thank you for the authors' responses. However, I feel that many of the answers do not fully address the questions I raised. Below, I have included further questions and clarifications.
1. "The likelihood ratio is not assumed known and it is estimated by kernel density estimation (KDE)"
This is exactly what I meant. In practice, the likelihood ratio must be estimated, even though the theory is often written as if it is known. For instance, the weighted conformal prediction method by Tibshirani et al. provides a comprehensive theoretical framework for the case where the likelihood ratio is known (and clearly states it), and then also mentions the need for estimation in practical scenarios. A similar level of clarity would be beneficial in this work (though I'm not very sure how much it will make sense in this work where there's no concrete theory even for the known-likelihood ratio setting).
2. "However, prior knowledge about the extent of the shift is necessary to maintain the coverage guarantee when a distribution shift happens..."
I understand that distribution-free guarantees are not possible in the distribution-shift setting. My question is about the benefits of applying the conformal prediction framework in this context. Again as an example, Tibshirani et al. introduce weighted conformal prediction with a theoretical guarantee for the known likelihood ratio case, and Lei and Candes further develops this by showing that using an estimated likelihood ratio provides a coverage lower bound of $1-\alpha$-(TV distance based on the accuracy of estimation). This clearly illustrates the advantage of the conformal-type approach, as it requires only a good estimate of the likelihood ratio and offers 'distribution-free' inference with respect to the conditional distribution of $Y$ given $X$.
In this work, the theoretical components are largely intuitive explanations rather than formal mathematical results. What I'm curious is why one should try to provide such intuition for the conformal-type approach. If some kind of distribution-free guarantee is not the goal, couldn't similar intuition be applied to other approaches that might perform better in practice?
3. Regarding equation (5):
So it is true that $\hat{F}_P(q) = \lceil (1-\alpha)(n+1) \rceil/n$ holds exactly?
I was asking whether it is necessary to replace this known value with one that introduces error, rather than the magnitude of the error. $D_\text{concept}$ is still a function of only $q^*$ if we simply plug in $\hat{F}_P(q) = \lceil (1-\alpha)(n+1) \rceil/n$, so the authors' response is not very convincing to me.
---Actually, upon closer inspection, it seems like the replacing term $\hat{F}_{Q/P}(q^*)$ is also exactly $\lceil (1-\alpha)(n+1) \rceil/n$? So it now seems to me that there is actually no replacement or error?
4. Thank you for the clarification regarding the difference between the supports of the two CDFs. As a quick follow-up, is this work more focused on large sample settings rather than small/finite sample settings? (since there are statements of the form "approximately holds when $n$ is large.") If so, could the authors further address point 1 above, concerning the need for a conformal approach whose main advantage is exact finite sample guarantees?
5. Regarding the mixture distribution representation, I'm not seeking examples. I am wondering, e.g., whether it is an assumption (that all mixture probabilities are equal, which seems quite strong), or something controllable through experimental design, etc. In the paper, it says the ``domain" is partitioned into $M$ bins, and then the mixture distribution representation suddenly appears. I'm just asking for some clarities here. The examples that the authors mentioned seem to be natural partitions rather than an experimental design, but then it doesn't seem very reasonable to just assume that all the mixture probabilities are equal.
---
Rebuttal Comment 2.1:
Comment: Thank you for your comments.
>This is exactly what I meant. In practice, the likelihood ratio must be estimated, even though the theory is often written as if it is known. For instance, the weighted conformal prediction method by Tibshirani et al. provides a comprehensive theoretical framework for the case where the likelihood ratio is known (and clearly states it), and then also mentions the need for estimation in practical scenarios. A similar level of clarity would be beneficial in this work (though I'm not very sure how much it will make sense in this work where there's no concrete theory even for the known-likelihood ratio setting).
Indeed, kernel density estimation (KDE) errors can be propagated to the CP coverage gap, and a theoretical analysis of the error propagation is beneficial. We would like to develop it in a later version.
>In this work, the theoretical components are largely intuitive explanations rather than formal mathematical results. What I'm curious is why one should try to provide such intuition for the conformal-type approach. If some kind of distribution-free guarantee is not the goal, couldn't similar intuition be applied to other approaches that might perform better in practice?
In Appendix C, we develop an upper bound guarantee for the empirical coverage gap, leveraging the fact that the coverage guarantee holds under the assumption of exchangeability, just like Section 1.2 of [R1].
Specifically, Wasserstein distance is applied to quantify the extent of violating exchangeability and bound the empirical coverage gap with a probability related to the number of calibration and test samples. This theoretical upper bound takes advantage of CP, while other uncertainty quantification methods, such as Bayesian neural networks, fail to do so.
>Regarding equation (5)...
The presentation in Section 3.1 should be improved and the logic should be as follows.
First, let's consider the population case, thus $D_{joint}=F_Q(q)-F_P(q)$ by rewriting Eq. (5) in a population form.
Similarly, $D_{covariate}$ is the reduced coverage gap after applying $q^*$ to $F_Q$.
$$
D_{covariate} = F_Q(q)-F_Q(q^*)
$$
Also, after importance weighting, calibration conformal scores are weighted, so the coverage on it is $F _{Q/P}(q^*)$. The remaining gap is caused by concept shift.
$$
D _{concept} = F _{Q}(q^*)-F _{Q/P}(q^*)
$$
Because $F _{Q/P}(q^*)=F _{P}(q)=1-\alpha$ in population case, $D _{joint}=D _{covariate} +D _{concept} $ holds by
$$
D _{covariate} + D _{concept} = F _Q(q)-F _Q(q^*) + F _{Q}(q^*)-F _{Q/P}(q^*) = F_Q(q) - F _{Q/P}(q^*) =F_Q(q) - F _{P}(q)=D _{joint}.
$$
Now let's consider the empirical case. As you suggested, $\hat{F}_P(q)=\lceil(1-\alpha)(n+1)\rceil/n$. This is because the weight of each $v _i \in V _c$ is $1/n$, so there will be a conformal score just at the position $k$ satisfying
$$
\sum_{i=1}^k \frac{1}{n}=\frac{\lceil(1-\alpha)(n+1)\rceil}{n}.
$$
However, for $\hat{F}_{Q/P}(q^*)$, the weight of each $v _i \in V _c$ is $p_i$ after importance weighting, and we can not ensure that there is a position $k$ satisfying
$$
\sum_{i=1}^k p_i=\frac{\lceil(1-\alpha)(n+1)\rceil}{n}.
$$
As a result, $\hat{F} _{Q/P}(q^*) \geq \lceil(1-\alpha)(n+1)\rceil/n$ and the equality does not hold between $\hat{F} _{Q/P}(q^*)$ and $\hat{F}_P(q)$, so we need to bound the error in Appendix B due to discretization of CDFs.
>Is this work more focused on large sample settings rather than small/finite sample settings?
All works based on KDE rely on data accessibility as more samples will make KDE more accurate, but the work is not heavily dependent on large sample settings. For example, compared with other datasets, the airfoil dataset has the smallest sample size for each domain, 160, and the highest feature dimension, 5. The performance of KDE and mRCP on it is acceptable in Figure 2 (a). Theoretical analysis of the estimation error is helpful.
>whether it is an assumption (that all mixture probabilities are equal, which seems quite strong), or something controllable through experimental design, etc.
This is not a requirement of the proposed method and it is just an experiment design, which means $P_{XY}$ does have to be divided equally.
[R1] Barber R F, Candes E J, Ramdas A, et al. Conformal prediction beyond exchangeability[J]. The Annals of Statistics, 2023, 51(2): 816-845.
---
Rebuttal 3:
Comment: I appreciate the authors' detailed responses.
I think I have now received answers regarding points 1, 2, and 4 in the follow-up discussion, and I hope there will be improvements and clarifications in the final version of the paper.
I'd like to add some follow-up questions regarding points 3 and 5.
3. I see that for $\hat{F}_{Q/P}$, the exact equality does not hold---I think what I was trying to say is that it is still a known value that is close to $1−α$ (thanks for pointing this out). So, there is an error, and it now returns to the original question.
In my original comment, what I meant was that viewing the empirical version of $F_P(q)$ as $\hat{F}_P(q)$
doesn't sound reasonable, since $\hat{F}_P$
and $q$
are derived from the same datasets (which involves double-dipping). It also applies to $\hat{F}_{Q/P}(q^*)$.
The main question that remains unanswered is whether, if we proceed with this empirical version, it is necessary to replace this known value with something that includes an error. Are there any challenges in applying a similar approach, if we just plug in the exact value?
5. I understand that it might not make much sense to ask if this statement is some kind of assumption, as it is not as though a theorem is based on it. However, I think the following clarifications would be helpful:
- Is the idea of the proposed method indeed based on model (17), which suggests that the sampling distribution is a mixture distribution with equal mixture probabilities?
- If it indeed requires equal mixture probabilities, then 'partitioning the domain into $M$ bins' is not an accurate statement. Is this what the authors agree?
- My concern is that the dataset examples still seem more fitting to a setting where the domain is partitioned according to some natural rule, rather than by experimental design. So I believe some justification is needed here.
---
Rebuttal Comment 3.1:
Comment: Thank you for your questions. We hope the explanation below is helpful.
>In my original comment, what I meant was that viewing the empirical version of $F_P(q)$ as $\hat{F}_P(q)$ doesn't sound reasonable, since ...
We think your point is: since $q$ is actually calculated from $\hat{F} _P$ according to Eq.(1), it looks strange to apply $q$ backward to $\hat{F} _P$ and calculate the coverage on calibration data as $\hat{F} _P(q)$.
Indeed the coverage on calibration data can be written as $\lceil(1+\alpha)(n+1)\rceil / n$, so Eq.(8) can be written as
$$
\hat{F}_Q(q^*)-\lceil(1+\alpha)(n+1)\rceil / n.
$$
So why don't we optimize based on this form?
Based on the definition of $D _{concept}$, coverage on importance-weighted calibration data is $\hat{F} _{Q/P}(q^*)$ instead of $\lceil(1+\alpha)(n+1)\rceil /n$, so $D _{concept} =\hat{F} _{Q}(q^*)-\hat{F} _{Q/P}(q^*)$ .
In other words, the position of the discretization error, $\epsilon$, should be
$$
D_{joint}=D_{covariate}+D_{concept}+ \epsilon
$$
If we optimize $D _{concept} =\hat{F} _{Q}(q^*)-\lceil(1+\alpha)(n+1)\rceil / n$, we are actually blaming concept shift for the introduction of $ \epsilon$. However, $\epsilon$ is caused by discretization instead of concept shift.
Potential risks of differentiability may be raised if optimizing $\hat{F} _{Q}(q^*)-\lceil(1+\alpha)(n+1)\rceil / n$ as well.
>I understand that it might not make much sense to ask if this statement is some kind of assumption, as it is not as though a theorem is based on it. However, I think the following clarifications would be helpful:
>Is the idea of the proposed method indeed based on model (17), which suggests that the sampling distribution is a mixture distribution with equal mixture probabilities?
The idea of the proposed method is not based or limited on Eq.(17), but we think coverage on subdomains is a practical application scenario of the proposed method. Eq.(17) represents an equally weighted special case.
>If it indeed requires equal mixture probabilities, then 'partitioning the domain into M bins' is not an accurate statement. Is this what the authors agree?
Also, even for equally weighted mixture probabilities, we do not find statements like 'partitioning the domain into $M$ bins' in our work and this statement is not accurate. $M$ bins sound like the subdomains have disjoint feature spaces, but this is not the case for Eq. (17). Line 155 may mislead to 'partitioning into bins' and should be improved.
>My concern is that the dataset examples still seem more fitting to a setting where the domain is partitioned according to some natural rule, rather than by experimental design. So I believe some justification is needed here.
Indeed, Eq.(17) can be improved to a more general form, like $P_{XY}$ is a convex combination of $Q_{XY}^{(e)}$ for $e\in\mathcal{E}$, without limiting the weight of each $Q_{XY}^{(e)}$ as $1/M$. Then we should clarify the weights when we mention experiments on specific datasets. This will make the method more generally applicable. | Summary: The authors propose a method to train a predictive model (a regressor in their experiments) that minimizes an objective comprising the average performance loss across multiple domains (environments) along with a penalty term for the normalized truncated Wasserstein (NTW) distance between the non-conformity score CDFs of each environment and the importance-weighted one used to address covariate shift. Their experimental results demonstrate that the proposed NTW distance objective is correlated with coverage differences due to concept shift and can achieve different tradeoffs with the average prediction residuals, thereby reducing this gap.
Strengths: Overall, the paper is easy to follow and the reasoning behind the proposed formulation is compelling. The problem that the authors address is relevant. The experimental results show that the proposed NTW distance is capturing the coverage difference due to concept shift.
Weaknesses: While the motivation behind the regularization is compelling, it is not entirely clear, both empirically and theoretically, what specific benefits it offers over other state-of-the-art approaches that address differences in the non-conformity score distributions. To highlight the advantages of the proposed NTW regularization over simply minimizing the prediction residuals and then applying various post-hoc conformal prediction techniques, I suggest that the authors demonstrate the validity and efficiency of the prediction intervals obtained for different alphas (error levels) on test data. Additionally, they should provide empirical comparisons or some fundamental discussion/theoretical results in relation to the following approaches:
* Split conformal prediction and group conditional split conformal prediction (where each environment is treated as a separate group). The latter can include standard SCP conditioned on each group or an approach such as the one in section 4.1 of [Barber et al. 2020, "The Limits of Distribution-Free Conditional Predictive Inference"] or BatchGCP/BatchMVP in [Jung et al. 2022, "Batch Multivalid Conformal Prediction"].
* Performance of the covariate shift split conformal prediction, as already discussed in the paper. For example, if this is built on top of a model that minimizes ERM, DRO or V-REx does the proposed approach provide better prediction sets in terms of conditional coverage/validity on the domains.
* An adaptive approach such as the one by [Amoukou and Brunel 2023, Adaptive Conformal Prediction by Reweighing Nonconformity Score].
Providing such comparative results or analysis would significantly strengthen the paper's argument.
I also think the authors should discuss how their work relates to [Barber, Rina Foygel, et al. "Conformal prediction beyond exchangeability"], where it is suggested that the non-conformity scores should be weighted based on the total variation distance between the source and target distributions. This approach could potentially serve as another baseline to consider, given the distance between the distributions of the non-conformity scores under P and Q(e).
Technical Quality: 2
Clarity: 2
Questions for Authors: In the experiments, how do the authors compute the importance weights for covariate shift?
How do you pick \sigma?
Revise typos such as "Dcoviriate" in Figure 1 caption.
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: yes, they mention some of the limitations of the proposed work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your questions.
> What specific benefits it offers over other state-of-the-art approaches that address differences in the non-conformity score distributions? I suggest that the authors demonstrate the validity and efficiency of the prediction intervals obtained for different alphas on test data.
When a theoretical coverage guarantee under distribution shift is obtained, it is hard to keep prediction sets small at the same time. For instance, as mentioned in lines 41-43 the worst-case (WC) method [R1] holds the guarantee under distribution shift at the cost of inefficient large prediction sets by applying the highest $\lceil{(1-\alpha)(n+1)} \rceil/n$ quantile to all test distributions.
The proposed method can get **a good trade-off between the coverage and prediction interval size** because of two reasons. First, we do not follow conservative methods as mentioned above. By minimizing the distribution discrepancy via NTW, we make the calibration and test distribution score distributions satisfy exchangeability better, so we can use the calibration conformal score quantile (not the largest quantile of multiple test distributions) to generate prediction sets. Secondly, prediction interval size is related to the magnitude of conformal scores which is absolute residual for regression tasks, and Eq. (19) can balance the residual loss and NTW loss.
We demonstrate the efficiency of the proposed method in author rebuttal.
>Split conformal prediction and group conditional split conformal prediction ... The latter can include ... section 4.1 of [Barber et al. 2020, "The Limits of Distribution-Free Conditional Predictive Inference"] or BatchGCP/BatchMVP in [Jung et al. 2022, "Batch Multivalid Conformal Prediction"].
The performance of Split CP (SCP) with a model trained by ERM is highlighted in the author rebuttal.
For the comparison with BatchMVP [R2], first BatchMVP does not decompose distribution shifts between each group into covariate and concept shifts. Secondly, Batch MVP is trained based on a given $\alpha$ value, so the multi-valid coverage guarantee only holds at a specific error level. However, mRCP minimizes the distribution discrepancy between calibration and test conformal scores, so a single trained model can make the coverage on test data approach $1-\alpha$ no matter how $\alpha$ value changes.
For the comparison with Section 4.1 of [R3], in Eq. (10) of [R3], it takes the supremum of quantiles for groups in $\hat{\mathfrak{X}} _{n _1}$ to ensure the coverage for any group $\mathcal{X}\in\hat{\mathfrak{X}} _{n _1}$. As a result, [R3] has the same problem as [R1] and is likely to cause unnecessarily large prediction intervals.
>Performance of the covariate shift split conformal prediction...
The result of importance-weighted SCP [R6] with a model trained by ERM is highlighted in author rebuttal.
Models trained by V-REx can be applied to standard SCP or importance-weighted SCP. The result of standard SCP based on V-REx is shown as the blue curves in Figure 2. The result of importance-weighted SCP based on V-REx will be worse than standard SCP. Because V-REx aligns the unweighted score distributions by minimizing its regularization term, weighting conformal scores afterward will disturb the previous alignment and make score distributions more discrepant, thus enlarging the coverage gap.
The same also happens to the importance-weighted SCP with the model trained by DRO.
>An adaptive approach such as the one by [Amoukou and Brunel 2023, Adaptive Conformal Prediction by Reweighing Nonconformity Score]
[R4] applies a novel conformal score function to make the prediction set more customized to the input $x$. However, it is based on the exchangeability assumption, having a different problem setup from ours. The position of [R4] is in the first row of Table 2.
>the authors should discuss how their work relates to [Barber, Rina Foygel, et al. "Conformal prediction beyond exchangeability"]...
[R5] uses total variation to bound coverage gap. However, total variation measures half the absolute area between two probability density functions, without considering how the probabilities are distributed across the support of the distributions. On the contrary, Wasserstein distance considers how probabilities are dispersed along the support. Therefore, the same total variations can come up with different Wasserstein distances in Figure 1 (c). Wasserstein distance can reflect the overall difference of two conformal score CDFs along the support, but total variation fails to do so.
> How do you pick $\sigma$
The experiment results can be different with different $\sigma$ values. $\sigma$ is selected based on the properties of conformal score distributions. Usually, $\sigma$ should be smaller (truncate more) for more concentrated distributions. We choose $\sigma$=0.8 for MLP and 0.95 for physics-informed models, because MLP has a better fitting ability, making conformal scores more concentrated.
**Reference**
[R1] Cauchois M, Gupta S, Ali A, et al. Robust validation: Confident predictions even when distributions shift[J]. Journal of the American Statistical Association, 2024: 1-66.
[R2] Jung C, Noarov G, Ramalingam R, et al. Batch multivalid conformal prediction[J]. arXiv preprint arXiv:2209.15145, 2022.
[R3] Foygel Barber R, Candes E J, Ramdas A, et al. The limits of distribution-free conditional predictive inference[J]. Information and Inference: A Journal of the IMA, 2021, 10(2): 455-482.
[R4] Amoukou S I, Brunel N J B. Adaptive conformal prediction by reweighting nonconformity score[J]. arXiv preprint arXiv:2303.12695, 2023.
[R5] Barber R F, Candes E J, Ramdas A, et al. Conformal prediction beyond exchangeability[J]. The Annals of Statistics, 2023, 51(2): 816-845.
[R6] Tibshirani R J, Foygel Barber R, Candes E, et al. Conformal prediction under covariate shift[J]. Advances in neural information processing systems, 2019, 32. | Rebuttal 1:
Rebuttal: **Kernel Density Estimation for Likelihood Ratio**
The likelihood ratio is not assumed to be known and is approximated by kernel density estimation (KDE), which can estimate the calibration and test feature distributions. In our experiments, we applied the Gaussian kernel, a positive function of $x\in\mathbb{R}^d$ as follows, where ‖∙‖ is Euclidean distance and $h$ is bandwidth.
$$
K(x,h)=\frac{1}{(\sqrt{2\pi}h)^d} e^{\frac{-‖x‖^2}{2h^2}}
$$
Given this kernel form, the density estimated at a position $x_p$ within a group of points $x_{1:n}$ is given by
$$
ρ_K (x_p )=\sum_{i=1}^n K(x_p-x_i;h).
$$
To find the optimized bandwidth value for each dataset, we applied scikit-learn package [R1] using the grid search method with a bandwidth pool. With the approximated calibration and test feature distribution, we can calculate the likelihood ratio to implement importance weighting.
**Experiment Results of Coverage and Prediction Interval**
As mentioned in lines 41-43, the worst-case (WC) method [R2] only selects the highest $\lceil{(1-\alpha)(n+1)} \rceil/n$ quantile for all test distributions. Even if the theoretical coverage guarantee is ensured, it will cause excessively large prediction sets and overestimated coverage for test distributions with smaller quantiles.
We provide experimental results about coverage and prediction intervals by mRCP in comparison with the worst-case (WC) method. We denote $C^{(e)}$ and $I^{(e)}$ as the coverage and prediction interval length of domain $e$ with a given $1-\alpha$. $\mathbb{E}_e [C^{(e)}]$ and $\mathbb{E}_e [I^{(e)}]$ are expectations of $C^{(e)}$ and $I^{(e)}$ over $e\in \mathcal{E}$.
In **Figure 5** of the attachment, even if mRCP coverage is not guaranteed at least $1-\alpha$, it approaches $1-\alpha$ very close with relatively small standard deviations. However, WC causes excessive coverage. As a result, WC generates larger prediction intervals compared with mRCP in **Figure 6**, causing less prediction efficiency.
$\beta$ value of mRCP is selected as 1 for the airfoil dataset and 10 for the other six datasets. The same MLP architectures are applied to WC and mRCP. Other experiment settings are the same as in Appendix D.
**Spearman’s Rank Coefficient of the Experiment Results in Section 5.2**
We provide the experiment result of Spearman’s rank coefficient in **Table 6** in the attachment as supplementary material to Section 5.2. In Table 6, NTW holds the highest Spearman coefficient on average, indicating a strong positive correlation with Eq. (10).
**Explicitly Highlighted Baselines in Figure 2**
The performance of standard Split CP (SCP) with a model trained by V-REx is shown in Figure 2 as the blue curves. Therefore, the result of standard SCP with empirical risk minimization (ERM) is presented as the most left side of blue curves (V-REx) in Figure 2, where the regularization weight $\beta$ of V-REx is small, and thus V-REx can be regarded as ERM.
For importance-weighted SCP [R3] with a model trained by ERM, its performance is presented as the most left side of the orange curve (mRCP) of Figure 2. Because at the left side of the orange curve (mRCP), the weight $\beta$ of NTW regularization is small enough that mRCP can be regarded as ERM. How wide coverage gap can be reduced by importance weighting depends on the extent of covariate shift between test and calibration distributions
We take **Figure 2 (a)** as an example in the attachment to highlight the results of these two setups. The results for other datasets can be checked in other subplots of Figure 2 in the same way.
**Reference**
[R1] Pedregosa F, Varoquaux G, Gramfort A, et al. Scikit-learn: Machine learning in Python[J]. the Journal of machine Learning research, 2011, 12: 2825-2830.
[R2] Cauchois M, Gupta S, Ali A, et al. Robust validation: Confident predictions even when distributions shift[J]. Journal of the American Statistical Association, 2024: 1-66.
[R3] Tibshirani R J, Foygel Barber R, Candes E, et al. Conformal prediction under covariate shift[J]. Advances in neural information processing systems, 2019, 32.
Pdf: /pdf/0038f2d7bcc6f49a9ba5f30a6c1a9140ba680d86.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper tackles the challenge of obtaining conformal predictions that remain robust under distribution shifts. This is an important issue in many machine learning applications where the underlying data distribution may change across the training (or calibration) and test data sets. The authors propose an algorithm that appears promising based on empirical results. However, significant improvements are needed in terms of writing quality, clarity, citation of relevant literature, mathematical rigor, and explanation of the main ideas. Addressing these substantial weaknesses would make the paper more accessible and impactful for the research community.
Strengths: - The problem of robust conformal predictions under distribution shifts is timely and relevant.
- The paper introduces a concrete algorithm that demonstrates promising performance in some practical scenarios.
Weaknesses: - Writing Quality: The paper is difficult to read and understand, even for experts. Key concepts are not clearly explained, and the text contains numerous awkward or unclear sentences, as well as pervasive grammatical errors and typos. Some sections seem poorly written, possibly by AI, while others could have been significantly improved with better editing.
- Missing References: Important related works, such as "Conformal prediction beyond exchangeability" by Barber et al. (2023), are not discussed, which limits the paper's contextual grounding in existing literature.
- Lack of Statistical/Mathematical Rigor: The mathematical details in the paper are imprecise. Assumptions and approximations are not clearly stated, and there is a frequent confusion between population and sample quantities in key sections.
- Unclear Core Idea: The main idea of the proposed algorithm, particularly how it handles concept shift through covariate shift adjustments (as in Equation 10), is not clearly articulated and remains confusing.
Technical Quality: 1
Clarity: 1
Questions for Authors: - Writing Quality: The current writing is difficult to understand, and professional editing may be an option if needed. Parts of the paper read like they were written by Chat-GPT, but not in a good way, with lots of very ackwards sentences and passages that don't make sense. Other parts could have been improved by Chat-GPT, due to the large number of grammatical errors and typos.
- Clarity of Introduction: The introduction does not clearly explain the problem being addressed or the novelty of the proposed approach. Could you clarify these points to help readers understand the significance of your work?
- Mathematical Precision: The paper seems to systematically confuse population and sample quantities, starting from Equations (7)-(9) and (10). For instance, why is an empirical distribution used in Equation (7) instead of a population distribution, in the definition of a quantity (the coverage gap) which should be a population quantity? Additionally, the "expected value" in Equation (10) does not seem correctly formulated, since the left-hand-side of the equation is a population quantity (hence fixed) but the right-hand-side is a sample quantity (hence random). I don't think this is correct math.
- Assumptions and Justifications: Lines 112-115 introduce an assumption that is neither explained nor motivated, followed by a vague statement about a small error bound. Can you provide a clearer explanation and justification for this assumption? Is this an assumption or an approximation?
- Core Concept Explanation: The explanation of handling concept shift via covariate shift, particularly in Equation (10), is confusing. Could you clarify this core idea, as it is central to your paper's contribution? It seems from Equation (10) that concept shift was reduced to a (much simpler) covariate shift problem. I don't understand how this happened.
- Background on Conformal Prediction: Section 2.1 lacks precision. For example, Equation (2) describes a special case for regression, but the referenced works use different approaches. Can you clarify these points to make the section more accessible to a broad readership? Is this paper focusing on regression or classification? Is it limited to a specific type of conformity scores, such as that in Equation (2), or is it more general?
- Connection to Related Work: Is there a connection between the coverage difference you considered and that studied in "Adaptive conformal classification with noisy labels" by Sesia et al. (2023)? It seems there might be a relationship, even though Sesia et al. consider a specific case of distribution shift. Can you discuss any similarities or differences?
Confidence: 4
Soundness: 1
Presentation: 1
Contribution: 2
Limitations: The paper's main limitations are its poor writing quality and lack of mathematical precision. These issues make it difficult to understand the main ideas and verify the soundness of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your suggestions and questions.
>Missing References: Important related works, such as "Conformal prediction beyond exchangeability"...
[R1] uses total variation to bound coverage gap. However, total variation measures half the absolute area between two probability density functions without considering how the probabilities are distributed across the support of the distributions. On the contrary, Wasserstein distance considers how probabilities are dispersed along the support. Therefore, the same total variations can come up with different Wasserstein distances in Figure 1 (c). As a result, Wasserstein distance can reflect the overall difference of two conformal score CDFs along the support, which is related to the coverage gap, but total variation fails to do so.
>Why is an empirical distribution used in Equation (7) instead of a population distribution, in the definition of a quantity (the coverage gap) which should be a population quantity?
We focus on the empirical coverage gap with finite samples. This is widely discussed and important to practical applications, such as Section 1.2 of [R1]. **The empirical coverage gap can be bounded with a probability related to the numbers of calibration and test samples** in Eq. (30) of Appendix C. Besides, the upper bound builds a connection between population forms of conformal score distributions and the empirical coverage gap.
>Lines 112-115 introduce an assumption that is neither explained nor motivated...
The assumption in lines 112-115 can be presented more rigorously. In Appendix B, we develop the following inequality for Eq. (7).
$$
D_{concept}=\hat{F}_Q(q^{\ast})-\hat{F}_P(q)=\hat{F}_Q(q^{\ast})-\hat{F} _{Q/P}(q^{\ast})+\hat{F} _{Q/P}(q^{\ast})-\hat{F}_P(q) \leq \hat{F}_Q(q^{\ast})-\hat{F} _{Q/P}(q^{\ast})+|\hat{F} _{Q/P}(q^{\ast})-\hat{F}_P(q)|.
$$
Since the error can be bounded in Appendix B as follows and can be controlled by increasing the size of calibration data $n$,
$$
|\hat{F} _{Q/P}(q^{\ast})-\hat{F}_P(q)| < \max(\hat{F} _{Q/P}(q^{\ast} _{+})-\lceil{(1-\alpha)(n+1)} \rceil/n, \hat{F} _{P}(q _{+})-\lceil{(1-\alpha)(n+1)} \rceil/n ),
$$
we can focus on $\hat{F}_Q(q^{\ast})-\hat{F} _{Q/P}(q^{\ast})$.
Intuitively, since $q$ and $q^*$ are $\lceil{(1-\alpha)(n+1)} \rceil/n$ quantiles of $\hat{F} _{P}$ and $\hat{F} _{Q/P}$ respectively, with a large $n$, it is reasonable to expect the difference between $\hat{F} _{Q/P}(q^{\ast})$ and $\hat{F}_P(q)$ is very small. Please check Figure 1 (b) for the relationship between $\hat{F} _{Q/P}(q^{\ast})$ and $\hat{F}_P(q)$ with a given $\lceil{(1-\alpha)(n+1)} \rceil/n$ value.
>The explanation of handling concept shift via covariate shift, particularly in Equation (10), is confusing...
**We do not intend to address concept shift by covariate shift in Eq. (10)**. At first, we approximately address covariate shift by importance weighting [R4] based on kernel density estimation. Therefore, we can estimate the weighted calibration conformal score CDF $\hat{F} _{Q/P}$ from the unweighted one $\hat{F} _P$. The remaining gap between $\hat{F} _{Q/P}$ and the test conformal score CDF $\hat{F} _{Q}$ is caused by concept shift in Eq. (8). Then, we quantify the distribution discrepancy between $\hat{F} _{Q/P}$ and $\hat{F} _{Q}$ by NTW, and minimize it during training.
mRCP can make the coverage on test data approach $1-\alpha$, rather than applying conservative methods, like the worst-case (WC) method [R5] as mentioned in lines 41-43, which is likely to cause overestimated coverage and prediction sets.
We demonstrate **a better trade-off between coverage and prediction intervals** of the proposed method compared with WC method in the author rebuttal.
> Section 2.1 lacks precision. For example, Equation (2) describes a special case for regression...Is it limited to a specific type of conformity scores, such as that in Equation (2), or is it more general?
We focus on regression tasks as mentioned in line 65 with the conformal score function defined in Eq. (2), which is **commonly used** for regression tasks [R1][R2][R4]. The proposed method is also applicable to other conformal score functions for regression tasks, like the one proposed in localized split CP [R6].
> Is there a connection between the coverage difference you considered and that studied in "Adaptive conformal classification with noisy labels" by Sesia et al. (2023)?
[R3] considers CP on classification problems whereas we focus on CP on regression tasks. [R3] aims to maintain the coverage guarantee if the labels of calibration samples are contaminated during sampling, which means the exchangeability assumption is violated if test samples are drawn from the same population distribution. Therefore, [R3] considers concept shift but not covariate shift as the marginal distribution of features does not change.
Our work investigates the coverage difference under joint distribution shift between calibration and test distributions, which means the covariate shift and concept shift can occur simultaneously.
**Reference**
[R1] Barber R F, Candes E J, Ramdas A, et al. Conformal prediction beyond exchangeability[J]. The Annals of Statistics, 2023, 51(2): 816-845.
[R2] Romano Y, Patterson E, Candes E. Conformalized quantile regression[J]. Advances in neural information processing systems, 2019, 32.
[R3] Sesia M, Wang Y X, Tong X. Adaptive conformal classification with noisy labels[J]. arXiv preprint arXiv:2309.05092, 2023.
[R4] Tibshirani R J, Foygel Barber R, Candes E, et al. Conformal prediction under covariate shift[J]. Advances in neural information processing systems, 2019, 32.
[R5] Cauchois M, Gupta S, Ali A, et al. Robust validation: Confident predictions even when distributions shift[J]. Journal of the American Statistical Association, 2024: 1-66.
[R6] Han X, Tang Z, Ghosh J, et al. Split localized conformal prediction[J]. arXiv preprint arXiv:2206.13092, 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for your understanding of my feedback. However, I believe that my concerns go beyond what can be resolved through this discussion alone. I believe this paper requires significant improvements in both the clarity of writing and the level of mathematical rigor before it can undergo another thorough review.
Other reviewers have also expressed concerns regarding the clarity of the paper, the theoretical foundations and relation to other approaches, and the soundness of the mathematical arguments. While the method appears to perform well in certain empirical scenarios, it's not enough. It either needs to be clearly presented as a paper suggesting a heuristic algorithm, or the theoretical ideas behind it need to be articulated much better. | null | null | null | null | null | null |
Differentially Private Stochastic Gradient Descent with Fixed-Size Minibatches: Tighter RDP Guarantees with or without Replacement | Accept (poster) | Summary: The paper studies the Rényi differential privacy (RDP) guarantees of the subsamplied Gaussian mechanism with fixed-sized random minibatches, when the so called add/remove neighborhood relation of datasets is considered. It uses similar techniqes as were used in the paper
Mironov, Ilya, Kunal Talwar, and Li Zhang. "R\'enyi differential privacy of the sampled gaussian mechanism." arXiv preprint arXiv:1908.10530 (2019),
which augmented the seminar paper Deep Learning with DP (Abadi et al., 2016) and gave rigorous RDP guarantees for the Poisson subsampled DP-SGD.
The paper also studies both analytically and numerically the benefits of carrying out subsampling with fixed-sized minibatches instead of Poisson subsampling.
Strengths: - The analysis looks solid, generally a well-written paper
- The variance analysis and the numerical experiments showing the benefits of fixed-size subsampling without replacement seem novel and interesting
- The analysis for the "with replacement" subsampling seems interesting though it is quite limited
Weaknesses: - Clearly the biggest deficit of the paper is that it does not take into account some of the recent research in this area. Most importantly, it overlooks the work
Zhu, Yuqing, Jinshuo Dong, and Yu-Xiang Wang. "Optimal accounting of differential privacy via characteristic function." International Conference on Artificial Intelligence and Statistics. PMLR, 2022. https://proceedings.mlr.press/v151/zhu22c/zhu22c.pdf
The main result of this paper is a special case of the Thm. 11 by Zhu et al. (2022) : If $(P,Q)$ are a dominating pair of distributions for the base mechanism in substitute relation, then the fixed-size random subsampling without replacement give a dominating pair $(q P + (1-q) Q, Q)$ for removal neighbours and $(P, (1-q) P + q Q)$ gives a dominating pair for add neighbors. Then, we know the dominating pair $(P,Q)$ of the Gaussian mechanism in case of substiture relation of datasets (pair of one-dimensional Gaussians) and the result of this paper's mmain result (Thm. 3.1) follows since if a pair if dominating pair for the hockey-stick divergence, by the Blackwell theorem it dominates also for other $f$-divergences.
Since this "without replacement" result consitutes such a big part of the paper, I think this is a major deficit and there should be a major revision before accepting this paper.
This with replacement upper bound seems interesting, however my impression is that it is quite conservative (see e.g. Fig. 7 of the appendix). Only the experimental results for the lower bound in case of "with replacement" are given in the main text.
Technical Quality: 3
Clarity: 3
Questions for Authors: Can you comment on the accuracy of the upper bound in case of "with replacement"? Is it still open/unclear, how tight those bounds are?
Could you provide "with replacement" upper bounds also for the hockey-stick divergence, i.e., do you obtain a dominating pair of distributions for the hockey-stick divergence?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 1
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Relation to Zhu et al. (2022)**
Please see the response to this comment in the general rebuttal section above. As the analysis of Zhu et al. does not investigate application to DP-SGD, the results of Theorem 11 in the DP-SGD setting are hypothetical until or unless directly demonstrated. The reviewer provides a proof sketch that does not result in a rigorous reduction of Thm. 11 (Zhu et al.) to our Thm. 3.1. We note that DP-SGD operations in practice are much more complicated than simple Gaussian Mechanism.
**Accuracy of the upper bound "with replacement"**
Our results, which include both a lower and upper bound, show that sampling with replacement catastrophically increases in the Renyi bounds. This aligns with intuition as the fact that one (i.e., an adversary) can get multiple same samples in the same batch is a drastic violation of privacy, and this is what we show in our Figure 7.
**With replacement upper bounds for hockey-stick divergence**
Bounding HS divergence with replacement is a nice research question, but is not trivial and requires substantial work that warrants further study. Our study focuses on Renyi divergence.
---
Rebuttal Comment 1.1:
Comment: Thank you for the replies! Thm. 11 by Zhu et al. 11 is independent of any accountant used to compute the privacy profiles. As I have pointed out in my review, from that theorem follows the Blackwell dominance as well and an existence of a post-processing function and furthermore the RDP bound you state in your paper. This unfortunately reduces the relevance of that particular result I think, and therefore I plan to keep my score.
---
Reply to Comment 1.1.1:
Comment: Thanks for further clarification. We investigated the extension of Zhu et al.'s Thm 11 using Blackwell Thm as you suggested to validate if one can obtain our bounds in that manner.
In the case of replace-one adjacency relations, we do not believe that the general Blackwell theorem ideas along with the results in Zhu et al are sufficient to reproduce our results. Theorem 11(b) in Zhu et al applies to add or remove adjacency for the subsampled mechanism. Proposition 30 of Zhu et al does address replace-one adjacency, but it does not derive a full dominating pair, as it must break the analysis into two cases based on alpha>=1 and alpha<1. Corollary 32 in Zhu et al does give general theory for constructing a dominating pair, but it is not explicit or readily implementable in practice due to the need to compute a Legendre of transform (Fenchel conjugate) as a function of $x$ for `every' $x\in[0,1]$. Therefore to the best of our knowledge, the result in Zhu et al. cannot be directly used to reproduce our new RDP bound for replace-one adjacency as found in our equation 110.
---
Rebuttal 2:
Comment: Thanks for the answers. But why would you need a result for replace-one adjacency? You define the neighborhood relation to be the add/remove neighborhood relation. On lines 80-81:
> We define datasets $D$ and $D′$ to be adjacent if one can be obtained from the other by adding or removing a single element.
Also, the proof of Thm 3.1 clearly shows that $D$ and $D'$ are of different sizes.
Then, I think [Zhu et al., Thm. 11b](https://proceedings.mlr.press/v151/zhu22c/zhu22c.pdf) will give your result. On the right hand side of the bound of your Thm 3.1, you exactly have the dominating pair for the Gaussian mechanism under the replacement relation, just as required by [Zhu et al., Thm. 11b](https://proceedings.mlr.press/v151/zhu22c/zhu22c.pdf). Then you can use the results of Section 3 of [Mironov et al.](https://arxiv.org/pdf/1908.10530) to show that one of the Rényi divergences dominates the other which exactly gives your Thm. 3.1 (you also seem to use those results by Mironov et al.).
---
Rebuttal Comment 2.1:
Comment: Thanks for asking for clarification on this. We agree with you that the steps you outline could provide an alternative path to the add/remove result in our Theorem 3.1. What we were highlighting in our last response is that our method of proof leads to a unified method for treating both add/remove and replace-one. The replace-one result is also of significant application value as was pointed out by JRin. Originally, we had the add/remove result in Theorem 3.1 in the main text while discussing replace-one in Appendix D.4. Based on your and JRin's comment, we have changed the main text to highlight the replace-one as well. We have also pointed out that the add/remove result could alternatively be obtained via the method you outline.
---
Rebuttal 3:
Comment: Thank you for the reply. I confused your Section C.2 with your Section D.4. There, in Section D.4, it is diffiult for me to see why that RDP bounds should hold. Namely, you have a bound with a Rényi divergence between the mixtures $q \cdot \mathcal{N}( \Delta_1, \sigma^2) + (1-q) \cdot \mathcal{N}( 0, \sigma^2)$ and $q \cdot \mathcal{N}( \Delta_2, \sigma^2) + (1-q) \cdot \mathcal{N}( 0, \sigma^2)$. The derivation of this bound is omitted. Recently, [Lebeda et al.](https://arxiv.org/pdf/2405.20769) showed that the bound for the subsampling without replacement under the substiture relation does not at least have a hockey-stick upper bound with mixtures $q \cdot \mathcal{N}( -1, \sigma^2) + (1-q) \cdot \mathcal{N}( 0, \sigma^2)$ and $q \cdot \mathcal{N}( 1, \sigma^2) + (1-q) \cdot \mathcal{N}( 0, \sigma^2)$, one has to use bounds such as Prop. 30 by [Zhu et al.](https://proceedings.mlr.press/v151/zhu22c/zhu22c.pdf). Thus, I am a bit suprised to see this result given in Section D.4, and I think a detailed derivation would be needed. Also, I don't see where the bound of Eq. (108) comes from, why is there no additional factor 2 (or 4 after squaring).
I do acknowledge the contribution in making the RDP bound calculations faster, but I think the paper would benefit some polishing and by piutting the results in a context.
---
Rebuttal Comment 3.1:
Comment: Thanks for the question. We want to clarify the notion of "worst-case" in this context that might have created some confusion: Our method gives "an" upper bound vs. the "tightest" bound. Based on your comments and also those of uSpK, we have added additional details to Appendix D to make our derivation clearer. We hope that the following outline clarifies it:
To show that the Gaussian mixtures provide an upper bound on the worst case, we use the probabilistic Lemma A1 that decomposes the transition probabilities of mechanism. In the replace-one case the role of $D$ and $D'$ are symmetric and so they both satisfy a decomposition of the form (31). Then using convexity, the Renyi divergence between Eq.31(with $D$) and Eq.31(with $D'$) is bounded above by maximum over $b^\prime,\widetilde{b}$ of the Renyi divergence between Gaussian mixtures, as in Eq.102. The only property of the divergence used in these steps is quasiconvexity, and so they apply to the hockey-stick divergence as well, though our subsequent Taylor expansion calculations that lead to explicit computable bounds are specific to Renyi. We emphasize that we only compute an upper bound and not the tightest upper bound, so there is no conflict with Lebeda et al, though we still obtain tighter bounds than the previous state-of-the-art explicit computable bound in Wang et al. The examples in Lebeda et al. seem to highlight the difficulty of obtaining the tightest possible bound on the hockey-stick divergence, which is an interesting but different theoretical problem than what we achieved in this work.
As to your other question, the factor of 4 is contained in our definition of $r_t$; see eq. (34). | Summary: This paper analyzes Differentially Private-SGD with fixed batch size (with and without replacement), through the lens of Rényi-DP. The bounds without replacement have a much better constant than previous ones; the ones with replacement are brand new.
Strengths: - The results are important and likely to be impactful in the niche of DP deep learning. It’s cool to see progress on DP analysis that gets closer to what practitioners actually do.
- The paper is really well written and thorough in its discussions. I really enjoyed reading it.
Weaknesses: It’s a bit of a nitpick, but the paper keeps claiming that this analysis is DP-SGD specific, and that’s what enables the tight bounds. It seems like it is specific to:
- the Gaussian mechanism (with specific subsampling approaches)
- and I guess additive contributions of datapoints to the function?
Anyway, it might be good to give the specifics in a slightly more general/abstract fashion, and then discuss why it applies to well to SGD? Just so that the reader doesn’t expect tight couplings with the optimization or something, and also because it may be slightly more broadly useful.
Minor:
- several citations seem off: the “standard differential privacy (DP)” citation is CDP, poisson sampling from RDP cites the RDP paper instead of this one I assume https://arxiv.org/pdf/1908.10530.
- "even without using the convexity technique” p. 6 -> this has never been mentioned before. You need more details here for context.
Technical Quality: 4
Clarity: 4
Questions for Authors: Since you reduce to the same intermediary quantity as https://arxiv.org/pdf/1908.10530 (in Eq. 6), does it mean that:
- the bound of that paper applies as-is to fixed batches using your proof (i.e., if one were using the Opacus accountant with fixed batches as a heuristic, it’s actually not a heuristic)?
- if not, why is that?
- if so, how does your new bound compares? (other than enabling the analysis with replacement, which is cool!)
I think that the paper would benefit from discussing those to give a bit more context. S4.2 is related, as it gives an interesting discussion of where the different factor comes from (so the quantity is not identical I guess), but not all of this, like whether you could use the previous approximation as is, and how the two numerical procedures compare.
After reading S4.2, I am wondering if the difference is a bit artificial, or at least tied to the notion of sensitivity. For instance, Poisson sampling would also pay a factor two under the change one definition (I think?) whereas your approach would mostly not change I believe (I only skimmed the appendix on sensitivity to I’m not completely sure, but intuitively it’d make sense—the section doesn’t seem to discuss the implications on Poisson vs. fixed batch). Is it the case that your approach pays a 2x factor, but also gets the stronger change-one for free, whereas Poisson gets the better guarantee under add/remove, but its equivalent under change-one? It would be an interesting thing if that’s the case I think (change-one is pretty popular in other applications, I don’t think it’s really less conventional as the paper claims).
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **DP-SGD Specific**
That is an accurate comment, in that we do not use the fact that the additive terms involve gradients. Our method is applicable to any Gausian Mechanism with fixed-size subsampling where each sample contributes an additive term to the mean and those terms are uniformly bounded. The only aspect of DP-SGD that we do not explicitly use is the fact that each additive term is a clipped gradients. These specific structural aspects of DP-SGD are what allow us to obtain tigher bounds than Wang et al. (2019).
**Relationship to Mironov et al. 2019**
Thanks for pointing this out. The answer is no. The reason is that if one uses Opacus implementation with Fixed Size sampling mechanism that would lead to incorrect privacy bounds. Our previous wording gave the false impression that the Mironov et al.'s method applies to the cases under consideration here and so we have removed that sentence. The relation between Mironov et al. and our work is discussed in detail in Section 4.2.
**Section 4.2: Adjacency Relations**
Thank you for this insightful question. You are correct, the 2x factor is the same whether one considers add/remove or replace adjacency relations, as we show in Appendix D4. Therefore our approach gets change-one for free as you say, while Poisson achieves tighter bounds under add/remove but gives equivalent bounds to leading order under replace-one adjacency, as shown by a similar computation to what is currently in Appendix D4. We have added text in the introduction as well as in Sections 4.2 and Appendix D4 to further highlight this fact.
---
Rebuttal Comment 1.1:
Comment: Thank you. My impression looking at how the theory works in your paper was that it did reduce to the same accounting at Mironov et al. 2019 (Since you reduce to the same intermediary quantity as https://arxiv.org/pdf/1908.10530 (in Eq. 6)). Could you give an intuition / place of divergence in your proof for why that is not the case? Thanks!
---
Reply to Comment 1.1.1:
Comment: Thanks for asking for clarification on this. We get to a similar, but not identical expression, to that of Mironov et al. The difference is in the right-hand side of our eq.6, which differs from Mironov by the factor of 1/4 in the variances. This is due to the effects of fixed-size vs. Poisson subsampling. The key difference in the derivation in our Appendix A is contained in our Lemma A.1, as compered to Theorem 4 in Mironov et al. There we construct the random variables that lead to the appropriate decomposition of the mechanism. In particular, compare the construction of our $J$, $B'$, and $\widetilde{B}$ in the paragraph about Lemma A.1 with the random variable T defined at the start of the proof of Theorem 4 in Mironov, along with (implicitly) the random variable that selects whether the additional element x is included, leading to the mixture at the bottom of page 3 of Mironov. The construction of our $J$, $B'$, and $\widetilde{B}$ are the analogues that lead to the appropriate decomposition of the mechanism in the case of fixed-size subsampling, as proven in our Lemma A.1. This difference then propagates through the derivation, eventually leading to the difference by a factor of 1/4 in the variance as noted above. The key reason for this is that our eq 34-35 shows that the mechanism in the case of fixed-size subsampling has a sensitivity of twice that in the Poisson case. This is because in Poisson subsampling when the minibatch from $D$ differs from that of $D^\prime$ it is due to the inclusion of a single additional element. However, in fixed-size subsampling, when the minibatches are not identical then they differ by a replacement; this contributes more to the difference in means by a factor of $2$. Furthermore, in our analysis of fixed-size subsampling with replace-one adjacency in Appendix D.4 there are even greater differences between our analysis and that of Mironov et al.
We added the text to highlight this difference with Mironov et al. and directed readers to our Appendix A and Lemma A.1 in this regard. Also, in our Appendix A, we added the intuition described above. | Summary: The paper first proves new privacy bound for the subsampled Gaussian mechanism under fixed-size sampling with and without replacement, which improves over the tightest known prior results in (Wang et al. 2019) by a constant of four. The proofs rely on a careful coupling of the sampling processes on neighboring datasets (similar to the design in Wang et al. 2019) to reduce the problem of analyzing divergence between Gaussian mixtures, for which the analyses in (Mironov et al. 2019) are applicable.
- For fixed-size sampling without replacement, the paper proves integral upper bounds for the remainder term of the Taylor expansion of the privacy bound, to obtain tighter constants compared to the privacy bound in Wang et al. 2019.
- The method extends to fixed-size sampling with replacement. It allows an upper bound that is similar to the upper bound for sampling without replacement in the leading term to sampling probability q. The authors also prove a lower bound under such settings and numerically investigate its dependence on the batch size and the Renyi divergence order.
- The authors analytically showed that the empirical gradient variance is larger under Poisson sampling compared to fixed-size sampling, but the privacy bound is smaller under Poisson sampling compared to fixed-size sampling, indicating an interesting privacy-utility trade-off depending on the choice of sampling schemes, when fixing the sampling probability the same.
Strengths: - An interesting way of computing DP guarantee for subsampled Gaussian mechanism, via computing integral upper bounds for the remainder term of its Taylor expansion.
- The proposed method yields a tighter RDP guarantee for subsampled Gaussian mechanism under fixed-size sampling without replacement, compared to prior results (Wang et al. 2019) by a constant factor of four.
- The method extends to fixed-size sampling with replacement. It allows an upper bound that is similar to the upper bound for sampling without replacement in the leading term to sampling probability q. The authors also prove a lower bound under such settings and numerically investigate its dependence on the batch size and the Renyi divergence order.
Weaknesses: - The reason for the improved constant factor compared to (Wang et al.) is not crystal clear. Is the integral upper bound for the remainder term of Taylor expansion of the DP bound contributing to the tighter constant? Unfortunately, the Taylor expansion is not explained in much detail in the main paper (e.g., in Theorem 3.2, the crucial terms related to $A$ and $M$ are not presented nor explained).
- Although it is interesting that the paper shows tighter constant in leading order term of the DP bound (via analytical approach), the value of this contribution in practice needs more explanation. As the divergence between Gaussian mixtures could be tightly computed numerically following (Mironov 2019.), it is unclear why we need an tighter analytical bound given by integral upper bound for the remainder term of Taylor expansion of the DP bound.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. The reason for improved tightness of DP bound by a constant factor (see weakness for more details)
2. It is interesting that the paper shows that Poisson sampling enables smaller RDP bound than fixed-size sampling when keeping the sampling ratio the same. However, I wonder if it is a phenomenon that is unique to the add-or-remove-one notion of differential privacy. Could the authors comment on whether there will be a similar gap between Poisson sampling and fixed-size sampling when considering the replace-one notion of differential privacy?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **"The reason for the improved constant factor compared to Wang et al. is not crystal clear"**
Thanks for pointing this out. We clarified further at the end of the introduction (see also the discussion in Appendix D4) that we compute the Taylor expansion of the Renyi divergence in the sampling probability, q, with explicit upper bounds on the error terms. As q is small in practice, the error terms are small. Therefore the fact that we exactly capture the leading-order behavior in q leads us to have nearly optimal bounds in practice, as seen by our comparison between upper and lower bounds in Figure 2. This is in contrast to the method of Wang et al, which does not capture the leading order behavior in q and therefore is not tight for DP-SGD.
**"The value of this contribution in practice needs more explanation..."**
The Taylor expansion is used because it allows the leading order behavior in q to be captured exactly, which provides tighter RDP bounds. We used explicit bounds on the error terms rather than the numerical approach of Mironov et al. because it requires computing fewer terms.
**"Could the authors comment on whether there will be a similar gap between Poisson sampling and fixed-size sampling when considering the replace-one notion of DP?"**
Thank you for this insightful question. No, under replace-one regime, there would be no gap between the Poisson and fixed-size, to leading order. We clarified this point in the comparison in section 4.2 as well as in Appendix D4.
---
Rebuttal Comment 1.1:
Comment: We thank reviewer kPGW again for their time and useful insights. As the discussion period comes to a close we ask that you consider our responses and whether your concerns have been adequately addressed. We tried to pay particular attention to clarifying remarks about improvement of the constant factor and the value of the contribution. Thanks again! | Summary: This paper studies the Renyi DP guarantees of a with- or without replacement subsampled Gaussian mechanism. Authors present a privacy analysis tailored for the subsampled Gaussian mechanism, which improves the earlier general bound for $\epsilon(\alpha)$ by Wang et al. 2019 by a factor of four. Authors show analytically, that the subsampling induced variance (i.e. the variance of the noise arising from using minibatches instead of full data) is smaller for without replacement subsampling than for the more commonly used Poisson subsampling. Authors also give a theoretical analysis of the differences between WOR and Poisson subsampling, showing that the Poisson subsampling leads to approximately half the $\epsilon$ of the WOR sampling. Authors demonstrate empirically that for fixed noise level, the WOR leads to better accuracy than Poisson subsampling in a CIFAR10 based deep learning task, suggesting that the difference is due to the lower subsampling noise. Finally, authors show that the WOR sampling leads to more stable memory usage.
Strengths: The DP-SGD algorithm is by far the most widely applied tool for DP machine learning. Since WOR sampling is more commonly used in non-DP ML than Poisson subsampling, improving the privacy bounds for WOR sampling is a interesting and important contribution. The theoretical analysis based on Taylor expansion of the Renyi divergence is a novel contribution and allows stricter bounds than the general result by Wang et al. 2019. Furthermore, the fact that WOR results into smaller subsampling noise is an interesting finding as well.
The numerical results for the accounting highlight the significant improvement over the current state-of-the-art WOR sampling privacy accounting, showing over a factor of two improvement in the $\epsilon$ after conversion to approximate DP bounds. Also the empirical comparison on memory usage demonstrates the benefits of WOR sampling.
Weaknesses: While the presented analysis provides important insights and improvements over the previous RDP analysis for WOR sampled Gaussian mechanism, I wonder if the problem is already solved with the modern privacy accounting tools. For example, Zhu et al. 2022 solve the WOR sampled Gaussian mechanism in their characteristic function formalism. Given that the conversion from RDP to approximate DP is lossy, I would imagine the bounds presented in this work are more loose than the bounds of Zhu et al when converted to approx DP domain. Since the approx DP bounds are more commonly used than RDP bounds, I'm not sure if the tight RDP analysis is really needed. Or is there some practical reason, e.g. an implementation difficulty, that would not allow using the characteristic function accounting for this problem?
Zhu, Yuqing, Jinshuo Dong, and Yu-Xiang Wang. "Optimal accounting of differential privacy via characteristic function." International Conference on Artificial Intelligence and Statistics. PMLR, 2022.
Technical Quality: 2
Clarity: 2
Questions for Authors: - As you acknowledge towards the end of Section 3.5, Wang et al. 2019 use the substitute neighbourhood relation which differs from the add/remove used in this paper. Can you clarify, is this difference taken into account in Figure 3?
- Fig. 2: I'm a bit confused on the upper bounds show in this Figure. It seems that after some $\alpha$, your proposed upper bound exceeds the one from Wang et al. Does this suggest that the Wang et al. bound is tighter in some regime of $\alpha$?
- In the Appendix D, you derive the RDP bounds for the substitute relationship using mixture of Gaussians with weight $1-q$ on Gaussian centered at $0$ and weight $q$ for the "adjacent" point. A recent work by Lebeda et al. 2024, suggest that this might not be the worst-case distribution for woR-sampling (see Section 7 in Lebeda et al.). While their analysis is tailored for approximate-DP and not RDP, I wonder if the same holds RDP.
**Typos and other minor things**
- Eq. 57, extra parenthesis in the expression for $M_{\sigma, 4}$
- I guess the NN abbreaviation is never explained. However, I don't think using it is necessary for the paper to begin with as you analysis applies to any learning task using DP-SGD.
- Fig. 2: the caption is overlapping with the axis label. Also, I think this figure is never referred to in text.
- "... even without using the convexity technique; ...": which convexity technique are you talking about?
- "$a_i = \nabla_\theta L(d_i) \cdot v$", what is the $v$ here? Also, since the gradient is multidimensional, are you talking about dimension-wise variance?
- "addtional"
Lebeda, Christian Janos, et al. "Avoiding Pitfalls for Privacy Accounting of Subsampled Mechanisms under Composition." arXiv preprint arXiv:2405.20769 (2024).
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: I think authors should still address whether other accounting tools can solve this problem more efficiently that their method in the approximate DP domain. Other than that I believe the limitations are well addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Relation to Zhu et al. (2022)**
Please see the response to this comment in the general rebuttal section above.
**Lossy conversion to $(\epsilon, \delta)$ guarantees**
We acknowledge (as the reviewer correctly states) that RDP guarantees are not lossless when converted to $(\epsilon, \delta)$ guarantees. However, for RDP accountants, regardless of the conversion that is used, with our method it is possible to significantly improve guarantees (by a factor of 4). To our knowledge this is the strongest improvement on RDP for DP-SGD. Given that RDP is one of the most widely used methods in DP libraries, using our method leads to substantial improvements in a myriad of real-world applications.
**Substitute neighborhood relation of Wang et al.**
Thanks for asking for clarification on this. We show in our Appendix D.4 that the behavior of our bounds is the same for both replace-one and add/remove adjacency notion to leading order. To be consistent across the paper, in all of our implementations, we used the add/remove adjacency notion in Figure 3 and other Figures. We clarified this in the main body of the manuscript.
**Upper bounds in Fig. 2**
The behavior of bounds for large $\alpha$ is addressed beginning on Line 196. In practice, for any choice of $\delta$ one need only consider the bound in a range of $\alpha$'s near 1 to convert from RDP to $(\epsilon,\delta)$-DP and so the behavior of our bounds for large $\alpha$ is irrelevant in practice. This can be seen by Figure 3. In addition, by combining our bounds with the convexity technique discussed in Appendix D.3 one can eliminate this large $\alpha$ issue; see the solid black line in Figure 9.
**Worst-case bounds for woR-sampling**
The worst case in Lebeda et al. 2024 holds for HS-divergence and not RDP. As these are different divergences, the worst case distributions aren't necessarily the same. For RDP, we prove in our Appendix D that the worst-case is what we presented.
**Convexity technique**
The convexity technique we refer to on p. 6 was introduced in Wang et al. (2019). Please see Appendix D.3 for more details on application of the convexity technique to Theorem 3.1.
---
Rebuttal Comment 1.1:
Comment: We thank reviewer uSpK again for their time and helpful feedback. As the discussion period comes to a close we ask the reviewer to consider our responses and whether their concerns are adequately addressed. In particular we have provided extensive discussion relating our work to the work of Zhu et al. in the general comments, as well as in response to reviewer Rv66. Thanks again! | Rebuttal 1:
Rebuttal: We thank all reviewers for their time and very useful feedback! We provide responses to each reviewer individually. Please see below for responses to a common point.
**Relation to Zhu et al. (2022)**
Reviewers uSpK and Rv66 ask whether our WOR result of Theorem 3.1 is a corollary of Theorem 11 in Zhu et al. (2022). Their work leads to the analytical Fourier accountant (AFA) for Gaussian mechanisms in general. AFA is not specific to DP-SGD, which is the main topic of our work. Our work should be viewed as an extension of Wang et al. (2019) that is specific to DP-SGD with Gaussian noise and focuses on the strong composition afforded by RDP. Theorem 11 of Zhu et al. (2022) accounts for addition and removal of neighbors in the general Gaussian mechanism case. We could not find any evidence that the analysis in Thm 11 yields a privacy bound for DP-SGD with a large number of training steps that is tighter than and as rigorous as what we have provided. Our reasons are as follows:
DP-SGD requires the consideration of Gaussian mixtures, and therefore the characteristic function required by the method of Zhu et al must be computed numerically. We anticipate this can be difficult for a large number of compoisitions, as is typical in DP-SGD, due to the required integrations of a sum of a large number of terms, as required by Algorithm 1 in Zhu et al. In addition, there is potential privacy leakage due to lack of rigorous error bounds for the double (Gaussian) quadrature numerical integration approach as discussed in their Appendix E.1. Using Gaussian quadrature for DP-SGD presents major practical difficulties that do not seem to be addressed in Zhu et al's work. Our approach is a specialized method for DP-SGD that avoids the practical concerns associated with numerical integration.
Hayes et al. (2024) uses the results in (Zhu et al., 2022) to provide a bound specific to DP-SGD when one is only concerned with training data reconstruction attacks rather than the membership inference attacks, which leads to a relaxation of DP to provide a bound that is only applicable to defending against data reconstruction and not membership inference. Our work can be viewed as a complimentary work to Hayes et al. (2024) that is specific to DP-SGD and provides conservative bounds protecting against membership inference attacks (and thus any other type of model inversion attacks including training data reconstruction) in the RDP context. Overall, the applicability of Thm 11 in Zhu et al. (2022) to fixed-size subsampled DP-SGD is hypothetical until or unless they are explicitly demonstrated in the DP-SGD context.
We discussed both of these studies in our background and related work.
* Zhu, Y., Dong, J. and Wang, Y.X., 2022, Optimal accounting of differential privacy via characteristic function. In International Conference on Artificial Intelligence and Statistics (pp. 4782-4817). PMLR.
* Hayes, J., Balle, B. and Mahloujifar, S., 2024. Bounding training data reconstruction in dp-sgd. Advances in Neural Information Processing Systems, 36. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Improving Viewpoint-Independent Object-Centric Representations through Active Viewpoint Selection | Accept (poster) | Summary: This paper introduces an image selection method designed to incrementally enrich an observation set with informative images from an extensive unknown set, aiming to maximize information gain with a limited or specific number of images. The methodology innovatively integrates Multi-Viewpoint Slot Attention for getting object-centric representations, enabling the gradual identification of the most informative images through a comparative analysis of representations before and after the inclusion of newly generated images. Additionally, it leverages a diffusion model in conjunction with slot attention and viewpoint encoding to synthesize images for image prediction and new viewpoint validation. The experimental outcomes demonstrate improvements in segmentation and reconstruction tasks, underscoring the effectiveness of the proposed approach.
Strengths: +The paper presents a novel method that effectively disentangles content and viewpoint information using Multi-Viewpoint Slot Attention. This decoupling approach could offer a fresh perspective in the field.
+The validation of the proposed method across four task scenarios provides a comprehensive evaluation of its advantages in supporting various downstream multi-view applications.
Weaknesses: -This method may be time consuming, since there are two loops in the selection strategy. It is recommended to conduct a detailed analysis of the trade-off between processing time and performance gains, potentially including a comparative study with alternative methods to gain further insights into the method's efficiency.
-While the method claims the ability to accurately predict images from unknown viewpoints, the exact benefit of the viewpoint selection strategy in this context remains unclear. Moreover, Can the S_view be obtained in image prediction tasks?
-The necessity of the generating Prediction Set within the proposed framework may not be very clear. Based on the pipeline depicted in Figure 2, an investigation into the framework's performance when the Prediction Set is set as same as Unknown Set could provide valuable insights into the role and impact of the Prediction Set. A comparative experiment exploring this scenario is suggested.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please refer to the Weaknesses
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Limitation is discussed in the work and the work will not have any potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your valuable feedback and constructive comments. We have carefully considered each point raised and provide our responses below.
**1. Computational Complexity**
We acknowledge that our method may be more time-consuming due to the presence of two loops in the selection strategy. However, it is important to highlight that this increased computational complexity is offset by our method's unique capability to predict unknown viewpoints, a feature not available in alternative methods. Additionally, our approach demonstrates superior performance in scene segmentation compared to other methods.
**2. Benefit of the Viewpoint Selection Strategy**
We appreciate your observation regarding the role of the viewpoint selection strategy in predicting images from unknown viewpoints.
- The viewpoint selection strategy is integral to enhancing the quality of viewpoint-independent object-centric representations. It ensures that the most informative viewpoints are selected for observation, thereby reducing redundancy and improving prediction accuracy. Accurately predicting images from unknown viewpoints is both a necessary condition for the effectiveness of our viewpoint selection strategy.
- $S^{view}$ can be obtained in image prediction tasks by inputting the target viewpoint timestep into the viewpoint encoder. The model uses $S^{view}$ to guide the prediction of the image from target viewpoint.
**3. Necessity of the Prediction Set**
We apologize if Figure 2 is not sufficiently clear. In our framework, the images from the Unknown Set are not directly accessible to the model. Only the viewpoint timesteps of the Unknown Set are available. The Prediction Set consists of images predicted from these viewpoint timesteps. Generating the Prediction Set is a necessary step to estimate the information increment of the unknown viewpoints, which plays a crucial role in our viewpoint selection strategy. | Summary: This work introduces a method for multi view object centric reconstruction. The authors extend a previous work LSD[10] from single view to multiple unposed views of the same scene and introduce an active view selection mechanism at training time. The method takes N views, decomposes the scene into K slots and decodes them through a conditioned latent diffusion model. During training an active viewpoint mechanism is used, rather than random sampling sets of viewpoints, as a form of hard mining that helps the model converge to better solutions. The authors evaluate their proposal on synthetically created datasets based on CLEVR-TEX, GSO and ShapeNet where they demonstrate better unsupervised segmentation of objects, scene reconstruction and novel view synthesis on par or better wrt to previous approaches not based on diffusion decoders.
Strengths: + Strong improvement wrt to previous multi-view methods that did not use a diffusion based decoder.
+ Additional improvements achieved using something resembling hard mining while training that while making training more complex do not have side effect on the complexity at inference time.
+ The paper introduces 3 synthetic multi-view datasets to evaluate the model that could be used by the community to further develop this specific field if the code to replicate them will be made available.
Weaknesses: a. Presentation can be improved. In particular Sec. 3.1.2 would benefit by more details in the text. For example it is not clear to me what $h_m$ or $v$ in the pseudo code are. Also in terms of figures and plots many of the results are really small and hard to see. I would have preferred less results but bigger in the main paper and the rest in the appendix. Also I feel like Tab 5 should be in the main paper in place of Fig. 3 and Tab. 1 and 2.
b. Limited evaluation. While on one hand it is nice that the authors contributed a new set of 3 datasets to evaluate their method, these are rather small and synthetic. UNfortunately this is inline with most of the competitors the authors compare to, but would be nice to see the field move more towards realistic scenarios. An additional note is that is not clear to me why the authors re-rendered datasets rather than using the one [made available by OCLOC](https://huggingface.co/datasets/jinyangyuan/ocloc-data) which per my understanding should be quite similar.
c. Slightly unclear evaluation settings. I have not found in the paper clear mention on how the unsupervised segmentation is obtained from the model (Sec. 4.2) or how the temporal timesteps used in Sec. 4.3 and 4.4 are injected into the model to make it conditioned on a certain viewpoint. Are they additional conditioning input to the latent diffusion model?
Technical Quality: 3
Clarity: 2
Questions for Authors: **Questions**
1. Do you have an intuition on the big discrepancy in the ranking defined by LPIPS and FID in Tab. 1 and 2? Usually the two metrics tend to be very correlated in my experience.
2. Can you clarify my doubts with respect to weakness [b] and weakness [c]?
**Typos**
* Discrepancy between Fig. 3 and Tab. 5 → ARI-O == ARI-FG?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Some limitations have been discussed in the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your valuable feedback and constructive comments. We have carefully considered each point raised and provide our responses below.
**1. Presentation**
Thank you for your feedback on the presentation. We will revise Section 3.1.2 to provide more details and clarity regarding the notation used in the pseudocode. These variables are defined as follows: $h_m$ represents the image features extracted by DINO, and $v$ represents the value function of the attention mechanism. We agree that larger figures and plots would enhance readability, and will enlarge key figures and plots in the main paper, moving additional results to the appendix to maintain focus.
**2. Dataset Selection and Re-Rendering**
- Our choice to re-render datasets rather than using the OCLOC datasets was primarily due to the need for specific viewpoint parameters ($\rho$, $\phi$, $\theta$) that were not available in the OCLOC datasets. These parameters were initially intended to be used in our method, but we later shifted to using timesteps for simplicity.
- For the datasets, we aimed to maintain consistency with the configuration and rendering methods used in OCLOC. However, we chose the CLEVRTEX dataset over the simpler CLEVR and SHOP datasets used by OCLOC to provide more complex scenes with richer textures and variability, which better test the model's capabilities.
**3. Real-World Data Evaluation**
We understand the importance of assessing the model's generalization to real-world data. However, there is currently a lack of suitable multi-viewpoint real-world datasets for comprehensive evaluation of our model and similar methods. This poses a challenge in demonstrating real-world applicability. Nonetheless, the objects in the GSO and ShapeNet datasets closely resemble real-world objects, providing a realistic basis for testing our model's generalization beyond purely synthetic environments. These datasets effectively bridge the gap between synthetic and real-world scenes.
**4. Evaluation Settings**
We appreciate the reviewer's feedback regarding the clarity of the evaluation settings. We apologize for any confusion and provide the following clarifications:
- The unsupervised segmentation results are derived from the attention masks computed within the multi-viewpoint slot attention module.
- The temporal timesteps are encoded using the viewpoint encoder. These encoded timesteps are then concatenated with the viewpoint-independent object-centric representations to serve as conditioning input for the diffusion model, guiding the image generation process.
**5. Discrepancy Between LPIPS and FID Rankings**
Thank you for pointing out the discrepancy between LPIPS and FID rankings in Tables 1 and 2. This discrepancy may arise from the different aspects of image quality they measure. LPIPS evaluates perceptual similarity at a fine detail level, while FID assesses global distributional similarity. The models being evaluated might perform differently in preserving local details versus maintaining overall image realism and distribution, leading to variations in how each metric ranks the results.
**6. Discrepancy Between Fig. 3 and Tab. 5 → ARI-O == ARI-FG?**
Thank you for pointing out this discrepancy. We apologize for any confusion caused. The notation ARI-O and ARI-FG should indeed be consistent across Figure 3 and Table 5.
---
Rebuttal Comment 1.1:
Title: Acknowledgement
Comment: Thanks authors for providing a rebuttal and addressing my questions.
I would suggest to include these discussions in the next version fo the manuscript.
I tend to disagree with this strong claim made in the rebuttal
"Nonetheless, the objects in the GSO and ShapeNet datasets closely resemble real-world objects, providing a realistic basis for testing our model's generalization beyond purely synthetic environments. **These datasets effectively bridge the gap between synthetic and real-world scenes**."
Such strong claims would require very substantial evidences. From my experience If this was true the domain adaptation litterature would not exhist in the first place.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback and for taking the time to review our rebuttal. We appreciate your suggestion to include these discussions in the next version of the manuscript, and we will certainly consider that in our revisions.
Regarding the statement about the GSO and ShapeNet datasets, we acknowledge your concerns and agree that the claim may have been overstated. Our intent was to emphasize that while these datasets are synthetic, the objects they contain closely resemble real-world objects. However, we fully recognize that there remains a significant gap between synthetic datasets and real-world environments.
We will revise our statement to avoid making an overly strong claim. We appreciate your guidance in ensuring that our manuscript accurately conveys the limitations and strengths of our approach. | Summary: This paper describes a novel active viewpoint selection strategy (AVS) for enhancing multi-viewpoint object-centric learning methods. The core idea is to select the most informative viewpoints actively rather than using random or sequential strategies, which can be inefficient and may omit critical scene information. The model enhances viewpoint-independent object-centric representations, leading to better understanding and perception of visual scenes. It can predict and generate images from unknown viewpoints.
Strengths: 1. The active viewpoint selection strategy (AVS) is a novel concept that addresses the limitations of traditional random or sequential viewpoint selection methods in multi-viewpoint object-centric learning. The paper demonstrates through experiments that AVS significantly enhances the performance of segmentation and reconstruction tasks compared to random viewpoint selection strategies.
2. The proposed method leads to better viewpoint-independent object-centric representations, which are crucial for accurately understanding and perceiving visual scenes from various angles.
3. The model's ability to predict images from unknown viewpoints is a significant strength, allowing it to work effectively even with limited observational data. The model can generate images with novel viewpoints that it hasn't been explicitly trained on, showcasing its generative capabilities and the robustness of the learned representations.
4. Despite using fewer viewpoints for training, the proposed model achieves superior results, indicating that it can efficiently learn comprehensive representations.
5. Evaluation: The paper includes a thorough evaluation using multiple datasets and various metrics, providing a comprehensive understanding of the model's strengths and areas of improvement. The model's performance is benchmarked against other contemporary methods, such as SIMONe, OCLOC, and LSD, showing its competitive edge in the field.
Weaknesses: 1. The active selection process of the proposed model has high computational complexity, which is directly proportional to the number of selected viewpoints and the diffusion sampling steps. This could affect the training speed and efficiency.
2. Reliance on Viewpoint Continuity: The method's effectiveness is contingent on the continuity of multiple viewpoints. If the viewpoints are not continuous or related, the model may struggle to perform novel view synthesis.
3. Generalization: While the model performs well on the datasets presented in the paper, its generalization capabilities to other datasets or real-world scenarios are not fully explored. The active viewpoint selection strategy assumes that specific scenes may be more sensitive to information from certain viewpoints. This assumption might not hold true for all types of scenes or objects. Although the active selection strategy aims to avoid redundancy or omission of scene information, there is still a possibility that the selected viewpoints may not always capture the most informative aspects of a scene.
4. Overfitting to Synthetic Data: The experiments were conducted on synthetic datasets, which might not fully represent the complexity and variability of real-world data. There is a risk that the model could overfit to the synthetic data and not perform as well on real images.
5. No Discussion on Computational Resources: While the paper mentions the GPU type used, it does not provide a detailed analysis of the computational resources needed for training and inference, which is important for assessing the scalability of the approach.
6. No Open Access to Code and Data: At the time of submission, the paper did not provide open access to the code and data, which is important for reproducibility and further research by the community.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. How does the model generalize to real-world datasets and scenes that may have more variability and complexity than the synthetic datasets used in the experiments?
2. Can you provide more details on the computational complexity of the active viewpoint selection process and how it scales with the number of viewpoints and scene complexity?
3. What strategies are employed to make the training process more efficient, given the high training complexity mentioned as a limitation?
4. How does the model perform when the continuity assumption of viewpoints is violated? Are there any fallback mechanisms or alternative strategies?
5. How robust is the model to noise, occlusions, and other common challenges present in real-world visual data?
6. Are there plans to release the code and data used in the experiments to ensure reproducibility and facilitate further research by the community?
7. Can you provide examples of failure cases where the model did not perform well, and what insights can be gained from these cases?
Theoretical Foundation Question:1
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: 1. high training cost
2. relying on viewing continunity which might not always hold in real-world scenarios
3.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your valuable feedback and constructive comments. We have carefully considered each point raised and provide our responses below.
**1. Computational Complexity**
We acknowledge that our method may be more time-consuming due to the presence of two loops in the selection strategy. However, it is important to highlight that this increased computational complexity is offset by our method's unique capability to predict unknown viewpoints, a feature not available in alternative methods. Additionally, our approach demonstrates superior performance in scene segmentation compared to other methods.
**2. Viewpoint Continuity**
We appreciate your concern regarding the reliance on viewpoint continuity. For scenes involving discontinuous viewpoints, we can use specific viewpoint parameters (denoted as $\rho$, $\phi$, and $\theta$ and defined in Appendix A) instead of relying on viewpoint timesteps. This approach allows for more precise control over the viewpoint and improves the model's ability to handle novel view synthesis.
**3. Generalization and Evaluation on Real-World Data**
We understand the importance of assessing the model's generalization to real-world data. However, there is currently a lack of suitable multi-viewpoint real-world datasets for comprehensive evaluation of our model and similar methods. This poses a challenge in demonstrating real-world applicability. Nonetheless, the objects in the GSO and ShapeNet datasets closely resemble real-world objects, providing a realistic basis for testing our model's generalization beyond purely synthetic environments. These datasets effectively bridge the gap between synthetic and real-world scenes.
**4. Open Access to Code and Data**
We are preparing our code and datasets for public release and plan to make them available on Github upon acceptance of the paper. This will ensure reproducibility and facilitate further research by the community. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Tropical Expressivity of Neural Networks | Reject | Summary: This paper proposes new methods to count the number of linear regions in neural networks by viewing them as tropical Puiseux rational maps. By computing their Hoffman constant, the authors are able to identify a sampling radius which ensures that all the network’s linear regions will be intersected. They use this insight to propose algorithms for counting the number of linear regions for both invariant and traditional networks.
Strengths: The paper is well-written with virtually no typos and errors. The technical content is accessible and not unnecessarily convoluted and the proofs and concepts are presented clearly.
Weaknesses: The main weakness of the work, in my opinion, can be summarized in the following points:
- the connections to tropical geometry and group theory are not rigorous beyond the point of simple notational fixes
- the motivation for the work and how it fills gaps in the existing literature is unclear, and
- the effective utility of the approach is not convincingly demonstrated by the theory or experiments.
**Rigor of tropical and group theories**
I believe this point is the biggest weakness of the paper. From the perspective of tropical algebra, vectors and polynomials live in $\bar{\mathbb{R}}$. The authors mention $\bar{\mathbb{R}}$ in line 87, but then this is never used in most of their work. This might seem as a notational fix, but it is not, as it introduces problems in virtually every single result in the paper. This first becomes a real issue in (4), where maximums are taken over, potentially, $\infty$. How is it guaranteed that (4) exists in the context of tropical algebra? This is a recurring problem that appears in (5), (6), and (7). Another important issue at the intersection of group theory and tropical algebra is how groups are defined. Semirings, by construction, are objects that do not admit additive inverses. This means that, if one wants to define groups on such structures, great care needs to be taken as to how groups are defined, how they act on vector spaces, and what groups are actually permissible in this context.
From the perspective of group theory how is the group action defined? How do the group elements act on vectors in tropical spaces? Group representations $\rho: G \to \operatorname{GL}$ require the concept of an invertible matrix, however that concept is ill-defined in tropical vector spaces.
(Moreover, the authors define incorrectly $\bar{\mathbb{R}}$. The infinity element needs to be the identity element of tropical addition: if one opts to use the max-plus semiring, $-\infty$ should be used. If we use the min-plus semiring, $\infty$ should be used. However, this is a notational fix.)
**Motivation**
In terms of motivation it is unclear how the work is related to the existing works. There have been countless results on the number of linear regions of neural networks, and quite a few results using tropical geometry at that. What void in the literature does this paper fill? The related work paragraph lists some of the works in tropical geometry, but doesn’t highlight where these works come short and how the proposed manuscript fills that void. Moreover, there is no discussion of why the existing works on linear counting that do not utilize tropical geometry are also not able to handle the presented context.
**Effective utility**
At the end of the day, I’m not sure I understand what the utility of the method is. Ignoring here the questions on motivation, the goal is to make deep learning more interpretable. However, the authors’ experiments diverge when input sizes are larger than $6$ and the networks are deeper than $4$ layers. Modern deep learning uses input sizes significantly larger than $100$ and decade old networks are deeper than $10$ layers. So what effectively are we learning about deep networks?
Unfortunately, there are some larger issues with the method and experiments. Beyond the concerns from the perspective of tropical geometry, we have zero guarantees about the upper and lower bounds of the Hoffman constant. There is no asymptotic analysis on the sample complexities of the bounds, no analyses about tightness or optimality, or even, at the very least, an analysis that the bounds are not vacuous or trivial. On line 272 the authors claim that even though their estimate diverges, that’s acceptable because frequently we’re interested is an upper bound on expressivity. However, how can we guarantee that there the number of regions is not undercounted (I couldn’t find a proof)? For that statement to be true the bound needs to be tight, but from their own experiments (Tables 1 and 2), the true $H$ ends up being larger than the upper bound, which is used to calculate the radius and eventually the radius. Obviously, then, the computed bounds are not representative: then, since the estimate Hoffman constant is not accurate and the algorithm diverges, what essentially do we gain?
Technical Quality: 1
Clarity: 2
Questions for Authors: I have some more fine-grained comments and questions.
**Major**
There is a recurring discussion about massively improving efficiency. However, that efficiency is when compared against what? There is not a single comparison against existing works, so the improvement is over what? Regardless, as the authors discuss in Algorithm 2 and in the section about limitations, there is an *exponential* scaling with the number of inputs. Even if the authors introduce a factorial reduction for invariant networks, they require exponential sampling, which leads to no real improvement, as is evident by their really small parameter values and large running times. How where these sample numbers chosen in the first place? I couldn’t find any asymptotic analysis that guarantees with high probability that the whole space will be sampled with such a number of samples. Overall, the computational complexity of the approach is prohibitive, as it required $N^n$ Jacobian computations, which is nigh impossible for parameter values found in real networks.
There is a significant lack of attribution to prior work. In the introduction, textbooks on tropical geometry are cited as well as papers on the counting of linear regions. However, when the intersection of tropical geometry and machine/deep learning is discussed, zero references are provided, whereas the field has been fairly active in the last 5 years with dozens of highly cited papers. Similarly, in the related works section the final citation has no connection to the presented paper beyond being at the intersection of tropical geometry and machine learning. However, there are much older and much more well-cited papers to include as general references on the field, both as general reviews and as specific applications (tropical compression of neural networks comes to mind). As a final comment on this train of thought, would the authors like to elaborate on line 352? Specifically, both [11] and [12] analyze the expressivity of neural networks using tropical geometric quantities (through zonotopes and Newton polytopes). How do the authors view their work as the first in doing that?
As a last major comment, there is another lack of preciseness, however to a lesser extend as to what I mentioned above. There is a recurring discussion about one method being more precise than the other: however, what does that mean? Preciseness is never defined, and it’s not straightforward how the authors compute the ground truth of the number of linear regions. For example, how can we say that Table 10 is more precise than Table 11? In a similar vain, there are underlying assumptions that are not discussed or communicated. For example, what are the assumptions of 4.3 from the perspective of group theory? Does the theorem work for any, possibly infinite group? Similarly, there are constraints in Definition 3.5 that are not communicated. The way A-surjectivity is introduced, it is required that $m < n$. However, that makes explicit assumptions about the networks that can be analyzed, which is never communicated or considered.
**Minor**
There are some minor things that are unclear in the text. For example, it is unclear what Section 5.2 refers to. Is it a table or a plot, and which one? In Theorem 4.3 $\mathcal{L}$ is mentioned which is never used or defined, and it is unclear what the term bias is referring to in 256 (I’m assuming $\lambda$?). Table 1 (and others) are lacking headers and it’s impossible to understand what the different columns represent. Finally, the term maximal elements is used with respect to sets of sets. However, sets lack canonical order, so it’s unclear what “maximal element” means in this context.
Overall the notation is good, but at a few points it is confusing. Matrix indices usually refer to columns, not rows, which can lead to confusion and the choice of $\rho$ for the singular values (over the universally accepted $\sigma$) can hurt readability. In 329 the input dimension is denoted by $d$, in contrast to the rest of the manuscript and the notation $\min\{y_j, j\in J\}$ is not consistent with set theory nor optimization (and also in conflict with the authors own notation, for example, in (10) or 295. Finally, $S_n$ is used as the permutation group but also as a set in 295.
Confidence: 5
Soundness: 1
Presentation: 2
Contribution: 1
Limitations: I think the authors accurately identify the main limitations of their approach, which relates to the large computational complexity.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough reading of our work and detailed feedback. We would like to address weaknesses raised by the reviewer.
### Rigor of Tropical Geometry and Group Theory
We apologize for the confusion caused: while we focus on the tropical geometric interpretation of neural networks, we are actually not working exclusively in the tropical setting, because some of the concepts would not make sense, as pointed out by the reviewer.
Our results do not require us to work in $\overline{ℝ}$. We never work with tropical
vector spaces, only tropical polynomials, which can be understood without
reference to $\overline{ℝ}$. In (4), we are taking a maximum over a finite set of Hoffman
constants, each of which lies in $ℝ$ rather than $\overline{ℝ}$ (one concern here might be
that we are allowing coefficients of the tropical Puiseux rational function to be $−∞$, but we can ignore these). Similar observations can be made about (5), (6), and (7).
On the matter of group actions, we never claim to be working with group actions in a tropical sense. In our submission, we are not working with tropical vector spaces and group actions, but simply with $ℝ^n$ and group actions in the usual sense.
### Motivation
We have conducted a second literature review and found one citation missing from our original submission: *On the Decision Boundaries of Neural Networks: A Tropical Geometry Perspective* by Alfarra et al. (2022). We apologize for this oversight. This reference provides important insights that demonstrate the value of tropical geometry in deep learning. To the best of our understanding our work does not overlap with theirs in either theoretical focus or novel computational contributions.
As far as we know, all existing computational methods for finding the number of linear regions of a neural network rely on sampling which doesn’t guarantee an exact solution, or impose some restrictions such as boundedness on the input variables rather than the total number of linear regions.
1. We provide new tools for computing the size of the domain needed to count the linear regions, and we give a new method for computing the exact number of linear regions of a neural network on an unbounded domain.
2. We provide a direct way of computing a novel representation of neural networks, namely, the tropical Puiseux rational form. This opens the door to further connections with tropical geometry such as the number of (non-redundant) monomials that appear in the tropical expression of a neural network.
### Effective Utility
The main goal of our paper was to present a proof of concept, not to develop highly optimized code. We believe there is room for improvement (particularly for symbolic computations), and better algorithms could greatly boost our approach’s performance, which we noted in the Discussion section as a future research direction. We focused on smaller networks for simpler analysis. However our code can handle more complex architectures than those we used. Our symbolic method can process networks with inputs of dimension 784 in a minute (Table 1 in the attachment).
Finally, the reviewer noticed that the two methods we outlined yield different results. The discrepancy is because the numerical algorithm may undercount the number of linear regions in certain cases due to not taking into account connectivity of regions. We will clarify this in the revision.
We thank the reviewer for identifying errors in Table 1–3 regarding the computed Hoffman constants. After reviewing the public code by Peña et al. (2018), we found that it was incorrect. We have since computed the values by brute force, and the updated tables show that the lower bounds are always below the true values. Regarding upper bounds, Theorem 3.9 states that the Hoffman constant is bounded by the inverse of the smallest nonzero singular value of submatrices of $A$. We used a threshold of $10^{−10}$ to determine nonzero values, but found that many small singular values ranged between $10^{−30} ∼ 10^{−14}$. This confirms that the singular values provide upper bounds. We agree that these bounds can be tightened as a direction for future research. This study could also be promising for new results concerning the Stewart–Todd condition measure of a matrix.
### Improving Efficiency
Our claim of improving efficiency refers to the factorial reduction in scaling for invariant networks. We believe that the ideas we introduce in this manuscript may also lead to similar improvements for other methods, in a symmetric setting.
### Lack of Attribution to Prior Work
We do not claim that this work is the first to analyze expressivity of neural networks using tropical geometry; we explicitly mention previous work in this direction in our manuscript in lines 39–41. We are not aware of work that implements algorithms to compute the *exact* number of linear regions of a network with unbounded inputs.
### Lack of Precision
To clarify, the symbolic method for computing linear regions yields the ground truth, by converting the network to a tropical Puiseux rational function and applying Algorithm 3.
For missing assumptions, we acknowledge that Theorem 4.3 is missing the assumption that the group is finite. We believed it was implicit in the context (cardinal arithmetic would be implausible in the context of machine learning). Moreover, we disagree that our definition of $A$-surjective sets introduces any new assumptions on $m$ and $n$. The definition makes sense for any matrix $A$ and the assertion that the existence of $A$-surjective sets implies that $m<n$ is incorrect, as such sets exist for all $A≠0$.
We thank the reviewer for flagging typos and inaccuracies; these will be addressed in the revision.
**References:**
Alfarra, M., Bibi, A., Hammoud, H., Gaafar, M., & Ghanem, B. (2022).
*On the decision boundaries of neural networks: A tropical geometry perspective.* IEEE Transactions on Pattern Analysis and Machine Intelligence, 21 45(4), 5027–5037
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal and for engaging with the reviewing process.
Regarding the tropical geometry, I have to say I'm confused by the response. If the Puiseux polynomials are interpreted using $\mathbb{R}$, and all the other concepts in the paper are independent of tropical geometry, then what is the connection to tropical geometry?
---
Rebuttal 2:
Comment: We thank the reviewer for reading our rebuttal and giving us this important opportunity to clarify our approach further.
### The role of infinity
The tropical semiring is a fundamental algebraic object in tropical geometry where all other constructions are based. For the sake of completeness and correctness, we must introduce $-\infty$ to the extended real line since it is the neutral element for tropical addition. In our application to machine learning, the main way the tropical framework appears is that it gives a very convenient (and, as our work demonstrates, computable) way of representing neural networks. In other words, we are interested in *tropical* representations these functions. While tropical Puiseux polynomials naturally yield functions defined over the whole extended real line, this is generally not the case for neural networks, and thus we usually restrict to $\mathbb{R}$. We would also like to emphasize that while the mathematics are mostly developed over the real numbers, a large portion of our code uses OSCAR's implementation of the tropical algebra.
### Connection to tropical geometry
Tropical geometry is a broad field and intersects with many other mathematical fields including polyhedral geometry, algebraic geometry, combinatorics, etc. For example, tropical Puiseux polynomials, tropical monomials are naturally related to linear regions and polyhedra in ways which are fully elaborated in our paper. The connection between neural networks and tropical geometry is *crucial* since it allows us to use OSCAR to develop symbolic computations.
While we acknowledge that we don’t use any advanced tropical geometry for our results (and instead use techniques from other fields e.g., polyhedral geometry), we maintain that our work is nevertheless tropical geometric in nature: Firstly, our results are built on the tropical geometric interpretation of neural networks. Secondly, we principally interpret tropical Puiseux polynomials as functions over $\mathbb{R}$, which is the natural approach for studying their function-theoretic properties as we do in our work. Despite this interpretation, they still are rigorously tropical objects.
We understand that the mix of tropical and classical approaches and perspectives in our work may have inadvertently introduced confusion, which we will carefully clarify in the revision.
### Responding to other concerns
We are very grateful for this opportunity to engage with the reviewer who gave us a very detailed, thoughtful, and constructive review, which we appreciated very much. We would like to ask if the reviewer has any other questions pertaining to our rebuttal of the concerns in the original report? We are aware that our rebuttal was brief, given that we were constrained to the quite strict character limit, but we would very much like to engage further and respond in greater detail to the reviewer’s observations in the original report above.
---
Rebuttal 3:
Comment: As we approach the end of the discussion period, we would like to take this opportunity to address some additional concerns raised in the reviewer’s original report. Due to the character limit for rebuttals, our initial response had to be concise, and we regret not being able to provide the level of detail that the depth of the original report warrants.
It is important to clarify that some statements made in the review above are incorrect. Additionally, as with reviewer vnGc, we are concerned that our significant contributions of adapting symbolic computation to the study of neural networks were not fully recognized, given the lack of acknowledgment and comments.
### Rigor of Tropical Geometry and Group Theory
We apologize for the confusion raised in our work, since, indeed, while we focus on the tropical geometric interpretation of neural networks, we are actually not working fully and exclusively in the tropical setting throughout our work, precisely because some of the concepts would not make sense, as pointed out by the reviewer. We now elaborate on the specific concerns raised by the reviewer.
Our results do not require us to work over $\overline{\mathbb{R}}$. In particular, the absence of $\overline{\mathbb{R}}$ is intentional: we never work with tropical vectors or vector spaces, but only with tropical (Puiseux) polynomials, which can be understood with no reference to $\overline{\mathbb{R}}$. Referring specifically to the expressions mentioned by the reviewer: in (4), we are taking a maximum over a finite set of Hoffman constants, each of which lies in $\mathbb{R}$ rather than $\overline{\mathbb{R}}$ (one concern here might be that we are in principle allowing coefficients of the tropical Puiseux rational function to be $-\infty$, but we can simply ignore such coefficients). Similar observations can be made about (5), (6), and (7), which should now clarify that these expressions are well-defined.
We respond similarly to the concern about group actions in the tropical setting: we never claim to be working with group actions in a tropical sense. In our submission, we are not working with tropical vector spaces and group actions, but simply with $\mathbb{R}^n$ and group actions in the classical sense.
In our revision, we will explicitly state that these specific concepts should not be evaluated tropically so as to diffuse any misunderstandings.
### Motivation
We thank the reviewer for this question and would like to take this opportunity to respond to concerns raised by the reviewer.
With regard to motivation and contribution, we respond by stating that to the best of our knowledge, all computational methods (that have been implemented!) for computing the number of linear regions of a neural network resort to some form of sampling that doesn't guarantee an exact solution, or impose some restrictions such as boundedness on the input variables (for instance, as our Jacobian sampling method does) rather than computing the *total* number of linear regions. Our work fills this void in two different ways:
1. We provide new tools for computing the size of the domain that needs to be restricted to in order to count the correct number of linear regions, and we give a new method (the so-called symbolic method) for computing the exact number of linear regions of a neural network on an unbounded domain;
2. We also provide a new avenue for understanding the expressivity of neural networks (or more generally, studying interpretability) - specifically, we provide a direct way of computing another representation of neural networks, namely, the tropical Puiseux rational form. This representation is important because it opens the door to using other theory from tropical geometry, e.g., computing other measures of complexity, such as the number of (non-redundant) monomials that appear in the tropical expression of a neural network.
(continued in the next official comment)
---
Rebuttal 4:
Comment: ### Effective Utility
Firstly, we want to clarify that the main goal of our paper was to present a proof of concept, not to develop highly optimized code for large-scale stress tests on modern deep neural networks. Our focus was on laying the theoretical groundwork and demonstrating the feasibility of our approach. Therefore, we did not prioritize optimizing certain parts of the code for performance. This was a deliberate choice to highlight the direct applicability of symbolic computation techniques to neural networks. For example, in the tropical Puiseux polynomial library, we implemented simple algorithms, rather than using more sophisticated computer algebra methods. We believe there is considerable room for improvement in this area, and more efficient algorithms could significantly boost our approach's performance, which we noted in the Discussion section as a future research direction.
Secondly, our experiments mainly focused on smaller networks to allow for clear and understandable analysis. We believe that understanding small networks is helpful for making deep learning more interpretable. However, our code is capable of handling more complex architectures than those used in our experiments. For instance, our symbolic method can process neural networks with inputs of dimension of 28 × 28 (the dimension of inputs in the MNIST dataset) in about 1 minute. This shows that our approach is scalable, contrary to what our smaller experiments might suggest. We have added new tables in the attachment to demonstrate this on various architectures with higher-dimensional inputs (see Table 1). In our revision, we will include these larger-scale results alongside the smaller-scale experiments, as we believe the smaller-scale ones provide a clearer and more accessible proof of concept, allowing us to illustrate the key insights and mechanisms of our approach more easily.
Finally, the reviewer raised an important issue regarding the two methods we outlined for computing linear regions, which yield different results for networks deeper than 4 layers. We appreciate the reviewer for pointing this out and giving us the chance to clarify. The discrepancy is because the numerical algorithm version implemented in our submission allows for disconnected linear regions, while the symbolic method accounts for connectivity. In other words, the numerical algorithm may undercount the number of linear regions for certain networks. We will address this discrepancy in the revision.
We thank the reviewer for their careful reading and for identifying errors in Table 1--3 regarding the computed Hoffman constants. The lower bounds should indeed be below the true Hoffman constants. After reviewing the public code by Peña et al. (2018), we found that it returned incorrect values, leading to the errors. We have since re-computed the values using a brute force method, and the updated tables show that the lower bounds are always below the true values, consistent with the theory.
Regarding upper bounds, Theorem 3.9 (line 214 of our original submission) states that the Hoffman constant is bounded by the inverse of the smallest nonzero singular value of submatrices of $A$. We used a threshold of $10^{-10}$ to determine nonzero values, but found that many small singular values ranged between $10^{-30} \sim 10^{-14}$. This confirms that the singular values provide upper bounds, aligning with the theory. We agree with the reviewer that these bounds can be tightened, and we will mention this as a direction for future research in our revision. This study could also be promising for new results concerning the Stewart--Todd condition measure of a matrix.
We now turn to addressing the questions asked by the reviewer. We summarize the major concerns in subtopics.
### Improving efficiency
We would like to clarify that our claim of improving efficiency refers to the factorial reduction for invariant networks, when compared to naively sampling without taking the symmetry into account. In particular, we do not claim to introduce computational improvements on already existing works, although we believe that the ideas we introduce in this manuscript may also lead to similar improvements for other methods.
How our methods scale and the question of their utility in the context of modern deep learning have been addressed earlier in this response.
(continued in the next official comment)
---
Rebuttal Comment 4.1:
Comment: ### Lack of attribution to prior work
We are confused by the reviewer's claim that ''when the intersection of tropical geometry and machine/deep learning is discussed, zero references were provided'' when these appear in a subsection on related work (lines 39-48 of our submission).
To clarify our intentions on line 352, we are certainly not claiming that this work is the first to analyze expressivity of neural networks using tropical geometric quantities; we explicitly reference previous work in this direction in our manuscript in lines 39-41 (''Tropical geometry has been used to characterize deep neural networks with piece wise linear activation functions, including two of the most popular and widely-used activation functions, namely, rectified linear units (ReLUs) and maxout units.''). However, we are not aware of previous work that provides and implements exact algorithms that compute the total number of linear regions of a network-this is what the sentence on line 352 is stating.
### Lack of precision
We now address the reviewer's comments about the lack of precision in our comparison between our two methods: it is useful to state explicitly that the symbolic method for computing linear regions does already compute the ground truth. As a reminder, this is based on Algorithm 3, which computes the *exact* number of linear regions of a tropical Puiseux rational function. More specifically, the symbolic method converts the neural networks to a tropical Puiseux rational function, and then applies Algorithm 3.
With that in mind, our claim that Table 10 in our original submission is more precise than Table 11 is communicating the fact that **the numerical method (used for Table 11) does not always give the exact answer (because, for instance, some regions might be missed), while the symbolic method does.**
As for missing assumptions, we acknowledge that Theorem 4.3 is missing the assumption that the group is finite.
We left this assumption out because it is implicit in the context (for instance, the statement is clearly implausible in the context of machine learning if it is to be interpreted in terms of cardinal arithmetic). Moreover, we disagree that our definition of $A$-surjective sets introduces any new assumptions on $m$ and $n$. **The definition makes sense for any matrix $A$ (with no assumption that $m < n$) and the claim that the existence of $A$-surjective sets implies that $m < n$ is incorrect: For any non-zero matrix $A$, there exists an $A$-surjective set of row indices. In particular, our use of this notion does not introduce any additional assumptions on the networks we consider.**
---
Rebuttal 5:
Comment: We appreciate the time and effort the reviewer has invested in evaluating our work. However, we were concerned by the tone of the recent exchange, which we believe deviated from the professional and respectful dialogue we value. Our intent in pointing out mathematical inaccuracies was not to discredit the review but to ensure that our work is assessed with the utmost accuracy. We hope to steer the conversation back to a constructive and respectful engagement and remain grateful for the opportunity to further clarify and discuss important aspects of our research.
We also want to clarify that our response was not limited to a single sentence/paragraph in **Questions** section. We noted that there were additional incorrect statements and misunderstandings other than the ones we highlighted in our previous response. Our intention is to address these comprehensively, ensuring that our work is accurately represented and understood. We appreciate your attention to detail and the opportunity to clarify these points.
### Group actions
Our original reading of the section in the review about group actions was that there was some confusion about how group actions are defined in the tropical setting, to which we answered that our group actions are not tropical. Group actions are quite a basic concept, but for clarification, we think it is worth being explicit about what we mean. By a left group action of $G$ on a set $X$, we mean a map $(g, x) \mapsto g \cdot x$ that satisfies the following properties:
1. For any $x$ in $X$ and $g, h$ in $G$, we have $(g h) \cdot x = g \cdot (h \cdot x)$
2. For all $x$ in $X$, we have $1_G \cdot x = x$.
A right group action can be defined in a similar way (see, e.g., the Wikipedia page on group actions). In accordance with standard mathematical terminology, we use the term \emph{group action} to refer to a map $G \times X \to X$ that is either a left or a right action. In particular, we are making \emph{no} assumptions on the group $G$ beyond finiteness. The group $G$ acting on $\mathbb{R}^n$ further induces an action on the finite set of linear regions $\mathcal{U}$. In this case, for a linear region $x\in \mathcal{U}$ and a group element $g\in G$, the action of $g$ sends $x$ to another linear region $g\cdot x$. Thus we are in the case of finite groups acting on finite sets, without more complicated group representation theory involved. In particular, we do not mention group representations since they are not relevant here: in this level of generality, there is no natural way attaching a representation $\rho : G \to \mathrm{GL}(\mathbb{R}^n)$ to the group action $G \circlearrowright \mathbb{R}^n$ as the reviewer seems to be suggesting. It is of course possible to define certain group actions using representations, which gives rise to the class of *linear* group actions, but in our work there is no need to impose this additional restriction since it is not necessary for our results to hold. Given that this is seemingly a source of confusion, we will clarify this in the revision.
### Tables
We apologise for the lack of clarity in Tables 1-6. As explained in Appendix D, we compute the Hoffman constant together with upper and lower bounds for 8 different Puiseux rational functions, and each column of Tables 1-3 corresponds to the outputs given for of these 8 samples (which are randomly generated with different structural parameters for each table). Tables 4-6 provide more detail about compute times, and again each column corresponds to a different sample. We will clarify this in the revision.
### Tropical geometry
The reviewer is philosophically correct in observing that the entire paper could theoretically be rewritten without reference to tropical geometry, however, in the same line of reasoning, *any* mathematical theory could be rephrased starting from set theory. Doing so would diminish the significance of the specific mathematical contributions that tropical geometry brings to our work. Our choice to use the language of tropical geometry was twofold: (i) it provides a unifying and systematic framework for the general study of expressivity of neural networks; and (ii) it builds upon and extends the work of Zhang et al. (2018), where the main contribution of that paper was to provide an alternative perspective to studying neural networks through the lens of tropical geometry. Furthermore, we would like to claim that the use of tropical geometry is in agreement with other well established works in the linear regions literature, and does not obscure the mathematics of our paper.
Finally, concerning the OSCAR implementation, we agree that in principle our code could be reimplemented without using tropical notions. However, one of the four cornerstones of the OSCAR library is precisely the tropical geometry library (building from polymake) and our work opens the door for researchers to apply these very powerful comptuational tools to study neural networks.
---
Rebuttal Comment 5.1:
Comment: ### Effective utility
1. We don't claim the new experiments are large scale, simply that they are larger scale than the original ones.
2. Concerning the scale, we would also like to point out that that other papers in the area work with small, shallow networks, and have large compute times; see e.g., https://arxiv.org/pdf/1810.03370 and https://arxiv.org/pdf/1711.02114.
3. We are confused about the reviewer's comment concerning our symbolic method: ''As I mentioned in my original review, there is no asymptotic analysis nor a statement about the tightness of the bounds. In essence, running the proposed method on a network may or may not reveal something about the network, and we have no way of knowing how accurate the method might be.''
We acknowledge that such considerations are sensible for our numerical method, but these concerns are not relevant to the symbolic method, where, by the very nature of symbolic computation, **our method *does* reveal something about the network, and is *fully accurate*.**
4. While Section 3 does present one of the important contributions of our work, it is certainly not the only contribution we make and while we do appreciate feedback on this portion of the work that occupies 3 out of the 9 allowed content pages, we would like to point out that the entirety of Section 5 is also an important contribution because it presents our symbolic method and how it compares to the numerical one. As mentioned above, there are fundamental differences to the two approaches and while there exist other numerical approaches in the literature, **ours is the first to propose symbolic approaches which essentially mitigate all of the concerns raised by the reviewer** on interpretability and accuracy (which are indeed important questions in numerical computation).
### Lack of attribution
Thank you for your feedback. We would like to respectfully address the comment regarding the absence of references when discussing the intersection of tropical geometry and machine/deep learning. **The statement that ''zero references were provided'' is entirely false**, particularly considering the subsequent clarification that ''no general references on the field are provided.'' We have indeed cited relevant works within this intersection, including on topics beyond expressivity and linear region counting.
There are other references (listed below) around the intersection of tropical geometry and machine learning, but most of these do not directly overlap with the focus of our work, so we are not convinced that including them would be any more helpful than some of the existing references (which the reviewer already raised concerns on in terms of relevance to our work). Two notable ones that we do agree to include are Maragos, Charisopoulos, and Theodosis (2021), which provides a detailed survey of the general field, and Montufar, Ren, and Zhang (2022), which uses tropical geometry to study the linear regions of neural networks with neural networks with maxout units. We would like to point out that among the list below, the paper that we agree to include by Brandenburg et al. (2024) falls under the category of ''contemporaneous work'' by NeurIPS (as it appeared online within 2 months of our submission). If there are other particular papers that the reviewer believes we have overlooked that are directly relevant to our work, we would be grateful for the opportunity to review and incorporate them.
**Additional references:**
Maragos, Charisopoulos, and Theodosis. *Tropical Geometry and Machine Learning*, 2021.
Brandenburg, Loho, and Montúfar. *The Real Tropical Geometry of Neural Networks*, 2024.
Smyrnis and Maragos. *Tropical Polynomial Division and Neural Networks*, 2019.
Montufar, Ren, and Zhang. *Sharp bounds for the number of regions of maxout networks and vertices of minkowski sums*, 2022.
---
Rebuttal Comment 5.2:
Comment: Hello again,
### Group theory
I'm familiar with abstract group theory, group theory on topological spaces, representation and character theory. I appreciate the clarification. As $\mathbb{R}^n$ is a topological space (where standard definitions define a group acting on such a space as $g \cdot f(x) = f(g^{-1}x)$, see Dehmamy et al., 2021 or Cohen et al., 2019), a vector space (where standard definitions include group representations and generally $g \cdot f(x) = \rho(g) f(g^{-1}x)$, see Cohen and Welling, 2017 or Weiler and Cesa, 2019), and a set (where abstract group definitions could potentially suffice), stating that you define group actions in the classical sense is not descriptive, hence why I asked for the clarification. It's still not clear to me how you practically define the group action, but at this point I think at least the theoretical concepts are covered.
### Tropical geometry
I understand the authors perspective, but I disagree. Zhang et al., 2018 was explicitly using concepts from tropical geometry, which is not present in this work.
### Effective utility
1. These considerations were for the numerical method as they explicitly mentioned the Hoffman constant and the point still stands.
2. The technical content of the paper spans pages 3 to 8. Three of those 5 pages are devoted to the numerical method, one page to Section 4, and one page to Section 5. The majority of the technical content is on the Hoffman constant.
### Lack of attribution
The statement is not false. Please read the full statement, printed here for your convenience: "In the introduction, textbooks on tropical geometry are cited as well as papers on the counting of linear regions. However, when the intersection of tropical geometry and machine/deep learning is discussed, zero references are provided, whereas the field has been fairly active in the last 5 years with dozens of highly cited papers". The comment explicitly mentions the introduction and where the intersection of tropical geometry and machine learning is first introduced. To further avoid confusion, I explicitly stated that general textbooks and cited papers are given for each of the fields separately, as a way to explicitly specify what I was referring to. As a final guard to misinterpretation, the following sentence reads "Similarly, in the related works section the final citation has..." which further reinforces that the previous part was not talking about the related works section. I further explained this in my previous response, and the statement is correct, I do not understand why this is such a point of contention. I think the listed works in your response act as adequate general references.
Dehmamy et al., 2021: Automatic Symmetry Discovery with Lie Algebra Convolutional Network
Cohen et al., 2019: A General Theory of Equivariant CNNs on Homogeneous Spaces
Cohen and Welling, 2017: Steerable CNNs
Weiler and Cesa, 2019: General E(2) - Equivariant Steerable CNNs
---
Reply to Comment 5.2.1:
Comment: We would once again like to thank the reviewer for taking the time to reply promptly to our previous responses.
### Group actions
In light of the reviewer's response, we believe there has been a misunderstanding in our convention, and we would like to clarify the following points:
1. In section 4.1, we work with *arbitrary* group actions, in the sense that we are assuming we have been given a group $G$ together with an action map $G \times \mathbb{R}^n \to \mathbb{R}^n$ that satisfies the properties we mentioned in our previous response.
2. In section 4.2, we work with *permutation invariant neural networks*, i.e., networks that are invariant under permutations of their inputs. In other words, we are working with a specific (and standard!) group action of the symmetric group $S_n$ on $\mathbb{R}^n$ called the *natural permutation representation*, given by $\sigma \cdot (x_1, \dotsc, x_n) = (x_{\sigma(1)}, \dotsc, x_{\sigma(n)})$ (i.e., the $i$-th coordinate of $x$ becomes the $\sigma(i)$-th coordinate of $\sigma \cdot x$). This is a linear representation and can thus also be defined in terms of a representation of $S_n$.
We would also like to point out that the formulas the reviewer provided in their response appear to be definitions of *examples* of actions rather than definitions of the *notion* of an action, which is what we are working with. For example, the first formula given by the reviewer appears to be taken from equation (1) in Dehmamy et al. (2021); this defines an *specific* action on the space of functions $f : \mathcal{S} \to \mathbb{R}^n$ where $\mathcal{S}$ is some topological space. Notice that the formula given is also already assuming we are given a group action on the space $\mathcal{S}$.
### Tropical geometry
We acknowledge that the second main contribution by Zhang et al. (2018) is the use of theory from tropical geometry to derive an analytic upper bound on the number of linear regions. The first is a representation of neural networks in terms of tropical rational functions, which, strictly speaking, is defined by tropical algebra (in the sense that tropical algebra is used to define these functions). We are following suit: while our interpretation of neural networks are tropical Puiseux rational maps is tropical algebraic, we maintain that our framework is *tropical*. We would like to add that tropical algebra was first introduced and studied in the context of tropical geometry, rather than as an independent theory (even though since the introduction of tropical geometry, work on tropical linear algebra, for instance, has been well-studied). See Maclagan and Sturmfels, *An Introduction to Tropical Geometry* (2015) and Joswig, *Essentials of tropical combinatorics*' (2021).
### Effective utility
We appreciate the reviewer's observations regarding the lengths of the sections in our paper. While it is true that the symbolic section is shorter than the part on the numerical estimate, it nonetheless constitutes a significant portion of the main technical content and is highlighted as one of the key contributions of our paper in the introduction as well as the conclusion. Therefore, we feel it would be more balanced to give it due consideration in the review. For context, we would also like to note that the reviewer provided detailed feedback on Section 4, which is shorter than Section 5, indicating that length alone should not diminish the importance of the content. | Summary: This study investigates the expressive power of deep fully-connected ReLU networks (or a piecewise linear function) from the perspective of tropical geometry. The number of linear regions gives an estimate of the information capacity of the network, and the authors provide a novel tropical geometric approach to selecting sampling domains among linear regions.
Strengths: - An effective sampling domain is proposed as a ball of radius bounded by Hoffman constant, a combinatorial invariant
- The proposed sampling algorithm is doable and implemented.
Weaknesses: - The proposed algorithms suffer from the curse of dimensionality
Technical Quality: 3
Clarity: 2
Questions for Authors: - The authors mention in the abstract that the number of linear regions is an estimate of information capacity of the network. I need more clarifications, because this fact bridges tropical geometry and machine learning study.
- (minor) l.133: I was a bit confused here. Is the matrix-vector product $Ax$ in the sense of tropical algebra or in the ordinal sense? What does a “vector-inequality” $Ax \le b$ mean?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: it is discussed in Section 6
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their feedback on our work. We are
pleased that the reviewer found our work to be sound and would like to take
this opportunity to respond to the concerns raised.
### Linear Regions as an Estimate of Information Capacity
In choosing the number of linear regions in the domain of a neural network as our measure of its expressivity, we are following the precedent set by seminal existing work, such as those by Zhang et al. (2018), Montúfar et al. (2014), and Raghu et al. (2017). To briefly summarize the motivation, the idea is that by counting these linear regions, we get an estimate of how many classes a neural network could classify in theory if the weights were ‘hand-tuned,’ so to speak. It is also a measure of the model’s flexibility—the more linear pieces there are, the easier it is to approximate smooth functions, in the spirit of Riemann integration, for example.
### Matrix Product Notation
We apologize for any confusion caused by our choice of notation on matrix products. In Section 3.1 on line 133 of our submission, the matrix product refers to classical matrix multiplication as opposed to tropical matrix multiplication. In general, throughout our submission, we did not work with tropical linear algebra. The inequality is used component-wise, so that we are considering the set of points such that each component satisfies the inequality. In practice, this means we are taking the intersection of a set of linear inequalities in the components of x, which is useful for defining a polyhedron.
### Scalability and the Curse of Dimensionality
We now turn to addressing to the weakness of the curse of dimensionality raised by the reviewer. In our work we presented a novel algorithm for using symmetry in the domain to improve sampling efficiency to numerically estimate the linear regions. Indeed, we are aware that our technique suffers from the curse of dimensionality, which we explicitly discussed in the Limitations section. Despite the computational complexity being exponential in the number of inputs, we believe that our contribution still has academic value for two reasons: Firstly, it lays the ground for further investigation in using symmetry to accelerate difficult computations; the arguments concerning the fundamental domain are very general and can be applied to any symmetric neural network. Secondly, while the overall computational complexity may not be improved, the case we investigate in our submission does lead to a factorial improvement in complexity, as it scales with the input dimension, which is a nontrivial improvement.
For our algorithm which computes the symbolic representation of a neural network and uses it to compute the exact number of linear regions, the scaling is much better, allowing up to 784-dimensional inputs. We point out that this achieves the dimension of important and widely-used benchmarking inputs, such as MNIST. The technique already scales well enough to yield interesting results concerning the number of monomials and linear regions, and with further algorithmic improvements and better hardware utilization (which are directions for future research that we identified in our original submission), it would likely improve even further.
Thank you again for your feedback on our work and the opportunity to respond to questions and weaknesses. We hope we were able to clarify the reviewer’s concerns.
**References:** (cited in our original submission)
Liwen Zhang, Gregory Naitzat, and Lek-Heng Lim. *Tropical geometry
of deep neural networks.* In International Conference on Machine Learning
(2018), pages 5824–5832. PMLR (2018).
Guido F. Montúfar, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio. *On the number of linear regions of deep neural networks.* In Advances
in Neural Information Processing Systems (2014), 27 (2014).
Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, and Jascha
Sohl-Dickstein. *On the expressive power of deep neural networks.* In International Conference on Machine Learning (2018), pages 2847–2854. PMLR
(2017).
---
Rebuttal 2:
Comment: Thank you for detailed responses. I will keep my score as is.
> Linear Regions as an Estimate of Information Capacity
> ... by counting these linear regions, we get an estimate of how many classes a neural network could classify in theory if the weights were ‘hand-tuned,’ ...
Thank you for clarifying it. I skimmed Zhang et al. (2018), Montúfar et al. (2014), and Raghu et al. (2017), but none of the authors called it information. Is the number of linear regions equivalent to either Shannon's mutual information (aka channel capacity), the information bottleneck in the rate-distortion theory, or any other information measure? If not, I recommend the authors not to call it 'information' to avoid miscommunication.
---
Rebuttal Comment 2.1:
Comment: Thank you for your continued engagement with our work and for the thoughtful feedback.
To the best of our knowledge, the number of linear regions in a neural network is not equivalent to any of the information measures listed by the reviewer. We understand the potential for miscommunication and will be more careful with our terminology in the future to ensure clarity and precision.
Thank you once again for your valuable input. | Summary: The paper studies the expressivity of neural networks as captured by the number of linear regions using tools from tropical geometry. There are three main contributions, two of which are theoretical and the other is about open source library that allows the analysis of neural networks as Puiseux rational maps.
The first theoretical contribution is that they propose a new approach for selecting sampling domains among the linear regions and the second is a way to prune the search space for network architectures with symmetries.
Prior work on tropical geometry and deep neural nets have analyzed ReLU and maxout units. Contrary to prior works, this work makes an effort to understand the geometry of the linear regions not just their number. To do so the authors propose a way of sampling the domain that leads to more accurate estimates compared to random sampling from the input space, which is a previously used alternative that can result in some missed linear regions and hence in inaccurate estimates about the information capacity of the neural network. This insight about sampling, allows then the authors to reduce the time to estimate the linear regions of special types of neural networks that exhibit some symmetry. This essentially reduces the number of samples needed and they experimentally verify their results.
Finally, the authors release OSCAR an open source library.
Strengths: -connections of neural networks expressibity with tropical geometry, though they have been exploited in the past, are strengthened in this paper
-the paper presents a nice story that leads to faster sampling methods, both in theory and in practice.
Weaknesses: -the main weakness I see in the paper is that, though well-motivated and interesting, it lacks technical depth. For example, there is essentially one main result stated as Theorem 4.3, and some intermediate results stated as Lemma 3.3 and Proposition 3.4. On the one hand, the latter two are simple observations about Puiseux polynomials, and on the other hand the proof of the Theorem 4.3 is not more than 3 lines (as shown in Appendix C). As such, I believe it is a nice transfer of ideas from tropical geometry to neural networks, but given that the connection was already there and used in prior works more than 10 years back, I don't think the better sampling algorithm is solid enough.
Technical Quality: 3
Clarity: 3
Questions for Authors: -Would it be possible perhaps to strengthen the paper by proving depth-width separation results analogous to Telgarsky "Benefits of Depth in Neural Networks" using your techniques?
-Is there some notion of optimality associated with your sampling algorithm? Could it be further improved?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for their feedback on our work. We are happy that the reviewer found the soundness and contributions of our submission to be good; and that the reviewer appreciated the practicalities of our work in relation to existing theory at the intersection of tropical geometry and machine learning.
We now respond to the questions and concerns raised by the reviewer.
### Technical Depth
Our work lies at the intersection of pure mathematics and computer science and we are pleased that all reviewers found our presentation of the background and our contributions accessible and clear. As the reviewer pointed out among the strengths of our work, indeed, a major goal of our work was to adapt an existing theoretical connection to the practical setting to strengthen the current level of understanding of information capacity of neural networks, which we believe that we have achieved. We would like to
further elaborate on several aspects concerning the technical depth of our contributions.
The conciseness of our proofs should not be mistaken for a lack of technical depth, nor should it imply that our results lack utility. We strengthen existing theory by introducing powerful ideas into the established intersection of tropical geometry and neural networks. Two explicit examples are connecting the Hoffman constant and machine learning inductive biases to tropical geometry, which we have explicitly studied with symmetric group actions.
The technical depth and impact of our work is further demonstrated by its implications and applications. Our geometric characterization of linear regions, for example, not only advances theoretical understanding but also has practical implications for improving computational efficiency in neural network analysis. The integration of our methods with the OSCAR system broadens the scope of symbolic computation in neural network research. In particular, the tools we provide can be used to analyze the tropical expressions of concrete neural networks; it also paves the way for the application of other computational tools and theory from the rich field of tropical geometry. For instance, this allows for the computation of some standard measures of complexity, such as the number of non-redundant monomials.
We hope this clarifies the depth and significance of our contributions. We are confident that our work meets the high standards of technical rigor and innovation expected in the field, and we are grateful for the opportunity to further elaborate on these points.
### Depth–Width Separation
The behavior of depth–width separation is not within the scope of interest of our work, so we did not explore this question of depth–width separation theoretically, though upon further reflection on this relationship in order to respond to the reviewer’s question on this relationship, we conclude that our work may be able to offer some empirical interpretation of the depth–width separation, since the number of non-redundant monomials gives a concrete limit on what functions may be represented. Empirically, from the perspective of our methods, it is 'easier' to increase width than depth, since increasing the latter can lead to a significant increase in the number of monomials that appear and thus to more complicated functions. We computed the number of monomials (after removing redundant ones from the numerator and the denominator) in the tropical expression of random neural networks with the same number of hidden units. As expected, the deeper networks have more monomials than the shallow ones; see Table 2 attached.
### Optimality of Algorithms
For our sampling algorithm, we con confirm its optimality in two aspects:
1. Sampling from the fundamental domain (Algorithm 2 of our original submission) gives an estimation of the true value of number of linear regions. We further have tightness of the upper bound and lower bound of the sampling ratio (Theorem 4.3 of our original submission, line 242). In fact, in experiments, we always obtain a 100% sampling ratio (Figure 1 of our original submission), which means the sampling algorithm outputs the true value.
2. The computation of Hoffman constant gives an estimation of the smallest sampling radius (Proposition 3.4 of our original submission, line 174), which further gives the smallest (optimal) sampling box that hits all linear regions. In other words, it gives the optimal region for our Algorithm 4 to be theoretically correct.
We thank the reviewer once again for the helpful feedback provided and
hope that we were able to adequately address all concerns.
---
Rebuttal 2:
Title: post-rebuttal reviewer comment
Comment: The reviewer appreciates the author's response. The reviewer has read their comments and reread parts of the paper but still maintains the score unchanged. The reviewer would rewrite parts of the paper to highlight the novelty of the proofs, the proofs themselves and how they were significantly different from prior works, and finally the author's suggested connections to depth-width tradeoffs.
---
Rebuttal Comment 2.1:
Comment: Thank you very much for your thoughtful response and for taking the time to revisit our paper in light of our comments.
In our revision, we will ensure that the novelty of our proofs, their distinction from prior works, and the connections to depth-width tradeoffs are further emphasized.
One of the key contributions of our paper is demonstrating the computational feasibility of using the tropical form of a neural network to gain deeper insights into the geometry of linear regions and the complexity of the resulting tropical expression. **We noticed that the review has not yet mentioned this aspect**, and we would greatly appreciate any feedback you could provide on our symbolic computations for linear regions and tropical forms, as **we consider this to be a central component of our work**.
Thank you once again for your time and constructive feedback. | Summary: This work provides a geometric characterization of the linear regions in a neural network via the input space. Although linear regions are usually estimated by randomly sampling from the input space, stochasticity may cause some linear regions of a neural network to be missed. This paper proposes an effective sampling domain as a ball of radius R and computes bounds for the radius R based on a combinatorial invariant known as the Hoffman constant, which gives a geometric characterization for the linear regions of a neural network. Further, the paper exploits geometric insight into the linear regions of a neural network to gain dramatic computational efficiency when networks exhibit invariance under symmetry. Lastly, the paper provides code for converting trained and untrained neural networks into algebraic symbolic objects, useful for precisely the kinds of analysis this paper performs.
Strengths: 1. The authors present an interesting and novel way to analyze the capacity of a neural network using fundamental notions from tropical geometry.
2. The paper and theory were very clearly presented. In terms of writing, the presentation of the relevant tropical geometry for purposes of Hoffman constant estimation Section 3 was excellent.
Weaknesses: 1. My greatest concern in this paper stems from the experiments for upper and lower bound estimation for Hoffman constants given in Tables 1, 2, and 3 in the appendix. It seems the experimental upper and lower bounds computed there do not actually bound the true Hoffman constant. I understand that the upper bound may be loose due to the way it is estimated, but the lower bound should always be below the true Hoffman constant, as per my understanding. Yet for, say, the first of eight computations in table 1, the lower bound $H_L$ is $0.5460$, which is clearly above the true value of $H$, given to be $0.3298$. For that example, the upper bound $H^U$ is given to be $0.2081$, which is clearly not above the true value. This pattern continues, and the lower and upper bounds for the Hoffman constants seem to fluctuate somewhat arbitrarily around the true Hoffman constants, which is concerning. I am currently assuming that there is some kind of mistake with these experimental values and would like a clarification from the authors regarding this.
2. Due to the curse of dimensionality, this method for estimating the expressiveness of neural networks can only be applied to simple neural networks in practice. This is very apparent due to the way the numerical approach requires sampling on a mesh grid in an $n$-dimensional box (but is also true for the symbolic approach, that relies on the computation of the Puiseux rational function associated with a neural network, which becomes increasingly quite hard in higher dimensions). To the credit of the authors, they are up-front about this limitation, but this does significantly hinder the applicability of the presented results.
Technical Quality: 2
Clarity: 4
Questions for Authors: My questions to the authors are listed below:
1. Why do the computed upper and lower bounds for the Hoffman constants in Tables 1, 2, and 3, seem wrong? Is there a mistake with this computation, and if so, can you provide the correct tables so that I may judge how tight/loose the bounds are? I elaborated on this in the first weakness of the "Weaknesses" section above.
### Additional Comments and Minor Corrections
The writing in this paper is quite good and the material is very clearly introduced. Nonetheless, I would like to give a non-exhaustive list of minor corrections below:
L103, 105: Please use \citet for in-text citations. This happens several times, but I will only mention it once.
L349: "to interpret and analyzed" -> "to interpret and analyze"
Confidence: 4
Soundness: 2
Presentation: 4
Contribution: 3
Limitations: The authors adequately discuss the limitations of their work in Section 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their careful and thoughtful reading
of our work. We were pleased to read that the reviewer found our work
novel and interesting, and that we were able to communicate and present
the concepts and our contributions clearly. We especially appreciate that
important issues on existing literature was raised via the reviewer’s questions,
to which we now respond.
### Aim and Proof of Concept
Firstly, we would like to clarify that the primary aim of our paper was
to present a proof of concept rather than to develop highly optimized code
for large-scale stress tests. Our focus was on establishing the theoretical
foundations and demonstrating the feasibility of our approach. Consequently,
we made no significant effort to optimize certain parts of the code for perfor-
mance; this choice was made in order to demonstrate the direct applicability
of established symbolic computation techniques to study neural networks.
For example, the tropical Puiseux polynomial library, we utilized a direct
implementation and relatively straightforward algorithms. We believe that
there is substantial potential for improvement in this area, and more efficient
algorithms could significantly enhance the performance of our approach,
which we mentioned in the Discussion section of our submission for future
research directions.
### Scalability and Higer-Dimensional Inputs
Secondly, our experiments primarily focused on smaller networks to
facilitate clear and comprehensible analysis. Nevertheless, the code we
developed is capable of handling higher-dimensional cases than those presented
in our experiments. For instance, our code can process networks with input
dimension 28 × 28 (MNIST dimension) in roughly 1 minute. This capability
demonstrates that our approach is in fact scalable, contrary to what our
illustrative experiments might have suggested. In our revision, we plan to
include these larger scale results alongside the smaller scale experiments,
though we believe that the smaller scale ones provide a clearer and more
readily accessible proof of concept in the sense that small networks allow us
to more easily illustrate the key insights and mechanisms of our approach.
We do appreciate the importance of these concerns and hope that we
have addressed them adequately. We do agree with the reviewer that with
further optimization and development, the techniques we proposed could be
applied to more complex and larger-scale neural networks.
### Hoffman Constants
We are very grateful to the reviewer for their careful reading of our work
and especially for pointing out the mistakes in Table 1–3 concerning the
computed Hoffman constants. Indeed, the lower bound should always be
below the true Hoffman constants. Upon further careful investigations of the
public code we used to compute the true Hoffman constants due to Peña et
al. (2018), we find that the reviewer was indeed correct that the values
we reported were erroneous because the public code does not always return
correct value. To complete our experiments, we then tested examples without
using the code of Peña et al. (2018), and instead implemented a brute
force computation to find the maxima over all submatrices. We have provided
updated tables for these experiments and these new results show that the
lower bounds are always below true values, which is consistent with the
theory.
With regard to upper bounds, Theorem 3.9 (line 214) in our original
submission shows that the Hoffman constant of a matrix $A$ is bounded by
the inverse of the smallest nonzero singular value of $A_J$ , where $J$ ranges over
all rows of $A$. In our implementations, to determine a nonzero value, we
set the threshold to be $10^{−10}$. However, it seems that a value of $10^{−10}$ is so
strict that in fact many small singular values are ruled out. After revisiting
our computations and listing all the singular values, we found that many
small singular values have magnitude between $10^{−30} ∼ 10^{−14}$. Thus the
singular values indeed provide upper bounds, in practice, so the theory is
again consistent. In practice, however, in agreement with the reviewer, we
also acknowledge that the upper bounds can be tightened and in the revision,
we will mention this as a noteworthy direction of future research. We believe
that such a study could also have positive implications on the Stewart–Todd
condition measure of a matrix.
### Minor Comments and Corrections
We appreciate the few typographical and formatting errors flagged by the
reviewer and will commit to careful editing and proof-reading in our revision.
Thank you once again for your valuable feedback.
**References:** (cited in our original submission)
Javier F. Peña, Juan C. Vera, and Luis F. Zuluaga. *An algorithm to compute the Hoffman constant of a system of linear constraints.* arXiv preprint
arXiv:1905.06366 (2019).
---
Rebuttal Comment 1.1:
Title: Rebuttal Response
Comment: I would like to thank the authors for providing a rebuttal. My concerns regarding the Hoffman constants have been somewhat assuaged, but I am surprised that the code the authors were using from prior work was wrong and that this was caught only during the review process. Out of an abundance of caution, I would like to maintain my reject rating. I believe this work has some merit, but greater care needs to be taken when computing bounds. I hope that the authors will improve their work and produce a higher quality resubmission.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to carefully review our work and for providing your thoughtful feedback. We appreciate your acknowledgment of the improvements we’ve made in addressing the concerns regarding the Hoffman constants. While we respect your decision to maintain your initial rating, we believe it is important to note that despite satisfactorily addressing the concerns raised, this has not influenced the final evaluation. This process and outcome diminishes the impact that constructive dialogue is intended to have on the review process.
This being said, we have found this process to be an invaluable opportunity to further refine our research, and we remain committed to making any necessary improvements. We sincerely thank you again for your constructive comments and the time invested in reviewing our work and engaging in discussion. | Rebuttal 1:
Rebuttal: We would first and foremost like to thank all the reviewers for their time invested in reading and providing thoughtful feedback on our work. We were pleased to find that they found the work well-written and clearly presented, and that they found the intersection of tropical geometry with neural networks to be interesting and insightful in assessing network information capacity.
We appreciate that our contribution straddles quite different areas of mathematics and computer science, thus, we would like to take this opportunity to reiterate the implications of our work, and especially to emphasize their novelty and highlight their importance. Existing work by Zhang et al. (2018) establishes the equivalence between feedforward neural networks with ReLU activation and tropical rational functions; this theoretical connection was then used to give an analytic upper bound on the number of linear regions of a network. The number of linear regions of a neural network is an important quantity that has been well-studied in the machine learning literature that
provides a measure of the network’s information capacity. Our work builds on the earlier work of Zhang et al. (2018) to adapt these concepts to practicality, for instance, where the number of linear regions is expected to evolve during training. One of our main contributions is an efficient numerical method to compute this quantity and thus better understand how training affects the information capacity of a neural network. Furthermore, by considering not just the number of linear regions but also their geometric characteristics that can be described algebraically (as in the spirit of inductive biases in geometric deep learning), we are able to further increase the computational efficiency of our numerical approach to understanding information capacity.
A question that was common to several reviewers asks about the scalability and curse of dimensionality of our work. We understand these these concerns pertain to the applicability of our contributions, given that our numerical experiments were carried out on rather small networks. Our reasoning for these proof-of-concept-scaled experiments was to highlight our third contribution, which is a practical connection between neural networks and not just tropical geometry, but also the vast existing literature and codebases on symbolic computation in computational algebraic and polyhedral geometry, in the sense that, thanks to our theoretical contributions, these can be directly and immediately applied to study neural networks (specifically, OSCAR was used in our implementations). In response to the question on effective utility given the smaller scale of our experiments: Firstly, we acknowledge that there exist many computational improvements that can be implemented to the existing computer algebra software to better understand neural networks, such as parallelization and GPU execution; this is a promising and important line of future research (mentioned in the Discussion section of our submission). To the best of our knowledge, ours is the first work to connect neural networks to computer algebra software. Secondly, we would like to point out that our methods do indeed scale to the input dimensionalities of common and well-known benchmarks, such as MNIST. Additional experimental results are provided in Table 1 attached.
**References:** (cited in our original submission)
Liwen Zhang, Gregory Naitzat, and Lek-Heng Lim. *Tropical geometry
of deep neural networks.* In International Conference on Machine Learning
(2018), pages 5824–5832. PMLR (2018).
Pdf: /pdf/9bc604ef2839de074ccafad29a237a7644181e56.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Meaningful Learning: Enhancing Abstract Reasoning in Large Language Models via Generic Fact Guidance | Accept (poster) | Summary: The study introduced a specialized dataset and learning approach that aimed to enhance LLMs' use of generic facts in reasoning. The results indicate that this methodology not only improved their general reasoning skills but also significantly developed their abstract reasoning abilities, suggesting a shift from mere memorization to a deeper, more nuanced understanding and application of information. The paper is well composed and demonstrates good, consistent performance across several benchmarks.
Strengths: - the concept of AbsAcc is interesting and natural, as demonstrated by the experiments in Table 1, where the small difference between vanilla accuracy and AbsAcc for human subjects highlights that vanilla accuracy alone may not be an adequate metric for assessing abstract reasoning.
- the paper is well-composed and straightforward to comprehend.
Weaknesses: - the experiments currently focus on earlier versions of LLMs, including LLaMA-2 and GPT-3.5-turbo-0125. It is advisable to also include more recent models like LLaMA-3 and GPT-4/4o to ensure the consistency of the performance evaluations.
- some of the terminology used appears complex; for instance, in the memory learning section, the distinction between Knowledge and Reasoning examples (K-example and R-example) seems minimal. In the K-example, when a user poses a question, a supporting fact is cited while the R-example not. Additionally, I'm confused by the definition of abstract reasoning. It looks akin to tasks in commonsenseQA (https://aclanthology.org/N19-1421.pdf). The authors might need to elucidate the differences between their approach and that body of research.
Technical Quality: 3
Clarity: 4
Questions for Authors: - see the questions in weakness section
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: - some of the terms mentioned in the paper maybe a little bit fancy and overestimated, please refer to the weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your insightful comments, the following is the detailed response:
# 1. More experiments (Weakness 1)
We have conducted experiments on LLaMA-3 which is shown in Table 5, our MeanLearn can improve the performance of LLaMA-3 on both vanilla accuracy and AbsAcc. We do not test GPT-4/4o is because: (1) our main focus is to improve small LLMs, which is also crucial and have a broader range of application. They are preferred for deployment on devices like cell phones due to their lower resource demands. (2) GPT-4o is released about a week before the deadline of NeurIPS 2024, leaving insufficient time to conduct experiments with GPT-4o. Furthermore, we are glad to offer more exploration in the future.
# 2. Some of the terminology used appears complex (Weakness 2)
Your understandings about K-example and R-example are correct. While for abstract reasoning, it is not a specific reasoning type. We can compute the performance of abstract reasoning for any task or benchmark, not just CommonsenseQA. For example, in math problems, abstract reasoning can help estimate the ability of LLMs to use the generic fact “x + y = y + x” to solve a series of problems in various scenarios that rely on this fact. While in commonsense problems, it can help estimate the ability of LLMs to use the generic fact “acid is corrosive” to solve a series of problems in various scenarios that rely on this fact.
---
Rebuttal Comment 1.1:
Title: Follow-up Rebuttal
Comment: ### Dear Reviewer Fwhs:
We appreciate your thoughtful review. As the rebuttal deadline approaches, we kindly ask if our responses have sufficiently addressed your concerns. If you require further clarification, we are prepared to provide additional information.
Sincerely,
Authors
---
Rebuttal Comment 1.2:
Comment: Thanks for the clarification. I have looked through all the reviews and rebuttals and I will keep my rating positive for this paper. | Summary: This paper explores the abstract reasoning abilities of LLMs by creating a specific evaluation metric and a dataset called AbsR, developed using GPT-4. It presents a method, Meaningful Learning (MeanLearn), to improves both general and abstract reasoning accuracies of LLMs by teaching them to apply generic facts in varied scenarios.
Strengths: - Proposing a dataset to study the potential of LLMs in abstract reasoning.
- Proposing MeanLearn to improve the abstract reasoning capabilities of LLMs by teaching them to use generic facts for reasoning.
Weaknesses: - The paper is rushed, and not well organized. For instance, Some experimental results are presented in section 2, and others appear later in section 5 and after.
- The methodology section is half a page. More details on the model is needed. For instance, in Eq. 3, how the first and second equations will be utilized in the model.
- Incomplete related work section, specifically, insufficient related work on Abstract Reasoning.
- Some experimental results are in section 2: Abstract reasoning study, some later in section 5: Experiments. In section 2, the paper jumps into some results and experiments too early and without proper background.
- Experiments are limited. For example, it is unclear how the methods compare to [1]
- [1] Xu, Yudong, et al. "LLMs and the Abstraction and Reasoning Corpus: Successes, Failures, and the Importance of Object-based Representations." Transactions on Machine Learning Research.
Technical Quality: 2
Clarity: 1
Questions for Authors: - How the methods introduced in the paper can be compare to [1]?
- The paper states that "Our findings reveal a substantial discrepancy between their general reasoning and abstract reasoning performances". However, based on [2], LLMs "know" more than they can "say". Please justify how your statement remains valid.
- Considering that MeanLearn does not have many advantages over LLaMA-3, how do you justified the use of MeanLearn?
- [1] Xu, Yudong, et al. "LLMs and the Abstraction and Reasoning Corpus: Successes, Failures, and the Importance of Object-based Representations." Transactions on Machine Learning Research.
- [2] Li, Kenneth, et al. "Inference-time intervention: Eliciting truthful answers from a language model." Advances in Neural Information Processing Systems 36 (2024).
Confidence: 5
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: - Lack of sufficient experiments.
- Limited related work, and background sections.
- A thorough writing revision would be beneficial.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to clarify some critical misunderstandings in the preliminary review, which may have negative impacts on your assessment of our contributions.
# 1. The paper is rushed (Weakness 1)
Both sections 2 and 5 contain results, and this arrangement is intentional and not rushed. As in many prior works [1][2], the results in section 2 are preliminary and are used to demonstrate our motivations and introduce our following method. Meanwhile, the results in section 5 are related to our proposed methods and baselines. Thus, the results in sections 2 and 5 are independent and serve different purposes within our paper.
[1] Zhang et al., 2023: Automatic Chain of Thought Prompting in Large Language Models
[2] Jin et al., 2024: The Impact of Reasoning Step Length on Large Language Models
# 2. The methodology section is half a page (Weakness 2)
Due to the page limit, we are trying to put more important content in the main paper. To accommodate as much as experimental results, we describe our approach as concisely as possible (More details can refer to Appendix C and D). It is worth noting that our method has two sections, the construction method of AbsR dataset (section 3) and MeanLearn pipeline (section 4), a totally of 1.5 pages.
In Eq. (3), the first equation describes using input (X) to autoregressively model output (Y), the second is using X and generic fact r to autoregressively model Y, which is usually adopted by current LLMs posting-training works [3][4].
[3] Orca 2: Teaching Small Language Models How to Reason
[4] WizardLM: Empowering Large Language Models to Follow Complex Instructions
# 3. Insufficient related work on Abstract Reasoning (Weakness 3)
To the best of our knowledge, this is the first work to highlight the importance of abstract reasoning in the field of natural language processing. Thus, we include the most similar research directions: “Reasoning with LLMs” and “LLMs-as-annotators” in the related work section. Due to page limitations, we have condensed the related work. These sections will be expanded in the revised version.
# 4. Comparison to paper [5] (Weakness 4 and Question 1)
We believe there may be some critical misunderstandings concerning our paper and your referred paper. A comparison with this reference is unnecessary because: (1) it focuses on the reasoning on images or symbols, whereas our work is centered on abstract reasoning in natural language; (2) the paper does not propose a new method but merely presents an empirical study using ChatGPT and LLaMA on image tasks.
[5] LLMs and the Abstraction and Reasoning Corpus: Successes, Failures, and the Importance of Object-based Representations
# 5. The statement in our paper "Our findings reveal a substantial discrepancy between their general reasoning and abstract reasoning performances" and "LLMs know more than they can say" in [6] (Question 2)
These two statements indeed do not conflict. Most researches follow the conventional evaluation process to directly evaluate LLMs based on their outputs, quantifying their reasoning abilities. Our proposed abstract reasoning metric AbsAcc based on cognitive theory, and our results show a substantial disparity between general and abstract reasoning. On the other hand, the statement “LLMs know more than they can say” may be derived from the domain of interpretability. It concerns the LLMs’ inference processes on tasks, which may not be in a form understandable to humans, leading to performance loss when translating these processes into natural language. The focal points of our work and the work you referred to are distinct, each addressing different aspects of LLM capabilities and behaviors.
[6] Inference-time intervention: Eliciting truthful answers from a language model.
# 6. the effectiveness of MeanLearn (Question 3)
The benefits of our method MeanLearn do not diminish, but it is LLaMA-3 may need more data compared with other base models. The reason is because LLaMA-3 breaks conventional data scaling laws [7] by achieving a token-to-parameter (T/P) ratio of 1875:1, far surpassing the 286:1 ratio of LLaMA-2. This results in dense knowledge in the parametric memory of LLaMA-3. However, to ensure fair comparisons across different LLMs, we train MeanLearn based on LLaMA-3 with the same dataset size as the other LLMs. Furthermore, MeanLearn achieved satisfied improvements on LLaMA-2 and Orca-2.
[7] Pearce and Song, 2024. Reconciling Kaplan and Chinchilla Scaling Laws
---
Rebuttal Comment 1.1:
Title: Follow-up Rebuttal
Comment: ### Dear Reviewer zthW:
We appreciate your efforts in the review process. As the rebuttal deadline approaches, we kindly ask if our responses have sufficiently addressed your concerns. If you require further clarification, we are prepared to provide additional information.
Sincerely,
Authors
---
Rebuttal 2:
Comment: Thanks for your reply. We want to clarify the critical misunderstandings in your response:
## 1. Q6 (the author should conduct experiments on LLaMA-3-13B) in the response
**We do not incorporate LLaMA-3-13B because there is NO 13B version of LLaMA-3.** (refer to https://huggingface.co/collections/meta-llama/meta-llama-3-66214712577ca38149ebb2b6) The second smallest LLaMA-3 is 70B. We indeed do not have the resources for training and inference. By the way, our MeanLearn method is effective on LLaMA-2-7B, LLaMA-2-13B, LLaMA-3-8B, Orca-2-7B, Orca-2-13B, and Mistral-7B (as mentioned in response to Reviewer XVL7, and you can refer to the table below).
| Method | AbsR | Comm. | MMLU | RACE | * Average* |
| ---------------------- | :---: | :---: | :---: | :---: | :---------------: |
| ***Vanilla Accuracy*** | | | | | |
| Mistral-7B-Instruct | 83.00 | 66.53 | 57.02 | 75.33 | 70.47 |
| MeanLearn (Ours) | 84.50 | 74.56 | 58.28 | 76.65 | **73.50 (+3.03)** |
| ***AbsAcc*** | | | | | |
| Mistral-7B-Instruct | 72.92 | 58.87 | 40.19 | 52.69 | 56.17 |
| MeanLearn (Ours) | 73.96 | 68.59 | 41.14 | 55.77 | **59.87 (+3.60)** |
About the performance on LLaMA-3-8B, we have offered some clarification in the rebuttal period, which is summarized as follows:
* We have improvements on LLAMA-3-8B;
* LLaMA-3 may need more data compared to other base models. The reason is that LLaMA-3 breaks conventional data scaling laws [1] by achieving a token-to-parameter (T/P) ratio of 1875:1, far surpassing the 286:1 ratio of LLaMA-2. This results in dense knowledge in the parametric memory of LLaMA-3. However, to ensure fair comparisons across different LLMs, we trained MeanLearn using LLaMA-3 with the same dataset size as the other LLMs.
* Although incorporating more data to train LLaMA-3 is unfair for the comparisons with other LLMs, we are interested in adopting more data to train LLaMA-3 in the future. Due to time limits and computational cost, we do not provide it in the rebuttal.
## 2. Q5 in the response
* For your referred paper[2], we do not incorporate it for comparison because: (1) it is a paper of abstraction AND reasoning on **Visual Inputs** and (2) it is an **empirical study** without proposing new methods or datasets. While we focus on abstracting reasoning on **Natural Language**.
* For the baseline proposed by Reviewer vYe8, we do not incorporate tree-of-thought (tot) for comparison because: tot focuses on the **decoding stage**, while we focus on the **post-training stage** to enhance the fundamental capabilities, they are **complementary**. We adhere to conventional standards by selecting baselines of similar types and scales to ensure a fair evaluation.
* Both Reviewers XVL7 and Fwhs think the soundness of our work is good.
[1] Pearce and Song, 2024. Reconciling Kaplan and Chinchilla Scaling Laws
[2] Xu, Yudong, et al. "LLMs and the Abstraction and Reasoning Corpus: Successes, Failures, and the Importance of Object-based Representations." Transactions on Machine Learning Research.
Title: Replying to Official Comment by Reviewer zthW | Summary: This paper introduces a novel framework aimed at enhancing the abstract reasoning capabilities of large language models (LLMs) through a method called "Meaningful Learning." It specifically targets the challenge LLMs face in abstract reasoning despite their robust general reasoning abilities. The authors identify a notable gap in performance between general and abstract reasoning tasks and propose a structured approach to narrow this gap by using AbsR, a tailored dataset that includes generic facts coupled with guided explanations to foster deeper learning and understanding.
Key contributions include:
1. Introduction of the Meaningful Learning framework for improving abstract reasoning in LLMs.
2. Development of the AbsR dataset for training.
3. New metrics and Empirical evaluation across different settings
Strengths: 1. Novel approach to improving abstract reasoning in LLMs through generic fact guidance.
2. Comprehensive evaluation across multiple settings demonstrating reasonable performance improvements.
Weaknesses: 1. Limited scale: The experiments are conducted on relatively small models (7B-13B parameters) compared to state-of-the-art LLMs. Without such it's hard to judge such method's applicability in real world and more complex tasks as mentioned in 5.1.
2. Human evaluation scale: The human evaluation of the AbsR dataset is conducted on a relatively small sample (200 instances) (Appendix E).
3. Lack of comparison to more recent reasoning techniques: The paper doesn't compare MeanLearn to recent advances in LLM reasoning capabilities.
4. Weaknesses in Evaluation Metrics: It's not convincing that Perplexity-based evaluation for classification tasks is good and the use of AbsAcc needs more clarification.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. How did you determine that perplexity-based evaluation was the best approach for classification tasks, given its limitations mentioned in Appendix A? Were other methods considered?
2. Have you considered comparing MeanLearn to more recent reasoning techniques, such as tree-of-thoughts or other advanced prompting methods
3. The paper mentions that "MeanLearn can make LLMs implicitly learn generic facts and solve problems under the guidance of generic facts" (Section 4). Can you provide more concrete examples or analysis of how this implicit learning occurs?
4. How do you envision MeanLearn scaling to larger models (100B+ parameters), and what challenges or benefits do you anticipate in applying this method to more advanced LLMs?
5. how do you ensure that the AbsAcc metric is truly capturing abstract reasoning abilities rather than other factors like memorization or pattern matching?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your insightful comments, the following is the detailed response:
# 1. Limited scale (Weakness 1 and Question 4)
## 1.1 Limited scale
Due to limited computational resources, we conduct our experiments on small LLMs (7B-13B). It is worth noting that our main focus is also to improve small LLMs, which is also crucial, as evidenced by considerable research focus in this area [1][2]. Small LLMs have a broader range of applications and are preferred for deployment on devices like cell phones due to their lower resource demands. Despite their wider applicability, small LLMs are significantly weaker than their larger counterparts, underscoring the importance of prioritizing their improvement.
[1] Orca 2: Teaching Small Language Models How to Reason
[2] WizardLM: Empowering Large Language Models to Follow Complex Instructions
## 1.2 Applying MeanLearn to 100B+ LLMs
We are attempting to discuss to apply our method to 100B+ LLMs, but it is out of the scope of this paper. Applying MeanLearn to 100B+ LLMs would presents both challenges and benefits:
Challenges: (1) Resource Requirements: the post-training of such large LLMs demands additional and more stable computational resources to manage the high costs associated with their scale; (2) Data Demands: according to the data scaling law [3], there is a significant need for extensive training data to achieve adequate coverage and ensure the generalization capabilities of these large models. Acquiring such large datasets is often costly.
Benefits: (1) Enhanced Learning Abilities: larger LLMs possess stronger learning capacities, enabling them to more effectively comprehend and assimilate the knowledge presented to them; (2) Superior Performance: these models typically outperform smaller LLMs, offering better overall performance in tasks due to their advanced capabilities.
[3] Pearce and Song, 2024. Reconciling Kaplan and Chinchilla Scaling Laws
# 2. Human evaluation scale (Weakness 2)
We mainly follow previous works [4] [5], which conduct human evaluation on 100-200 instances. We choose 200 instances for evaluation, which is a balance between maintaining evaluation quality and managing labor costs.
[4] Du et al., e-CARE: a New Dataset for Exploring Explainable Causal Reasoning
[5] Ying et al., Intuitive or Dependent? Investigating LLMs' Robustness to Conflicting Prompts
# 3. Lack of comparison to more recent reasoning techniques (Weakness 3 and Question 2)
We adhere to conventional standards by selecting baselines of similar types and scales to ensure a fair evaluation. Our focus is on enhancing the fundamental capabilities of LLMs through post-training techniques. In contrast, methods like Tree-of-Thought (ToT) primarily target improvements during the decoding stage, which complements our proposed method.
# 4. Perplexity-based evaluation (Weakness 4 and Question1)
We align with the evaluation criteria established by prior research [6][7] and leaderboards [8][9], adopting perplexity as our evaluation metric. The advantages of employing perplexity are twofold:
* Clarity in Evaluation: evaluation with perplexity can extract an answer for each example, while generation-based evaluation is hard to do this, since LLMs might either refuse to provide an answer or treat any option as valid;
* Efficiency: Calculating perplexity requires only a single forward pass through the LLMs, making it more cost-efficient compared to generation-based methods, which require text to be generated in an autoregressive manner. This streamlined process enhances both the speed and the resource efficiency of the evaluation.
[6] Sun et al., 2024: A Simple and Effective Pruning Approach for Large Language Models
[7] Zhao et al., 2024: Deciphering the lmpact of Pretraining Data on Large Language Models through Machine Unlearning
[8] OpenCompass, 2023: OpenCompass: A Universal Evaluation Platform for Foundation Models
[9] Huggingface 2024: Open LLM Leaderboard
# 5. About AbsAcc (Weakness 4 and Question 5)
Theoretically, AbsAcc offers a more reliable measurement of abstract reasoning. For each generic fact, we have multiple examples to test LLMs, effectively reducing the influences of memorization and pattern matching. Ideally, increasing the number of examples for each generic fact would enhance the reliability of AbsAcc. We do not incorporate more examples per generic fact for the balance between cost and effectiveness. Please note that, our method is scalable with respect to the number of examples for each generic fact.
# 6. Examples or analysis of of the implicit learning process (Question 3)
Humans can apply the grasped generic fact to solve problems in different scenarios. For example, in Figure 1 of our paper, if humans grasp “acid is corrosive”, they can deduce “rock dissolved” and “the skin suffers pain” when respectively given “adding rock into hydrochloric acid” and “acid touches human skin”. MeanLearn is designed to imitate the above meaningful learning process of humans.
As for analysis, this question is indeed interesting and merits exploration. However, it falls outside the scope of the main contributions of our work. Our focus is not on demonstrating the implicit learning processes within a black-box LLM, as that pertains more to the field of interpretability. Addressing this would require a significantly different research approach and substantial additional effort.
---
Rebuttal 2:
Title: Follow-up Rebuttal
Comment: ### Dear Reviewer vYe8:
We appreciate your thoughtful review. As the rebuttal deadline approaches, we kindly ask if our responses have sufficiently addressed your concerns. If you require further clarification, we are prepared to provide additional information.
Sincerely,
Authors
---
Rebuttal Comment 2.1:
Comment: Thanks for the clarification. I have looked through all the reviews and am less concerned about the evaluation; Yet I think there is still a lack of comparison with existing post-training methods which should have been detailed in the related work section, I will keep my rating neutrally positve for this paper.
---
Rebuttal 3:
Comment: Thanks for your response, we are glad to have addressed your concerns in your preliminary review, and we want to kindly remind that:
Actually, **in Table 5 of our paper, we have provided the post-training baselines** (Vicuna [1], WizardLM [2], Orca-2 [3]) , and you can refer to the table below (the average performance on 7B models), **we have varying advantages over these methods on both vanilla accuracy and AbsAcc**.
| Method | *Average* |
|------------------|:-------:|
| ***Vanilla Accuracy*** | |
| Vicuna | 56.48 |
| WizardLM | 31.28 |
| Orca-2 | 61.84 |
| MeanLearn (Ours) | **65.28** |
| ***AbsAcc*** | |
| Vicuna | 39.84 |
| WizardLM | 20.42 |
| Orca-2 | 48.86 |
| MeanLearn (Ours) | **53.39** |
If you have any further concerns, feel free to leave comments.
[1] https://lmsys.org/blog/2023-03-30-vicuna/
[2] WizardLM: Empowering Large Language Models to Follow Complex Instructions
[3] Orca 2: Teaching Small Language Models How to Reason | Summary: This paper addresses the challenge that LLMs face in abstract reasoning, where they often struggle to apply general facts to new situations despite their impressive performance in other reasoning tasks. To tackle this issue, the authors introduce an abstract reasoning dataset called AbsR, which incorporates generic facts and guided explanations to teach LLMs how to leverage such facts for reasoning. They also propose a learning paradigm named Meaningful Learning (MeanLearn) that simulates the human process of implicit knowledge acquisition, enabling LLMs to implicitly learn and apply generic facts without explicit input. Through experiments on various out-of-distribution reasoning and language understanding benchmarks, the paper demonstrates that MeanLearn improves the general and abstract reasoning capabilities of LLMs, moving beyond simple memorization towards a more nuanced understanding and application of knowledge.
Strengths: 1. Useful Resource: This paper introduces an abstract reasoning dataset (AbsR) that provides generic facts and guided explanations for reasoning tasks. The dataset and code will be publicly available, facilitating reproducibility and further community research.
2. Empirical Evidence: Comprehensive experimental results and ablation studies that validate the effectiveness of the proposed method. Improvements are observed in both general and abstract reasoning performance of LLMs across various benchmarks.
3. Broad Applicability: The approach's effectiveness is shown on multiple LLMs of varying sizes, indicating broad applicability.
Weaknesses: 1. Evaluation on Advanced LLMs: The observation that the benefits of MeanLearn seem to diminish with better-trained LLMs, like LLaMA3, raises a crucial question about its broader applicability. While the method shows promise with smaller models, it's essential to assess its performance on more powerful LLMs like Mistral, Phi3, and Qwen 2. This would provide a clearer picture of its potential contribution in the context of rapidly advancing language models.
2. Data Augmentation and Training Dynamics: The use of GPT4 for constructing the AbsR dataset, with annotation quality comparable to human annotators, opens up interesting possibilities for data augmentation. Exploring the impact of increasing the training data size, potentially using more cost-effective models like GPT4o or open-source LLMs, could reveal the potential for further performance gains. Additionally, analyzing the training dynamics of MeanLearn by varying the proportion of training data (e.g., 20%, 50%, 80%) would provide valuable insights into its saturation point and the diminishing returns of additional data.
3. Expanding Reasoning Benchmarks: The paper's focus on commonsense reasoning benchmarks (Com. and ARC) should be complemented by evaluation on other reasoning domains, such as arithmetic reasoning. Including benchmarks like GSM8K or MATH would provide a more comprehensive understanding of MeanLearn's capabilities across different reasoning tasks.
4. Presentation Clarity and Improvements: The paper's presentation could benefit from several improvements. The introduction should explicitly connect abstract reasoning with other types of reasoning, providing a broader context for the research. The pilot investigation in the introduction is quite confusing as it does not introduce the datasets, metrics, and setup used. Confusing figures, like Figure 2, should be explained in detail to ensure clarity and understanding.
5. Updated Related Work: The related work section should be updated to include recent efforts in tuning-based methods, ensuring a comprehensive overview of the current landscape in abstract reasoning research.
Technical Quality: 3
Clarity: 2
Questions for Authors: See above,
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your insightful comments, the following is the detailed response:
# 1. Evaluation on Advanced LLMs (Weakness 1)
The benefits of our method MeanLearn do not diminish, but it is LLaMA-3 may need more data compared with other base models. The reason is because LLaMA-3 breaks conventional data scaling laws [1] by achieving a token-to-parameter (T/P) ratio of 1875:1, far surpassing the 286:1 ratio of LLaMA-2. This results in dense knowledge in the parametric memory of LLaMA-3. However, to ensure fair comparisons across different LLMs, we trained MeanLearn using LLaMA-3 with the same dataset size as the other LLMs.
Due to limited time, we train MeanLearn based on Mistral-7B-Instruct-v0.2, and evaluate them on Com., MMLU, RACE and AbsR, the results are shown in the following table:
|Method|AbsR|Com.|MMLU|RACE|* Average*|
|---------------------|:-----:|:-----:|:-----:|:-----:|:---------:|
|***Vanilla Accuracy***||||||
|Mistral-7B-Instruct|83.00|66.53|57.02|75.33|70.47|
|MeanLearn(Ours)|84.50|74.56|58.28|76.65|**73.50 (+3.03)**|
|***AbsAcc***||||||
|Mistral-7B-Instruct|72.92|58.87|40.19|52.69|56.17|
|MeanLearn(Ours)|73.96|68.59|41.14|55.77|**59.87 (+3.60)**|
On average, our proposed MeanLearn can outperform Mistral by 3.03% in vanilla accuracy and 3.70% in AbsAcc, which demonstrates the superiority of MeanLearn. The main reason is that Mistral does not have a dense knowledge in parameters as LLaMA-3.
[1] Pearce and Song, 2024. Reconciling Kaplan and Chinchilla Scaling Laws
# 2. Data Augmentation and Training Dynamics (Weakness 2)
GPT-4o was released about a week before the NeurIPS 2024 deadline, making it too late to use for synthesizing AbsR. Indeed, leveraging cost-effective models like GPT-4o or open-source LLMs for data generation is crucial for future performance improvements. We are enthusiastic about exploring this and the training dynamics in the future to achieve continual performance gains.
# 3. Expanding Reasoning Benchmarks (Weakness 3)
There are mathematical tasks in MMLU, following [2], we select five mathematical reasoning datasets (abstract algebra, college mathematics, elementary mathematics, high school statistics, and high school mathematics) from MMLU to demonstrate the superiority of MeanLearn. The results are shown in the following table:
|Size|Method|Abstract Algebra|College Mathematics|Elementary Mathematics|High School Statistics|High School Mathematics|*Average*|
|------------------|-----------|:----------------:|:-------------------:|:----------------------:|:----------------------:|:-----------------------:|:-------------:|
|***Vanilla Accuracy***||||||||
|7B|Orca-2|23.00|32.00|33.07|37.95|28.15|30.83|
||MeanLearn (Ours)|36.00|35.00|33.07|40.28|28.89|**34.65 (+3.82)**|
||Mistral|25.00|30.00|38.89|46.76|32.96|34.72|
||MeanLearn (Ours)|31.00|34.00|37.83|47.69|34.07|**36.92 (+2.20)**|
|8B|LLaMA-3|30.00|36.00|41.27|50.00|40.74|39.60|
||MeanLearn (Ours)|31.00|33.00|44.44|49.54|40.74|**39.74 (+0.14)**|
|13B|Orca-2|27.00|35.00|35.45|34.72|28.15|32.06|
||MeanLearn (Ours)|27.00|37.00|37.30|43.06|27.78|**34.43 (+2.37)**|
|***AbsAcc***||||||||
|7B|Orca-2|0.00|4.88|25.00|25.17|14.63|13.94|
||MeanLearn (Ours)|2.86|7.32|24.02|27.97|16.26|**15.69 (+1.75)**|
||Mistral|2.86|7.32|25.49|36.36|14.63|17.33|
||MeanLearn (Ours)|5.71|9.76|24.55|37.76|16.26|**18.81 (+1.48)**|
|8B|LLaMA-3|2.86|9.76|28.43|38.46|18.70|19.64|
||MeanLearn (Ours)|5.71|12.20|30.88|34.97|19.51|**20.65 (+1.01)**|
|13B|Orca-2|2.86|4.88|24.02|24.48|13.82|14.01|
||MeanLearn (Ours)|2.86|17.07|22.55|29.37|10.57|**16.48 (+2.47)**|
On average, MeanLearn outperforms baselines on both vanilla accuracy and AbsAcc. It is interesting to note that we do not synthesize math questions to train MeanLearn, improvements on math reasoning are largely due to the enhancement of abstract reasoning.
Meaningful Learning is one of the main contributions of our work, and we are excited about the improvements brought by MeanLearn. Building on this foundation, we are now conducting a new work to expand our investigations with additional tasks such as mathematical and logical reasoning.
[2] MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark
# 4. Presentation Clarity and Improvements (Weakness 4)
Abstract reasoning is not a specific type of reasoning; rather, it measures the ability of employing generic facts to solve problems. For each reasoning type, an abstract reasoning performance metric can be derived. For example, in math problems, abstract reasoning can help estimate the ability of LLMs to use the generic fact “x + y = y + x” to solve a series of problems in various scenarios that rely on this fact. While in commonsense problems, it can help estimate the ability of LLMs to use the generic fact “acid is corrosive” to solve a series of problems in various scenarios that rely on this fact.
The pilot investigation results presented in introduction serve to intuitively highlight the disparity (Table 1) between general and abstract reasoning, which is used to demonstrate our motivation that LLMs have a substantial discrepancy between their general reasoning and abstract reasoning performances. Detailed discussions of this investigation are available in sections 2.1 and 2.3.
Furthermore, Figure 2 illustrates the calculations for vanilla accuracy and AbsAcc, employing the same symbols and formulas outlined in section 2.1 and equation (1).
# 5. Updated Related Work (Weakness 5)
Due to page limitations and to place more experimental results, we have condensed some sections of the related work. We will expand the related work in the revised version.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. The experimental results addressed some of my concerns. I am willing to increase my rating for this paper, but I would like to ask one more question. Why do you use MMLU to test the mathematical reasoning abilities instead of GSM8K and MATH?
---
Reply to Comment 1.1.1:
Title: Replying to Comment by Reviewer
Comment: Thanks for your comment. We are glad to have addressed some of your concerns. We choose the mathematical reasoning tasks in MMLU for experiments mainly due to:
* MMLU has a more fine-grained classification of the tasks, such as algebra and statistics. By using this, we can obtain a more comprehensive evaluation of the methods;
* Previous researches [1][2] utilize this subset to evaluate the mathematical capabilities of LLMs;
* We have already conducted experiments on MMLU in our submission, due to time limitations, we can quickly obtain the mathematical reasoning results from previous experiments.
We hope this answers your question. Moreover, we are interested in incorporating more datasets like MATH and GSM8K for further investigation, and this is what we are doing in our new work.
If you have any further questions or concerns, feel free to leave comments.
[1] Want et al., 2024. Mmlu-pro: A more robust and challenging multi-task language understanding benchmark
[2] Luo et al., 2023. Wizardmath: Empowering mathematical reasoning for large language models via reinforced evol-instruct
---
Rebuttal 2:
Comment: Thanks for the clarification. I would like to mention that MATH also has a fine-grained classification for analysis. I will keep my rating positive for this paper. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Boosting Perturbed Gradient Ascent for Last-Iterate Convergence in Games | Reject | Summary: This paper studies last-iterate convergence rates of online learning in monotone games. The main contribution is an algorithm called Gradient Ascent with Boosting Payoff Perturbation (GABP). The GABP algorithm achieves (1) $O(\log T / T)$ last-iterate convergence with full gradient feedback, which is near-optimal; (2) and $O(1/T^{1/7})$ last-iterate convergence with noisy gradient feedback (the noise is zero-mean with bounded variance). The latter result improves prior results of $O(1/T^{1/10})$. Moreover, the GABP algorithm guarantees an individual dynamic regret of $O(\log^2 T)$ under full gradient feedback, slightly worse than the state-of-the-art bound of $O(\log T)$. This paper also contains numerical experiments on small game instances to demonstrate the effectiveness of GABP.
Strengths: The problem of last-iterate convergence rates of no-regret learning algorithms in monotone games is relevant and interesting. Most existing results focus on the full gradient feedback, while only a few provide concrete convergence rates under the noisy gradient or the bandit feedback. The proposed GABP algorithm has near-optimal $O(\log T / T)$ last-iterate convergence rate under full gradient feedback. It also improves the convergence rates under noisy gradient feedback from $O(1/T^{1/10})$ to $O(1/T^{1/7})$. This is a solid contribution to learning in games, although the rate for the noisy gradient feedback setting may not be tight.
Weaknesses: 1. The proposed GABP algorithm does not achieve the optimal $O(1/T)$ last-iterate convergence rate under full gradient feedback. The $O(1/T^{1/7})$ last-iterate convergence rate is also not tight for the noisy feedback.
2. The relationship between the proposed GABP algorithm and the AOG algorithm in [1] and the intuition behind the fast last-iterate convergence rates is not clearly discussed. These two algorithms are different (as shown in Appendix F) but share similar ideas. The anchoring term in both algorithms comes from the (implicit) Halpern iteration algorithm, which can not be run directly. The difference is that GABP views each step of Halpern iteration as a fixed point problem (Line 170) and uses an inner loop of $\log (1/\epsilon)$ steps to get an $\epsilon$-approximation (this is called updating the reference strategy in the paper.); In contrast, AOG directly uses optimism to approximate the implicit update. This leads to GABP being a log factor slower than AOG in the full gradient setting. However, the approximating the fixed point approach is more robust in the noisy gradient setting due to strong monotonicity. Moreover, the potential function and the approximately non-increasing potential analysis are very similar to that used in [1]. If they are inspired by [1] then this should be acknowledged.
[1] Doubly Optimal No-Regret Learning in Monotone Games, Cai and Zheng, ICML 2023.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Could you explain Equation (5) and the related discussion? This inequality is not consistent with the text above. In particular, this inequality contains only $k(t-1)$, and the superscript $k(t)-1$ does not appear in the inequality.
2. What is the intuition behind the $O(1/T^{1/7})$ convergence rate in the noisy gradient setting? In your opinion, what is the best possible rate that can be achieved by the current approach?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are deeply appreciative of your positive feedback and constructive comments, especially your invaluable suggestions on improving our presentation. We will incorporate your feedback into our presentation to make it better. The detailed answers to each of the questions can be found below.
---
### Weakness and Question 1
> The proposed GABP algorithm does not achieve the optimal $O(1/T)$ last-iterate convergence rate under full gradient feedback. The $O(1/T^{1/7})$ last-iterate convergence rate is also not tight for the noisy feedback.
>
> What is the intuition behind the $O(1/T^{1/7})$ convergence rate in the noisy gradient setting? In your opinion, what is the best possible rate that can be achieved by the current approach?
>
### Answer
As you pointed out, it is still unrevealed whether our convergence rate in the noisy feedback setting is tight or not, given the absence of existing results on lower bounds and faster rates than ours. Nevertheless, we anticipate that the convergence rate we have achieved in the noisy feedback setting may not be optimal. This is largely due to the fact that we have roughly derived the upper bound on $\sum_{l=1}^{k(t)}(l+1)^2\langle \pi^{\mu, \sigma^l} - \sigma^{l+1}, \hat{\sigma}^l - \pi^{\mu, \sigma^l}\rangle$ and $\sum_{l=2}^{k(t)}l(l+1)\langle \pi^{\mu, \sigma^{l-1}} - \sigma^{l}, \pi^{\mu, \sigma^l} - \hat{\sigma}^l\rangle$ by using Cauchy–Schwarz inequality in the proof of Lemma B.2. If we can tighten this upper bound, it could potentially lead to an improved convergence rate of $O(1/T^{1/4})$ in the noisy feedback setting for the same algorithm.
---
### Weakness and Question 2
> The relationship between the proposed GABP algorithm and the AOG algorithm in [1] and the intuition behind the fast last-iterate convergence rates is not clearly discussed. These two algorithms are different (as shown in Appendix F) but share similar ideas.
>
> Moreover, the potential function and the approximately non-increasing potential analysis are very similar to that used in [1]. If they are inspired by [1] then this should be acknowledged.
>
### Answer
We agree with you that GABP can be viewed as an approximation of Halpern iteration, whereas it has a different update scheme from the AOG algorithm. From the perspective of theoretical guarantee, our potential function (in Eq. (8)) is indeed inspired by the one used in [1] while there are slight variations. We will ensure to elucidate the detailed relationship between our study and [1] in the revised manuscript.
---
### Weakness and Question 3
> Could you explain Equation (5) and the related discussion? This inequality is not consistent with the text above. In particular, this inequality contains only $k(t-1)$, and the superscript $k(t)-1$ does not appear in the inequality.
>
### Answer
Thank you for pointing out the typo regarding Equation (5)! As you pointed out, the term should be $k(t-1)$, not $k(t)-1$. We will correct this in the revised version of the manuscript.
---
Rebuttal Comment 1.1:
Comment: I acknowledge that I have read the rebuttal. I will keep my current score. | Summary: The paper introduces a novel algorithmic approach to enhance the convergence of first-order methods in the context of monotone games. The authors propose a payoff perturbation technique that introduces strong convexity to players' payoff functions, which is crucial for achieving last-iterate convergence. This technique is particularly designed to handle scenarios where the gradient of the payoff functions is monotone and potentially noisy. The paper presents a method called Gradient Ascent with Boosting Payoff Perturbation (GABP), which incorporates a unique perturbation into the payoff function and maintains a periodically re-initializing anchoring strategy. The authors demonstrate that GABP offers faster last-iterate convergence rates compared to existing algorithms, even in the presence of additive noise.
Strengths: Originality: The paper presents a unique perturbation technique that addresses the challenge of last-iterate convergence in monotone games. The proposed GABP algorithm is an innovative modification of Adaptively Perturbed Mirror Descent (APMD), offering improved convergence rates.
Quality: The theoretical development is thorough, with rigorous proofs provided for the convergence rates of GABP in both full and noisy feedback settings. The paper also includes a detailed analysis of the algorithm's performance in terms of individual regret.
Clarity: The paper is well-organized, with clear explanations of the algorithm, theoretical results, and experimental setup. The use of pseudo-code for GABP aids in understanding the algorithm's implementation.
Significance: The work contributes to the field of online learning in games, providing a solution that is particularly relevant for applications such as Generative Adversarial Networks (GANs) and large language model fine-tuning, where last-iterate convergence is desirable.
Weaknesses: Experimental Validation: While the paper provides empirical results, the experiments could be expanded to include a broader range of game types and noise levels to further validate the robustness and generalizability of GABP.
Comparison with State-of-the-Art: The paper compares GABP with APMD and Optimistic Gradient Ascent (OGA) but could benefit from a more comprehensive comparison with other existing methods in the literature to better situate its contributions.
Practical Considerations: While the paper addresses the theoretical aspects of GABP, it could provide more insights into practical considerations, such as the implementation challenges and potential modifications needed for real-world applications.
Technical Quality: 3
Clarity: 3
Questions for Authors: None
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your positive feedback and constructive comments. The detailed answers to each of the questions can be found below.
---
### Weakness 1
> Experimental Validation: While the paper provides empirical results, the experiments could be expanded to include a broader range of game types and noise levels to further validate the robustness and generalizability of GABP.
>
### Answer
We would like to note that the main contribution of this paper lies in its theoretical aspects, and the experimental validation serves as supportive evidence. Indeed, most existing studies on learning in monotone games either completely lack experimental results or solely focus on experiments with zero-sum games at a single noise level. That being said, if you are still concerned about this, we will be able to conduct some additional experiments.
---
### Weakness 2
> Comparison with State-of-the-Art: The paper compares GABP with APMD and Optimistic Gradient Ascent (OGA) but could benefit from a more comprehensive comparison with other existing methods in the literature to better situate its contributions.
>
### Answer
We can argue that OGA is a representative baseline that ensures last-iterate convergence under full feedback. Among payoff-perturbed algorithms, there are some highly regarded ones, such as Sokota et al. [2023] and Liu et al. [2023], although they do have no theoretical guarantee under noisy feedback. In contrast, we are the first to provide a theoretical guarantee for last-iterate convergence under noisy feedback.
---
### Weakness 3
> Practical Considerations: While the paper addresses the theoretical aspects of GABP, it could provide more insights into practical considerations, such as the implementation challenges and potential modifications needed for real-world applications.
>
### Answer
We believe that our technique could be applied to real-world applications, due to its simplicity. While this issue is beyond the scope of this paper, as you are pointing out, our perturbation approach would help us to solve large-scale imperfect information games [Perolat et al. 2022] and minimax optimization for preference learning with LLMs [Munos et al. 2023], especially because our technique is effective for training even with deep neural network architectures. We will extend our algorithm in a follow-up paper.
- Munos, Rémi, et al. “Nash learning from human feedback.” 2023.
- Perolat, Julien, et al. "Mastering the game of Stratego with model-free multiagent reinforcement learning." 2022. | Summary: This work focuses on last-iterate convergence of game dynamics. A payoff perturbation technique is proposed by adding strong convexity to players' payoff functions. Despite it is a well studied technqiue in learning in repeated games with first-order methods, especially in last-iterate convergence, a novel perturbation scheme introduced in this paper allows on to provide faster last-iterate convergence compared to previous works.
Strengths: This paper provides a relatively complete result containing last-iterate convergence rate of the proposed algorithm GABP in full feedback and noisy feedback. The faster rate of convergence is an improvement compared to existing works. Except for some weakness (will be stated later), the presentation of this paper is clear to understand. The authors have reviewed most related works to my best knowledge, so that the contributions claimed are easy to follow. Addition to theoretical works, this paper has provided experiments (sufficient in my opinion) showing the comparison of GABP and existing algorithms such as Adaptively perturbed gradient ascent and Optimistic gradient ascent.
Weaknesses: One obvious spot that should be added to improve the presentation is the following. The game considered in this paper is motivated by real-life examples. But the authors only give one example motivating monotone games. Part of contributions of the paper is claimed to be the study of two feedback models: full feedback and noisy feedback, but there is not specific examples and applications illustrating the importance of these settings. For sure readers can always find related works even just by googling the keywords, but providing concrete application scenes where the gradient of payoff can be achieved perfectly or only partially achievable gradients can be obtained is important, especially "noisy feedback" can be just a model of many cases.
Technical Quality: 3
Clarity: 3
Questions for Authors: Noisy feedback has been studied in some previous works e.g. Mertikopoulos and Zhou, 2019, Hsieh et al. 2019. What is technical difference/challenge of this paper comparing to aforementioned ones?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This is theory based paper, no potential negative impact will cause.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your positive feedback and constructive comments. The detailed answers to each of the questions can be found below.
---
### Weakness 1
> The game considered in this paper is motivated by real-life examples. But the authors only give one example motivating monotone games.
>
### Answer
As we have elaborated in the introduction section, monotone games include a wide range of applications other than concave-convex games, such as Cournot competition, two-player zero-sum imperfect information games, and zero-sum polymatrix games. For instance, Munos et al. [2023] and Perolat et al. [2022] have demonstrated the perturbation of the payoff function is effective for equilibrium learning in large-scale imperfect information games, as well as minimax optimization for preference learning with large language models. We believe these examples highlight the adaptability and practicality of our technique in real-world applications.
- Rémi Munos, et al. “Nash learning from human feedback.” 2023.
- Julien Perolat, et al. "Mastering the game of Stratego with model-free multiagent reinforcement learning." 2022.
---
### Weakness 2
> Part of contributions of the paper is claimed to be the study of two feedback models: full feedback and noisy feedback, but there is not specific examples and applications illustrating the importance of these settings. For sure readers can always find related works even just by googling the keywords, but providing concrete application scenes where the gradient of payoff can be achieved perfectly or only partially achievable gradients can be obtained is important, especially "noisy feedback" can be just a model of many cases.
>
### Answer
The full feedback setting serves as an ideal and significant benchmark for evaluating algorithms. We can argue that the noisy feedback setting is more practical, as feedback or observations in real-world scenarios often fluctuate. For instance, when training neural networks or learning in large-scale imperfect information games, it becomes necessary to estimate the gradient from sampled data. This process inevitably prevents the observation of a perfect gradient vector.
---
### Question 1
> Noisy feedback has been studied in some previous works e.g. Mertikopoulos and Zhou, 2019, Hsieh et al. 2019. What is technical difference/challenge of this paper comparing to aforementioned ones?
>
### Answer
While Hsieh et al. [2019] and Mertikopoulos and Zhou [2019] have assumed **strictly variational stability** or **strong** **monotonicity**, our study has achieved last-iterate convergence in (**not necessarily strongly) monotone games**. Last-iterate convergence in non-strongly monotone games with noisy feedback has been largely unexplored, with only asymptotic convergence being provided, except for the work by Abe et al. [2024]. Contrarily, we have derived a faster last-iterate convergence rate in the noisy feedback setting by introducing a novel perturbation payoff function.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response, I have no further questions and will keep my original evaluation. | Summary: This paper studies first order methods to solve monotone games where the gradient of the payoff function is monotone in the strategy, along with additive noise. The authors introduce a payoff perturbation technique which introduces strong convexity to the to the payoff functions and thereby derive last iterate convergence rates.
Strengths: Overall the paper is well written and the method and results are interesting.
Weaknesses: The authors should include a table which compares their paper with others in the literature. This would make it easier for the reader to place the results in context and see where improvements are made more easily.
(for example comparison to [Yoon and Ryu, 2021, Cai and Zheng, 2023] including constants)
Technical Quality: 3
Clarity: 3
Questions for Authors: The idea of changing the anchoring slowly seems very interesting. How different is this approach from the two-scale GDA type algorithms that have been studies recently
(For example, see Lin, Tianyi, Chi Jin, and Michael Jordan. "On gradient descent ascent for nonconvex-concave minimax problems." International Conference on Machine Learning. PMLR, 2020. and follow up papers).
Can a similar algorithm be derived from the updates of the algorithms proposed by the authors of this paper?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See Weaknesses and Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your positive feedback and constructive comments. The detailed answers to each of the questions can be found below.
---
### Weakness 1
> The authors should include a table which compares their paper with others in the literature. This would make it easier for the reader to place the results in context and see where improvements are made more easily. (for example comparison to [Yoon and Ryu, 2021, Cai and Zheng, 2023] including constants)
>
### Answer
Thank you for your insightful suggestion. We agree that a comparative table would indeed provide a clearer context for the readers. Please find below a table that compares our work with the existing literature, including Yoon and Ryu [2021] and Cai and Zheng [2023]. In summary, our GABP enjoys a nearly optimal convergence rate under full feedback and a much faster rate under noisy feedback, compared to existing studies. We will include this table in the updated manuscript.
| | Results under Full Feedback | Results under Noisy Feedback |
| --- | --- | --- |
| Extragradient [Cai et al., 2022a,b] | $\mathcal{O}(1/\sqrt{T})$ | N/A |
| Optimistic Gradient [Golowich et al., 2020a, Gorbunov et al., 2022, Cai et al., 2022a] | $\mathcal{O}(1/\sqrt{T})$ | N/A |
| Extra Anchored Gradient [Yoon and Ryu, 2021] | $\mathcal{O}(1/T)$ | N/A |
| Accelerated Optimistic Gradient [Cai and Zheng, 2023] | $\mathcal{O}(1/T)$ | N/A |
| Iterative Tikhonov Regularization [Koshal et al., 2010, Tatarenko and Kamgarpour, 2019] | N/A | Asymptotic (also holds for bandit feedback) |
| Adaptively Perturbed Gradient Ascent [Abe et al., 2024] | $\mathcal{O}(\ln T/\sqrt{T})$ | $\mathcal{O}(\ln T/T^{1/10})$ |
| Ours | $\mathcal{O}(\ln T/T)$ | $\mathcal{O}(\ln T/T^{1/7})$ |
---
### Weakness 2
> The idea of changing the anchoring slowly seems very interesting. How different is this approach from the two-scale GDA type algorithms that have been studies recently (For example, see Lin, Tianyi, Chi Jin, and Michael Jordan. "On gradient descent ascent for nonconvex-concave minimax problems." International Conference on Machine Learning. PMLR, 2020. and follow up papers). Can a similar algorithm be derived from the updates of the algorithms proposed by the authors of this paper?
>
### Answer
GABP and two-timescale GDA are fundamentally different from both the algorithmic and convergence guarantee perspectives. From an algorithmic viewpoint, two-timescale GDA uses different learning rates for each player, whereas our GABP employs identical learning rates for all players. Furthermore, GABP introduces the payoff perturbation technique, which is not utilized in two-timescale GDA. This perturbation technique is an essential element for our GABP to achieve last-iterate convergence even under noisy feedback.
From the perspective of convergence guarantee, our GABP achieves **last-iterate convergence,** whereas two-timescale GDA has only been shown to **converge in an average-iterate sense**. However, it is indeed an intriguing direction to improve the convergence results of two-timescale GDA by incorporating our payoff perturbation technique. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
From Biased to Unbiased Dynamics: An Infinitesimal Generator Approach | Accept (poster) | Summary: @Authors, I would appreciate it if you could point out any inaccuracies in the following summary since it took me a long time to understand your paper and am still not completely certain my understanding is correct.
The paper aims to address a common problem for molecular dynamics simulations, which is to obtain collective variables that can be used to bias simulations such that, e.g., rare transitions occur more frequently and important observables can be computed faster. The authors follow previous work and learn the system's generator, whose eigenvalues inform transition timescales and whose eigenfunctions can be used as collective variables to bias simulations along. The novelty is in deriving an approach that allows learning the generator from biased enhanced sampling simulations which converge faster and observe more transitions than conventional simulation. The authors evaluate their method on toy systems.
Strengths: 1. Novel approach to accelerate obtaining scientifically important collective variables by enabling their computation from simulations biased by enhanced sampling. This approach requires developing new theory and to derive a training objective. The procedure to do so seems non trivial to me but I cannot sufficiently judge the value of the provided theory as I can only follow the broad strokes and am not well versed in the used math.
I hope other reviewers can provide more useful signal for this aspect.
2. Impactful application: The paper provides useful ideas for a problem with significance for scientific applications that can have large downstream impacts.
3. Very well written introduction and overview of the transfer operator and generator formalism.
Weaknesses: My points contain many understanding questions and some "criticizing" questions. It would be great if you could also answer the understanding questions.
1. Experiments: The experiments are carried out on three toy systems and the proposed approach visually improves upon two deep learning baselines in one of the three experiments.
1. I understand that the cited works on generator learning provide the same or fewer experiments. However, from skimming the cited works it also seems to me that one of the main areas of application would be simulations on proteins. Is that incorrect and if not, why do you and others not provide experiments on proteins? Aren't there proteins with well known dynamics on which we could evaluate the methods?
2. **I think this method should be tried on systems of increasing sizes until it fails. The experiments of a paper introducing a new method ideally provide information on the capabilities of a method.** Why do you (and the rest of the generator learning papers) not do so?
3. Why do you omit the deep learning baselines from the ALDP experiments?
2. For an understanding of how your method fits into the broader landscape of approaches to determine CVs, I think it would be important to also provide comparisons with non-deep learning approaches. The relationship to DL methods is clear but to understand the general "usefulness" it would be nice to have delineation from classical methods in terms of approach and experiments. Are there classical approaches that can be used as baselines that would outperform all DL methods?
3. Speed comparisons: it seems to me that the underlying tradeoff for all methods is between speed and accuracy and that accuracy alone might be less meaningful. Is there nothing to be said about the runtimes for data collection and training?
Minor:
1. To make the paper more broadly accessible to the wider deep learning community, I think it would be helpful to describe your final operator learning approach more procedurally on e.g. the ALDP example. What is the neural network input and output dimensionality? Why do you have separate neural networks for each operator output dimension? What is the input and output to your neural network?
Once you trained your neural network, how do you obtain the eigenfunctions from it.
2. How do you obtain your plots in e.g. the ALDP figure Figure 2.? To understand this: what concretely is the eigenfunction in practice once you computed it? How is it represented? Once you have it, how does it assign different values to each molecular structure?
Technical Quality: 4
Clarity: 2
Questions for Authors: I would appreciate any time you can take to answer some of my questions.
1. 107 "for the transfer operator we can only observe a noisy evaluation of the output": Do you mean a noisy version of the expectation that defines a transfer operator? Or that the output one can observe after simulating stochastic dynamics is "noisy"?
2. I assume your underlying goal is to discover CVs as the eigenfunctions of your learned generators. Your generators can be learned from biased enhanced sampling simulations where e.g. more transitions occur. Then we can obtain the CVs from your generator. With the CVs we can run biased simulations. Why are these biased simulations more informative than the biased simulations which you ran to train your generator? Can we compute different quantities from your CV biased simulations? Is it just a matter of your CVs being better biases than the bias of the original enhanced sampling bias?
Confidence: 3
Soundness: 4
Presentation: 2
Contribution: 4
Limitations: The paper discusses some limitations such as not being time dependent - the significance of which I cannot assess. Limitations such as the limited evaluations and missing understanding of when the method fails seem more significant to me but are not mentioned or explained why they are present.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's insightful evaluation and valuable comments. In what follows, we aim to address the highlighted weaknesses and respond to the reviewer's questions.
## Summary & strengths:
- Thank you for your summary, while it captures the general idea of the paper, maybe it lacks one important aspect that, to the best of our knowledge, __we are the first to show__ that covariance reweighting in transfer operator (TO) models built from biased simulations fails, while infinitesimal generator (IG) models do not suffer from this effect. Indeed __we prove that one can learn eigenvalues and eigenfunctions__ of the IG via its resolvent __in a scalable and statistically consistent__ way from a single biased trajectory.
- Related to the first strength, please refer to our general reply, part _Clarity of presentation_. Besides proposing Alg. 1 (methodological contribution), we also __theoretically prove its statistical consistency__. Let us provide the context: data-driven IG methods emerged relatively recently in the literature [1,15,20,21,46], and up to our knowledge, the only method that possesses finite-sample statistical guarantees for the proper spectral estimation is [21]. However, all of the above approaches analyzed learning from _unbiased_ simulations, which is unfeasible in many realistic scenarios. Moreover, the method in [21] suffers from scalability issues inherent to kernel methods. On the other hand, a deep learning method for the IG [46] was applied to biased simulations , but its statistical consistency was an open question.
## Weaknesses:
1. Noting that learning the spectral decomposition of IG is a problem of interest besides protein dynamics, e.g. crystallization, we agree with the suggested best practice. We did our best to meet such a requirement, as reported in _Experiments_ of our general reply and the attached pdf.
- We believe that the main reason why IG and TO methods are not initially tested on larger scale systems is the complexity of the task. Namely, recovering proper time scales and the transition state ensemble is a complex task that highly depends on the descriptors of the system and their effective dynamics. Indeed for large proteins one of the main challenges is to design appropriate descriptors so that effective Langevin dynamics can reveal the true dynamics of a protein [D]. These are highly dependent on the protein, and their design is a problem per se. Once this has been resolved, the next challenging task is to explore the rare transitions by biasing dynamics and recover proper time scales and transition state ensembles. While our method is designed to solve the latter, the former is an active field of research that currently limits the development of the full pipeline for large scale complex molecular systems.
- In order to fully address the question on the capability of our method, we have performed an additional experiment. To the best of our knowledge, the most complex benchmark for all TO and IG methods is the (very long) unbiased simulation dynamics of a small protein, typically Chignolin [7,19]. So, we designed a biased simulation experiment based on our method, which is, to the best of our knowledge, the first experiment of its kind for generator learning.
- Concerning baselines for ALDP, since we don’t have ground-truth it would be just a qualitative comparison.
2. Most non DL CV are heuristics based on chemical/physical intuition, they are system dependent and often suboptimal. This has stimulated intense research towards ML based alternatives. In this context, TO/IG methods are well-motivated, since eigenfunctions of these operators are “optimal” CVs for biasing simulations, see e.g. [A]. While some classical ML methods are available for this [21,23], they are intractable at large scale since at each time step one needs to compute the kernel function with respect to the current ensemble of the simulation.
3. Our method is as expensive as standard CV based methods, but it reliably leads to more accurate results, which is important when one wants to reconstruct the free energy profile and use them iteratively to learn the generator. See also our additional experiment on Chignolin, see the global reply and the attached pdf where we highlight the computational speedup relative to unbiased bruteforce simulations.
4. We incorporated your suggestion in the general reply on the general pipeline. Here we address your related questions. First, since the input of the DNN encoder in the representation learning part are the descriptors, the input dimension depends on the problem. On the other hand, the output (latent features) dimension is typically very small, reflecting the number of slow time-scales to learn (typically around 3-5). Second, the choice of using separated DNNs per latent feature is made to aid learning of orthonormal features, since, depending on the architecture, shared weights may implicitly introduce prohibitive constraints. Third, eigenfunctions, defined in line 13 of Alg. 1, map descriptors of the system to a real number that, when properly scaled, has mean zero and variance 1 w.r.t. equilibrium distribution.
5. Eigenfunctions can be plotted in the plane of relevant features to analyze the data. For example, for alanine dipeptide we have for each configuration the values of the angles $\phi$ and $\psi$ and the values of the eigenfunction, therefore, we may plot the points in the $\phi-\psi$ plane colored according to the eigenfunction value.
## Questions:
1. If we understand well, two statements are equivalent. Indeed the noise of the feature is $z(X_{t+s}) - \mathbb{E}[z(X\_{t+s})\vert X_t] = z(X\_{t+s}) -[ {\cal T}\_s z ] (X_t)$, and from one trajectory we only observe an instance of the r. v. $z(X_{t+s})$, and not the output of TO.
2. Please see our general reply.
Ref:
[D] Zhang et al., Effective dynamics along given reaction coordinates and reaction rate theory. Faraday Discuss. (2016)
---
Rebuttal Comment 1.1:
Comment: Thank you for these nice explanations! I fully agree that studying and recovering dynamics from biased simulations is an important problem and appreciate any ML work done for it. I think the paper should be accepted.
I appreciate the chignolin experiment.
1. Minor: Could you put your work a bit into context with "Implicit Transfer Operator Learning: Multiple Time-Resolution Surrogates for Molecular Dynamics" https://arxiv.org/abs/2305.18046 . They only learn the transfer operator. However, they also provide experiments on Chignolin. Are there inherent reasons why learning the generator might scale worse than learning the operator?
My main concern (maybe it was naive) was in thinking that CVs (which you are learning by learning the generator and using its eigenfunctions) main or only value is in using them for further biased simulations. Thus, the question of why one biased simulation with your CVs would be better than the first biased simulation. This also seems valuable, and you suggest it in the general rebuttal, but do not try it. \
Important:\
However, as far as I understand, you suggest that I was wrong in assuming that learning the generator is only valuable for obtaining CVs and using them for biasing simulations. Learning them is also valuable to e.g. recover the free energy surface from a biased simulation or to recover dynamical quantities about transitions - please let me know if this understanding is correct.
(sorry for the delayed reply - I will be quicker to respond in the remaining days)
---
Reply to Comment 1.1.1:
Title: Acknowledgement of reviewer's comments
Comment: We thank the reviewer for appreciating our rebuttal and suggesting our paper for acceptance. In the following we address the reviewer's additional questions.
- __Concerning the reference.__ Thanks for bringing this paper to our attention, we will include it in the revision. In this work the authors build their model from D.E. Shaw dataset formed by a long trajectory containing already all the necessary information for training, without the need for biasing. They learn a transition kernel (i.e. a conditional probability to go from $X_t$ to $X_{t+\Delta t}$), which, as discussed in Sec. 3, has an inherent difficulty to be adapted to biased simulations.
- __Scaling of generator algorithms.__ First, note that transfer operator (TO) based methods apply only to equally spaced data and that the sampling frequency $1/\Delta t$ must be high enough to distinguish all relevant time-scales in the dynamics. Otherwise, since TO eigenvalues are $e^{\lambda_i \Delta t}$, small spectral gaps complicate learning (see Thm. 3 [23]). Conversely, our IG method, which uses gradient information, is time-scale independent, handles irregularly spaced measurements, and does not rely on time discretizations. This important aspect allows one to learn from biased simulations without notorious time-lag bottlenecks inherent to TO methods. Alas, as there is no free lunch, this incurs higher computational complexity. However, his complexity is to an extent mitigated through our representation learning, by exploiting automatic differentiation tools in deep learning.
- __IG’s eigenfunctions.__ Indeed, the learned eigenfunctions of the IG can be used as CVs, which in fact are optimal CVs, as discussed in the general reply, see [A]. Moreover, the leading eigenfunctions of IG encode the true dynamics. Hence, once they are properly learned, no more biasing is needed. For example, they can be used to infer the transition mechanism, see for instance figure 2, c) in the alanine dipeptide experiment, where we show that even with few transitions, we manage to recover a linear relationship between $\theta$ and $\phi$. Another important aspect is that one can build good approximations of all transfer operators $\cal{T}_{\Delta t}$, $\Delta t>0$, from the leading eigenpairs of IG, enabling forecasting of system’s observables see e.g. [21].
If there are any remaining concerns and/or questions, we are happy to discuss more.
---
Rebuttal 2:
Comment: Thank you very much for confirming some points, and for addressing the reference. I would like to take the freedom to raise my score from 6 to 8 and to increase my confidence.
---
Rebuttal Comment 2.1:
Title: Acknowledgement to the reviewer
Comment: We are happy that our replies were helpful. We thank reviewer for all their comments and discussions, which I will improve our paper. We commit to incorporate them all in the revision. | Summary: In this paper, the authors investigate the possibility of estimating the leading eigenvalues and the corresponding eigenvectors for the evolution operators of Langevin dynamics using biased simulations. To this end, they rely on strong statistical guarantees and on the use of deep learning regression to build a suitable Hiltbert space that approximates the evolution operator of the unbiased process using only biased data. They evaluate the reliability of this approach in three increasingly complex problems and show that this approach successfully recovers the slowest relaxation modes in toy models and obtains better approximate values for the eigenvalues than competing state-of-the-art methods.
Strengths: The work deals with a very important problem in chemistry and physics, namely the identification of the slowest modes of a dynamic process by means of simulations. In complex cases, simulations are not long enough to observe the most prominent transitions, and enhancing sampling methods are used to facilitate the observation of rare events. The problem is that it is often difficult to identify the key collective variables to facilitate the required slow movements. In this paper, the authors propose a simple way to do this using biased simulation data based on strong theoretical guarantees.
Weaknesses: I find it difficult to read the paper and follow it, perhaps because I am too far from the field
Technical Quality: 3
Clarity: 3
Questions for Authors: I did not understand well the physical meaning of the modes they extract with their approach. Could they give a physical explanation of them or of the order parameter associated?
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Authors discuss correctly the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's insightful evaluation and valuable comments. In what follows, we aim to address the highlighted weaknesses and respond to the reviewer's questions.
## Weaknesses:
We hope that our general reply brings more clarity. We commit to incorporate all the suggestions and additional discussions in the revised version of the paper. If the reviewer would like us to clarify any other specific aspects, we are happy to address them during the discussion period.
## Questions:
Thank you for raising this question, in the following we try to briefly review the usefulness of generator eigenfunctions (“modes”) for interpretability and control of Langevin dynamics. We plan to add it in the revision to further clarify the rationale of our method for the general reader. From the spectral decomposition of the generator in Equation (4) and its link to the transfer operator in Equation (5) , it can be seen that each mode $f_i(x)\langle f_i,f \rangle$ (essentially the eigenfunction $f_i$) is related to a timescale of the process ($-\lambda_i>0$). This means that eigenfunctions corresponding to the slow time-scales (eigenvalues close to zero ) give information on __metastable states__. Recalling that normalized eigenfunctions (in $\mathcal{L}_\pi^2$) have mean zero and unit variance, they tend to have constant values for ensembles $x$ in such meta-stable states (meaning at time-scale $-\lambda_i>0$ there is no dynamics), while it changes sign when ensembles move from one meta-stable state to another. This is what the reviewer called “order parameter”. But it is more than this, because contrary to an order parameter which only discriminates two states, the eigenfunction will also give valuable information on how the transition takes place, see illustrative example in appendix E.1 and corresponding Figure 5. This feature of the eigenfunctions is of particular relevance, since the knowledge of rare transitions can be used by experimentalists to accelerate or slow down the process, see e.g. Wei Zhang, Christof Schütte, _Understanding recent deep‐learning techniques for identifying collective variables of molecular dynamics_. PAMM 2023, 23 (4).
---
Rebuttal Comment 1.1:
Comment: I really appreciated the authors' explanations, including the new figures (pipeline and the experiment with the proteins). I will change my rating to accept. | Summary: This paper studies an infinitesimal generator approach for learning the eigenfunctions of evolution operators for Langevin SDEs. Due to the slow mixing caused by the high potential barriers, direct learning from simulation data can be sample inefficient. Biased simulation (based on a biased potential) is used to explore the space faster, and importance weights are constructed to get an unbiased loss function; such unbiased construction is more natural and feasible in the generator approach compared to the existing transfer operator approach. The minimizer of the resulting quadratic loss function, given a dictionary of basis functions, can be obtained by ridge regression type algorithms as it is a linear method. The authors also propose a loss function to learn a good dictionary that approximates the space of eigenfunctions more accurately; this nonlinear dictionary learning improves accuracy. Experiments on molecular dynamics benchmarks demonstrate the effectiveness of the approach.
Strengths: The introduction and motivation are exceptionally well written and demonstrate the benefits of the generator approach compared to the transfer operator approach for biased simulations. The numerical experiments, especially in Figure 1, show significant improvements in accuracy compared to previous methods.
Weaknesses: The mathematical presentation of the technical details, namely sections 4 and 5, could be made clearer. Many notations are introduced, but the ideas seem simple; see Questions. I would appreciate an overarching description highlighting the key idea and the difference between the proposed approach and existing works using generators.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Is the estimator in Section 4 simply the Galerkin approximation of the inverse of $\eta I - \mathcal{L}$ with the basis functions in $\mathcal{H}$?
I found the descriptions in Section 4 complicated and not easy to digest. As an example, for equation (11), by definition, the formula will differentiate the term $\|\chi_\eta(x) - \hat{G}^Tz(x)\|_2$ (which is not differentiable at zero) which looks strange. And I didn't understand the sentence "we contrast the resolvent ..." on page 5, line 182: what do the authors mean by "contrast" here? And the explanation of motivation of using the generator regression problem (11) rather than the mean square error also appears less clear.
- I am also curious about the mechanism behind the improvement of accuracy in Figure 2. It appears to me that the approach in this paper first uses the loss function in equation (19) to find approximate eigenfunctions and then uses the generator approach in section 4 to refine the estimation of the eigenfunctions and eigenvalues. If so, an understanding of which step of the two plays the significant role will be useful. For example, if the authors apply the generator approach in section 4 to the approximate eigenvalues obtained in the work of Zhang et al. (2022), will the resulting accuracy in Figure 2 also be significantly improved?
- Potential typos: Page 2, line 67: "principle" -> "principled".
Page 3, line 125-126: the notation of $C_{\gamma}$ is not introduced. Moreover, it seems $\lambda_i = \log \mu_i / t$ rather than $\log (\mu_i/t)$.
Page 8, line 282, "approqch"
- Page 3, line 129: "If t is chosen too small, the cross-covariance matrices will be too noisy for slowly mixing processes" Could the authors elaborate more on this?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's insightful evaluation and valuable comments. In what follows, we aim to address the highlighted weaknesses and respond to the reviewer's questions.
## Weaknesses:
Thank you for motivating us to present the overarching summary of our approach. While we discuss this in the general reply, we would like to provide a few more relevant details here.
- First, data-driven generator methods emerged relatively recently in the literature [1,15,20,21,46], and up to our knowledge, the only method that possesses finite-sample statistical guarantees for the proper spectral estimation is [21]. However, all of the above approaches analyzed learning from unbiased simulations, which is unfeasible in many realistic scenarios. Moreover, the method in [21] suffers from scalability issues inherent to kernel methods. On the other hand, deep learning methods for the generator were applied to biased simulations [46], but their statistical consistency was not proven. __In sharp contrast to all of the above, our work is the first to show that one can learn eigenvalues and eigenfunctions__ of the infinitesimal generator via its resolvent __in a scalable and statistically consistent way from biased simulations__.
- To summarize, we extend the method in [21] by __incorporating biased dynamics__, developing appropriate __representation learning__, and designing the overall scalable and statistically consistent approach. Our method is twofold: first, we learn a basis set using Theorem 2 (representation), and then we learn the generator’s resolvent on this basis set using Theorem 1 (regression). This approach is common in many fields, notably in transfer operator approaches [24,29], but is applied for the first time ever, to the best of our knowledge, to the case of the infinitesimal generator. In doing so, we followed the core idea of [21] to formulate the problem of learning the resolvent so that the geometry of the process is efficiently exploited via the energy norm, see also reply to Q1. Practically, this means that one can avoid inverting the shifted generator by rescaling the norm in which one learns the resolvent. This allows one to work in the latent space where the state becomes represented as $(\eta I - L)^{1/2} z$ and all the relevant inner products and norms can be computed using the features $z$ and their gradients.
## Questions:
- __Q1:__ In fact, the estimator is not a Galerkin projection of the resolvent onto $\mathcal{H}$ as a subspace of $\mathcal{L}^2_\pi$. This would be the case if in (11) we replace the energy with the expectation w.r.t. invariant distribution. However, note that this is not feasible since we cannot compute/estimate the action of the resolvent. To solve this issue we change the domain of the problem from $\cal{L}^2_\pi$ (expectation) to $\cal{H}\^\eta\_\pi$ (energy). So, the reviewer can think of our estimator as a __(regularized) Galerkin projection of the resolvent acting on $\cal{H}\^\eta\_\pi$ onto a subspace $\cal{H}$.__ The reason why we do so, as shown in (12), lies in the fact that Hilbert Schmidt norm of an operator $A$ on $\cal{H}^\\eta_\\pi$ becomes $\rm{tr}(A^*(\eta I-L)A)$. That is, the features $z\in\mathcal{L}^2_\pi$ are transformed into features $\tilde z:=(\eta I - L)^{1/2}) z$, so inner products $\langle \widetilde z, \widetilde z’ \rangle = \langle z,(\eta I - L) z’ \rangle$ and $\langle \widetilde z, (\eta I - L)^{-1} \widetilde z’ \rangle = \langle z, z’ \rangle$ can be now estimated from data. Using this trick we neatly avoid estimating the action of the resolvent, and no differentiation in (11) is needed since integral in $\chi_\eta$ is essentially canceled out by differential operator $\eta I -L$. This is what we meant by contrasting the resolvent in line 182. To further clarify our approach to the reader, we will include this discussion in the revision.
- __Q2:__ Do we correctly understand that the reviewer is referring to Figure 1 instead of 2, since there we compare to [46] (Zhang et al. 2022)? If so, let us note that, as pointed out, we can use both methods for representation learning (loss (19) or loss 39 from [46]) to identify a subspace and then apply our regression method. Based on the theoretical background given in [46], we believe that their method indeed reduces representation error in Theorem 1, making this approach sound. However, when we performed this additional test, the first eigenvalue went from 0.043 to 0.013, the second one went from 1.13 to 4.19. This indicates that even with regression the result does not significantly improve. To strengthen our point, we also did the opposite operation: we computed the Rayleigh quotient from our features without doing any regression, just as is done in the work [46] we obtain for the first eigenvalue 1.79e-4 and 0.36 for the second one, which are close to the ones found by regression and very close to the ground truth values. We are speculating that the main reason is in the efficiency of our loss and its robustness on the hyperparameter choices.
- __Q3 and Q4:__ Thank you for pointing out typos, we will correct them. Also, sorry for the confusion, we intended to say that if $t$ is chosen too small w.r.t. the mixing time, the noise in the corresponding transfer operator regression dominates the signal, meaning that the learning becomes harder. We will correct this discussion in the revision.
---
Rebuttal 2:
Comment: Thank you for the detailed response. I have some follow-up points.
- **Q1**. I now agree that the estimator is a regularized Galerkin projection of the resolvent onto the space of basis functions, using the energy inner product. This interpretation seems more convenient from an expository standpoint. It would be helpful if the authors can incorporate this perspective into the paper and clarify the writing. In my experience, the current mathematical exposition, with its various operators, norms and formulas, did not provide a very pleasant reading experience.
- **Q2**. Yes, it is Figure 1. Thank you. The explanation and results are interesting. Do the authors have any insights regarding the superiority of the loss (19) for representation learning? It seems to me the loss is not that simple and easy to optimize or compute. There is a penalization term to make the $z_i$ normalized and one has the hyper parameter $\alpha$ to tune. Furthermore, based on the authors' response, do I understand correctly that the primary improvement stems from the loss function (19) for representation learning, which is more significant than the regression formulation proposed in the paper? If so I think these insights should be added to the paper.
---
Rebuttal Comment 2.1:
Title: Further discussion
Comment: We thank the reviewer for the prompt reply.
- __Q1__ We agree that Galerkin projection interpretation seems more convenient from an expository standpoint. So, based on the reviewer’s feedback, we propose to modify the content in lines 185-191, by briefly explaining why Galerkin projection should be done in energy space $\cal{H}^\eta_\pi$ instead of $\cal{L}^2_\pi$ space. Then, since the statistical learning risk formulation is the key to understand how to generalizes w.r.t. (unobserved) true distribution $\pi$ from biased simulations, we briefly discuss its equivalence with the Galerkin viewpoint. We believe that this change can help the reader to better grasp the main idea behind the energy, and connect it to the material that follows. Does the reviewer find this proposal adequate?
- __Q2: about the loss:__ To answer this question, let us compare (19) to the two losses used in the most related works, [24] and [46]. While the former one (only tested on unbiased dynamics of a toy system) depends on one hyperparameter, the latter loss has $1+m$ hyperparameters, where $m$ is the latent dimension. Importantly, __both losses suffer from statistically biased estimation__ of gradients when optimized over the sample distributions, which may negatively impact the optimization. Moreover, as reported in [46] (see also lines 287-289) additional $m$ hyperparameters in loss of [46] are delicate to tune. __In sharp contrast__, our empirical loss, as stated in Theorem 2, __is an unbiased estimate__ of the true loss in Equation (18), and we didn’t experience any particular difficulty in tuning our only hyperparameter $\alpha>0$. Note also that, in principle $\alpha>0$ is optional to speed up the training, and we may as well use $\alpha=0$. Concerning the computation, we respectfully disagree that the loss is hard to compute. Indeed, for two size $n$ batches of data, it just relies on computation of covariances of features $C$ and their gradients $W$, which is of the order $\cal{O}(n m^2 d)$. Since in the molecular dynamics setting $m$ is typically small, computing the loss is very efficient, similarly to losses in self-supervised learning based on canonical correlations, see e.g. [A]. The only computational bottleneck lies in using the gradients for $W$. However, we believe that this is a necessary price to pay to be able to generalize properly from biased simulations, as motivated in Sec. 3. Finally, we remark that optimization of our loss follows interpretable dynamics, see Figure 3 of the Appendix where plateaus reveal discovery of new relevant features, helping practitioners to decide when to stop the training.
- __Q2: about the contribution:__ As the reviewer rightfully recognizes, introducing loss (19) for representation learning is an important contribution of this work. However, our main focus is on __biased simulations__, or, in other words, __learning dynamics that ML algorithm did not observe__. With this in mind, our main contributions are summarized in lines 67-74. For the reviewer's convenience we rephrase and slightly expand them here. To the best of our knowledge, for the first time
- __(1)__ we __formalize the problem of learning__ the spectral decomposition of the infinitesimal generator __from biased simulations__ as a regression,
- __(2)__ we __derive empirical estimators with guaranteed statistical consistency__ (Theorem 1), and
- __(3)__ we propose an end-to-end __representation learning + regression__ pipeline that efficiently scales to large problems.
- __(4)__ we __empirically validate our methodology__ on molecular dynamics datasets of increasing complexity. That said, __we will follow reviewer’s suggestion and emphasize better the important role of representation learning__ for the generator regression.
Once more we thank the reviewer, and if any other doubt remains or clarification is needed, we are happy to discuss in more detail.
Ref.
[A] Balestriero et al., A Cookbook of Self-Supervised Learning. Arxiv:2304.12210 (2023). | null | null | Rebuttal 1:
Rebuttal: We wish to thank all reviewers for their insightful evaluation of our paper. We appreciate all their comments and remarks, which we’ll incorporate in our revision. Before addressing each review in detail, we’d like to point out some general remarks that apply to all of them.
## Clarity of presentation
To improve the clarity of our work in Sec. 4 and 5, and emphasize its impact, let us provide a general overarching view of our method. Dealing with rare events implies a large timescale, thus generator’s eigenvalues close to zero, whose eigenfunctions reveal transitions between meta-stable states. To solve these learning tasks via transfer operator approaches one typically needs a prohibitively large amount of data from simulations with very small time-increments. We overcome this problem by proposing to work with the infinitesimal generator which is, contrary to transfer operators, time-scale independent and, as we show, can be successfully combined with biased simulations.
In particular, we propose to work with the resolvent of the infinitesimal generator, which shares the same eigenfunctions with the generator. However, this approach introduces a new issue: an operator needs to be inverted. To overcome this, we work in a new Hilbert space $\cal{H}\_\pi^\eta$ with inner product defined in Eq. (10). In this space, the Hilbert Schmidt norm of an operator $A$ on $\cal{L}^2_\pi$ becomes $\rm{tr}(A^*(\eta I-L)A)$, that is, features $z$ are transformed into features $(\eta I-L)^{1/2}) z$. Using this trick, __the resolvent may be regressed easily__ in a theoretically grounded way knowing only the diffusion part of $L$, see also [21] for more details on this approach using kernels. On the other hand, to obtain a method that is truly scalable to large problems, one needs a good dictionary of functions on which to regress the generator. This __representation learning problem__ is addressed in Section 5, where we use the same weighted norm to define a novel loss. While we show in Thm. 2 that deep neural network representations, when properly trained, converge towards the eigenfunctions of the resolvent, due to imperfect dataset or parameter choices, this convergence might be slow. However, the learned features are useful to identify a subspace of the $\cal{L}^2_\pi$ domain with small representation error, and one can efficiently regress the resolvent with the method presented in Sec. 4 to obtain the overall statistical consistency from Thm. 1.
## General pipeline
Following the comments of the referees, we also believe that a description of our method pipeline would be beneficial. We propose it in the form of Figure 1 in the attached pdf, and commit to include it in the revision, together with the following discussion and more detailed version of the current pseudocode (Alg. 1) in the appendix:
First, one chooses a molecular system to study, and, based on this choice, identifies a __collective variable (CV)__ on which to bias the dynamics to observe transitions from one state to the other. Often this CV is heuristic and suboptimal, leading to only a few transitions. To apply our method (and any deep-learning based method), one then needs to __choose descriptors__ of the system to put as input of the model. These descriptors will encode the symmetries of the system (often global translation/rotation) and only take into account the most important degrees of freedom: for example for the folding process of a protein in water, one may only use the interatomic distances between heavy atoms. This phase is highly system dependent, and the complexity of choosing descriptors grows with the system size. Once descriptors are chosen, a __representation is learned__ and the derivatives present in the loss function are computed with respect to atomic positions. After learning the representation, __the resolvent is regressed__ on the learned dictionary of functions. If the initial dataset does not have enough transitions, it might be that the eigenfunctions obtained are not good enough. In that case, one can use the learned eigenfunction as a collective variable for biasing the simulation to collect more transitions and further improve our model of the generator, and hence the final estimation of its eigenfunctions. Note, however, that the last point is based on the fact that the eigenfunctions of the generator are optimal CVs, see [A]. While we did not need to implement it in our experiments, future work will study more complex systems where this iterative process can be beneficial.
## New experiment
Finally, to showcase the power of our method, we implement it for the Chignolin miniprotein: we first performed a 1 microsecond biased simulation using as CV the deep-TDA one [B]. This allowed us to gather transitions. Then, this trajectory was used to train our method, which is, to the best of our knowledge, the largest scale work on generator learning. To validate our results, we trained our method on a very long unbiased trajectory of 106 microseconds from D.E. Shaw research [C]. The results are presented in Fig. 2 of the attached pdf. The two presented eigenfunctions are completely similar, but one is from only 1 microsecond long simulation, while the other is from 106 microseconds (so a 106 times more costly computations). This shows that __our work on biased simulations paves the way towards scalable and reliable estimation of dynamical quantities in realistic problems of molecular dynamics__.
We once more thank all the reviewers whose comments inspired the discussion above which, we believe, further strengthens our paper, and demonstrates its impact.
The Authors
### Ref.
[A] Zhang & Schütte, Understanding recent deep‐learning techniques for identifying collective variables of molecular dynamics. PAMM (2023)
[B] Trizio & Parrinello, From enhanced sampling to reaction profiles. J. Phy. Ch. Letters (2021)
[C] Lindorff-Larsen et al. “How Fast-Folding Proteins Fold”. Science (2011)
Pdf: /pdf/9d65637850c31b22051859dd5dc764393df956b1.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Lever LM: Configuring In-Context Sequence to Lever Large Vision Language Models | Accept (poster) | Summary: This paper proposes Lever-LM, a small language model designed to configure effective in-context demonstration (ICD) for improving the in-context learning performance of large vision-language models. The authors construct a dataset of effective ICD sequences to train Lever-LM, which then generates new ICD configurations for novel queries to solve vision-language tasks through in-context learning. Experiments on image captioning and VQA tasks demonstrate that Lever-LM outperforms strong baselines and can capture statistical patterns in ICD sequences to effectively leverage LVLMs.
Strengths: - The paper introduces an interesting method using a small language model (Lever-LM) to configure in-context demonstrations for LVLMs. Empirical results on image Captioning and VQA demonstrate the effectiveness of Lever-LM across these settings.
- The paper includes a wide range of ablation studies exploring various aspects of the proposed method, such as different configurations for constructing the training dataset, different model architectures, and the impact of ICD sequence length. These studies provide valuable insights into the factors affecting the performance of Lever-LM.
- The paper is easy to understand.
Weaknesses: - In practice, we use ICL because we do not want to update the model parameters or use more computing, while this approach requires training a small model for ICD selection. Thus, the authors need to show that this method can generalize to more models (e.g., Qwen-VL, InternLM-XComposer2, IDEFICS2, GPT4, etc) and new tasks beyond VQA and captioning (e.g., [1]), and therefore show whether the additional computations worth it.
- Zero-shot performance should also be shown as a reference to few-shot performance in Table 1, 2, 3, and more.
- Lever-LM is trained on 2-shot and the authors show the extrapolation to up to 8 shots, and the performance is almost constant wrt. to the number of shots for extrapolation except for OF IC. This may indicate the small number of shots during training makes this strategy hard to generalize to more shots beyond, which limits more performance gain and applications such as many-shot ICL.
[1] VL-ICL Bench: The Devil in the Details of Benchmarking Multimodal In-Context Learning. arXiv, 2024.
Technical Quality: 3
Clarity: 3
Questions for Authors: There are many studies arguing about the poor metrics (Rouge, CIDEr, BLEU, etc) of image captioning because the n-gram based metrics give higher scores for captions that have a similar format as the GT. This is especially true for the ICL setting because the model will follow the given ICD, and therefore the predicted caption will just be more similar to the given caption but not necessarily better in terms of accuracy, factuality, etc. Have you tried to use LLM-based evaluation for image captioning?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1.1.Why training auxiliary model.**
Just as you commented, ICL does not need to update the model parameters. However, this refers to LLM or LVLM, not to the auxiliary models used for selecting good ICDs. In fact, NLP researchers have developed various auxiliary methods to retrieve and order ICDs [41,42,44,45], while few studies have done so for VL ICL as of the submission deadline. Our work is a pioneering effort in this area. Furthermore, we are the first to apply the concept of treating ICD generation as a Language Modeling task, a strategy yet unexplored in NLP, to the more challenging VL field.
**1.2. More benchmarks and models.**
We test Lever-LM on two tasks of VL-bench [A] due to computation limitation and show the results in Table I. It can be found Lever-LM achieves higher performance than other retrieval-based methods, validating the generalizability of Lever-LM. We also follow your suggestion to validate Lever-LM's generalization ability using IDEFICSv2 from the diverse LVLMs you mentioned, due to computational constraints. Our choice is based on two factors. First, IDEFICSv2 is an open-source model specifically designed for ICL. Second, modern LVLMs with robust ICL ability belong to two mainstream architectures : (1) Flamingo-based (using cross-attention to fuse vision and language, like Open-Flamingo or IDEFICSv1 that are tested in our paper) and (2) LLaVA-based (directly concatenating image and language tokens like IDEFICSv2). Thus, testing with IDEFICSv2 assesses Lever-LM's generalization across both architectures. Since IDEFICSv2 directly concatenate vision and text tokens, its input sequence will be longer than IDEFICSv1 and 4-shot inference reaches our GPU limitation, thus we report the results of 1-4 shots. The results in Table J show that Lever-LM achieves better results than RS and SIIR. Note that RS outperforms SIIR here, one possible reason is that IDEFICSv2 is more likely be damaged by short-cut inference due to similar ICDs are used, while Lever-LM is more robust.
To validate our hypothesis, we review examples of text generated by SIIR, such as the following:
```
SIIR Model Output [The first is the most similar ICD, and the last is the query]:
Caption:Many children are posing together outside of the building window. [ICD1]
Caption:A group of children are sitting together wearing dresses and suits and ties. [ICD2]
Caption:Many children are posing together outside of the building window. [Query Prediction]
Lever-LM Model Ouput:
Caption:School children cheer a tennis match with a pirate and giant tennis racket. [ICD1]
Caption:A boy in a baseball cap holding baseball mitt. [ICD2]
Caption:A group of school children posing for a photo. [Query Prediction]
Ground Truth
1. Many small children are posing together in the black and white photo.
2. A vintage school picture of grade school aged children.
3. A black and white photo of a group of kids.
4. A group of children standing next to each other.
5. A group of children standing and sitting beside each other.
```
It is evident that the model directly copied the caption from the first ICD, and such errors are not uncommon. We speculate that the LVLM's coarse visual models cannot distinguish excessively similar images, leading the LVLM to treat the ICD and the query as the same image, resulting in the model copying the ICD's caption.
Additional, we test Lever-LM in NLP ( **A.2.1 of R.RjFC**) and show more fine-grained analyses based on IC and VQA (**A.3.2 of R.sise**). **A.2.2 of R.RjFC** also discusses why Lever-LM is a generalizable ICD configuration strategy in VL.
[A] VL-ICL Bench: The Devil in the Details of Benchmarking Multimodal In-Context Learning
**2.Zero-shot Performance.**
We evaluate the zero-shot performance in Table H, showing that when compared with zero-shot performance, all the ICL methods achieve significantly improvements. We will incorporate zero-shot results in all Tables in revision.
**3.Constant performance of more shots.**
Many-shot ICL performance is influenced by two factors: LVLM's capacity for addressing long input sequences and the in-context sequence quality.
Studies on Open-Flamingo v2 and IDEFICSv1, Which are two LVLMs used in our experiments, indicate that these LVLMs exhibit limited ability with many-shot ICDs [12][13][29]. Specifically, these LVLMs are trained by 5-shot ICEs, meaning limited capacity for more shot inputs. Our findings, such as Table 1, also show limited performance gains from various retrieval-based methods using more-shot ICDs. Yet, Lever-LM outperforms these methods on average. This does not imply Lever-LM's generalization to more-shot cases is compromised. We appreciate your feedback and plan to test Lever-LM with LVLMs better suited for longer inputs.
**4.ClipScore.**
We appreciate your suggestion about calculating the CLIPScore. High-quality captions generated from a model should hinge on two aspects: linguistic correctness and visual congruity. CIDEr and CLIPScore respectively values the former and the latter.
The CLIPScores of diverse methods are given in Table K. We can find that RS has higher CLIPScore than SIIR, while SIIR has higher CIDEr than RS (Table 1). One possible reason is that when using SIIR, LVLM might copy the captions of ICDs with similar images as the query and overlook the vision content of query image. However, since RS returns less similar in-context images as the query, the short-cut inference is alleviated and thus has higher CLIPScore. Furthermore, we find Lever-LM has the highest CLIPScore, suggesting Lever-LM generates dissimilar ICDs to the query while meantime help LVLM generate the captions which are more grounded to the images. These comparisons validate that Lever-LM can capture useful statistic patterns between high-quality ICDs instead of simply generating similar ICDs.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the clarification and additional experiments. I'm raising my score to 5, and hope these discussions/experiments can be integrated into the final version. \
I have one additional question: in Table I, why the performance of "Avg:4∼8" is lower than "Avg:1∼2", any intuition?
---
Rebuttal 2:
Comment: We appreciate your response. Due to the time constraints during the rebuttal phase, we only generated 150 high-quality ICD sequence data, which may have led to a decrease in extrapolation ability. After Rebuttal, we continued to generate more data (500 high-quality ICD sequences) and in the VL-ICL benchmark and trained a Lever-LM with it. The results are as follows:
| Task | Method | Avg:1~2 | Avg:4~8 | Avg:1~8 |
|----------------|----------|---------|---------|---------|
| VL-ICL clevr | RS | 0.145 | 0.209 | 0.188 |
| | SIIR | 0.170 | 0.260 | 0.230 |
| | Lever-LM | **0.305** | **0.31** | **0.308** |
| VL-ICL OCRText | RS | 0.1923 | 0.230 | 0.218 |
| | SIIR | 0.155 | 0.164 | 0.161 |
| | Lever-LM | **0.265** | **0.274** | **0.270** |
The results show that the Lever-LM still has the extrapolation abilities. | Summary: This paper presents a novel approach called Lever LM, which uses a tiny language model to configure effective ICD sequences for LVLMs. The key innovation is leveraging the small Lever LM to select and order ICDs, improving the ICL performance of LVLMs on tasks like VQA and image captioning. The paper demonstrates that Lever LM captures statistical patterns in ICD configurations, leading to significant performance gains over existing methods.
Strengths: This paper proposes an innovative approach: training Lever LM to select ICD for in-context learning.
Treating ICD selection as a sequence generation task is novel. The experimental results also demonstrate the effectiveness of this modeling approach. Compared to heuristic ICD selection methods, Lever LM demonstrates greater robustness, significantly outperforming other selection methods in ICL across multiple models and tasks, achieving optimal performance.
In addition, the method shows the surprising ability to perform k-shot learning despite being trained in a 2-shot setting. The golden fixed set is an interesting phenomenon, which points that finding golden fixed set may become an important research direction in the future.
Weaknesses: 1. The model needs to be encoded with CLIP first. It's seemed CLIP is also a big model compare than Lever LM.
2. There is no comparison with traditional ICDs ranking methods.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Will CLIP model as data encoder affect inference efficiency?
2. Typo: In Figure 3, what ICD-LM means Lever LM?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. The role of CLIP**
The role of the CLIP model is to encode the data in the supporting set. Therefore, in practical applications, the dataset can be pre-encoded and stored locally.
**2. Lever-LM Inference Time.**
During inference, only two layers of Transformer decoders are needed, which results in low computational overhead. We show the inference time of diverse methods in Table G. It can be found that Lever-LM does not have a significant gap in inference time compared to the SIIR method.
**2. Compared with other Ranking Methods.**
In Line 102, we mention that the ordering of ICD sequences has been primarily studied within NLP [17,24]. Their methods are mostly tested on datasets with limited labels (e.g., binary sentiment classification) or require calculating the probability of each input token. Transferring these methods to VL tasks presents two challenges: 1. VL tasks often include continuous image features, and modeling the probability distribution of continuous features is difficult. 2. VL tasks like IC or VQA are often open-ended word generation tasks, which do not have a limited label space. Facing these challenges, there is limited study explores how to order VL ICDs for improving ICL performance. In fact, we are the first to model the ordering of VL ICD sequences up until the submission deadline, thus it is hard to find suitable comparable methods. Table 4 demonstrates that when the quality of the selected ICDs is the same, the order produced by Lever-LM is optimal, indicating that Lever-LM can generate the proper order.
**3. Typo in Figure 3.**
We apologize for the typo, and we will revise this part in revision.
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer Djwv
Comment: Thanks for the response. I also roughly browse the comments of the other reviewers and the corresponding feed. After reading the response to Reviewer g8yL, I agree with his comment that Image Captioning is an ill-posed task and the main reason is the n-gram based metric, while I think VQA is still a suitable task for evaluating the effectiveness. Also, it is glad to see the additional experiments on other benchmarks and the results on CLIPScore.
I have a few additional questions related to the results on IDEFICs v2. From the results in Table J, the authors still report the performance by CIDEr, what about the CLIPScore? Also, it is interesting to find that IDEFICs v2 may directly copy the captions in the ICDs, can the authors provide more discussions about this?
---
Reply to Comment 1.1.1:
Comment: Thanks for this question. We follow your suggestion to test CLIPScore and CHs/CHi scores to measure hallucinations. The results averaged on 1 to 8-shot are given as follows:
| | Lever-LM | RS | SIIR |
|-----------|----------|-------|-------|
| CLIPScore | **0.782** | 0.770 | 0.761 |
| CHs | **5.48** | 6.08 | 7.5 |
| CHi | **4.56** | 5.04 | 5.7 |
For IDEFICSv2, since its vision encoder can only grasp global semantics, and a high similarity in images often may imply that the features encoded by the LVLM are also similar, leading to a tendency for the LVLM to produce identical outputs when applying similar-based retrieval methods like SIIR. However, Lever-LM is more robust that it does not generate the similar images as the query one, and thus alleviating the hallucinations. Here we show a few more examples to demonstrate this.
### Example 1
SIIR: **A bowl filled with fruit and vegetables on a counter.** [ICD1] A pile of bananas, oranges and pears sitting next to each other. [ICD2] **A bowl filled with fruit and vegetables on a counter.** [Query]
LeverLM: A red two story bus drives down the street. [ICD1] A woman wearing a short black skirt while holding a tennis racquet. [ICD2] A black and white drawing of fruits and vegetables. [Query]
Ground Truth Captions:
- A black and white image of a lot of round objects.
- Two apples, an orange, some grapes and peanuts.
- A black and white photo of nuts and fruit.
- A pile of nuts in front of some assorted fruit.
- Apples, grapes, an oranges and peanuts on the white surface in a picture.
---
### Example 2
SIIR: A little boy stands outdoors on a rainy day with a pink umbrella. [ICD1] **A little girl that is in the grass with an umbrella.** [ICD2] **A little girl that is in the grass with an umbrella.** [Query]
LeverLM: Two airplanes sitting on top of an airport tarmac. [ICD1] A man is smiling while sitting on a horse. [ICD2] A young girl is holding an umbrella. [Query]
Ground Truth Captions:
- A little girl holds up a big blue umbrella.
- A young girl stands with her arms wrapped around a large blue umbrella.
- A girl in a pink shirt holding a blue umbrella.
- A little girl who is holding an umbrella.
- A little girl with a big, blue umbrella.
---
### Example 3
SIIR: A traffic light sitting on the side of a road. [ICD1] **A traffic light sitting above a white sign.** [ICD2] **A traffic light sitting above a white sign.** [Query]
LeverLM: A woman holding two different items of food in her hands. [ICD1] A boy in a baseball cap holding baseball mitt. [ICD2] A man standing in front of a traffic light. [Query]
Ground Truth Captions:
- A man standing next to a light and a sign.
- A man standing next to a traffic light in Australia.
- A man that is standing next to a traffic light.
- A man standing in front of a sign under a street light.
- A man in shorts is taking a picture next to a red light. | Summary: This paper proposes using a Tiny Lever-LM to assist in ICD selection for LVLM's ICL scenarios, thereby enhancing ICL performance without significantly increasing computational costs. Lever-LM unifies the modeling of multiple scenarios (VQA, IC) in complex multimodal ICL, eliminating the need for manually designed heuristic ICD selection strategies. Additionally, Lever-LM jointly learns ICD selection and ordering, achieving end-to-end learning of ICD sequences. Lever-LM achieves excellent performance across multiple LVLM models, significantly outperforming existing multimodal ICD heuristic selection methods.
Strengths: 1. Lever-LM's structure uses only two layers of Transformer Decoder, making it a highly efficient model for generating ICD sequences.
2. Lever-LM unifies the modeling of complex multimodal ICL scenarios, eliminating the need for manually designed heuristic ICD selection strategies. Furthermore, Lever-LM simultaneously models both ICD selection and ordering steps, achieving end-to-end optimization of ICD sequence modeling. Besides, It is the first attempt to establish the ICD selection and sorting task as a generation task.
3. Lever-LM demonstrates robustness on multiple levels. First, the authors test it across various tasks and models, consistently outperforming the best ICD selection methods in the current multimodal context. Meanwhile, manually designed heuristic ICD selection strategies show significant performance fluctuations across different models. Additionally, they evaluate the use of different metrics for constructing ICD sequence datasets and experiment with different language model architectures, all of which show excellent performance.
4. The authors validate that the sequence order generated by Lever-LM performs better than randomly generated sequence orders, proving that Lever-LM indeed learns the order information of ICD sequences.
5. Lever-LM demonstrates interesting length extrapolation ability after being trained on a 2-shot ICD sequence dataset. It performs strongly even when generating 4-shot, 6-shot, and 8-shot sequences. This proves the efficiency of the Lever LM method: it requires only short-shot ICD sequences to exhibit excellent performance on long-shot ICL.
Weaknesses: 1. It has not been explored whether the performance of Lever-LM strongly depends on its size.
2. It seems like LLMs can also use this method for ICL tasks. It may be better to evaluate Lever-LM in LLMs to demonstrate its versatility.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How would a smaller or larger Lever-LM affect the results?
2. Can you show the input format of LVLM when doing ICL? Do Idefics will use the same format of OF?
3. Can you evaluate Lever-LM in LLM? I believe it can better demonstrate the generalizability of the method.
4. Compared with RS/SIIR methods, how about the inference time of Lever-LM
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please see the Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. Lever-LM with Different Sizes.**
We follow your suggestion to obtain different sizes of Lever-LM by controlling the number of Transformer layers and test these Lever-LMs on the IC. As shown in Table E, we evaluate 1-layer/4-layer Transformer decoder layers. We find that the size of Lever-LM has minimal impact on performance. We believe that capturing the ICD sequence distribution is a simple task that can be learned by only a few Transformer Decoder layers. Moreover, our motivation is to use a small model to enhance the ICL performance of a large model, so it is not appropriate to design Lever-LM to be too large.
**2.1 Lever-LM in NLP.**
We follow your suggestion to train a Lever-LM in NLP. Specifically, we use Qwen1.5 [A] 1.8B and generate 2-shot ICD datasets for SST-2 [B], which is a sentiment classification task. The accuracy results are displayed in the Table F. It can be observed that our Lever-LM outperforms the Random method and STTR in Avg:1\~2 and Avg: 4\~8, demonstrating the potential of Lever-LM in NLP.
[A] Qwen Technical Report
[B] Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank
**2.2. Generality in VL.**
Our approach is actually a generalizable method in VL. Different VL tasks require diverse configuration strategies. For example, in IC, [20] proposes that the image and caption need to be matched. In VQA, [21] proposes the best ICD selection strategy involves sorting based on image similarity first and then question similarity. In other words, LVLM requires different heuristic ICD selection strategies for different tasks, and it is difficult to find a universal and effective configuration strategy. Our method, on the other hand, provides a generalizable solution that can directly adapt to different models and tasks.
**3. The format of IC and VQA.**
We have detailed the prompt template in A.3.Prompt Template of Appendix. Our prompt template is entirely based on the settings described in the Open-Flamingo and IDEFICS.
**4. The Inference Time of Lever-LM.**
Please refer to **A.2 of R.Djwv** where we show the inference time of Lever-LM and SIIR.
---
Rebuttal Comment 1.1:
Comment: I appreciate your response and the inclusion of new experiments regarding Lever-LM sizes, new benchmarks, and NLP tasks, which have addressed my concerns. However, I have one further question: Lever-LM appears to learn more abstract knowledge compared to a conventional machine learning model such as an image classifier. How should we interpret this model in relation to more standard ones?
---
Reply to Comment 1.1.1:
Comment: This is a good question and the following shows my thinking. Traditional machine learning models, which learn specific patterns from human-annotated datasets, can be seen as first-order learning problems. Conversely, Lever-LM represents a second-order learning problem, as it acquires potentially abstract knowledge from a model previously trained on a human-annotated dataset. Prior to the proposal of LLMs, research emphasis was placed on training models to resolve specific tasks like image captioning or visual question answering, with the task serving as the primary research subject. However, the proposal of LLMs has shifted focus towards understanding their inherent characteristics, thereby making these models the new subjects of study. This shift led to the discovery of numerous emergent properties in LLMs, such as prompt engineering and in-context learning. Using these emergent properties as a foundation, researchers employed statistical observation methods to investigate internal characteristics of LLMs, including attention flow [A] and patterns between different layer representations [B]. A parallel research trajectory can be observed in the field of in-context learning. Initial heuristic methods examined the external influence of different ICDs on ICL [C], which subsequently transitioned to statistical observations of the internal characteristics when LLMs perform ICL [D]. Lever-LM aligns with this approach, using a model to discern the internal statistical characteristics of a large model, which suggests that though these characteristics may remain unknown to humans, they objectively exist.
[A] Quantifying attention flow in transformers
[B] ShortGPT: Layers in Large Language Models are More Redundant Than You Expect
[C] Exploring diverse in-context configurations for image captioning
[D] Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning | Summary: The authors focus on configuring effective in-context demonstration (ICD) sequences to improve the In-Context Learinng (ICL) performance of LVLMs. The proposed Lever-LM enables the step-by-step generation of ICD configurations and simultaneously considers the selection of ICDs and the ordering of ICD sequences. Experimental results validate that Lever-LM can capture the statistical patterns for levering LVLMs compared with similarity-based retrieval methods.
Strengths: The tokens of the vocabulary in Lever-LM are the samples from the supporting set. Given the query sample, the ICDs can be selected one by one based on the token distribution produced by the trained Lever-LM.
Weaknesses: 1. In lines 196-197, what is the reason for “randomly” choosing samples to build the sub-supporting set? I think this method is suboptimal, personally.
2. Table 2 (15) uses fixed ICD configuration for any query input, and still returns significantly improved performance compared with other methods in IC. I think this phenomenon shows that the ICL method fails to work on IC. Or I am curious about the performance of “randomly” selected fixed ICD configuration compared with model selected.
3. The experiments part is not convincing in the field of MLLMs. I hope to see results with more benchmarks and models (e.g., M-ICL[1]). To assess the generalization capability of their approach, it would be advantageous for the authors to evaluate their methodology using benchmarks that emphasize fine-grained analysis, such as MM-Vet and SEED-Bench, particularly focusing on segments that require in-depth evaluation.
4. I think the performance between two order strategies are comparative in Table 4, personally.
[1] What Makes Multimodal In-Context Learning Work?, CVPR 2024.
Technical Quality: 2
Clarity: 2
Questions for Authors: More comparisons with recent MLLMs are required.
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: See weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. Random Sampling.**
The goal of randomly selecting samples to form the sub-supporting set $\mathcal{D_{\mathcal{M}}}$ is to enhance training data diversity, promoting Lever-LM to capture complementary knowledge among ICDs. We initially deemed this to be a strategy sub-optimal, exploring alternatives like selecting similar texts and images (Lines 265-269). Surprisingly, Table 2 (9)-(11) shows random sampling as most effective. This finding (Lines 288-291) suggests that using similar ICDs to form $\mathcal{D_{\mathcal{M}}}$ may make Lever-LM generated ICDs contain redundant knowledge, hindering ICL performance. This also aligns with multiple studies underscoring the benefits of diversity for ICL performance [43,54,A,B].
[A] Diversity of Thought Improves Reasoning Abilities of LLMs
[B] In-Context Learning with Iterative Demonstration Selection
**2.Fixed ICD Configuration.**
Actually, the existence of a good fixed set for IC proves that ICL works on IC. This is because ICL's main goal is to facilitate efficient learning of new tasks with minimal examples. To achieve this, one ongoing research direction in ICL is to find a minimal supporting set for ICD selection [C,D], and our discovery of a "golden fixed set" requiring only 8 ICDs is an extreme instance of this direction. Hence, our findings robustly suggest ICL's potential for IC.
We also follow your suggestion to compare the fixed set learned by Lever-LM with the one constructed by random selection.
Specifically, we use OpenFlamingo to randomly select 3 sets of k-shot ICD sequences with different seeds for the image captioning task and then conduct ICL tests. As shown in Table A in the rebuttal PDF, it can be observed that randomly selecting a fixed set of ICD sequences results in relatively poor performance. The Golden-Set outperforms the best Fix Random set (seed 1) in Avg:1~8 by 16.14 points. This also demonstrates the importance of high-quality ICD sequences.
[C] Finding Support Examples for In-Context Learning
[D] Compositional Exemplars for In-context Learning
**3.1.Why VQA and IC.**
It should be stressed that our experimental settings are different from M-ICL [E] that only needs to forward LVLM for ICL inference, while we need to train Lever-LM by modifying various settings (Lines 262-292) to identify which factors influence the learning. Given limited resources, we focus on essential vision-language tasks, IC and VQA, to evaluate Lever-LM. IC and VQA also play important roles in LVLM ICL. First, they are commonly used for evaluating LVLM's ICL performance [12,13], even are specifically studied to help understand the LVLM ICL abilities [20,21]. Also, IC and VQA incorporate diverse vision-language skills: IC tests object, attribute, and relation recognition, while VQA covers diverse question types, integrating various capabilities like classification ('what is the object?') or counting ('how many of one object?'). Modern benchmarks for evaluating LVLM often derive from the IC (COCO dataset) and VQA (VQAv2 dataset). For example, SEED-Bench [F], MME [G] and MMBench [H] incorporate core characteristics from IC and VQA tasks, assessing various aspects such as scene understanding, instance identity, spatial relations, and diverse question types.
[E] What Makes Multimodal In-Context Learning Work?
[F] SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension
[G] MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
[H] MMBench: Is Your Multi-modal Model an All-around Player?
**3.2.Fine-grained Analyses.**
We value your fine-grained analysis suggestions but find MM-Vet [I] and SEED-Bench not fitting our study. MM-Vet's limited scale (200 images, 218 questions) is not sufficient for effective Lever-LM training. SEED-Bench, designed for multiple-choice questions, does not match well with LVLMs that focus on word generation in ICL, requiring specific adaptations that are challenging to us given the rebuttal's time constraints. However, as previously mentioned, VQA and IC are appropriate for fine-grained analysis. For IC, we assess object hallucination using CHAIR [J], providing object-level results, where larger CH$_s$ and CH$_i$ values indicate more object hallucinations. As Table B\&C shows, Lever-LM ICDs, compared to random sampling-based and similarity-based retrieval methods, achieve smaller hallucination scores because they contain fewer similar images to the query, preventing LVLM from short-cut learning. For VQA, we calculate the accuracy of three question types for fine-grained analyses: `Yes/No`, `Counting`, `Others` (Table D), which shows that Lever-LM significantly outperforms other methods in `Counting`, suggesting Lever-LM ICDs helps LVLM get stronger fine-grained object recognition ability.
[I] MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
[J] Object Hallucination in Image Captioning
**3.3.More Models and Benchmarks.**
We follow your suggestion to test Lever-LM on more models and benchmarks. Specifically, we examine the generalizability in VL-ICL-Bench (**A.1.2 of R.BDyQ**), Qwen1.5 model in NLP (**A.2.1 of R.RjFC**), and by another LVLM architecture, IDEFICSv2 (**A.1.2 of R.BDyQ**).
**4.Order strategies.**
The comparative results in Table 4 primarily stem from Lever-LM unifying the generation and ordering of ICDs. Lever-LM generates ICDs with complementary knowledge, enhancing the overall performance and diminishing the importance of ICD ordering. Thus, it is hard to mirror the significant improvements as the studies focusing solely on ICD ordering. Yet, Table 4 shows that Lever-LM's generated order outperforms random order given high-quality generated ICDs, demonstrating its effectiveness in ordering ICDs. | Rebuttal 1:
Rebuttal: We gratefully thank all the reviewers for their valuable and constructive feedback. We are pleased to see that the reviewers recognize our motivation: to use a tiny model to enhance the performance of in-context learning (ICL) for LVLMs. We are encouraged to see that they find our method is novel, interesting (Reviewer Djwv, g8yL, RjFC), our method is a unified method for ICL in NLP and Visual Language domain (Reviewer RjFC), our experiment is very detailed and effective (Reviewer g8yL, Djwv, sise, RjFC).
We address the concerns and questions in detail below and have appended a PDF file with tables. **Note that to distinguish from the Tables and References in the submitted manuscript, the Table and Reference numbers in the rebuttal are marked with A, B, C, etc.**
Based on these comments, we have summarized the common questions and our responses as follows:
1. We conduct supplementary experiments on other tasks and LVLMs to demonstrate the general applicability of Lever-LM (Table F/I/J, to Reviewers sise, RjFC, g8yL).
2. We provide more analysis of Lever-LM, including a fine-grain analysis of VQA and IC and another metric to evaluate different ICL methods' performance on IC (Table B/C/D/K, Reviewer sise, BDyQ).
3. We show the inference time of Lever-LM and one retrieval-based method to show that the Lever-LM will not influence the LVLMs inference time (Table G, Reviewer RjFC, Djwv).
We also address other specific concerns in separate responses.
Pdf: /pdf/fe3bf894c9ec2abce819e19a2dce7aed76bb59a0.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
CAT3D: Create Anything in 3D with Multi-View Diffusion Models | Accept (oral) | Summary: This paper proposed CAT3D, a pipeline that enabled the production of 3D representations from one or a few input views. CAT3D comprises a multi-view generation model to synthesize novel images from different viewpoints and a Zip-Nerf to achieve 3D reconstruction based on generated views. The 3D results shown in this paper are very impressive with high quality, and the authors provided sufficient experiments to verify the effectiveness of CAT3D.
Strengths: 1. CAT3D achieves impressive results of 3D generation.
2. The overall pipeline of multi-view generation is straightforward, yet effective with good robustness as verified in the experiments.
3. Although CAT3D is trained with constrained multi-view images (no background: objaverse; object-centric: CO3D, MVImgNet, most indoor scenes: RealEstate10k), it still enjoys good generalization.
4. This paper can be seen as evidence to verify that scaling up 3D generation through multi-view synthesis is feasible.
5. The ablation study is convincing and sufficient.
Weaknesses: 1. Most techniques have been proposed by previous works, including the raymap, Nerf with LPIPS loss (IM-3D). But I think this point is not the main issue of this paper, while CAT3D proves the scalability of combining all these techniques together.
2. Since ZIP-Nerf is utilized in CAT3D for dense 3D reconstruction, it is unclear whether the results shown in Figures 1, 2, 4, and 5 in the paper are derived from ZIP-Nerf or generated from the multi-view diffusion process. Are all these results exclusively from ZIP-Nerf? If not, it would be beneficial to provide more results directly from the diffusion model. This distinction is important, especially considering the potential inconsistencies noted in the limitations section.
3. It is not inappropriate to provide the inference efficiency with 16 A100 GPUs (Line573) only, which is not a regular setting for most users. The authors should provide the inference time with one GPU. More importantly, it is unclear whether the efficiency results from Table 2 are all fairly compared with one GPU or the same hardware condition.
4. CAT3D seems only being trained with 1cond+7target and 3cond+5target. Could it address the combination of arbitrary conditions and targets without fine-tuning? For example, 4cond+4target or 2cond+6target.
5. CAT3D enjoys good generalization, which is just trained on objaverse, CO3D, MVImgNet, and RealEstate10k. However, these datasets are all constrained (no background: objaverse; object-centric: CO3D, MVImgNet, most indoor scenes: RealEstate10k). How to confirm the generalization of full-model trainable StableDiffusion, especially for some text-to-image samples shown in the supplementary? This point is not mentioned in the main paper.
6. Unfortunately, the authors did not promise to release the code as shown in the checklist. Therefore, some implementation details should be further clarified as mentioned in the "Questions".
Technical Quality: 4
Clarity: 3
Questions for Authors: Some unclear implementation:
1. Lines 190-191 are unclear. How to deal with images with different aspect ratios during training and inference?
2. Missing details about the shift of the noise scheduler.
3. What is meant by "drop the text embedding"? Does it imply using an empty string "" as the input text for all samples, or does it completely remove all cross-attention layers? Additionally, how is stability maintained when all cross-attention layers are removed at the beginning of training?
Some other minor questions:
1. As mentioned in the paper, a mask channel is concated to the inputs to distinguish conditions and targets. Why not try to use SD-inpainting as the initialization to cover this task?
2. Could 3DGS be converged with fewer generated images? Generating 80/720 views is too costly in my opinion.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The authors discuss the limitations. However, no qualitative limitations are shown in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and careful review of our work. Below we address your questions and weaknesses mentioned:
> Most techniques have been proposed.
We agree with you that CAT3D leverages existing techniques for individual components of the system. The innovation of CAT3D lies in the effective integration and scalability of those components: a multi-view diffusion model, efficient sampling strategies, and a robust 3D reconstruction pipeline, resulting in a powerful and practical system for 3D content creation given any number of input views. CAT3D decouples the generative prior from the 3D extraction process, which not only contributes to its efficiency and simplicity, but also allows for potential improvements in either component without affecting the other in future research.
> Results from the 3D reconstruction / multi-view diffusion.
All results in the main text are renderings of optimized mip-NeRF models, but the Appendix also includes quantitative results of images produced by the multi-view diffusion model (Table 3), and the anonymous website in the supplementary contains qualitative samples from the multi-view diffusion model. We will clarify this in the revised draft, and are happy to include more results from the diffusion model in the main text if the reviewer deems this to be useful.
> Run time on one GPU.
Thank you very much for pointing out the differences in timings. We have added a note to the Table indicating that our timings take place on 16xA100 GPUs, and are working to evaluate the timing for a single GPU. One advantage of our approach is that it can benefit from parallelism when generating novel views, while other methods built around distillation and feedforward prediction do not.
> Support arbitrary conditions and targets.
Interestingly, we find our model can work on test settings different from what the model is trained for to some extent. For example, we find that doubling the target views or tripling the conditional views still works (see Figures 2 and 3 in the rebuttal PDF). We still anticipate training or fine-tuning the model on the actual test settings would lead to better results.
> Full-model generalization.
This is a good question and a valid concern. We tried to verify the generalization ability of our model empirically by, e.g., testing our model on a wide variety of captured or generated images that are far out-of-distribution from what the model is trained on (see gallery.html in the supplementary). In Table 3, we observe that the model initialized from the pretrained text-to-image model performs better than the model initialized from scratch quantitatively (suggesting that the pre-trained priors are still present or useful in some capacity). To further maintain the generalization ability, a possible future direction is to jointly train the model on our datasets mixed with the text-to-image data that the model is pretrained on.
> Images with different aspect ratios
While the multi-view latent diffusion model we trained only supports 512x512 images, we found the model still performed well when padding non-square images to square. However, this method often reduces resolution, so we also run our model on a square-cropped version of the inputs, and then compose the square-cropped outputs with the edges from the padded outputs to create a different aspect ratio image. We will add these details to the Appendix.
> shift of the noise scheduler.
As we describe in the text (lines 143-145), we add $\log(N)$ to the log signal-to-noise ratio, where $N$ is the number of target images. More concretely, assuming for the base text-to-image model each time step $t$ is mapped to a log signal-to-noise ratio $\log \lambda_t$; then for our model, $t$ is mapped to $\log \lambda_t - \log(N)$. In settings with a mixed number of target views (e.g. 5 and 7 target views), we pick $N$ to be the smallest number of target views (i.e., 5). The logic is similar to [67] where the noise schedule is shifted for higher resolution generation (see Eqn (4) in [67]).
> Drop the text embedding.
It is removed completely for model simplicity. While architecture changes like this can impact the stability of fine-tuning, we found that a learning rate warmup is sufficient to mitigate potential instabilities.
> Mask and SD inpainting.
The mask we used is to specify conditioning vs. target images. It is constant per image (i.e., not spatially varying), whereas the mask in SD inpainting is used for indicating unknown pixels within an image, which may not align with our task.
> 3DGS.
As far as we know, all radiance field models including 3DGS and NeRFs are data hungry and 3DGS doesn’t necessarily require fewer captured (or generated) images. It’s worth noting that generating novel views is not the main computational bottleneck in the whole system. For example, in the single-image-to-3D setting, it takes 5 seconds to generate 80 views while it takes 55 seconds to run 3D scene extraction.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Thanks for the rebuttal. Most concerns about implementation details have been addressed; however, I still have some questions that warrant further discussion.
1) It is interesting to see that CAT3D can be generalized to various condition and target views. Is this capability attributable to that no positional encoding is used in CAT3D? To my knowledge, video models with positional encoding fail to be extended to inference with arbitrary lengths.
2) For the shift of the noise scheduler, I think the conclusion of using log(N) to shift the scheduler (N is the number of target images) is not convincing.
Because when CAT3D is trained with 5 and 7 target views, it is just evaluated with N=5 in the scheduler. This only confirms that using large N helps multi-view training. The specific relationship between the number of views and the scheduler has not been thoroughly assessed in this paper, leaving the log(N) shifting here somewhat ambiguous.
Furthermore, it's important to note that multi-view images should not simply be equated with high-resolution images, given that the overlap among multi-view images can vary greatly and is stochastic.
3) I think that providing multi-view generation from diffusion before Nerf learning is very important to confirm the capacity limit of stablediffusion.
Even if the model itself cannot be publicly released, sharing samples—such as 80 views generated from Stable Diffusion before NeRF optimization (in the gallery.html)—would greatly benefit the community. This would enable researchers to reproduce performance based on these multi-view images in their own NeRF optimizations.
---
Rebuttal 2:
Title: Thanks for the reply
Comment: Thanks for the reply. Please find our response below:
> It is interesting to see that CAT3D can be generalized to various condition and target views. Is this capability attributable to that no positional encoding is used in CAT3D? To my knowledge, video models with positional encoding fail to be extended to inference with arbitrary lengths.
While CAT3D does not use an embedding of time (e.g. positional encoding of the time index for each frame), it does use an embedding of camera pose (via the raymap). Unlike time embeddings where during training you only see a small but finite number of time embeddings, we see a much larger continuous set of pose embeddings which may aid in generalization. In video and language models, one still can get generalization depending on how you structure and interpolate the time embeddings (see e.g. [1, 2]).
[1] Chen, Shouyuan, et al. "Extending context window of large language models via positional interpolation."
[2] Kazemnejad, Amirhossein, et al. "The impact of positional encoding on length generalization in transformers."
> For the shift of the noise scheduler, I think the conclusion of using log(N) to shift the scheduler (N is the number of target images) is not convincing. Because when CAT3D is trained with 5 and 7 target views, it is just evaluated with N=5 in the scheduler. This only confirms that using large N helps multi-view training. The specific relationship between the number of views and the scheduler has not been thoroughly assessed in this paper, leaving the log(N) shifting here somewhat ambiguous. Furthermore, it's important to note that multi-view images should not simply be equated with high-resolution images, given that the overlap among multi-view images can vary greatly and is stochastic.
It is common practice in training video diffusion models to shift the noise schedule based on the number of target frames, to compensate for the amount of redundant information which may exist across pixels. This is similar to what is done when increasing a models spatial resolution. In reality, the amount of redundant information in a video is a function of the amount of camera and scene motion, i.e., how many pixels are similar or the same across frames, but the number of frames serves as a reasonable approximation. In practice, our model is adapted from a single image diffusion model, and therefore modifying the noise schedule from that base model is necessary, since supervising and predicting multiple frames strictly has more redundant information. And indeed, in practice, we found shifting noise (while training and sampling) improves the quality of results. It is true that we don’t dynamically adjust the schedule based on the number of target frames, we just use the same shift of log(5) for both 5 and 7 target frames, but we found that the difference between that shift in practice is small (e.g. the average LPIPS on the in-domain diffusion samples is 0.235 for shifting log(5) vs. 0.240 for shifting log(7)). The precise formula for log(N), while not explicitly defined in prior work, was something we derived approximately from the numerical noise schedule information provided in the video model literature [3,4]. We will add this detail to the paper. Future work on multi-view models may want to instead condition the model on logSNR and use different shifts when training and sampling with different numbers of frames.
[3] Blattmann, Andreas, et al. "Align your latents: High-resolution video synthesis with latent diffusion models."
[4] Blattmann, Andreas, et al. "Stable video diffusion: Scaling latent video diffusion models to large datasets."
> I think that providing multi-view generation from diffusion before Nerf learning is very important to confirm the capacity limit of stablediffusion. Even if the model itself cannot be publicly released, sharing samples—such as 80 views generated from Stable Diffusion before NeRF optimization (in the gallery.html)—would greatly benefit the community. This would enable researchers to reproduce performance based on these multi-view images in their own NeRF optimizations.
Great point, we will include more samples alongside our NeRF results in the project page (and will also need to combine those videos with a serialized form of the camera pose trajectories).
---
Rebuttal Comment 2.1:
Title: Thanks for the reply
Comment: Thanks for the reply. My concerns are all addressed. So I raise my score to 7.
---
Reply to Comment 2.1.1:
Comment: Thank you very much! | Summary: In this paper, the authors propose a two-stage method for 3D creation. Specifically, they introduce a multi-view diffusion model to generate novel views given observed input views. Then using these views, they perform a robust 3D reconstruction using a Zip-NeRF variant. To generate consistent views, they also design a data-dependent sampling strategy. In addition, they also conduct extensive experiments to validate the effectiveness of the proposed method.
Strengths: + The proposed method is simple but effective. It also obtains a superior 3D creation on scenes.
+ The experiments are exhaustive and validate each components comprehensively, making this paper solid.
+ This paper is easy to follow and provides much details. Thus, it is easy to reproduce it.
Weaknesses: - It is certain that using larger diffusion models can boost the performance. But it is interesting to showcase the improvement trend with increased diffusion models.
Technical Quality: 4
Clarity: 3
Questions for Authors: See the Weaknesses.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: Yes, the authors have adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and feedback on our work.
> larger diffusion models can boost the performance
We certainly expect that larger models will lead to improved performance and can generate more consistent novel views. One relevant piece of evidence: we experimented with different model variants (Table 3) and found that increasing the amount of computation (e.g., by adding 3D attention operations at every resolution layer of the UNet) can improve performance (albeit at the expense of efficiency in training and sampling). We chose one of the smaller models as our showcase result with the intention of striking a balance of efficiency and performance.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. My concerns have been addressed. So I retain my score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much! | Summary: This paper introduces CAT3D, a novel approach for generating 3D representations from a flexible number of input images. The authors tackle the challenge of limited input data, a common bottleneck for 3D reconstruction, by leveraging the power of multi-view diffusion models. Their method generates a collection of novel viewpoints consistent with the input, effectively transforming a sparse-view reconstruction problem into a more manageable dense-view scenario. These generated views are then fed into a robust 3D reconstruction pipeline based on a modified Zip-NeRF to produce the final 3D model.
The authors demonstrate impressive results on various benchmarks, showcasing CAT3D's ability to handle single images, sparse multi-view captures, and even text prompts as input. The method exhibits state-of-the-art performance on few-view 3D reconstruction tasks, outperforming existing methods in terms of both speed and accuracy on established datasets. While single-image 3D generation shows promise, the authors acknowledge the performance is not yet on par with leading methods specifically designed for that task, particularly for single objects. The paper presents a compelling advancement in 3D content creation by unifying different input modalities within a single framework and showcasing significant efficiency gains.
Strengths: - The paper presents a novel approach to 3D content creation by reframing the challenge of sparse input as a view generation problem. This core idea of generating the data needed for robust reconstruction is a valuable contribution to the field.
- The authors demonstrate the effectiveness of CAT3D through comprehensive experiments on established benchmarks. Their results on few-view 3D reconstruction tasks are particularly impressive, showcasing state-of-the-art performance on standard metrics and surpassing existing techniques in terms of both speed and accuracy. The ablation study is well-executed, providing valuable insights into the contribution of different components of their method.
- The paper is generally well-written and easy to follow. The authors clearly motivate their work, provide sufficient background information, and describe their methodology in a structured manner. The figures are informative and complement the textual descriptions well.
- The ability to generate high-quality 3D content from a flexible number of input images, as CAT3D aims to achieve, has substantial practical significance. This flexibility is highly desirable for various applications. The demonstrated speed improvements over existing iterative optimization methods further add to its potential impact by enabling more efficient workflows.
Weaknesses: - While CAT3D aims to handle sparse inputs, its dependence on calibrated camera poses, presents a significant limitation. How does performance degrade with increasing sparsity and decreasing pose accuracy?
- The paper relies on manually designed camera trajectories for novel view synthesis, which limits practicality and scalability. The authors briefly mention adapting trajectories based on scene characteristics but provide no concrete details. Developing an automated trajectory selection or optimization procedure, potentially guided by learned priors or scene understanding techniques, would significantly enhance the method's value and broader applicability.
- The paper, at times, overstates its accomplishments (e.g., "achieving state-of-the-art performance across nearly all settings") and does not adequately address its limitations. A more nuanced and critical self-assessment would strengthen the work.
Technical Quality: 3
Clarity: 4
Questions for Authors: - Could the authors provide a more quantitative assessment of how performance degrades with increasing pose noise or sparsity?
- Have the authors considered incorporating an automatic trajectory optimization scheme within CAT3D?
- While open-sourcing the code and trained models is ideal, could the authors at least elaborate on their plans for sharing their work and facilitating reproducibility? Providing more details about the training procedure and hyperparameters would also be beneficial.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors list several limitations, including the reliance on constant camera intrinsics during training, the expressiveness limits of the base text-to-image model, the small number of output views, and the need for manual camera trajectories. However, the discussion lacks concrete examples or quantifiable measures of the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the careful reading and kind words. Below we address your questions and weaknesses mentioned:
> performance with increasing sparsity and decreasing pose accuracy
In terms of performance while varying the number of input views with accurate camera poses, Table 1 includes qualitative results for 3, 6, and 9 view sparse reconstruction. The CAT3D model was not trained with missing or noisy poses, but some of our training data (CO3D) has imperfect pose which leads to some small amounts of robustness. To verify this, we conducted an experiment by perturbing the camera rotations of the input conditioning 3 views by a certain number of degrees, and measuring the error between the generated and ground truth target images. See Figure 1 in the rebuttal PDF for average PSNR across varying amounts of rotation perturbation on the MipNerf-360 dataset. Our model can handle small rotation perturbation. Fine-tuning the model with more perturbations on camera pose should lead to improved performance in this setting, which would be an interesting direction for future work.
> manually designed camera trajectories
We agree with the reviewer that jointly learning a trajectory model or inferring trajectories given scene content is an exciting direction for future work. Empirically, we found a handful of simple heuristic trajectories worked well for our experiments, but ideally the trajectories should cover the scene without being placed inside of objects or walls as mentioned in line 167-169.
> overstates its accomplishments
We are happy to revise language to better reflect limitations. The “state-of-the-art performance across nearly all settings” is referring to Table 1, where our method indeed exhibits stronger performance than all prior work. Were there any other passages of text that you would like us to alter? We discuss several limitations in the discussion section, and can expand for the camera ready (especially discussing the challenges we mentioned above regarding trajectory selection).
> reproducibility
In the Appendix, we aimed to provide all details necessary to reproduce CAT3D on top of an open-source latent diffusion model. If there are any additional details that the reviewer thinks are missing, we are happy to include them in an updated draft.
> constant camera intrinsics, other limitations
It’s worth noting that our model does not rely on constant camera intrinsics during training. This is an artifact of our current training dataset where the camera intrinsics are approximately fixed for each scene. If we were to capture and include additional training data with camera intrinsics varying within a scene, we expect our model would be able to perform camera intrinsic manipulation. As far as limitations in expressiveness of the base model, one notable example is that our model performs poorly on human faces, as the base model was not trained on much human data. A showcase of the limitation of producing a small number of output views can be seen in the Supplementary website, where the generated spin video is clearly not perfectly 3D consistent. The need for manual camera trajectories is shown in Fig. 8, by the fact that it was necessary to create different types of trajectories based on characteristics of different datasets. We will further emphasize these points in the paper text.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Thanks for the clarification. After reading the rebuttal and other reviews, I decide to retain my score (accept). I believe the work deserves to be presented at NeurIPS. | Summary: The objective of this paper is to achieve single-view or few-view to 3D. The core of their method lies in a multi-image-based diffusion model that leverages 3D attention and raymap encoding for the camera poses. This setup is different from concurrent work, IM-3D, which repurposes video generation model to achieve 3D, or ReconFusion, which iteratively refine novel view with diffusion model conditioned on PixelNeRF embeddings. Experiments are conducted against several competitive Baselines, like IM-3D and ReconFusion. Result-wise, their proposed method showcases a nice balance between quality and efficiency.
Strengths: 1. The quality of the generated/reconstructed 3D scene is state-of-the-art, and it is more efficient compared with ReconFusion and other iterative methods.
2. The proposed multi-view diffusion model is effective. In LRM related literature, image-based diffusion is tricky to generate multi-view consistent. The proposed CAT3D certainly gives better multi-view image generation quality without explicit maintaining a 3D representation, which can benefit many related tasks.
3. The paper is clearly motivated, and the experiments are carefully designed and reported.
Weaknesses: 1. It seems cumbersome to generate a large number of viewpoints(Line:174) by doing anchor first, and enriching frames in between by repeated running CAT3D. Indeed, more view generated together from the multi-view diffusion is good. But running it iteratively will still produce inconsistencies over runs over different set of camera viewpoints, as is can be seen from Fig.6.
2. The trajectory shape seems to influence the quality of the multi-view diffusion. This limits its generalizability to arbitrary/scattered image viewpoints towards the same scene target, which are common in daily life.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. What happens if we only use anchor views to run recon in step2? How the quality compared with the current setup?
2. What if we densify view through some existing video interpolation model (instead of re-run CAT3D)? Would be more efficient and better quality?
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and careful review of our work. Below we address your questions and weaknesses mentioned:
> cumbersome to generate a large number of viewpoints
We agree that jointly generating all target frames from the multi-view diffusion model would enable more consistent samples. However, simultaneously generating a large number of frames with video diffusion-like architectures is still an active area of research, with many SOTA text-to-video models using autoregressive generation in time to produce a larger number of frames (while losing consistency due to limited context length). By splitting the generation into anchors and then independent blocks of generation at different camera locations, we can efficiently generate a large number of frames *in parallel*. Building efficient architectures that can support the joint generation of a larger number of frames is an exciting direction for future research.
> trajectory influences quality
Our multi-view model can generalize to arbitrary camera trajectories, and we show results for other input capture types, e.g., forward-facing scenes like LLFF which contain several images pointed towards the same scene target. It is true that the model performance varies for different camera trajectories, and this is likely due to some of the biases inherited from our relatively limited training dataset. Mixing in training data with more diverse camera distributions could be an interesting future direction.
> only use anchor views for reconstruction
Our model was only trained to produce at most 8 views total (conditioning + target), and the 3D reconstruction methods we use (i.e., Zip-NeRF) are seldom able to reconstruct plausible geometry from 8 views (as a reference, see results with Zip-NeRF and 9 *real* views which are strictly better than 8 generated views in Table 1). Including additional frames through AR generation is critical to yield the high-quality reconstruction and NVS results we present in the paper (see also Fig. 6 that compares 80 vs. 720 frames).
> densify views through video interpolation
Using a video interpolation model to generate frames is an interesting idea. However, it is not clear how to use such a model to incorporate more than 2 frames, and how to leverage the resulting frames in 3D reconstruction without explicit camera control. It’s also not clear whether this approach would be more efficient or would yield better quality results: if using an architecture similar to ours, the compute cost would be similar, and one would also have to estimate camera poses for the resulting frames (which adds an additional computational expense that our pose-conditioned multi-view diffusion models do not incur).
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed clarification! It is one solid work. I retained my score | Rebuttal 1:
Rebuttal: We thank all the reviewers for their time and critical feedback to improve our work. We appreciate that the reviewers found our method simple, effective, and efficient, leading to a “compelling advancement in 3D content creation.” We address individual questions below, but first highlight some shared concerns.
One of the biggest limitations of our work is the need to specify a desired camera trajectory, along which to generate novel views that supervise 3D reconstruction. Choosing the correct trajectory can be challenging for complex scenes, and we believe this is an exciting direction for future work. We found that simple orbit trajectories and forward-facing explorative trajectories can be useful across a wide variety of inputs (see Supplementary website), but for more complicated scenes a more automated strategy for bespoke trajectory selection would be useful.
Regarding our autoregressive sampling strategy, we found that first producing anchors and then generating sets of views in parallel produced mostly consistent views while drastically accelerating sampling time. This strategy is distinct from video diffusion models where all frames are generated at once, and allows us to produce 3D scenes faster.
We’ve aimed to ensure all details needed to reproduce this work are included in the text, and we are happy to iterate with reviewers to ensure this is the case.
Pdf: /pdf/46429078efe8a0fc9f206948a8e77ea9314c5a39.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
On the Scalability of Certified Adversarial Robustness with Generated Data | Accept (poster) | Summary: The paper presents an empirical study on how synthetic data can help improve the robustness accuracy and clean accuracy of certified adversarial robustness. While existing studies have shown promising results in using synthetic data to improve empirical adversarial robustness, the effectiveness of synthetic data on certified robustness has never been explored. This paper bridges this gap by designing experiments on $l_2$ and $l_\infty$ robustness of multiple models over the CIFAR-10 and CIFAR-100 datasets. Experiment results show that synthetic data improves certified robustness but in a different way than their effects on the empirical one. The paper also provides ablation studies and guidance on different hyperparameters, such as dropout rate, epochs, model sizes, learning rate schedulers, quantity, and ratios between original and synthetic data.
Strengths: 1. The paper conducts experiments with multiple models and two datasets, demonstrating the generalizability of their study.
2. The ablation studies consider several hyperparameters such as dropout rate, epochs, model sizes, learning rate schedulers, quantity, and ratios between original and synthetic data.
3. The paper provides guidance on comparing certified accuracy among different approaches. This guidance is important and should be prompted in this research field.
Weaknesses: 1. My biggest concern is that the paper misses a large chunk of related works in section 2 and its experiments.
* In section 2, when discussing the deterministic approaches of certified robustness, the paper writes, "one deterministic approach consists of ..." There are other groups of deterministic approaches using convex bound propagation, such as IBP, SABR, TAPS [1, 2, 3]. The paper focuses only on approaches bounding the Lipschitz constant of each neural network layer. This narrow focus hurts the generalizability of the paper and should be justified. For example, the paper mentions that this group (bounding the Lipschitz constant) of approaches generates a robust guarantee by computing the distance between the highest two logits in the output space. This sentence justifies why not focus on the convex bound propagation approaches.
* In experiments, the paper's generalizability can be enhanced by comparing against or combining state-of-the-art convex-bound-propagation approaches, such as SABR and TAPS, SOTA, to see how the synthetic data can help.
2. In section 4.3, the paper discusses the correlation between the generalization gap and certified accuracy. The reasoning in this section makes sense. However, the paper might neglect the fact that the model trained with dropout and synthetic data hurts the certified accuracy. This is a weird result. Could you provide the generalization gap of the model trained with $\rho=0.85$ and synthetic data to try to explain this wired result?
[1] On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models
[2] Certified training: Small boxes are all you need
[3] Connecting Certified and Adversarial Training
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. In section 3.2, the paper mentions that "we naively sub-sample the 1m images from the 5m image dataset to..." Are those images from generated data or original data? Also, the sizes of the generated data are different (50K~5m). Do you always perform a 20% sub-sampling, and where do the constants "1m" and "5m" come from?
2. Does the diffusion model used to generate synthetic data see the robust classifier? Will it generate different synthetic data for different classifiers?
3. In section 5, the paper mentions "overfitting between best and last epoch." Does the paper perform experiments comparing one model trained by early stopping with the other without the early stopping?
Comment:
The paper points out a distinction between the scale of synthetic data on certified robustness and empirical robustness, i.e., scaling beyond one million CIFAR-10 generated images did not further improve certified accuracy, regardless of model size. This observation might be related to Bayes error [4,5].
[4] Certified Robust Accuracy of Neural Networks Are Bounded due to Bayes Errors
[5] How Does Bayes Error Limit Probabilistic Robust Accuracy
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Although the paper mentions some limitations in their checklists, one significant limitation is whether synthetic data can help other deterministic training methods, such as convex bound propagation, as mentioned in the weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments, which we address below.
1. While we would have loved to include an even more extensive set of models (and datasets), our focus was on the best two models for the ℓ2 and ℓ∞ norms on CIFAR-10. In particular, for CIFAR-10 with ℓ∞, ε=8/255 threat model the IBP has 32.04% accuracy (67.96% error rate), SABR has 35.13%, and TAPS 35.10%. We have, however, now expanded our discussion of related work to include convex bound propagation, and are actively trying to add experiments for one paper of that family of certified methods.
2. We are uncertain whether we have correctly interpreted your inquiry, so what follows is our best attempt at resolving any confusion. The best SortNet model w/o auxiliary data has 39.72% certified accuracy, with a generalization gap of 4.00%pt. When removing dropout (going down Tab. 1), the gap increases to 12.55%pt. When adding auxiliary data (going to the right in Tab. 1) the gap decreases to -0.82%pt, i.e., testing accuracy is better than training accuracy, indicating negative effects of dropout. Hence the overall best SortNet model with 41.78% certified accuracy has no dropout, and is able to benefit from longer training. We’d also like to point out that for ρ=0.85, the model w/ auxiliary data (41.32%) still outperforms the one w/o auxiliary data (39.72%), and the bold simply denotes the best model w/o auxiliary data.
Regarding your questions, we answer them as follows.
1. Wang et al. provide their generated data at https://github.com/wzekai99/DM-Improves-AT, which is also where the 1m, 5m, and 10m constants are from. However, their 1m dataset was subsampled from the 5m using the confidences of a classifier trained on the original CIFAR-10 images. This is why we instead use the first 20% of the generated 5m images.
2. No, the diffusion model was conditionally trained on the ground-truth CIFAR-10 labels, but did not at any point use a classifier.
3. Appendix E contains a figure comparing the best versus last epochs, and the full result tables in supplementary material directory cert-robust also contain columns with both. We did not find any meaningful correlation, which is also a marked difference compared to adversarial robustness.
We have now included a discussion of the Bayes error in Sec. 5 “Amount of Training Data.” Undoubtedly further exploration is required to determine why this behavior differs, and we remain confident our paper provides the necessary impulse to answer these and related questions.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I realized that I made a mistake when reading Table 1 regarding the second point of weakness. I will keep my score for now and I need to further discuss with other reviewers and ACs about the first point of weakness.
---
Reply to Comment 1.1.1:
Comment: Thank you for encouraging us to perform additional experiments on convex bound propagation, for which we provide additional results in our overall rebuttal. In short, MTL-IBP performs significantly worse when successively adding auxiliary data and its scaling dynamics deviate significantly from those for Lipschitz-bound models. We'd kindly ask you to consider this in your final evaluation. | Summary: The paper explores advancements in certified defenses against adversarial attacks in deep learning models by leveraging data from state-of-the-art diffusion models during training. It addresses the current challenges where empirical methods, such as adversarial training, augment data but face difficulties with new attacks, contrasting with certified approaches that provide robustness guarantees within predefined threat models but often exhibit lower overall robustness. By integrating additional data from diffusion models, the study achieves state-of-the-art robustness certifications on CIFAR-10 and CIFAR-100 under L infinite and L2 threat models. This approach not only enhances deterministic defenses but also improves accuracy on clean data. The paper also performs extensive ablation studies to examine the effects of various design choices on certified robustness.
Strengths: 1. The writing of the paper is clear and easy to follow.
2. The paper explores a novel approach by leveraging data from diffusion models to improve certified defenses, which are typically more reliable than empirical methods.
3. The approach achieves state-of-the-art results on CIFAR-10, demonstrating a significant improvement in robustness certificates for both L infinite and L2 threat models.
4. The experimental findings are comprehensive and solid.
Weaknesses: 1. The paper evaluated four architectures selected from the certified robustness leaderboard [9]. However, the achieved clean accuracy and robustness accuracy on these neural networks are notably low. For instance, compared to architectures like WideResNet, which can attain at least 95% clean accuracy and over 80% robustness accuracy on CIFAR-10, the accuracies achieved by the approach in this paper remain insufficient.
2. While evaluating the impact of using generated data from diffusion models to bolster certified robustness is novel, the idea itself represents a somewhat incremental advance. Previous research has already explored the use of such data to enhance adversarial robustness within empirical methods.
3. Despite the appeal of certified robustness due to its robustness guarantees, its lower accuracy compared to empirical methods diminishes its practicality. While this paper enhances certified robustness, its results have limited practical applicability.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The clean accuracy and robust accuracy of the networks trained in this paper are still lower than those of other architectures. Could you explain why they are lower and provide some reasons for this?
2. Considering the relatively low accuracies, how practical is certified robustness? Could you list several practical scenarios where sacrificing accuracy for robustness to this extent would be justified?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1. The results primarily focus on CIFAR-10 and CIFAR-100 datasets, with limited exploration of generalization to other datasets and real-world scenarios.
2. The results are based on four selected neural network architectures, and the generalization to a broader range of architectures is not thoroughly investigated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to provide valuable feedback. We’d like to address your concerns regarding the weaknesses as follows.
1. This is currently a well-known general limitation of certified robustness, and we believe should thus not have any influence in how our work is evaluated. To date, there is a trade-off to be made between high levels of robust accuracy to predefined attack models (empirical robustness) or lower levels of accuracy to any theoretically possible attack model (Lipschitz-bound certified robustness, among others), the latter of which is obviously a much stronger guarantee. For what it’s worth, the work by Hu et al. [1] raises hope that some of these limitations may be overcome in future research.
2. The idea itself may not be novel, yet its evaluation on a broad set of Lipschitz-bound models certainly is. While incremental, we still believe it is of interest to the wider community focused on certified robustness, as it establishes that a better baseline for comparing certified models may be the data-saturated regime where no generalization gap is present. This better reflects the capability of each model to certify examples, rather than its ability to generalize.
3. Practicality is inherently subjective, yet we agree there are limits to certified robustness that prevent wide-spread adoption. However, we would also like to highlight that the direct comparison of adversarial accuracies to certified ones is inherently ill-posed, as the latter is strictly smaller than the former, which is dependent on a specific attack model (e.g., PGD). It is thus unfair to dismiss them on the basis of this comparison alone. While an in-depth discussion of advantages is beyond the scope of a rebuttal, we echo the sentiments of Mangal et al. [2] who postulate that Lipschitz-bound certified defenses may put an end to the cat-and-mouse game of adversarial defenses and attacks.
While some points have already been mentioned as part of our discussion of the weaknesses above, we’d also like to respond to the questions.
1. Unlike adversarial robustness, which is always evaluated with a specific attack, certified models present guarantees with respect to all possible attacks (given a norm and epsilon bound). This is achieved by imposing mathematical constraints, such as orthogonality, that have negative impacts on model capacity and computational complexity. Both limitations are currently being addressed in ongoing research.
2. While practicality is somewhat limited currently, safety-critical domains remain interested in Lipschitz-bound certified robustness as it is immune to advanced attack models that may be devised by malicious actors in the future as part of the cat-and-mouse game that is partially fueling the large amount of publications in adversarial robustness. We’d again like to point to Mangal et al. [2] who more extensively argue regarding its importance.
[1] K. Hu, A. Zou, Z. Wang, K. Leino, and M. Fredrikson, “Unlocking Deterministic Robustness Certification on ImageNet,” in Adv. Neural Inf. Process. Syst. (NeurIPS), vol. 36, New Orleans, LA, USA, Dec. 2023.
[2] R. Mangal, K. Leino, Z. Wang, K. Hu, W. Yu, C. Pasareanu, A. Datta, and M. Fredrikson, “Is Certifying ℓp Robustness Still Worthwhile?,” Tech. Rep., Oct. 2023, arXiv:2310.09361 [cs] type: article. [Online]. Available: https://arxiv.org/abs/2310.09361
---
Rebuttal Comment 1.1:
Comment: Thanks for your clarification. My concerns about the incremental nature and the limited practical applicability of this work still remain. I believe these aspects are still critical. I will keep my score. | Summary: This work proposes using data augmentation with diffusion models to improve the certified robustness of image classification models. The authors analyze the training and certification behavior of different Lipschitz-bound-based machine learning models when the training data is supplemented with additional generated samples.
Strengths: **The Method Improves Certified Robustness**
The experiments demonstrate that adding generated data can enhance the certified robustness of several certification methods. Three out of the four tested methods show improvements when trained on additional data, and the fourth method also shows improvements when dropout is removed. The authors suggest this is due to a generalization gap in existing models, which can be bridged by training on additional (generated) data.
**Extensive Evaluation**
The evaluation section is comprehensive, with tests conducted across two datasets, four certification methods, and two threat models. It also includes numerous ablations, such as varying model sizes and percentages of generated data. This provides a well-rounded understanding of the proposed data generation method, with findings effectively summarized in the takeaway list in Section 4.6.
**Code Included and Well Documented**
The supplementary material includes code for running the experiments, which enhances reproducibility. The repository is well-documented, providing clear instructions for setup and reproduction of the paper’s results, tables, and figures.
Weaknesses: **Limited Novelty and Insights**
The idea of supplementing training data with generated data is not new and has been applied in various contexts. Specifically, it has already been used to improve Lipschitz certification by Hu et al. [22]. While this work offers a more thorough evaluation across different threat models and model architectures, the additional insights gained are limited. The paper mainly serves as an evaluation without providing new theoretical insights or methodological advancements.
**Missing Comparison to a Key Related Approach**
A very similar work by Hu et al. [22] also uses diffusion models (DDPM) to augment the training of Lipschitz-based certification methods. Although this work is listed in the references, it is neither discussed nor compared to, despite being the most related approach. This omission leads to several issues:
- The claim in L105 that certified guarantees on ImageNet are “close to random guessing” is inaccurate. Hu et al. report 35% CA for $\ell_2$-norm with $\epsilon = 36/255$, which is significantly higher than the 0.1% of random guessing.
- The claim in L36 that “[generated data] has not yet been combined with deterministic certified training methods” is incorrect, as Hu et al. have done this.
- The claimed improvement over SOTA methods (L13, L145) is not entirely accurate. For $\ell_2$-norm with $\epsilon = 36/255$ in CIFAR-10, Hu et al. report 70.1% CA, which surpasses the 69.05% reported in this work.
**Unclear Meaningfulness of Improvements**
The improvements over prior methods are minor, in the low single digits. As the authors note (L261), a change in hyper-parameters or different seeds can have similar effects on the model. Therefore, it is difficult to judge if the measured improvements are meaningful, especially considering the significant overhead of training a generator, generating the data, and additional model training on the larger dataset. This issue is compounded by the fact that all reported numbers are from single runs with one seed, without error bars or standard deviation. While the authors argue that the large computational cost makes multiple runs infeasible, it remains unclear how meaningful the improvements are.
Technical Quality: 2
Clarity: 3
Questions for Authors: L180: Do you really mean “1m, 1m, or 1m auxiliary data”?
**Remarks**
Tables and figures are sometimes far from where they are referenced (e.g., Table 1, Fig 2). This makes it difficult to read as the reader has to jump back and forth.
Some instances of % should be replaced by percentage points, e.g., in the abstract (L14), L42, etc.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The limitations are not adequately addressed. The paper should include a discussion of limitations, such as the fact that only four Lipschitz-bound-based certification methods are evaluated. It is unclear if these results generalize to other Lipschitz-based certification methods and are likely not applicable to certification methods from different families. Furthermore, only mid-scale datasets like CIFAR are considered; the generalization gap may not exist in larger-scale datasets. Additionally, the results are limited to image classification, and the resulting improvements are small.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the feedback, and for highlighting our typo in L180, which was meant to read “1m, 5m, or 10m auxiliary data.” We’d like to address your main concerns as follows.
**Limited Novelty and Insights**
The original idea of using data from generative models trained on the original data actually traces back further to a paper by Gowal et al. [1] in 2021, with Wang et al. [2] later providing a more in-depth empirical analysis of the various factors contributing to the observed improvements. Both publications remain highly influential in the domain of empirical robustness. Although we acknowledge that Hu et al. [3] have now also used DDPM, they only dedicate a single paragraph to its discussion. Hence, our detailed empirical analysis in the domain of Lipschitz-bound certified defenses remains novel. One of the key, and we believe substantial, insights is that additional data can only help close the generalization gap, but not improve robustness itself for Lipschitz-bound certified robustness.
**Missing Comparison to a Key Related Approach**
As outlined previously, Hu et al. [3] might be the first published work to use generative models with Lipschitz-bound certified defenses but is missing a broader set of experiments and discussion of the applicability of data scaling to improve Lipschitz-bound robustness guarantees. We did not include larger-scale datasets, in particular ImageNet, as they are the only ones to achieve meaningful levels of accuracy and also do not apply DDPM. Given the chance, a camera-ready version would better delineate our work from that by Hu et al. [3] as well as reformulate some of the claims mentioned for additional clarity.
**Unclear Meaningfulness of Improvements**
We understand that a larger sample size of different seeds would have been preferable, yet a significant budget of GPU hours (approx. 3000 hours) was already spent on the presented experiments. Given that most trends are present across multiple experiments, we nevertheless feel confident that our conclusions hold up to further scrutiny.
In a camera-ready version we would also revisit placement of tables and figures and ensure that %pt (percentage points) are used in place of % where necessary.
[1] S. Gowal, S. Rebuffi, O. Wiles, F. Stimberg, D. A. Calian, and T. A. Mann, “Improving Robustness using Generated Data,” in Adv. Neural Inf. Process. Syst. (NeurIPS), vol. 34, Virtual Event, Dec. 2021, pp. 4218–4233.
[2] Z. Wang, T. Pang, C. Du, M. Lin, W. Liu, and S. Yan, “Better Diffusion Models Further Improve Adversarial Training,” in Proc. Intl. Conf. Mach. Learn. (ICML), Honolulu, HI, USA, Jul. 2023.
[3] K. Hu, A. Zou, Z. Wang, K. Leino, and M. Fredrikson, “Unlocking Deterministic Robustness Certification on ImageNet,” in Adv. Neural Inf. Process. Syst. (NeurIPS), vol. 36, New Orleans, LA, USA, Dec. 2023.
---
Rebuttal 2:
Comment: Thank you for your response to my questions and comments.
**Limited Novelty and Insights**
As stated in my original review, I agree that the evaluation is more thorough than Hu et al.'s and provides some additional insights. However, the novelty compared to prior work is still limited, as a very similar method has been proposed before.
**Missing Comparison to a Key Related Approach**
Thank you for expanding the discussion and comparison to Hu et al. and for correcting the related claims. The rebuttal alleviates this concern.
**Unclear Meaningfulness of Improvements**
I understand the reason for not running wide-scale additional experiments. However, the small improvements combined with single runs remain a weakness, as it is difficult to judge how meaningful the improvements are. Could it be possible to perform multiple runs with different seeds for some key configurations to limit the computational budget required and still show the mean and variance of results? I don't expect those results within the discussion period, but it would be good to add them to the final version.
---
Rebuttal Comment 2.1:
Comment: Thank you for your response. We agree that additional experiments with different seeds would help better judge the meaningfulness of the observed improvements and have scheduled four additional runs for the best configurations of each model with and without auxiliary data (e.g., the ones with 64.53% and 69.05% certified accuracies for LOT, Tab. 2). As we are currently running experiments on convex bound propagation we do not expect those results to be ready before the end of the discussion period, though.
---
Rebuttal 3:
Comment: While we are unable to provide true error estimates based on different seeds within the discussion period, we can make an educated guess for GloroNet (L, 2400 epochs) based on a larger-scale sweep with 49 different auxiliary dataset sizes. When fitting a logarithmic curve to the obtained certified accuracies, this yields a standard deviation of 0.20 percentage points under a constant error model (i.e., the error is assumed to be independent of the amount of auxiliary data). This is well below the 2.91 percentage point difference with and without auxiliary data. Although this is certainly limited in methodology, we hope this may help further alleviate any concerns with regards to the significance of the differences observed. | null | null | Rebuttal 1:
Rebuttal: In response to the reviewer’s feedback, we have made the following changes to the manuscript
* In both Sec. 1 (Introduction) and Sec. 7 (Conclusion) we have added further references to Hu et al. and their prior usage of DDPM generated data, and carefully reworded related claims where appropriate (ours remains first for $\ell_\infty$ with new state-of-the-art). This is in response to reviewer #1's valid concerns that we may not have highlighted this enough in our initial manuscript.
* In Sec. 2 (Related Work), subsection “Certified Robustness”, we have dedicated a paragraph to convex bound propagation, and how it differs from Lipschitz-bound certification. This includes the references mentioned by reviewer #3. Experiments are currently running, and we will update once results are in.
* In Sec. 3.1 (Dataset and Threat Models) we have removed the reference to random guessing and replaced it with a more extensive discussion why we chose not to evaluate on the larger-scale ImageNet dataset.
* In Sec. 5 (Comparison to Empirical Robustness), subsection “Amount of training data”, we have added Bayes error as an alternative explanation as to why 1 million additional images seem sufficient for the investigated models. Thanks to reviewer #3 for pointing us in this fairly recently researched direction (May 2024).
* After Sec. 5 we have added a new Sec. 6 (Limitations) which addresses all known limitations, previously scattered throughout the paper, in a single place. Despite our extensive experiments, this includes (a) only evaluating one type of deterministic certified robustness, namely Lipschitz-bound ones; (b) only performing one repetition per experiment; and (c) only work on CIFAR-10 and CIFAR-100, as opposed to ImageNet.
We are currently also running experiments using the work by De Palma et al. [1], which uses convex bound propagation and is the 3rd-best model after ℓ∞-dist Net. However, as we have a shared cluster, we are unable to estimate whether results will be finished by the end of the discussion period. We expect to post an update by Monday.
[1] A. De Palma, R. Bunel, K. Dvijotham, M.P. Kumar, R. Stanforth, and A. Lomuscio, “Expressive Losses for Verified Robustness via Convex Combinations,” in Proc. Intl. Conf. Learn. Representations (ICLR), Austria, AT, May 2024. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Fast, accurate training and sampling of Restricted Boltzmann Machines | Reject | Summary: Training RBMs is challenging and slow due to the multiple second-order phase transitions and associated slow mixing of MCMC sampling. The paper introduces a pre-training method, consisting in integrating the principal directions of the dataset into a low-rank RBM through a convex optimization procedure. The Gibbs-Boltzmann equilibrium distribution of the pre-trained model can be efficiently sampled via a static Monte Carlo process. Starting from the pre-trained model, the standard Persistent Contrastive Divergence (PCD) training procedure for RBMs partially overcomes the problem of second-order phase transitions. The pre-training method is tested on the MNIST 01 dataset, a synthesized “Mickey” dataset, and the Human Genome dataset (HGD). The method is shown to outperform the PCD algorithm and the Jarzynski reweighting method (JarRBM).
The paper also introduces a new method to sample from the trained model, called Parallel Trajectory Tempering (PTT), and compares it with the Annealed Importance Sampling (AIS) method.
Strengths: Technically sound.
The proposed pre-training method is shown to improve on other training schemes for RBMs (namely the standard PCD method, and the Jarzynski reweighting method).
The novel PTT method is shown to improve on standard Gibbs sampling.
Weaknesses: My main concern is about the usefulness of the RBM approach to generative modeling. (cf question Q1 below).
The PTT sampling method seems to require more memory than standard methods for RBMs.
It is unclear how novel the proposed pre-training method is. (cf question Q2 below)
The paper contains a link to a github repository that reveals the author’s identity (section 8 page 9):
https://github.com/nbereux/fast-RBM
Minor. Line 465: Appendix A.2 references itself.
Technical Quality: 3
Clarity: 3
Questions for Authors: Q1. I am missing the big picture. Could you clarify the primary motivation for studying RBMs? How do you envision RBMs improving on generative models? In what scenarios could RBMs improve on other SOTA generative methods (e.g. transformer-based generative models) ? And according to what criterion/metric can these RBMs improve over these SOTA generative methods?
Q2. Can you clarify what was done in the Reference below, and what is the contribution of the present work compared to this earlier work?
Aurélien Decelle and Cyril Furtlehner. Exact training of restricted boltzmann machines on 414 intrinsically low dimensional data. Physical Review Letters, 127(15):158303, 2021.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the referee for his comments and questions.
**Q1:** The main reasons for using RBMs are twofold: they are easy to interpret and are particularly well suited for tabular datasets with discrete variables such as DNA or protein sequences, both in terms of generation power and sample efficiency, which makes them particularly suitable for inference applications in physics, bioinformatics or neuroscience. Transformer-based generative methods require a large amount of training data. Most applications in science cannot afford this luxury, as experimental annotations are expensive or private. Moreover, once these models are trained, apart from their impressive generating power, it is almost impossible to extract understandable information from their parameters, which is the typical goal of data-driven scientific studies.
We copy below a list of recent references published in high-impact journals using the RBMs in biology:
* [Bravi, B., et al. Probing T-cell response by sequence-based probabilistic modeling. PLOS Computational Biology, (2021)]
* [Di Gioacchino et al. Generative and interpretable machine learning for aptamer design and analysis of in vitro sequence selection. PLoS computational biology, (2022)]
* [Bravi, B. et al. RBM-MHC: a semi-supervised machine-learning method for sample-specific prediction of antigen presentation by HLA-I alleles. Cell systems, (2021)]
* [S. Quiroz Monnens et al, The Recurrent Temporal Restricted Boltzmann Machine Captures Neural Assembly Dynamics in Whole-brain Activitye Life (2024)]
* [Bravi, B. A transfer-learning approach to predict antigen immunogenicity and T-cell receptor specificity. ELife, (2023)]
* [Tubiana, J.et al. Funneling modulatory peptide design with generative models: Discovery and characterization of disruptors of calcineurin protein-protein interactions. PLoS computational biology (2023)]
* [Bravi, B. Development and use of machine learning algorithms in vaccine target selection. npj Vaccines, (2024)]
We say that RBMs are interpretable precisely because their architecture is simple enough to allow a high level of analytical treatment. For example, their energy function can be rewritten in terms of an explicit many-body spin interaction system. This analogy makes it a very powerful inference engine that overcomes the standard Ising inverse approaches restricted to pairwise interactions. The structure of the probability distribution can also be explored with perturbative tools that allow, for example, to hierarchically cluster data for phylogenetic reconstruction (see [8]) or to extract possible mutational paths between data points [Mauri, E., Cocco, S., & Monasson, R. (2023). Mutational paths with sequence-based models of proteins: from sampling to mean-field characterization. Physical Review Letters, 130(15), 158402.].
**Q2:** the question of the novelty is answered in the Author rebuttal section
**Weaknesses:**
* *memory costly PTT:* It is true that the PTT sampling requires more memory than a standard sampling procedure, but it allows the generation of samples in models where it is simply impossible to generate samples in a reasonable amount of time otherwise. Memory is much easier to handle than time, especially in our case the models are moved to GPU memory one at a time, and only when we need to generate samples of them. Otherwise, they are kept in RAM. Even though the method requires more memory than standard Gibbs sampling, it is a very limited amount when compared to sample large models. Moreover, pre-training allows keeping this memory cost very low (in our case, only 7 times more memory was needed than standard sampling). We will mention this in the limitations section.
**Minor comments:**
* We will correct the problem with the auto-reference.
* We apologize for the non-anonymized GitHub link, we were not very sure how to do it otherwise.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed explanations. They have addressed my main concern, and the novelty compared to prior works on RBMs seems convincing to me too. Overall, their response motivates me to increase my score.
In the introduction section of the paper, I'd encourage you to further motivate the usefulness of the RBM approach compared to other generative models (especially compared to SOTA transformers). | Summary: The work proposes to pretrain RBMS with a recently developed convex approximation, the restricted coulomb machine and then fine-tune the model using standard techniques like PCD. Further, a novel sampling technique, PTT is proposed that can sample from the final trained model by employing a sequence of model snaphsots during training, that are then connected via replica exchange in the style of parallel tempering.
Experiments are conducted and results show that the sampled distributions, when projected to the first few principal components, match the true distribution better. Moreover, log-likelihood comparisons based on single training runs show that the proposed method starts at much higher likelihood values due to the initialisation and there is some evidence that the resulting model als reaches higher likelihood values. Further experiments for PTT show that it is more likely to jump between clusters of the distribution than PCD based on gibbs sampling
Disclaimer: This is an emergency review. I have not had the time to do a detailled analysis, or implement/reproduce any of the results. While I am expert in the field, I will adapt my confidence accordingly. I will not fullfy abide to the review format.
Edit: An edit has been performed that only included changes of the format, but not the content.
Strengths: - The use of the approximation using the restricted coulomb machine as an initialisation is an interesting idea that is worth investigating and the results in Fig 3C suggest that this pretraining approach is very effective.
- The idea of using the training models as sampling steps is an interesting approach as well.
In general, i can see that both techniques can become tools in a more general RBM toolbox, however both of them seem to be incremental changes, even though they could impact RBM training a lot.
Weaknesses: The main weaknesses of the present work are in two areas: experiments/comparisons and language, of which I only deem the former critical, while the latter will limit the potential impact of the article in the machine-learning community. As a result of the weaknesses, this paper has a number of misleading claims or claims that are not supported by the data presented in the work.
Experiments/Comparisons:
- The authors dismiss using PT for being "expensive" and do not compare against it. This is against evidence that even single steps of PT chains with a moderate number of parallel chains can be more efficient than PCD using similar resources. This is especially interesting, since the authors use PCD-100 for training, which allows a lot of resources for PT. Note that the authors themselves reference [40] which shows an order of magnitudes improvement over Gibbs sampling already with 10 chains.
Another example is given by
[*] https://www.sciencedirect.com/science/article/pii/S0004370219301948
where the authors used PT for training with k=50 chains and only a single sampling step per iteration. Note that this paper uses a very simple heuristic to improve PT sampling: choose the reference (temperature ->infty) distribution as not the uniform, but the marginal data distribution. This is in contrast to [40] that used the uniform distirbution, leading to a significantly worse baseline.
- There are almost no comparisons of log-likelihood values or values of normalisation constants for the proposed technique. While some are given in Figure 3C, they are only single run and only using approximated likelihoods. Due to this, the phrase " significantly higher log-likelihoods" in the conclusion is NOT supported by the data, given there is not enough data to test for significance or even measure the variability.
- The training is also cut short, or the RBMs trained are not powerful enough, since 3B shows clear artefacts in all samples, indicating that none of the machines approximate the dataset well. Since RBMS with enough latent variables are universal function approximators, we can not get a definitive statement of whether the proposed pretraining does allow for better likelihoods. Since the experiments are not very expensive, this reviewer would propose to at least repeat experiments in order to obtain error bars on Fig 3C.
- The learning rate of 0.01 used in the experiments seems to be on the high end. This is not only bad for PCD training, but also for obtaining high likelihood values, and could explain big parts of the leveling out of the graph in Fig 3C. While it is okay to use a high learning rate in the beginning, keeping it constant over the course of training seems like an oversight.
- For PTT especially, there are no good comparisons that compare the quality of the samples in terms of representing the true distributions. While visual examples are shown that show visually good mixing, the baseline to compare to is again PCD, and not PT, nor stacked tempering. Since, again [40] showed that both alternatives clearly beat gibbs sampling given the same amount of resources, we do not know how good this sampling scheme really is, compared to strong baselines. This reviewer suggests to compare in at least one experiment the proposed approach to an approach with known normalisation constant and then measure how well estimators based on these samples work, similar to [*]. This would also partially verify 3F.
- As an addendum to the previous point: [*] showed that the performance of AIS improves significantly with the choice of reference distribution. Using the same distribution as [*] or the pretrained distribution proposed in this work, might diminish any performance gains by PTT. This would still be a major improvement to the state of the art.
Language:
- While in general well written, this work reads like targeting a physics community, not the ML community. As a result, this paper includes slang terms mostly encountered in statistical physics/thermodynamics, which do not have a clear meaning in the statistics community. This includes terms like "phase transitions" (first and second order), "equilibrium models" (not consequential for the article, nor the review, but this reviewer genuinely does not know what this term is supposed to mean), "critical transitions", "relaxation times", "free energy". Since most of these terms appear in sections where the authors try to explain the method and/or its consequences, a significant number of readers will not be able to understand those reasonings as they do not have a grasp of the physical analogies. This reviewer would propose to replace some of the terms by the statistical equivalent, or to introduce them.
- Some of the explainations and reasonings are misleading. The initial paragraph highlights that RBMs are supposedly interpretable. While they are simple models, Binary RBMs are still universal function approximators and thus it is highly unlikely that the latent space has any meaning that aligns with any human interpretable semantics. If the authors disagree with this, This reviewer encourages them to add a citation to line 25.
- Missing citations: the datasets used should be cited.
- Figure references missing: the article does not always refer to the figure they are talking about, e..g, line 205. In general references should include the subfigure letter as the article does later, e.g., lines 263+
Technical Quality: 3
Clarity: 2
Questions for Authors: Suggestions are added under weaknesses.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors mark "Yes" for point 2 "Limitations" on the checklist. This reviewer has not found that the authors discussed the limitations of their work. This especially includes the guidelines that say that authors should "reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs." However, in fairness, the authors answer "[NO]" in question 7, discussing the presence of error bars or significance tests to measure significance. However, since this is not part of the final publication, and the authors include misleading claims about significance of results, this must be discussed in the main text, or the phrasing weakened.
I have not found any information on runtime or CPU/GPU hours, but the CPU/GPU were reported, so point 8 is partially fulfilled.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for its comments and suggestions.
**1. PT:** The reviewer disagrees with us on the overall performance of standard PT in RBMs. We have several comments on this. First, the reviewer disagrees with the statement that PT is costly and assesses that the reliability of PT in RBMs has been well established in previous references. It is more or less textbook knowledge in statistical physics that the PT algorithm does not work in the presence of a 1st order phase transition (Ph-Tr). The reason is that if new distant modes simply emerge when the temperature decreases, the chains coming from high temperatures will never nucleate them if the number of MCMC steps performed between temperature-swaps is not long enough because the barriers between modes are large. As discussed theoretically in [41], this is the case when dealing with datasets clustered in a low-dimensional space (which is the case for genetic or protein data). Schematically, for these datasets, one tries to encode several distant modes at $\beta=1$, which are usually not symmetric around the zero magnetization. Since the temperature is only coupled to the energy and not to the entropy, a change in temperature leads to “1st order” Ph-Tr (because the modes are more sensitive to temperature changes the farther they are from the origin, see [41]). As a result, there are modes that simply disappear from the equilibrium measure when the temperature is increased. On the other hand, the chains located at these vanishing clusters at low temperatures melt into other clusters when they visit the high temperatures. When iterating the standard PT process, the chains occupying these clusters will gradually disappear, giving the false impression that the configurations belonging to these clusters are not typical at $\beta=1$, when they are. In contrast, the “2nd order” transitions are characterized by a merging of the modes. If the “2nd order" Ph-Tr occurs at temperature, this would mean that one hump gradually splits into two (or more) as the temperature decreases. The transitions that occur along the training trajectory are 2nd order transitions, and the transitions encountered when the temperature is increased (in clustered data sets) are 1st order Ph-Tr. Both statements are analytically proven in RBMs (see [26,27,28] for the trajectory, and [41] for the temperature).
We further elaborate on this point in the official comments.
**2. LL:** The LL was shown in Fig. 3 for MNIST and in Fig.5 for the HGD. We will include the LL of the remaining dataset in the Supplemental. We will add error bars to the curves by repeating several times the estimation. Following the reviewer’s suggestion, we will compare the different strategies to compute the LL in a controlled set-up where the partition function can be exactly computed. In particular, we will consider RBMs trained with the datasets discussed in the paper but with only a few hidden nodes (we can compute the exact LL as long as $N_h\le20$). We will also illustrate these cases to discuss the differences between PT and PTT.
3. It is not clear to us which artifacts the reviewer refers to. The samples in Fig. 3B are real samples from the learned RBM and not the probabilities for each pixel. The white dots that sometimes appear in the black background are rare events due to fluctuations at T=1. Further away, one starts to observe overfitting effects. If necessary, we can use the mean values of the pixels instead of samples, which smooths the images into gray levels.
4. The learning rate we used seemed to work well in our case and was maintained for all experiments and methods we used, for the sake of comparison. While we agree that keeping the learning rate constant is generally not the best strategy, we also believe that it is an easier setting to compare different methods and avoid adjusting more and more hyperparameters that would introduce hard-to-control effects. Moreover, adjusting the learning rate would similarly affect both strategies using PCD (i.e., with or without pre-training). We propose to show experiments with smaller learning rates in the Appendix, showing that the overall behavior between training schemes remains unchanged. We will also include error bars in the LL figures to illustrate the typical fluctuations of this measure.
5. We will include in the appendix the comparison between the samples obtained with PTT and those obtained after sampling $10^7$ “classical” MCMC steps (several hours long) the HGD model. We show this data in the attached PDF. They both perfectly match. We also show that the PT sampling fail to reproduce the upper cluster. We could not compare with the modified PT during this rebuttal week, but we will do it in the final version.
**Languages:**
1. We thank the referee: we should have defined better these quantities, and we will do it in final version.
2. RBMs are much easier to interpret than other models based on DNN. The main reasons for this are, first, that the weights of the latent variables interact directly with the visible nodes and therefore offer many possibilities to analyze them: SVD, looking at their value as a function of the visible nodes, etc. For example, Ref [15] shows that the protein functionality can be inferred from the RBM's latent space. Second, Ref. [7] showed that the RBM can be mapped to a system of interacting visible nodes, from which not only pairwise couplings between nodes can be inferred, but also higher-order couplings. Third, ref. [8] showed that the minima of the approximated free energy can be used to cluster data hierarchically. One can accurately approximate the free energy of the RBM because it has a very simple structure. There are also many recent papers that use the RBM for interpretive applications in biology. A list can be found in the rebuttal of reviewer SD84.
3. We will add the citation in the final version.
4. We will add more precise citation to the figures for the final version.
---
Rebuttal Comment 1.1:
Comment: Regarding point 1
The rebuttal does not answer my points. I was not arguing that PT would perform better overall, but that it would perform better than PCD. This is well established for RBMs. That PT can fail is well known, but less often than PCD - in both examples shown in the pdf the authors provided, PCD would fail, since the distances of the modes are large, while PT would only fail in one of the two cases.
The authors argue based on [41] that the first order transitions are more common.
I have taken a look at [41] and It is not clear to me why the analysis should apply to the case of binary RBMs in general. The analysis is based on an continuous relaxation, by arguing that the discrete spin vectors lie on a low dimensional subspace of the data and then some of the discrete variables are replaced by variables taking values in [-1,1] to model this subspace. This changes the topology of the space from the discrete metric to a continuous metric. I see no reason to doubt the analysis on the continuous topology. But it is not clear to me how the two types of phase transitions can be differentiated on the original discrete topology or to what degree this is an artefact of the relaxation approach used. E.g., we know of continuous RBM types where all transitions are second order: the gaussian binary RBM (since it is a mixture of gaussians with mean that goes to 0 as temp->infty).
Indeed, in the original work proposing PT for RBMs ( [r1] http://proceedings.mlr.press/v9/desjardins10a/desjardins10a.pdf ) the authors showcase PT on a toy dataset that uses 4 modes in a 4x4 image dataset. This dataset fulfills the basic assumption of [41] that the data points cluster around a few common points that are distant in the discrete topology (and also the continuous topology), and [r1] shows that PT outperforms PCD by a wide margin at similar computational budget.
point 2: Thanks for providing additional experiments. However, to make the claims regarding better LL, you need to also perform multiple runs since all your results are dependent on the RNG and initialisation.
point 3+4: The artefacts i refer to are the errors in the samples produced. We would expect that a well-trained RBM would learn that large areas of the images are black without any noise. The presence of this noise on all trained models signals that the dataset is not learned well. This is most likely due to the large learning rate used. Note that already [r1] and the work by Tieleman,2008 ([r2] https://www.cs.toronto.edu/~tijmen/pcd/pcd.pdf ) used smaller learning rates with a decaying schedule.
---
Rebuttal 2:
Title: Full comment on the first point raised by the referee
Comment: **On the PT**
In the attached PDF, we include a sketch explaining the difference between first and second order first transitions in the attached Fig. 1. In Fig. 2 D-E, we illustrate the effect of the disappearance of a peak upon lowering the temperature in the HDG dataset and the failure of the PT strategy of this dataset (Fig.2C). We also compare these results with the data sampled using 10^7 ordinary Gibbs MCMC steps (that should be long enough to guarantee equilibrium given the measures of Fig.4D).
The reviewer suggests using an improved version of PT using a non-uniform reference configuration. We were not aware of this modified algorithm, and we will discuss it in the new version of the manuscript and this reference, and compare it with the PTT. We agree with the referee that this reference probability will add a bump at the center of the probability distribution, which should to mitigate (not making them disappear) the effect of the first order transitions. Yet, our guess is that the efficiency of this algorithm must depend a lot on the properties of the dataset and the spatial location of the barriers, and it should struggle also with datasets with multiple clusters. We could not do a comparison during the rebuttal week due to the lack of available resources, but we will do it for the final version of the paper.
This trapping phenomena associated to the existence of first order transitions in temperature is reproduced when trying to compute the partition function using a temperature annealing, and it is beyond the failures observed in the computation of AIS. Instead, the trajectory tempering only crosses second order transitions, which means that a high temperature mode smoothly splits on two when lowering the temperature, which avoids the disappearance of modes.
---
Rebuttal 3:
Comment: We thank the reviewer for its reply. Our paper and also ref. [41] focus on binary-binary RBMs in describing highly clustered datasets, where the different modes are separated by large barriers that are already visible at the PCA level, where typical training approaches drastically fail.
**On the comparison of PT over PCD.** The cases discussed in the rebuttal are good examples of situations where a PCD training can work better than a PT training (the version of PT where a temperature swap is followed by only one or a few Gibbs steps). In these data sets, the equilibrium measure can be sampled by performing many accumulated Gibbs steps, but not by performing many PT steps. This is because visiting the higher temperatures makes the sampling less ergodic than at T=1 because each time the chain visits high temperatures, a cluster is suppressed and standard sampling does not have enough time to re-nucleate it before visiting high temperatures.
You can solve this problem by significantly increasing the number of MCMC steps $k$ between temperature exchanges and/or the number of temperatures in the ladder, but in this case the overall acceleration of PT becomes less clear because you are forced to sample $N_\beta$ systems in parallel that are not particularly useful for anything else. The total cost for $k$ sampling steps at $\beta=1$ is then $N_\beta\times k$. For this reason, we have said that PT is computationally very demanding. Ref [40], for example, shows an order of magnitude improvement over AGS in the jumps' timescale in the case of MNIST 0/1, but at the cost of simulating 10 systems in parallel which is much slower as reported in Table S4. They also report no further acceleration when $N_\beta$ is increased from 10 to 1000. For proteins and the 2D Ising model, the situation is even worse.
Finally, we would like to mention that the ref https://www.sciencedirect.com/science/article/pii/S0004370219301948 cited by the reviewer analyzes machines trained with CD-25 and CD-1. The CD training produces very pathological models with a high LL but with a very strange dynamic behavior and, more importantly, models that are completely unable to generate proper samples (see ref. [21] or ref. [r2] suggested by the reviewer). We are not sure to what extent the results of this work are affected by this.
**About the toy dataset:** The dataset used in reference r1 provided by the reviewer is not a good example. It is simply too small (N=16). The divergence of times associated with bimodality increases with exp(N). For N=16, the thermalization problems associated with first-order phase transitions are much smaller than the ones we address in our work.
**On first-order transitions in RBMs:** First-order transitions in temperature are common when working with “clustered” data sets. This is not necessarily the case for most datasets where training algorithms are tested in RBMs.
We disagree with the reviewer's statement that first-order transitions do not occur in other versions of RBMs. Could the reviewer give us a reference stating that all transitions in the Gaussian binary RBM are second order? We would be very interested in such results for our research.
Both first and second order transitions have been reported in Gaussian Mixture Clustering, see for example [Lesieur, T., De Bacco, C., Banks, J., Krzakala, F., Moore, C., & Zdeborová, L. (2016, September). Phase transitions and optimal algorithms in high-dimensional Gaussian mixture clustering. In 2016 54th Annual Allerton Conference on Communication, Control, and
Computing (Allerton) (pp. 601-608). IEEE] or [J. Schneider (Phys Rev E 1998), First-order phase transitions in clustering], the latter showing precisely that first-order transitions in temperature are frequent in this problem. Moreover, using the method proposed in [25], it can be measured that an RBM trained with a simple data set consisting of a Bernouilli mixture of two clusters undergoes a first-order transition when $\beta$ is interpolated from 1 to 0.
On the other hand, the same arguments about RBMs learning PCA through second-order phase transitions in the early stages of training also apply to binary-binary, binary-Gaussian, and Gaussian-binary RBMs, as in Ref [Decelle, A., & Furtlehner, C. (2021). Restricted Boltzmann machine: Recent advances and mean-field theory. Chinese Physics B, 30(4), 040202].
Title: (1/3)
---
Rebuttal 4:
Title: (2/3)
Comment: **About the learning rate:** As discussed with other referees, we believe that it would be better to reduce the learning rate for practical applications. To allow comparison between methods, we apply the same scheme to all training methods. A high learning rate exacerbates the problems associated with the lack of thermalization during PCD training, and not the other way around. Furthermore, the learning rate is not an absolute value. The optimal value for each machine strongly depends on the number of visible and hidden nodes used. Also on the number of MCMC steps. The machines discussed in Tielemann’s paper do not have the same number of nodes as ours, so the direct comparison of lr does not make sense.
Our samples show no artifacts. Tielemann's work shows the probabilities, not the variables. This is obvious because the numerical images are displayed in grayscale and not in black and white pixels. The display of probabilities is common in the literature for binary variables and image data. When we display the probabilities, we get a perfect black in the background, as in Tielemann, and our digits are also better shaped and more diverse when we work with the full MNIST and this lr. Finally, the (random) presence of white pixels could be more related to the value of the bias, which is set at the beginning of the training. The bias is initialized to reproduce the frequencies without weights before it starts learning. However, for pixels that do not fluctuate, a threhold is used to avoid infinite bias. The flipping of some spins is probably related to a “weak” threshold.
---
Rebuttal Comment 4.1:
Title: (3/3)
Comment: **Coulomb relaxation. ** Concerning the question whether the presence of a first order phase transition is just an artifact of the Coulomb relaxation: actually this is slightly
more involved to show it for RBM than for Coulomb machine because for Coulomb machines the energy term is directly proportional to inverse temperature but
we could add to the paper a formal argument which goes schematically as follows: just writing the local minima of the free energy (function of magnetization in the reduced intrinsic space)
and looking at how the equilibrium is displaced with temperature shows that their energy vary in a non-linear way with temperature and non-uniformly regarding their position on the
magnetization manifold, while the entropy contribution do not change. This necessarily implies that changing even slightly the temperature will
break the highly sensitive equilibrium obtained between these state corresponding to the multimodal distribution and will typically favorise one state among all.
This scenario is expected to occur as soon as the data are located on low dimensional space ($d=O(1)$ ) compared to a large embedding space ($d=O(N)$, $N\gg1$)
which is quite common in our point of view, at least for the type of data we are interested in. | Summary: The manuscript suggests to apply
Aurélien Decelle and Cyril Furtlehner. Exact training of restricted Boltzmann machines on intrinsically low dimensional data. Physical Review Letters, 127(15):158303, 2021.
to initialise persistent contrastive divergence (PCD) learning for RBM training and estimating the log likelihood / partition function of RBMs.
Strengths: While one may argue that the novelty is limited because the interesting theoretical work was done in the abovementioned paper by Aurélien Decelle and Cyril Furtlehner, the idea is sound and the approach may be useful in practice.
Weaknesses: The novelty is limited because the interesting theoretical work was done in the abovementioned paper by Aurélien Decelle and Cyril Furtlehner.
My **main criticism** refers to the empirical evaluation, and I think these questions should be addressed:
- Was enough effort out into the baseline methods (including hyperparameter choice)?
- What would be an example where the PCA decomposition is misleading (i.e., not helping or even slowing down the process)?
- What about MNIST with all 10 digits?
**Details** (not ordered by importance):
* „On the diametrically opposite side (on interpretability) are generative ConvNets [9, 10], where the energy function is formulated as a deep neural network, which are capable of synthesizing photorealistic images but are almost impossible to interpret as a physical model.“: Not clear, perhaps add half a sentence to elaborate.
* „second-order phase transition“: define what this is already in the beginning
* Beginning of section 2: I suggest to add the analysis in
Fischer, Igel. Bounding the Bias of Contrastive Divergence Learning. Neural Computation, 2011
https://direct.mit.edu/neco/article-abstract/23/3/664/7646/Bounding-the-Bias-of-Contrastive-Divergence?redirectedFrom=fulltext
to the discussion of the limitations of CD.
* „much better than those obtained with the standard Annealing Important Sampling (AIS) techniques“:
In [42], several methods are discussed, in particular one based on Bennett’s Acceptance Ratio method (BAR), which performed in general better than standard AIS. How does the proposed method perform in comparison to BAR?
* What if linear PCA is not well suited to find a good representation of the data because of a highly non-linear latent structure?
* I am not fully happy with the selected benchmark tasks.
In particular: How does the method perform on MNIST with all 10 digits?
**Minor** comments:
The reference list should be revised. Inconsistent capitalisation, author first name abbreviations, etc.
Technical Quality: 3
Clarity: 2
Questions for Authors: My main criticism refers to the empirical evaluation, and I think these questions should be addressed:
- Was enough effort put into the baseline methods (including hyperparameter choice)?
- What would be an example where the PCA decomposition is misleading (i.e., not helping or even slowing down the process)?
- What about MNIST with all 10 digits?
See my comments below for further questions?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: I think the limitations should have been explored in more depth.
What would be an example where the PCA decomposition is misleading (i.e., not helping or even slowing down the process)?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for its careful reading of the paper and for its constructive comments. We attempt to respond to the comments individually below.
The answers to the concerns about **novelty of this paper** have been answered in the section "Author Rebuttal", as they were shared by several reviewers.
Concerning the rest of comments:
**On the PCA** The reviewer doubts the reliability of the PCA to best describe the multimodality of the data. We agree with the reviewer that the slowest mode could be related to a non-linear decomposition of the dataset and that, if this were the case, our method would not be particularly useful to avoid this. Our model is designed for data that is highly clustered in a low-dimensional space, which is typically already captured at the PCA level. This is the case of the datasets analyzed in this work (and the typical situation encountered in dna/protein datasets). We will try to make this distinction clearer and discuss this limitation in a new section: "Limitations". **We further elaborate our answer in the official comments**.
**MNIST:** We did not discuss the entire MNIST dataset because it is much easier to train than the 0-1 version because barriers are much less pronounced in that case. MNIST dataset can be well-trained with a PCD-10 scheme, and in fact we do not observe any particular improvement of using the pretraining strategy (nor do we observe that it is getting worse because the quality is limited by the number of steps k used at the second stage of the training PCD-k). Being larger CelebA is a much better test for our algorithm because PCD-10 struggles to get decent images. We will include some analysis in the new version of the paper (and discuss the performance on the complete MNIST in the Supplemental).
**Baseline methods:** Referee wonders if we did any effort to optimize the baseline methods for a meaningful comparison. Our approach proposes only to include a pre-training before initiating the standard training methods. In this sense, we tried to optimize the baseline methods first and use the same hyperparameters later. Nevertheless, we agree with the referee that no discussion about the hyperparameters was included in the manuscript, and it is an important point. We will include some figures in the supplemental material showing the performance at different learning rates and number of added hidden nodes.
**Comments:**
* **Interpretability** Simple EBMs can be directly mapped to spin ferromagetic models in physics, for instance a binary Boltzmann machine without hidden units is formally identical to a (pairwise) disordered Ising model, and thus the trained model parameters can be interpreted as physical pairwise interactions between both variables. A strategy (typically referred as inverse Ising approach) widely used in data-driven interpretability approaches in biology, such as the direct coupling analysis (DCA) of protein families or the inference of neuron connections from spike data in neuroscience. Similarly, RBMs can be mapped to a slightly more complex spin model where interactions between spins are not only pairwise but multibody. The interactions between variables in a deep convolutional generative neural network cannot be easily written in that way, and the most prominent interactions between variables cannot be easily inferred from the analysis of the network. Furthermore, the analogy between neural networks and spin systems allows us to apply all the machinery developed in physics during the last decades to explore the equilibrium behavior of complex systems to analyze the learning or probe the probability distribution function. We will extend the paragraph to make it clear and add a section in the SM illustrating the mapping to a physical system.
* Following the same physical analogy, one can describe the encoding of features during the training as a thermal annealing process, along which each new mode in the model is introduced through a phase transition which is analogous to a paramagnetic to ferromagnetic transition in temperature. These kinds of transitions are denominated in physics as second order because they are characterized by a divergence on the second order statistics of the system's order parameter (in this case, the fluctuations of the magnetization - the variables projected on the each of the PCA components-). Second order phase transitions, also known as critical transitions, have many interesting properties such as universality (identical exponents rule the divergences in very different physical systems satisfying same basic symmetries and dimensions), the emergence of system size correlations between variables, and the apparition of also universal dynamical features, such as a sudden arrest of the dynamics, an effect known in physics as **critical slowing down**, which in simulations is reflected by a sharp growth of the mixing times with a power of the number of variables. These dynamical effects have recently been studied in the training of RBMs in [[D. Bachtis et al, Cascade of phase transitions in the training of Energy-based models, arXiv:2405.14689 2024]].
We acknowledge that phase transitions are not common knowledge in computer science and statistics (another referee complained about the same point), so we propose to include a whole pedagogical discussion in the Supplemental Material explaining the mapping between the RBM and a physical system and the dynamical characteristics of first and second order phase transitions as they are both important to understand the properties of the training process and the failures of the sampling one.
* We thank the referee for pointing out the interesting reference by Fischer&Igel that was unknown to us, and will be discussed in a further revision of this work.
* We also thank the good suggestion about comparing our estimation of the log-likelihood against the BAR method, which will be taken into account in the final version of the paper.
---
Rebuttal 2:
Title: More on PCA, MNIST and Baseline
Comment: **On the PCA:** The reviewer doubts the reliability of the PCA to best describe the multimodality of the data. We agree with the reviewer that the slowest mode could be related to a non-linear decomposition of the dataset and that, if this were the case, our method would not be particularly useful to avoid this. Our model is designed for data that is highly clustered in a low-dimensional space, which is typically already captured at the PCA level. This is the case of the datasets analyzed in this work (and the typical situation encountered in dna/protein datasets). We will try to make this distinction clearer and discuss this limitation in a new section: "Limitations".
The RBM learning is triggered by the encoding of the covariance matrix (and thus the PCA), it is only at a second stage when it starts to encode the higher-order statistics. The problem is that if the dataset is very clustered (as in the cases discussed in the work), the permanent chain used to estimate the negative part of the gradient remains very far from equilibrium from that point and on (chains typically get trapped in one of several isolated modes no matter their statistical weight), leading to very biased trainings. In these situations, our method can really make a difference. At variance, if the encoding of the first PCA components does not imply dividing the phase space in separate clusters (as it is the case for instance when dealing with the entire MNIST dataset), the encoding of these first directions do not imply insurmountable mixing times (in Fig. 3C of ref. [21] in the manuscript estimated that the first transition implied a growth of thermalisation times of order only 100 MCMC steps, which can be easily tackled by PCD), which means that a pre-training makes no particular speed-up in that case. The situation could be different for larger dimensions due to the critical-slowing down (ref.[28] explicitly computes the mixing times for training using resized versions of MNIST). For this reason, training the entire MNIST datasets is much easier than training the 0-1 MNIST one.
In our experience, however, for highly clustered datasets, the strongest time divergences appear at the beginning of training (when learning the PCA) because it is the moment where different modes are separated by very large barriers, later stages are typically much easier to sample because barriers become finite. This statement was quantified for some datasets used in this work in ref. [25] (see for instance Fig. 8 in [25] for the 0-1 MNIST). This is not the case for the entire MNIST, where mixing times softly grow along the training (see Fig. 3 [21]).
To make these statements clearer, we propose to show the explicit evolution of the exponential correlation times of the different models along the training time, which show very sharp jumps at an initial stage when the first eigenvalues of W grow, further decrease to much lower values that only very softly increase in time.
---
Rebuttal Comment 2.1:
Title: Rebuttal
Comment: I have read the rebuttal.
Challenge: Explain what you mean by second-order phase transitions by defining it using rigorous math without using any analogy from physics.
I did not criticize the choice of relying on PCA. However, for each heuristic it is important to also think about the cases where it does not work.
I would even suggest to include an example for a problem where the approach does not help or even leads to worse results (e.g., because PCA does not nicely decompose the problem).
Along the same line, I support to include and discuss the experiments on full MNIST. It is instructive to see that you do not get a speed-up here and that you have an explanation for this.
---
Reply to Comment 2.1.1:
Title: On the second order phase transition
Comment: One of the goals of statistical physics is to determine the typical configurations to be expected for a set of model parameters. In the case of RBMs, the question is what typical visible (and/or hidden) variables one expects to observe. For example, one can ask what typical energies one expects to observe for a set of parameters $\theta$. For this purpose, let us rewrite the partition function as a sum over all possible values of the energy $E$:
$$Z=\sum_E e^{-E} g(E)=\sum_E e^{-E+S(E)}=\sum_E e^{-F(E)}=\sum_E e^{-Nf(E)}$$
where $g(E)$ is the number of states with energy $E$, $S(E)=\log g(E)$ is the *entropy*, and $F(E)=E-S(E)$ is what we refer as *free energy* ($f(E)=F(E)/N$ is the free energy per variable, with $N$ the total number of variables). The intrinsic free-energy $f(E)$ is a self-averaging quantity, which means that it does not depend on $N$ for large values of $N$. All $Z$, $E$ and $F$ depend on the model parameters.
For $N\to\infty$ the sum in $Z$ is dominated by the states with minimum free energy (and same applies for the probability $p(E)$). A *first order* transition occurs in the parameters when the first partial derivative of $f(E)$, with respect to each of the parameters, is discontinuous (in physics, the parameter is usually the temperature). A *second order* transition occurs when the first partial derivative is continuous, but the second partial derivative is discontinuous. The same discussion done in $E$, can be done with other observables, like the magnetization $m$.
In the case of the RBM, the first learning transition can be mapped to a very simple model, the so-called Curie-Weiss model, as recently described in Ref. [*Bachtis, D., Biroli, G., Decelle, A., & Seoane, B. (2024). Cascade of phase transitions in the training of Energy-based models. arXiv preprint arXiv:2405.14689.*]. The Curie-Weiss model is fully solvable, and for this reason the meaning of a second-order transition is perhaps easier to understand here. Let's give it a try. The Curie-Weiss model consists in a set of $N$ discrete variables $s_i = \pm 1$. We define the (exponential) distribution over these variables as $p(\boldsymbol{s}) = Z^{-1} \exp(\beta /N \sum_{i<j} s_i s_j)$ where $\beta$ is a parameter of the model and $Z$ the partition function. The question is to understand the “structure” of the distribution as a function of $\beta$. We expect that for $\beta$ small, each spin behaves as an isolated Bernoulli random variable, while for a large value of $\beta$, the distribution of the system is dominated by configurations where the variables have (almost) all the same signs. The key mathematical aspect is to study the system in the limit where $N \to \infty$ and the transition between both regimes. A way to study this distribution is to study the moment generating function adding a term $\sum_i h_i s_i$ in the exponential and computing the partition function as a function of $h$. It is possible to show (rigorously in this precise model), that the probability of $m=N^{-1} \sum_i s_i$ of the system is given by $p(m)\propto \exp(-N\Omega(m))$ (in the large $N$ limit) where $\Omega(m)$ is the large deviation function. The structure of $\Omega(m)$ is such that for $\beta < \beta_c=1$, it is a convex function with only one minimum in $m=0$, while for $\beta>\beta_c$ it has two symmetric minimum $m=\pm m(\beta) \neq 0$. The values (in function of $\beta$) of the $m$ are continuous: $m=0$ for $\beta \in [0,1]$, and then positive until it saturates to $1$ as $\beta \to \infty$. In practice, it means that the distribution $p(m)$ pass from an unimodal distribution to a bimodal one as $\beta$ increases. The point where it happens is at $\beta_c=1$ and is called a second order phase transition. A particular interesting properties is that at $\beta_c$, the function $\Omega(m)$ develop a non-analyticity in its second-order derivative, that can be linked with long-range fluctuations.
For a recent review [Kochmański et al, Curie–Weiss magnet—a simple model of phase transition, Eur. Journal of Physics 2013], and for more rigorous results [Sinai, Theory of phase transition: rigorous results, 1982]. | null | null | Rebuttal 1:
Rebuttal: We thank all reviewers for their comments. We answer the question about the novelty of this new work in this Author rebuttal, as this question was raised by several reviewers. We answer the remaining questions directly in each reviewer's rebuttal.
**On the novelty:** Reviewers yudf and SD84 raised concerns about the novelty of the paper, especially regarding the connections with the Decelle&Furtlehner paper. While we rely on the mapping between the RBM and the Coulomb machine proposed in that paper to approximate the low-rank RBM, our work significantly extends the applicability to real data on three ways.
First, because the D&F's work only studies very simple, low-dimensional synthetic data sets with specific modes and regular features to cover the low-dimensional space, and mainly focuses on theoretical aspects. The first added value of the present work is to move from theory to practice, where many details need to be specified to make the technique work for arbitrary data. In particular, we extend this technique up to four intrinsic dimensions, including the direction of the bias, which requires a special treatment. This bias improvement is crucial for processing image data and allows us to achieve an additional direction. We have also corrected the D&F's calculation of entropy to ensure that true equilibrium samples can be obtained through a static Monte Carlo procedure, which is crucial to sample fast the trained machines. We will highlight these improvements to the method for real data in a new version.
The second novelty is that we propose to use this construction as pre-training, which has not been investigated in previous work. We show that this combined scheme allows us to deal with real datasets highly clustered in a low-dimensional space (characteristic of genetic or protein data) where standard methods fail. These datasets are much more complex than the ones treated in D&F, as they are also clustered in more directions than the ones we can approximate with the low-rank RBM. In summary, our combined approach not only allows us to match statistics across a few dimensions, but also to capture the fluctuations of high-dimensional real data sets. The use of RBMs on this type of data is particularly interesting from a scientific point of view, as RBMs work very well with tabular discrete data even in situations where data is scarce (which is the case for biological datasets), while allowing systematic extraction of biologically interpretable data from them.
And third, beyond the construction of the Restricted Coulomb Machine, we propose a new and general sampling strategy. This new strategy is supported by recent developments in the theoretical understanding of the learning process in RBMs. In particular, we exploit the progressive encoding of modes through second order phase transitions in the training, in combination with the fast sampling of low-rank RBMs to propose a strategy that is very similar to the standard parallel tempering method, but where the parameters of the models (saved along the training process) are exchanged instead of the temperature. We show that this strategy can be used both to generate equilibrium samples in highly clustered models, where generating samples using standard methods is not feasible in a reasonable time, and to obtain reliable log-likelihood calculations.
**In the attached PDF file**, you will find some figures answering the reviewer NWGy's questions about the PTT sampling method and its differences from the standard parallel temperature method.
Pdf: /pdf/fa521f92d5b38e02547af08bdf9bd483c49e2cba.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Generated and Pseudo Content guided Prototype Refinement for Few-shot Point Cloud Segmentation | Accept (spotlight) | Summary: This paper introduces a FS-3DSeg framework called Generated and Pseudo Content guided Prototype Refinement (GPCPR) for few-shot 3D point cloud semantic segmentation, leveraging LLM-generated content and reliable query context to enhance prototype quality. GPCPR includes two core components: Generated Content-guided Prototype Refinement (GCPR), which enriches prototypes with comprehensive semantic knowledge from large language model-generated class descriptions, and Pseudo Query Context-guided Prototype Refinement (PCPR), which mitigates class information bias by aggregating reliable class-specific pseudo-query context to create more suitable query-specific prototypes. Additionally, a dual-distillation regularization term is introduced to enable knowledge transfer between early-stage entities and their deeper counterparts, enhancing refinement. Extensive experiments demonstrate the superiority of GPCPR.
Strengths: - The proposed method is technically sound.
- The experiments confirm the effectiveness of the proposed method.
- The paper is well-written
Weaknesses: - The novelty is relatively limited. This paper is not the first to explore using LLM for few-shot segmentation; it has already been done in 2D few-shot segmentation [1][2]. Since LLMs inject semantic knowledge into visual features, their rich semantic knowledge should be transferable between 2D and 3D. This paper lacks specific design or optimization for 3D tasks, making the innovation of the GCPR module quite limited.
- The idea of using pseudo predictions to generate class-specific pseudo-query context in the PCPR module is quite straightforward (not novel) in few-shot segmentation. Additionally, the PCPR module simply stacks techniques from previous works. For example, QGPA directly corresponds to [3], and prototype distillation is identical to [4].
- The introduction of LLMs may significantly increase computational overhead and inference time compared to previous methods.
[1] LLaFS: When Large Language Models Meet Few-Shot Segmentation CVPR2024
[2] Simple Semantic-Aided Few-Shot Learning CVPR2024
[3] Prototype Adaption and Projection for Few- and Zero-shot 3D Point Cloud Semantic Segmentation TIP 2023
[4] Dynamic Prototype Adaptation with Distillation for Few-shot Point Cloud Segmentation 3DV 2024
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. The authors propose using LLM-generated content to enrich prototypes. How sensitive is the model's performance to the quality and diversity of these generated descriptions? Have the authors explored the impact of using different LLMs or prompting strategies?
2. The paper introduces a dual-distillation regularization term. How does this compare to other knowledge distillation techniques in few-shot learning? What is the specific advantage of this approach in the context of 3D point cloud segmentation?
3. The paper uses pseudo masks to extract class-specific pseudo-query context. How robust is this approach to potential errors in the initial pseudo mask generation? Have the authors explored the impact of different thresholds or strategies for generating these masks?
4. The proposed method significantly outperforms state-of-the-art approaches on both S3DIS and ScanNet datasets. Have the authors investigated why the performance gain is notably larger on ScanNet compared to S3DIS?
5. How does the computational complexity of GPCPR compare to existing methods, especially considering the integration of LLM-generated content and multiple refinement stages?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 1
Limitations: 1. scalability: The complexity of generating differentiated descriptions grows quadratically with the number of classes (O(N^2)). This could pose challenges when applying the method to datasets with a large number of semantic classes.
2. Dependency on LLMs: The reliance on large language models for generating descriptions introduces an external dependency that may not always be available or consistent across different implementations or time periods.
3. Computational overhead: The paper lacks a detailed analysis of the additional computational cost introduced by the LLM content generation and multiple refinement stages. This is crucial for understanding the method's practical applicability.
4. Sensitivity to LLM output: The quality of the generated descriptions could vary depending on the specific LLM used and its training data. The paper doesn't thoroughly explore how variations in LLM output might affect the overall performance.
5. Pseudo mask reliability: The method relies on generating pseudo masks for query point clouds. However, the paper doesn't extensively analyze how errors in these pseudo masks propagate through the system and affect the final segmentation results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1. Novelty of using LLM. Specific design.**
1) Novelty:
FS-3DSeg is more complex than 2D, due to 3D data is complex. No prior work has applied LLMs to FS-3DSeg. Directly extending [1][2] to FS-3DSeg faces challenges: [1] finetunes LLMs and outputs 2D polygon coordinates, unsuitable for 3D objects; [2] lacks 3d-related descriptions. These challenges motivated us to pioneer the use of LLM to solve FS-3DSeg.
2) Our specific design for 3D tasks:
①Four 3D-oriented LLM commands for generating diverse class descriptions covering 3D properties like geometric structures and surface shape.
②3D-specific prompt for generating differentiated descriptions that highlight visual or geometric differences.
③Two Text-To-Prototype Compressors for enriching 3D prototypes with knowledge from LLM.
> **W2. Novelty of PCPR.**
Our PCPR is the first to use pseudo-query context to refine prototypes in FS-3DSeg. Innovations include query-to-prototype compressor, prototype distillation, and pseudo-prediction distillation, improving 3D prototype quality and pseudo mask accuracy.
Difference with other methods: QGPA is used as baseline.
[4] only distills prototypes at the last stage, while we distill prototypes at multiple progressive prototype refinement stages to ensure more stable refinement.
Beyond that, we also propose pseudo-prediction distillation to improve pseudo masks, which have not been explored before. These designs effectively tackle semantic information bias in FS-3DSeg.
> **W3. Computational Overhead.**
The usage of LLMs does not increase inference time, as descriptions and text features for all classes are generated and stored offline in one phase. During training/testing, we load the stored text features, avoiding redundant computing costs for regenerating them by LLM online. Specifically, the total offline times are 30.41 minutes for S3DIS and 67.45 minutes for S3DIS and ScanNet, mainly due to the text generation process, with negligible feature extraction time (see Table 1 in the rebuttal PDF). During inference, our model better balances time costs and results, achieving superior results with moderate computational cost compared to the SOTA (see Table 2 in the rebuttal PDF).
> **Q1. Impact of LLMs or prompting strategies.**
1) Different LLMs: Using both `gpt-3.5-turbo` and `gpt-4o-mini` can achieve superior results compared with SOTA, with 75.74% mIoU and 73.88% mIoU on S3DIS, respectively. (see Table 3 in rebuttal PDF).
2) Prompting Strategies: Differentiated descriptions outperform diverse descriptions by 0.9% mIoU. Combining both strategies yields best performance (see Table 3 in our paper).
3) Number of Descriptions: As shown in Figure 4 (c) of our paper, mIoU increases with more diverse descriptions, stabilizing around 10/command.
> **Q2. Compare DD loss with other KD.**
1) Compared with other KD: We verify the effects of logit distillation, feature distillation, and relational knowledge distillation. Our DD loss performs better by adaptively improving the quality of pseudo masks and prototypes through multi-stage distillation (see Table 4 in rebuttal PDF).
2) Advantage: The complexity of 3D data leads to low-quality prototypes and inaccurate predictions. Our DD loss enables early-stage 3D prototypes or predictions to gain insights from their deeper optimal counterparts, facilitating bidirectional information exchange, and resulting in accurate segmentation results.
> **Q3. Effects of pseudo mask errors.**
1) Robustness: Our method is robust to pseudo mask errors, since PCPR and DD loss enable prototypes and prediction to mutually benefit each other, adaptively improving pseudo mask quality.
2) Impact of thresholds: Experimental results in Table 5 in rebuttal PDF file reveal:
- Threshold = 0: Optimal performance without filtering operation (reported in our paper).
- Threshold <= 0.1: Comparable to Threshold=0, indicating effective integration of pseudo query context.
- Threshold > 0.1: Performance decreases as most query features are filtered out.
> **Q4. Why performance gain is larger on ScanNet?**
1) ScanNet has more classes, so the limited 3D support set is insufficient to distinguish classes. Our GCPR integrates rich semantic knowledge from LLM into 3D prototypes to improve performance.
2) ScanNet’s complex scenarios lead to higher class information bias. Our PCPR integrates reliable pseudo query context to refine prototypes, boosting performance.
> **Q5. Computational complexity of GPCPR.**
GCPR and PCPR take 12.84 ms and 11.01 ms, respectively. We achieve the best performance with moderate total computing cost (see Table 2 in rebuttal PDF).
> **L1. Scalability.**
In the offline stage for generating differentiated descriptions, it takes 20 minutes and 50 minutes for S3DIS and ScanNet. To address generation complexity for more classes, we provide the following alternatives:
1) We can divide all classes into several groups and generate differentiated text in parallel. This greatly reduces the time cost and maintains our superior performance.
2) We can remove the differentiated description generation process and only use diverse descriptions. By this, mIoU drops slightly from 75.74% (reported in the paper) to 73.84 % but still far exceeds SOTA (DPA 70.19%).
> **L2. Dependency on LLMs.**
Descriptions and text features of all classes are generated and stored in offline stage. During training/testing, we load stored text features without regeneration, thus ensuring availability and consistency across different settings or time periods. Besides, we do not rely on a specific LLM, using other LLMs (e.g., `gpt-4o-mini`) in GCPR module still far surpasses SOTA (see Table 3 in rebuttal PDF). In addition, even if no LLM is available, using alternative texts such as Word2Vec or CLIP Text still exceeds the SOTA due to the effective cooperation of our GCPR, PCPR, and DD losses (see Figure 4(b) of our paper).
For L3-L5, please see our response to W3, Q1 and Q3, respectively.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response, after considering other reviews as well as the rebuttal, I decided to raise my rating to borderline accept.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer pd6x,
Thank you very much for raising the rating to borderline accept. We sincerely appreciate your time and effort in reviewing our paper and for your constructive comments. Your insights are very valuable in helping us improve our work.
Thank you once again, and we wish you all the best. | Summary: This paper analyses and addresses two issues of prototype-based methods in the Few-shot Point Cloud Segmentation task. For the constrained semantic information issue, they present the GCPR module to enrich prototypes with text knowledge via LLM and CLIP. For the class information bias issue, the proposed PCPR module mitigates this bias with reliable context aggregation. Further, the dual-distillation regularization promotes the refinement of prototypes.
Strengths: This idea of enriching prototypes with knowledge from text is innovative, and the impressive performance on various datasets demonstrates its effectiveness.
The provided visualization is beneficial for explaining the proposed method.
Weaknesses: As the authors mentioned in the Limitations, LLM and CLIP will bring additional computing costs. Although the proposed offline operation can help to some extent, specific metrics like model parameters and FLOPs should be provided to make a fair comparison with the previous methods. Also, I wonder whether the authors have tried to distillate knowledge from text to prototype directly like these works[1,2], which may be more efficient because there is no need to use LLM and CLIP during inference.
[1] ULIP: Learning Unified Representation of Language, Image and Point Cloud for 3D Understanding, CVPR2023
[2] OpenShape: Scaling Up 3D Shape Representation Towards Open-World Understanding, NeurIPS 2023
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to Weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have analyzed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer FCL4,
We thank you for taking the time to review our manuscript and offer detailed and constructive comments. We appreciate your positive reception of our work and carefully considered each of your points and would like to address your comments as follows:
> **1. Computing cost and specific metrics.**
Thank you for your helpful suggestion. In our approach, generating descriptions with LLM and extracting text features through CLIP for all classes in the dataset are performed offline before the training and testing phases. Once text features are stored through offline operations, we can directly load the stored text features without regenerating by LLM online, avoiding bringing redundant computing costs during training and testing. We will add the analysis of computing cost to our paper, as detailed below:
**(1) Offline computation cost:** The table below shows the time cost of extracting text features for all classes in the entire dataset. We analyze that the total offline time is primarily determined by the description generation process, with feature extraction time being negligible. And the time for generating descriptions from LLM varies on different datasets, depending on the number of classes.
| Phase | S3DIS | ScanNet |
|:-:|:-:|:-:|
|Description Generation: gpt-3.5-turbo|30.23 min|67.15 min|
|Text Feature Extraction: CLIP rn50|10.95 s|17.79 s|
|Total|30.41 min|67.45 min|
**(2) Specific metrics:** The table below compares the computational costs and experimental results of our model with SOTA methods under the 2-way 1-shot setting. Our approach effectively balances computational cost and performance, achieving the highest performance with moderate computational cost.
| Methods | #Params | FLOPs (G) | FPS | Inference Time (ms) | S3DIS | ScanNet |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| attMPTI | 357.82K | 152.65 | 1.47 | 678.67 | 54.86 | 41.69 |
| QGPA | 2.79M | 16.30 | 38.68 | 25.85 | 62.76 | 56.51 |
| DPA | 4.85M | 15.49 | 32.35 | 30.91 | 70.19 | 62.90 |
| Ours | 4.22M | 18.96 | 20.57 | 48.61 | **75.74** | **73.93** |
> **2. Distillate knowledge from text to prototype directly.**
Thank you for your insightful comment. [1][2] follow the paradigm of 3D-CLIP contrastive pre-training and aim to distill knowledge from pre-trained CLIP to align 3D, 2D and text representations. Following them, we conduct more experiments to directly distill knowledge from text to 3D prototypes. Specifically, we remove the proposed Text-to-Prototype Compressors and Prototype Distillation Loss in the GCPR module and incorporate a contrastive loss to align 3D prototypes and text features, where the prototype is treated as a query, with text features from the same class as positive samples and those from different classes as negative samples. As you noted, this avoids the need for LLM and CLIP during inference. The experimental results are shown in the table below.
| Loss Weight | 0.00 | 0.10 | 0.20 | 0.50 | 1.00 | Ours |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| S0 | 66.17 | 68.18 | 68.43 | 65.56 | 63.45 | **74.04** |
| S1 | 74.60 | 77.00 | 76.43 | 72.46 | 71.09 | **77.44** |
| Mean | 70.39 | 72.59 | 72.43 | 69.01 | 67.27 | **75.74** |
We find that while 3D-text contrastive training can distill text knowledge into 3D prototypes to some extent, it is not as effective as our approach. We analyze that the success of [1][2] depends on the use of large-scale 3D datasets with a large number of classes for pre-training, whereas our FS-3DSeg task involves limited labeled data and non-overlapping training and testing classes, making such distillation methods less effective for this task. In contrast, our approach proposes two Text-to-Prototype Compressors in the GCPR module, which directly aggregate diverse and differentiated text features to enhance the quality of 3D prototypes. Additionally, the prototype distillation loss facilitates effective information transfer between prototypes at different stages, improving the prototype refinement process and leading to superior performance in the FS-3DSeg task.
Thank you once again for your positive feedback, and constructive and insightful suggestions. Your feedback is crucial in helping us refine our paper.
---
Rebuttal Comment 1.1:
Title: reply
Comment: The authors have addressed my concerns.
I think this is a good paper that presents an interesting idea of enriching prototypes with knowledge from the text. Although it has some slight limitations / weaknesses, it provides a good attempt and the method is effective.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer FCL4,
Thank you very much for your positive feedback and for taking the time to review our paper. We greatly appreciate your thoughtful comments. Your insights are very valuable in helping us improve our work. | Summary: This paper targets few-shot 3D point cloud semantic segmentation and proposes GPCPR to explicitly leverage the LLM-generated content and query context to enhance the prototype quality. The component GCPR integrates diverse and differentiated class descriptions generated by LLMs to enrich prototypes. The component PCPR further aggregates reliable class-specific pseudo-query context to mitigate class information bias and generate more suitable query-specific prototypes. Furthermore, a dual-distillation regularization term is also introduced to enable knowledge transfer between early-stage entities (prototypes or pseudo predictions) and their deeper counterparts to enhance refinement. Experiments are conducted on the S3DIS and ScanNet datasets.
Strengths: This paper is easy to read. This paper targets an interesting problem, few-shot 3D point cloud semantic segmentation. The proposed method is also interesting.
Weaknesses: Figure 1 should explain what the symbols mean, such as P, T, S, and dot.
The CVPR 2024 paper [1] found that there are some issues in the experimental setting of few-shot 3D point cloud segmentation and corrected the setting. However, this paper is still evaluated under the old setting and does not compare with [1], which makes the results less convincing. Thus, I believe that the experiments must be carried out under the new setting in the final version.
In Table 3, the PCPR module seems to only yield a marginal improvement. Please provide some explanations.
Considering that this paper uses LLMs, which classes does the model perform best and which classes does the model perform worst? Is there any connection between this finding and LLMs?
Technical Quality: 3
Clarity: 3
Questions for Authors: What is the performance of the proposed method under the setting corrected by [1]?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The experiments are carried out in the old, ill-posed setting.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer ABMp,
We would like to express our gratitude for taking the time to review our manuscript and providing detailed and constructive feedback. We appreciate your positive reception of our work and carefully considered each of your points and would like to address your comments as follows:
> **1. Explain symbols in Figure 1.**
Thank you for your valuable suggestion. We will add legends to Figure 1 to clearly explain the symbols. Specifically, $\mathbb{P}$ denotes the original 3D prototype set. $\dot{\mathbb{T}}$, $\ddot{\mathbb{T}}$, $\dot{\mathbb{P}}$ and $\ddot{\mathbb{P}}$ represent prototype sets at different refinement stages. Among them, $\dot{\mathbb{T}}$ and $\ddot{\mathbb{T}}$ represent prototype sets refined by diverse and differentiated descriptions, $\dot{\mathbb{P}}$ and $\ddot{\mathbb{P}}$ indicate prototype sets refined by QGPA module and Query-to-Prototype Compressor, respectively. $S$ denotes cosine similarity, and $\odot$ represents matrix dot product.
> **2. Experiments under the new setting of [1].**
Thank you for your valuable comments. Based on the new experimental settings corrected by CVPR 2024 paper [1], we conduct more experiments under the 2-way 1-shot setting on S3DIS dataset. The results are presented in the table below. Our method achieves slightly higher performance than COSeg but far exceeds attMPTI and QGPA, demonstrating the proposed GCPR module, PCPR module and DD-loss can collaborate to improve the prototype quality by integrating comprehensive text descriptions and reliable pseudo query context. We will include these experimental results and analysis in the final version of our paper to provide a more convincing comparison with SOTA under the new setting.
Methods | $S^0$ |$S^1$ | mean
:--------| :---------:|--------:|--------:
attMPTI | 31.09 | 29.62 | 30.36|
QGPA | 25.52 | 26.26 | 25.89
COSeg | 37.44 | 36.45 | 36.95
Ours | 37.96 | 37.38 | 37.67
> **3. In Table 3, the PCPR module seems to only yield a marginal improvement. Please provide some explanations.**
The impact of the PCPR module depends on the scale of the dataset, the number of classes, and the complexity of scenes.
(1) In Table 3, on the **S3DIS** dataset, without GCPR (Row 2 vs. Row 1), PCPR significantly improves the baseline by 8.68% mIoU due to its effective handling of class information bias between support and query sets. With GCPR (Row 6 vs. Row 5), PCPR only further yields a 0.64% improvement because GCPR already addresses the semantic information constraints of support set in S3DIS dataset with less classes and simpler scenes.
(2) On the **ScanNet** dataset, removing PCPR causes a 5.33% drop in mIoU under 2-way 1-shot setting (68.60% without PCPR vs. 73.93% with PCPR), highlighting the importance of PCPR in reducing class information bias in complex scenes with more classes.
In summary, PCPR is more impactful on large-scale dataset with greater complexity and more classes, such as ScanNet. We will explain this more clearly in the paper.
> **4. Which classes perform best/worst? Is there any connection between this finding and LLMs?**
Thank you for your insightful question.
(1) The **class performance** of our model is shown as below:
* S3DIS dataset: Best on *"chair"* (or *"sofa"*) and worst on *"ceiling"* (or *"wall"*) under $S^0$ (or $S^1$) setting.
* ScanNet dataset: Best on *" curtain "* (or *"sofa"*) and worst on *"floor"* (or *"picture"*) under $S^0$ (or $S^1$) setting.
For example, under the 2-way 1-shot setting, the class performance are as follows:
| | S3DIS | | ScanNet| |
|:-:|:-:|:-:|:-:|:-:|
| | S0 | S1 | S0 | S1 |
| Best IoU (%) | chair: 87.18 | sofa: 89.40 | curtain: 86.11 | sofa: 82.09 |
| Worst IoU (%) | ceiling: 61.08| wall: 67.40| floor: 64.08 | picture: 51.89 |
We observe that our model excels in classes with complex geometric structures, unique shapes, and rich colors, but performs not as well in classes with flat geometric structures. Notably, compared to the baseline—which performs best on *"chair"* (or *"sofa"*) and worst on *"beam"* (or *"table"*) on S3DIS dataset under the $S^0$ (or $S^1$) setting—our model consistently shows improvements in both per-class IoU and mIoU. This is particularly evident in classes with complex structures, such as *"beam"* and *"table"*, where performance increases from 32.06% to 63.50% and from 53.22% to 72.39%, respectively.
(2) **Connections with LLM**: The superior performance of our model on complex structures can be attributed to the introduction of LLM, which can generate more diverse and distinguishable visual or geometric descriptions for classes with complex structures, resulting in higher performance. Conversely, for classes with flat structures, LLM produces less rich descriptions that struggle to capture subtle differences, resulting in not as good performance.
Once again, we thank you for your positive feedback, constructive criticism, and thoughtful suggestions. Your feedback plays an integral role in refining our work. | null | null | Rebuttal 1:
Rebuttal: To PC, AC, and all Reviewers:
We sincerely appreciate the time and effort the PC, AC, and all reviewers have dedicated to reviewing our work. We are grateful for the detailed and thoughtful feedback on our submission, particularly the positive comments and insights. Below, we summarize the strengths recognized by the reviewers:
1. **Innovation**: The proposed method is interesting, innovative and technically sound. (Reviewer ABMp , Reviewer FCL4, Reviewer pd6x)
2. **Effectiveness**: The impressive performance on various datasets demonstrates the effectiveness of the proposed method. (Reviewer FCL4, Reviewer pd6x)
3. **Readable**: The paper is well-written and easy to read. (Reviewer pd6x, Reviewer ABMp)
4. **Beneficial visualizations**: The paper includes beneficial visualizations for explaining the proposed method. (Reviewer FCL4)
Thank you once again for your time and effort in reviewing this manuscript. For the questions and other concerns raised by the reviewer, we respond to each in detail and outlined our plans for improvement. If any questions are not answered or our response is unclear, we would appreciate the opportunity to communicate further with our reviewer.
Pdf: /pdf/675adcdb8ee56cf1921d2cc9c2386034a511a81d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
LaSCal: Label-Shift Calibration without target labels | Accept (poster) | Summary: The paper addresses the problem of confidence calibration under label shift given unlabeled samples from the target domain. The first step is estimating the label distribution of the target domain. Then a calibration parameter is computed using the source domain samples where each sample is reweighted according to its label probability ratio in the source and the target domain.
Strengths: The proposed method makes sense and yields good calibration results.
Weaknesses: My main concern is a lack of novelty. The standard calibration methods in case of domain shift (CPCS, TransCal) are based on importance weighting of a labeled sample (x,y) from the source domain according to its similarity to the target domain p_t(x)/p_s(x). Here the proposed method applies the same principle to the label shift problem. Given a labeled sample (x,y) from the source domain, it is reweighted according to the ration p_t(y)/p_s(y).
Technical Quality: 3
Clarity: 3
Questions for Authors: Can you clearly state the novelty compared to covariate shift methods?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Can you clearly state the novelty compared to covariate shift methods?**
Please see our general response for the perceived lack of novelty. To summarize:
- This paper introduces the first label-free, consistent estimator for target calibration error under label shift. Estimating calibration error under covariate and label-shift are two very different problems.
- The re-weighting strategy in label shift is based on the label distribution ratio, which is fundamentally different from the feature distribution ratio used in covariate shift methods.
- The paper provides both a strong theoretical justification and a thorough empirical analysis, demonstrating the effectiveness of our approach, and showcasing that existing covariate shift methods are suboptimal under label shift (see Table 2). This result emphasizes the necessity for designing recalibration methods tailored to label shift conditions.
---
Rebuttal Comment 1.1:
Comment: I was convinced by the author explanations regarding the difference between their method and weight sampling methods for distribution shift.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for your feedback and updated score. We’re glad that our explanation resolved your concern. | Summary: This paper proposes a novel calibration error estimator under label shifts (without ground truth). They use this estimator to apply standard calibration techniques such as temperature scaling on the unlabelled target set, allowing to calibrated the model for the target domain without needing access to a labelled set on this domain. Experiments are conducted for different strength of label shifts on various imaging and text datasets.
Strengths: * [Clarity] The paper is very well-written, experiments and methods are clearly presented, figures and tables are of high quality.
* [Relevance] Estimation of calibration error under shift is an important problem, still largely under studied.
* [Experiments] Experiments are presented on multiple datasets, with statistical errors measures and contains several ablations on the main components of the proposed method. In particular ablations on the density ratio estimator shows that the method is quite robust to the particular choice of estimator.
Weaknesses: **Pre-rebuttal review:**
Major:
* **Missing very important baselines for model calibration under label shift** (table 2). The authors have included multiple baselines to compare their method against, unfortunately their choice of baselines is inappropriate for the type of shift they are studying here. The problem tackled in this work is very clearly stated at the beginning of the paper, the author tackle calibration under _label shift_. However, the baselines they chose to compare against in table 2 are baselines that were designed at improving calibration under _covariate shift_ (TransCal, HeadToTail as the author state themselves in the related work section), which are two very different problems! It is unfair to compare a method specifically designed for improving calibration under label shift with methods designed to tackle covariate shift. There exists methods that are designed to recalibrate probabilities under label shift in the literature but these were completely omitted in the authors' analysis. There is no clear reason why the authors did not include these baselines, even more so since they directly use part of these works in their own method. To cite just a few missing baselines:
* The recalibration method from Alexandari et al. In their work the authors proposes to estimate the density ratio $w= p_t(y) / p_t(y)$ via an EM algorithm and then they propose to _recalibrate the outputs_ via $\hat{p}(y=i|x) = \frac{w_i \cdot p(y=i|x)}{\sum_j w_j \cdot p(y=j|x)}$, where $w_i$ is the estimated density ratio for class i and $p(y=i|x)$ is the i-th output of the classifier for sample $x$. See section 2.2 of that paper. The authors then use these recalibrated probabilities to get updated classification predictions.
* Similar Wen et al, in their work ‘Class Probability Matching with Calibrated Networks for Label Shift Adaption’, proposed another density ratio estimator and recalibrated the probabilities in a similar way as Alexandari et al, see section 4 of that paper. Using their re-calibrated probabilities, they achieve SOTA results in terms of accuracy under shifts.
* **These density-ratio based recalibration baselines are the current go-to approach for re-calibration of model outputs under label shifts and should be included in the paper for a fair comparison with relevant baselines.** At least one of them needs to be included in the paper to be able to claim SOTA results in calibration under label shift.
Minor:
* **Calibration error estimator is data hungry**: in Fig 3.c. we can see that the method is very data hungry and needs at least 4000 samples to be able to perform the calibration error accurately. This should be highlighted and discussed in the limitations section, as this may hinder some practical applications.
* **Effect of recalibration step on calibration error estimation?**: I would be curious to see if the proposed calibration error estimator is able to estimate the true CE with the same precision after the proposed recalibration step compared to before. I would expect that since the error estimation is based on the estimated density ratio the estimator would be overestimating true CE after output recalibration (since recalibration would close the domain gap). Do the authors have any insights on this?
**Update after rebuttal**:
The authors have successfully addressed my concerns during the rebuttal.
Technical Quality: 2
Clarity: 3
Questions for Authors: My main point of concern is the missing label shift recalibration baselines, please see weaknesses for details. These missing baselines are the main reason behind my rating. If the authors are able to include these baselines in Table 2 and show superiority of the proposed method over existing label shift recalibration methods, I would be willing to increase my score.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: See weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the valuable feedback and suggestions. We implemented these baselines and performed additional experiments, which we will include in the camera-ready version. We address each question below:
1. **Add at least one more relevant baseline for model calibration under label shift**
We compared our method both with CPMCN [1] and EM-BCTS [2], on CIFAR-10/100, Amazon, and iWildCam.
### CIFAR10-LT
| Model| EM-BCTS| CPMCN| LaSCal |
|-------|---------|-------|-------|
| ResNet20 | $3.77_{\pm 0.12}$ | $3.79_{\pm 0.11}$ | $4.40_{\pm 0.15}$ |
| ResNet32 | $4.99_{\pm 0.24}$ | $5.19_{\pm 0.21}$ | $4.78_{\pm 0.16}$ |
| ResNet56 | $4.41_{\pm 0.11}$ | $4.42_{\pm 0.12}$ | $4.57_{\pm 0.16}$ |
| ResNet110 | $4.40_{\pm 0.12}$ | $4.43_{\pm 0.13}$ | $4.70_{\pm 0.16}$ |
| **Macro-average** | **4.39$_{\pm 0.22}$** | $4.46_{\pm 0.20}$ | $4.61_{\pm 0.16}$ |
### CIFAR-100-LT
| Model | EM-BCTS | CPMCN| LaSCal|
|--------|------|------|----------|
| ResNet20 | $25.01_{\pm 0.21}$ | $25.02_{\pm 0.24}$ | $5.61_{\pm 0.08}$ |
| ResNet32 | $26.17_{\pm 0.21}$ | $24.76_{\pm 0.20}$ | $5.79_{\pm 0.08}$ |
| ResNet56 | $26.33_{\pm 0.23}$ | $24.53_{\pm 0.22}$ | $5.89_{\pm 0.07}$ |
| ResNet110 | $28.22_{\pm 0.24}$ | $26.49_{\pm 0.22}$ | $6.19_{\pm 0.07}$ |
| **Macro-average** | $26.43_{\pm 0.25}$ | $25.20_{\pm 0.22}$ | **5.87$_{\pm 0.08}$** |
### Amazon
| Model | EM-BCTS | CPMCN | LaSCal|
|----|----|----|----|
| RoBERTa | $2.72_{\pm 0.35}$ | $1.36_{\pm 0.17}$ | $3.64_{\pm 0.32}$ |
| DistillRoBERTa | $2.13_{\pm 0.28}$ | $2.81_{\pm 0.23}$ | $2.71_{\pm 0.25}$ |
| BERT | $3.95_{\pm 0.40}$ | $9.32_{\pm 0.54}$ | $3.74_{\pm 0.39}$ |
| DistillBERT | $3.41_{\pm 0.36}$ | $5.48_{\pm 0.34}$ | $3.40_{\pm 0.28}$ |
| **Macro-average**| **3.05$_{\pm 0.36}$** | $4.74_{\pm 0.42}$ | $3.37_{\pm 0.31}$|
### iWildCam
| Model | EM-BCTS| CPMCN | LaSCal |
|----|---|---|----|
| ResNet50 | $15.84_{\pm 0.57}$ | $19.43_{\pm 0.69}$ | $13.01_{\pm 0.45}$ |
| Swin-Large | $16.81_{\pm 0.63}$ | $18.03_{\pm 0.62}$ | $15.19_{\pm 0.48}$ |
| ViT-Large | $24.83_{\pm 1.31}$ | $19.33_{\pm 0.80}$ | $13.00_{\pm 0.43}$ |
| ViT-Large (384) | $19.78_{\pm 0.73}$ | $20.74_{\pm 0.72}$ | $16.58_{\pm 0.69}$ |
| **Macro-average**| $19.31_{\pm 0.76}$ | $19.38_{\pm 0.75}$ | **14.45$_{\pm 0.52}$** |
Overall, on all datasets LaSCal either outperforms or performs competitively with both CPMCN [1] and EM-BCTS [2], as per the macro-average CE across models (note for CIFAR-10 and Amazon the error bars overlap for LaSCal and EM-BCTS). LaSCal's gains are prominent for datasets which feature a large(r) number of classes (100 in CIFAR-100 and 20 in iWildCam). We hypothesize this is due to the increased complexity of the optimization process associated with higher-dimensional spaces, which our approach seems to handle more effectively. Aside from the empirical gains, our method has the following advantages compared to these methods: (1) is based on a consistent estimator of target calibration error; (2) it enables unsupervised calibration on the target distribution, whereas CPMCN [1] and EM-BCTS [2] perform the calibration step on a labeled validation set (from source).
2. **Calibration error estimator is perceived as data-hungry**
While it is true that the estimator requires a sufficient number of data samples (4000 samples in Fig. 3c) to accurately estimate the calibration error (CE), we do not believe that this should be perceived as data-hungry. We do agree that in severely data-scarce settings, this requirement may limit potential applications, and we will discuss this in the revised manuscript.
However, please note that the error rate of our estimator is $O(n^{-1/2} + m^{-1/2})$, which is the same as the weight estimation methods (see Lemma1 for the RLLS estimator in *Azizzadenesheli et al.* [6], and top paragraph on page 8 in *Garg et al.* [4] for the EM-BCTS method from *Alexandari* et al. [2]). Therefore, the data requirement is not unique to our method, but rather is common across all weight estimation-based approaches. Further, Fig. 3c also shows that our estimator has a positive bias in scenarios with limited data. This characteristic is preferable as it prevents the false impression that a model is well-calibrated due to insufficient sample size. In essence, our method errs on the side of caution, ensuring reliability even in data-constrained environments.
3. **Effect of recalibration step on calibration error estimation?**
Thank you for the interesting question. The label shift gap is not affected by the recalibration step, and therefore the obtained weights from the weight estimation methods remain the same before and after calibration. Having said that, we hypothesize that the precision of our estimator would remain the same. To gain insights, we performed additional experiments on CIFAR-10, comparing the precision of estimating CE using our estimator, compared to ground truth (with labels) before and after the re-calibration step with LaSCal. The empirical results we obtained seem to confirm our hypothesis.
| Model | Uncal | LaSCal |
|-----|-----|---|
| ResNet20 (w/ labels) | $9.01_{\pm 0.34}$ | $4.43_{\pm 0.16}$ |
| ResNet20 (w/o labels) | $9.15_{\pm 0.50}$ | $4.48_{\pm 0.25}$ |
| **Absolute Difference** | **0.14**| **0.05**|
| ResNet32 (w/ labels) | $10.41_{\pm 0.44}$ | $4.76_{\pm 0.13}$ |
| ResNet32 (w/o labels) | $11.94_{\pm 0.49}$ | $6.05_{\pm 0.41}$ |
| **Absolute Difference** | **1.53** | **1.29** |
| ResNet56 (w/ labels) | $11.18_{\pm 0.23}$ | $4.56_{\pm 0.14}$ |
| ResNet56 (w/o labels)| $11.63_{\pm 0.32}$ | $4.99_{\pm 0.17}$ |
| **Absolute Difference** | **0.45**| **0.43** |
| ResNet110 (w/ labels) | $11.86_{\pm 0.26}$ | $4.71_{\pm 0.14}$ |
| ResNet110 (w/o labels) | $12.15_{\pm 0.29}$ | $4.98_{\pm 0.18}$ |
| **Absolute Difference** | **0.29**| **0.27** |
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their extensive response and additional experimental results. I have updated my score accordingly.
My primary concern i.e. the need to compare with relevant label adaptation baselines has been addressed by the provided experiments and substantially strengthen the paper. Effect of recalibration step on calibration error estimation: great to see these additional results and that the ECE estimation still holds. I agree with the rationale of the authors for point 2. and hope that this discussion will be added to the manuscript.
I have no further question at this point.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for your valuable feedback, as well as voting for acceptance of our paper. We are glad our response has addressed your concerns, and we firmly believe that your suggestions greatly improved the manuscript. We will update the manuscript based on the new results and discussion we had during the rebuttal. Thank you again! | Summary: This work considers the problem of model calibration under label shift with label-free data. The work obtains the unsupervised calibration error through the kernel-based estimation of the TARGET data with the SOURCE data and applies it to the TS calibration method. The problem's has some novelty and is technically feasible.
Strengths: 1. The article considers the problem of model calibration in unsupervised and label shift for unlabeled data. The problem seems to be interesting and has some novelty.
2. The article provides a solution to the problem of model calibration under unlabeled data by estimating the calibration error for unlabeled data through a kernel method, which is meaningful.
Weaknesses: The other methods in the comparison experiments, such as TS, use the source data. Such comparison experiment seems to be not rigorous. In comparison experiments, the validation sets for different comparison methods should remain the same.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What are the potential causes of label shift?
2. Does label shift have an impact on model calibration? Label shift between training and validation sets does not seem to be reflected by calibration. Can the authors give some explanation for the relationship between label drift and model calibration? In my opinion, calibration methods for unsupervised data are the focus of this work.
3. Does the result obtained from calibrating the model with a validation set that has data shift negatively affect the results of the training set?
4. See weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: see Weaknesses and Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive review and feedback on our paper. We provide an explanation to the questions below:
1. **Some of the baselines use source data for calibration.**
While it is true that compared to i.i.d calibration methods, unsupervised calibration methods (LaSCal, HeadToTail, TransCal, CPCS) have access to samples from the unlabeled target distribution in addition to the labeled validation set (from source), the reasons we included these methods are two-fold:
1. To showcase that traditional i.i.d. calibration methods fall short in the case of label shift and highlight the necessity for designing calibration methods specifically for this scenario.
2. To be consistent with the papers on calibration under covariate shift i.e., HeadToTail, TransCal and CPCS, which also include TS, Vector Scaling, Ensemble TS and IROvA.
In the revised manuscript, we propose to replace two of these baselines (Vector scaling and ETS) with the baselines proposed by Reviewer **Lv5P** in Table 2, and move the VectScal and ETS to Appendix. We will also group the methods or add markers, to make this difference more visible in the tables.
2. **What are the potential causes of label shift?**
Some of the potential causes of label shift:
- Change in population demographics: e.g. a model trained on mainly adult population (Hospital A) is deployed in a hospital that primarily serves children (Hospital B).
- Unusual events like disease outbreaks or natural disasters: e.g. during a pneumonia outbreak, $p(Y)$ (e.g. flu) might rise, but the symptoms of the disease $p(X| Y)$ (e.g. cough given flu) do not change.
- Geographic changes: e.g. consider wildlife classification model for identifying animal species based on camera trap images (as in our experiments), trained on images from one region with a particular animal species distribution, and deployed in a different region with a different animal speciries distribution.
- Seasonal changes: A model designed to predict seasonal allergies might face label shift if it is trained using data collected during spring, when pollen levels are high and hence allergies are more frequent, and then applied in autumn when allergies are less frequent
3. **Does label shift have an impact on model calibration? Label shift between training and validation sets does not seem to be reflected by calibration.**
Model calibration is defined with respect to a data distribution. When that data distribution changes (e.g., when facing label shift), the predicted probabilities of a model trained on the source distribution may no longer reflect the true empirical frequencies of the target distribution, leading to poor calibration. For instance, if a model trained on a skewed dataset predicts a high probability for a rare class, this probability might not accurately reflect the true frequency of that class (conditioned on that probability) in the new distribution where that class is more frequent.
Tables 10-16 in the Appendix confirm that label shift does have an impact on model calibration. For example, in Table 16 we can observe a notable increase of the calibration error from source data (CE_s) to the label-shifted target data (CE_t), particularly prominent for the Amazon dataset. Please note that in our experiments, the training and validation sets (CE_s) are sampled from the source distribution, while the test set (CE_t) is sampled from the label-shifted target distribution.
4. **Does the result obtained from calibrating the model with a validation set that has data shift negatively affect the results of the training set?**
Our proposed post-hoc calibration method is accuracy-preserving, as it performs temperature scaling by optimizing the CE estimator. Since the logits are scaled with a single parameter, temperature scaling maintains the original order of the predictions, ensuring that the predicted class remains unchanged. Therefore, calibration on label-shifted data does not negatively affect the predictive performance of the classifier on the training (source) set.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: I appreciate the author's response. I do not have any additional questions at this time.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for your time and valuable feedback. We are glad that our responses addressed your questions and we will update the manuscript with extra discussion where appropriate. | Summary: This paper proposes a consistent estimator of class-wise expected calibration error (class-wise ECE) for unsupervised domain adaptation under label shift assumption, i.e., the class proportion of the source $p_s(y)$ and target distribution $p_t(y)$ differs while the class-conditional probability $p(X|y)$ remains the same. In this problem, estimating class-wise ECE is not straightforward as target domain data is unlabeled. The proposed method LaSCal suggests estimating importance weight using existing methods and then incorporate it in the validation objective. Once we have validation objective, one can use a simple post-hoc method such as temperature scaling to tune the temperature to minimize validation class-wise ECE objective. Experiments show that the proposed consistent estimator performs better than baselines that were not designed for label shift scenario.
Strengths: 1. The proposed method has strong theoretical justification because it is a consistent estimator of the target classwise-ECE.
2. Experiments clearly showed that the proposed method is effective.
3. Not only does the proposed method show the best performance in experiments, but several different configurations of the proposed methods were also analyzed (e.g., using different importance weight methods, sensitivity analysis of different ratio of positive/negative, source/target, sample size).
Weaknesses: 1. Given existing techniques for learning under label shift, I might be wrong to find the proposed method is quite straightforward without much difficulty because the estimator is based on the well-known importance weighting method. Having said that, conducting experiments and analyses is still very important and useful as label shift learning is one important kind of dataset shift in unsupervised domain adaptation.
2. Only classwise-ECE metric is considered for the objective function, although there are many different metrics for calibration error.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Is it possible to extend the proposed techniques for tuning different objectives rather than class-wise ECE, e.g., expected calibration error (ECE). I feel that it might also be useful to look into this setting or discuss it in future work. Intuitively, I think it should be possible without much modification from the proposed method. Given the proposed method relies on the accuracy of importance weights, analyzing different objectives under these constraints can also give some insights for reliable confidence learning in this setting.
2. In (7), it seems we only use target data for the objective. We only used source data for importance weight estimation. I was wondering if it is also possible to consider incorporate source data in this objective as well to improve the performance. What do you think?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: The discussions of limitations and future work are appropriate.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the valuable feedback and future work suggestions. We performed further experiments which serve as an interesting addition to our paper. Regarding the questions/concerns:
1. **Proposed method is perceived as straightforward**: While our method builds upon the well-known importance weighting technique, our contribution extends beyond the straightforward application of this method.
- We derive the first consistent calibration error estimator under label shift, without using target labels. Furthermore, the estimator has a known error rate $O(n^{-1/2} + m^{-1/2})$ [L140]. Depending on the choice of kernel, the estimator can be differentiable and integrated as an objective in both post-hoc and trainable calibration methods. To the best of our knowledge, no other estimator exists for target calibration error under label shift with the same properties as ours.
- Based on this estimator, we introduce an accuracy-preserving recalibration method. While some of the papers addressing label shift (*CPMCN, Wen et al. [1]; EM-BCTS, Alexandari et al. [2]*), do incorporate a calibration step, it is performed on validation (source) data, whereas our method performs unsupervised calibration on the target domain.
- We provide a thorough analysis of our method’s properties, demonstrating its robustness across datasets, modalities and severity of shift. We benchmark different weight estimation methods and identify the most robust method for our task. Note that although several weight estimation techniques exist for the label shift scenario, none of them extend their application beyond improving the predictive performance of the classifier.
2. **Alternative objective functions and/or metrics:** Indeed, our method (LaSCal) naturally extends to the standard ECE. In an additional experiment, we adapted our label-free estimator under label shift to estimate ECE and used that as an objective for temperature scaling. The table below reports ECE (with labels) for several baselines: IID temperature scaling, HeadToTail (as representative for methods derived under covariate shift assumption) and the two new baselines proposed by reviewer **Lv5P**, which include most recent works (*CPMCN [1] @ ICLR 2024*) on calibrating models facing label shift. Please see our general response for reasons why we focus on classwise ECE.
### Amazon experiments
| Model | Uncal | TempScale | HeadToTail | EM-BCTS | CPMCN | LaSCal |
|------|--------|-----------|------------|----------|-----------|---------|
| RoBERTa | $6.03_{\pm 0.50}$ | $1.14_{\pm 0.16}$ | $0.93_{\pm 0.17}$ | $0.40_{\pm 0.11}$ | $0.37_{\pm 0.10}$ | $0.29_{\pm 0.08}$ |
| DistillRoBERTa | $10.73_{\pm 0.63}$ | $1.83_{\pm 0.25}$ | $0.58_{\pm 0.12}$ | $0.43_{\pm 0.09}$ | $0.66_{\pm 0.14}$ | $0.42_{\pm 0.11}$ |
| BERT | $17.32_{\pm 0.67}$ | $3.71_{\pm 0.35}$ | $0.73_{\pm 0.13}$ | $0.75_{\pm 0.15}$ | $2.19_{\pm 0.21}$ | $0.61_{\pm 0.12}$ |
| DistillBERT | $13.59_{\pm 0.72}$ | $2.22_{\pm 0.25}$ | $0.33_{\pm 0.10}$ | $0.37_{\pm 0.09}$ | $1.72_{\pm 0.25}$ | $0.30_{\pm 0.09}$ |
| **Macro-average** | $11.42_{\pm 0.59}$ | $2.23_{\pm 0.17}$| $0.64_{\pm 0.09}$ | $0.49_{\pm 0.07}$ | $1.24_{\pm 0.19}$ | **0.41$_{\pm 0.09}$** |
### iWildCam experiments
| Model | Uncal | TempScale| HeadToTail | EM-BCTS | CPMCN | LaSCal |
|-------|---------|-------|-------------|-----------|---------------|---------|
| ResNet50 | $3.21_{\pm 0.40}$ | $1.66_{\pm 0.27}$ | $1.06_{\pm 0.24}$ | $0.64_{\pm 0.15}$ | $2.40_{\pm 0.34}$ | $0.74_{\pm 0.17}$ |
| Swin-Large | $5.92_{\pm 0.57}$ | $1.88_{\pm 0.28}$ | $1.07_{\pm 0.25}$ | $1.48_{\pm 0.27}$ | $1.17_{\pm 0.28}$ | $1.17_{\pm 0.26}$ |
| ViT-Large | $2.43_{\pm 0.41}$ | $1.59_{\pm 0.33}$ | $1.48_{\pm 0.26}$ | $3.32_{\pm 0.43}$ | $2.34_{\pm 0.33}$ | $0.62_{\pm 0.14}$ |
| Vit-Large (384) | $2.69_{\pm 0.40}$ | $1.96_{\pm 0.33}$ | $1.85_{\pm 0.34}$ | $1.93_{\pm 0.34}$ | $2.18_{\pm 0.37}$ | $0.81_{\pm 0.16}$ |
| **Macro-average** | $3.56_{\pm 0.41}$ | $1.77_{\pm 0.26}$ | $1.37_{\pm 0.27}$ | $1.84_{\pm 0.21}$ | $2.02_{\pm 0.23}$ | **0.84$_{\pm 0.12}$** |
From the results we see that the observations remain the same as in the main paper: (1) LaSCal significantly reduces the CE of all models across datasets; (2) LaSCal outperforms the baselines, achieving state-of-the-art results on the datasets and settings we experiment with. We will add a summary of these results in the camera-ready version of the paper (and the full experiments in Appendix).
3. **Adding source data to the objective**
Incorporating source data could be useful in the scenario where $p_s(X) = p_t(X)$ (please note that this is NOT implied by the label shift definition). We tested this approach empirically by measuring CE on the target distribution of CIFAR-10 using 3 different estimators: (1) Ground truth (with labels); (2) Our estimator (no labels, no source samples); (3) Our estimator + source samples (no labels, with source samples). The results (reported in the Table below) demonstrate that this approach leads to less precise estimates, i.e., yields higher estimated values compared to ground truth (computed with labels).
| Model | Ground truth | Our estimator | Our estimator + source samples |
|---------|----------------|--------------------|--------------------|
| ResNet-20 | $8.95_{\pm 0.36}$ | $9.08_{\pm 0.53}$ | $11.55_{\pm 0.52}$ |
| ResNet-32 | $10.46_{\pm 0.25}$ | $11.97_{\pm 0.71}$ | $15.37_{\pm 0.66}$ |
| ResNet-56 | $11.18_{\pm 0.29}$ | $11.73_{\pm 0.48}$ | $15.06_{\pm 0.50}$ |
| ResNet-110 | $11.92_{\pm 0.40}$ | $12.20_{\pm 0.39}$ | $15.27_{\pm 0.59}$ |
We also incorporated the estimator including source samples as an objective function for post-hoc calibration (with the temperature scaling method), and we observed no improvement in the results compared to those reported in the paper.
---
Rebuttal Comment 1.1:
Title: Thank you very much for the paper update and conducting requested experiments
Comment: I have read other reviews and rebuttals. I appreciate the authors responding to my concerns and conducting more experiments on using source data. I raised my score to 6 (Weak accept) mainly because of the experiments added and the ECE results.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for revisiting our paper and for your updated evaluation. We're glad that the additional experiments addressed your concerns. | Rebuttal 1:
Rebuttal: We want to thank the reviewers for the constructive feedback on our paper. We appreciate the provided insights and are pleased to see the recognition of several strengths in our work. Below, we summarize the main strengths and weaknesses highlighted by the reviewers, and address them accordingly.
### Strenghts
- **Theoretical justification and novelty:** Our method has been acknowledged for its strong theoretical foundation as a consistent estimator of the target classwise-ECE (**vmtH**). Additionally, the novelty of addressing unsupervised model calibration under label shift was noted as a meaningful and interesting problem (**Mi4F**).
- **Comprehensive experiments**: The comprehensiveness and clarity of our experiments are well-received by most reviewers. Reviewer **vmtH** noted that our experiments clearly showed the effectiveness and superior performance of our method, while reviewer **Lv5P** praised the robustness of our method. Additionally, both reviewers (**vmtH** and **Lv5P**) compliment our clear presentation and analysis which includes multiple datasets, statistical error measures, and thorough ablations.
- **Clarity and presentation:** The clarity and quality of writing, figures, and tables were praised (**Lv5P**), highlighting that our paper effectively communicates the methodology and findings.
### Weaknesses and response
- **Perceived lack of novelty**: Reviewer **z7nx** has concerns that our method, based on importance weighting, could be lacking novelty compared to covariate shift methods.
We would like to emphasize that our work introduces the first label-free, **consistent** estimator of target calibration error under **label shift**. Estimating calibration error under covariate and label shift are two very different problems (also pointed out by reviewer **Lv5P**). Our estimator has a known error rate, and is differentiable when used with a differentiable kernel (*Popordanoska et al. [5]*). **To the best of our knowledge, no other estimator exists for target calibration error under label shift with the same properties as ours**. Furthermore, while importance weighting is a common technique for addressing distribution shifts, the re-weighting in label shift is based on the label distribution ratio $p_t(y) / p_s(y)$, which is fundamentally different from the feature distribution ratio $p_t(x)/p_s(x)$ used in covariate shift methods. Our paper provides both a strong theoretical justification (reviewer **vmtH**) and a comprehensive empirical analysis demonstrating the robustness and clear advantage over covariate shift methods in these conditions. Specifically, Table 2 in the main paper shows that existing methods derived under covariate shift assumption (HeadToTail, CPCS, TransCal) are suboptimal under label shift, which further highlights the need for designing CE estimators and recalibration methods tailored to this type of shift.
- **New baselines**: Reviewer **Lv5P** suggested including two relevant baselines designed for label shift scenarios.
We conducted experiments with the proposed baselines (CPMCN [1], EM-BCTS [2]) and observed that for all dataset / model combinations, LasCal either significantly outperforms or remains competitive with them; therefore these new experiments do not affect the overall conclusions drawn from our paper. Please note that these baselines perform a calibration step on a labeled validation (source) data prior to obtaining the importance weights, whereas our approach allows for unsupervised calibration on the target distribution. See the response to reviewer **Lv5P** for details.
- **Choice of metric**: Reviewer **vmtH** asked if we can extend the proposed technique to different objectives rather than classwise- ECE, e.g. expected calibration error ECE.
We focus on classwise calibration error because it is a stronger notion of calibration compared to ECE (i.e., CWCE = 0 implies ECE = 0, but not the other way around; see Theorem 3.1 in Gruber and Buettner [3]). Note that, we already report ECE in the reliability diagrams in Figure 2 (and Fig. 6 in Appendix), which show that optimizing our CWCE estimator leads to superior performance also in terms of ECE compared to competing methods. As per the reviewer’s suggestion, we adapted our estimator to estimate target ECE without target labels, and performed experiments for the Wilds datasets (iWildCam and Amazon) using ECE as a calibration objective. Overall, our insights and conclusions remain the same as with the original submission. Please see the response to **vmtH** for details and experiments.
*[1] Wen et. al., 2024 @ ICRL: Class Probability Matching with Calibrated Networks for Label Shift Adaption*
*[2] Alexandari et. al., 2020 @ ICML: Maximum Likelihood with Bias-Corrected Calibration is Hard-To-Beat at Label Shift Adaptation*
*[3] Gruber and Buettner 2022 @ NeurIPS: Better Uncertainty Calibration via Proper Scores for Classification and Beyond*
*[4] Garg et al., 2020 @ NeurIPS: A Unified View of Label Shift Estimation*
*[5] Popordanoska et al., 2022 @ NeurIPS: A consistent and differentiable Lp canonical calibration error estimator*
*[6] Azizzadenesheli et al., 2019 @ ICLR: Regularized Learning for Domain Adaptation under Label Shifts* | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Deep Bayesian Active Learning for Preference Modeling in Large Language Models | Accept (poster) | Summary: This work proposes BAL-PM, a stochastic acquisition policy aiming to select informative data samples (i.e., the prompt and its corresponding paired responses) that need to collect human feedback for LLMs' preference learning.
Specifically, BAL-PM mainly addresses the issue of sampling redundant data points in previous methods that rely solely on epistemic uncertainty, such as BALD. On top of the previous BALD acquisition method, BAL-PM employs a task-agnostic entropy estimator for the prompt distribution to enhance the diversity of acquired data samples.
Experiments on the Reddit and CNN/DM datasets under a pool-based setting demonstrate the effectiveness of BAL-PM over random sampling and the BALD baseline.
Strengths: - The paper is easy to follow.
- The method is well-motivated by the observation of the tendency of sampling redundant data points in previous methods.
- The effectiveness of key components of the proposed BAL-PM is well justified and supported by the experimental results and ablation.
Weaknesses: - The experiment part is not compelling and clear enough to me. Regarding the reported log-likelihood metric, it is unclear to me how it is calculated (e.g., on preferred response?) and why it is the only metric used for evaluating preference modeling. It seems to me that it can not measure to what extent the chosen response is preferred over the rejected one. Regarding the datasets evaluated, the task is limited to text summarization. The experiment part can be improved by adopting more metrics on more general-purposed datasets/tasks to illustrate the effectiveness of the proposed method.
- While encouraging data diversity is plausible and effective in LLM's preference learning according to the experiments, the necessity of using the feature of the base LLM to measure the entropy of the prompt distribution is unclear to me. Although it is mentioned as a common approach in lines 97-99, the motivation for focusing on feature space is still somewhat vague to me and may introduce extremely high computational overhead in the context of LLM's preference learning.
- The proposed BAL-PM, which requires maintaining/updating an ensemble of adapters for collecting each batch, may be difficult to practice in large scale.
- The discussion of prior works is inadequate. Besides the difference in configuration, the technical parts of other related active preference optimization of LLMs should be discussed more.
Technical Quality: 3
Clarity: 3
Questions for Authors: - What is the advantage of adopting the (static) feature space of the base LLM for estimating the entropy of the prompt distribution? Is there any evidence to demonstrate that it is superior to other approaches, such as using smaller proxy models (e.g., BERTScore) or simpler fashions?
- I am curious about whether different adapters can make diverse predictive distributions with limited expressive power. Is there any empirical evidence for this?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for sharing your concerns and questions. We address them below:
**Q14** Why is the Log-Likelihood metric used for evaluating Preference Modeling?
> We refer to **Q1** (global rebuttal).
**W3** The task is limited to text summarization.
> **R3** We understand the importance of different NLP tasks to enrich evaluation. Still, we focus on validating the BAL-PM as an Active Learner in Preference Modeling. We argue that the considered datasets already bring the key challenges for this task to provide empirical evidence to support our findings:
> - They present real human feedback, reflecting the true human behavior and aleatoric uncertainty involved in the preference elicitation process;
> - Prompts are human-generated and go beyond simple instructions. They incorporate human subjectivity and diversity in terms of culture and topics, requiring a high level of language understanding. We refer to Tables 25-28 in [1] for samples. It is a challenging setup.
> - The CNN/DM News dataset presents OOD prompts (from News, far from Reddit posts) while following the same label-generation process. It allows the evaluation of robustness/generalization of learned preference models.
**W4** The experiment part can be improved by adopting more metrics on more general-purposed datasets/tasks to illustrate the effectiveness of the method.
> **R4** We highlight that our work already explores several dimensions to validate BAL-PM, going beyond performance claims. For instance, we:
> - Provide evidence in an OOD setting (Figs 4b, 5b) to validate robustness/generalization;
> - Present measures of diversity to analyze the acquisition of redundant samples (Fig 6);
> - Evaluate the scalability of our method for larger LMs with 70b+ parameters (Fig 7);
> - Evaluate crucial design choices related to the policy objective and its implementation (Figs 8-10);
> - Analyze how the stochastic policy balances the influence of each epistemic uncertainty source, a key property of BAL-PM (Fig 11).
>
> All raised experiments validate crucial points of BAL-PM and Active Preference Modeling, providing evidence to support our method.
**Q15** What is the advantage of adopting the feature space of the LLM for entropy estimation of the prompt distribution? Is it superior to using smaller proxy models (e.g., BERTScore) or simpler fashions?
> **A15** Estimating the entropy of the prompts requires representations that encode semantic features. The underlying idea follows the core of BERTScore: leveraging the distances on this semantic vector space to represent similarities among natural language sentences. We expand this perspective to use not as a metric but as a density/entropy estimator of the prompt distribution.
> The key point is that **this process requires a feature space that can perform well in encoding the extent of the natural language distribution**. Simpler feature extractors, such as tf-idf or CBoW, struggle to encode contextual semantic information. While traditional deep learning models like BERT and ELMo have promising architectures and learning objectives, they are currently limited by their model and training set sizes compared to LLMs trained on meticulously curated datasets that encompass the entire web. This inherent relative limitation translates to performance: for evidence, we refer to the SuperGLUE benchmark [2], comprising several language understanding tasks and online leaderboard. Currently, the best ranked models present 10b+ parameters (the scale of our experiments). BERT-based baselines and classic techniques place in 26th position (see “SuperGLUE baselines”). This result strongly suggests that LLMs provide better representations than models with smaller scales, which is especially crucial for our method, particularly for the challenging prompts in the considered datasets.
**W5** Focus on the LLM feature space to measure entropy may introduce computational overhead in the context of LLM's preference learning.
> **R5** We refer to **Q2** (global response) for computational cost clarifications. There is no additional cost as we can extract these "entropy features" and "preference model features" simultaneously.
**W6** BAL-PM requires maintaining/updating an ensemble of adapters for collecting each batch, which may be difficult to practice in large scale.
> **R6** We again refer to **Q2**. Our adapters are simple MLPs with 2 hidden layers, whose cost is reasonably cheap for LLM research.
**W7** The discussion of prior works is inadequate and should describe the technical parts of Active Preference Optimization in LLMs.
> **R7** We extended the "Active Preference Modeling" section in Related Work, detailing the draft references [11, 13, 25, 35]. The actual text is too long to fit here, so we summarize: [11, 13, 25] theoretically studies active query generation and propose different methods based on estimating confidence bands to generate high-uncertainty triples. [35] generate answers using double Thompson Sampling, representing model uncertainty with an Epistemic Neural Network. We also describe that [18] uses a fully fine-tuned deep ensemble for model uncertainty estimation. Let us know if this is clear. Otherwise, we may paste the exact text as a comment.
**Q16** Can adapters make diverse predictive distributions with limited expressive power?
> **A16** Adapters can be seen as a special case of Epistemic Neural Networks [3], where the prior distribution is a mixture of delta functions over indices. EpiNets provide principled theoretical justification for adapters. There is empirical evidence of adapters in the context of exploration in LLMs [4], which is similar to our method (on capturing epistemic uncertainty) but focused on answer generation, not data selection. We also incorporated MC-Dropout [5] as baseline (see **Q6**), a well-known method for Bayesian approximation. Adapters outperform MC-Dropout in our setup, suggesting effectiveness on approximating posterior distributions.
---
Rebuttal Comment 1.1:
Title: Reply to the Rebuttal
Comment: The authors did a good job of addressing most of my concerns and making clarifications regarding their method and evaluation. However, I strongly encourage the authors to validate the effectiveness of BAL-PM on a broader range of tasks/datasets, especially given the notable computational efficiency of the proposed method and the moderate scale of common academic preference datasets. In my humble opinion, the current paper is a nice technical contribution with fair empirical significance to the community.
---
Rebuttal 2:
Title: Rebuttal References
Comment: [1] Stiennon et. al. Learning to summarize with human feedback. NeurIPS, 2021.
[2] Wang et. al. SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems. NeurIPS, 2019.
[3] Osband et. al. Epistemic Neural Networks. NeurIPS, 2023.
[4] Dwaracherla et. al. Efficient Exploration for LLMs. ICML, 2024.
[5] Gal et. al. Dropout as a Bayesian Approximation:
Representing Model Uncertainty in Deep Learning. ICML, 2016. | Summary:
The authors present a method, Bayesian Active Learning for Preference Modeling (BAL-PM) that seeks to learn a preference model in a sample-efficient manner within an active learning setting. A key insight by the authors is the consideration of task-dependent and task-agnostic uncertainty to encourage diversity in the sampled prompt-completion pairs. My main concern with the approach is the link between the pool-based active learning setting and active learning over an open-ended and continuous prompt space which I will describe below. As such, my recommendation is borderline pending the authors rebuttal on this point.
Strengths:
The paper is well-written and the idea behind BAL-PM is well-motivated. The empirical results appear compelling in the pool-based setting and furthermore, the authors provide the code to reproduce their results. The analysis of the diversity of the acquired prompts is a useful diagnostic to validate the authors' hypothesis that BAL-PM attains performance improvements by encouraging diversity in the sampled prompts.
Weaknesses:
__MAJOR POINTS__
1. My main concern with the current work is the link to the real-world problem of constructing a preference model in a sample-efficient fashion in the absence of a pool of labelled prompt-completion pairs. Specifically, as I understand, one of the main motivations for being sample efficient in constructing the preference model is the cost of human labelling. In the experiments considered by the authors, the datasets already contain labels. As such, I would ask a) what is the motivation for being sample-efficient in the pool-based active learning setting? b) assuming the authors feel performance on the pool based setting is representative of an open-ended active learning problem, how would they go about verifying that BAL-PM performs well in the non pool-based setting?
__MINOR POINTS__
1. It would be great if the references appeared in numbered order.
2. There are some missing capitalizations in the references e.g. "Bayesian".
3. The arXiv identifier is missing for reference 2.
4. When referencing PyTorch, [1] should be used rather than the earlier workshop paper version.
5. It would be great if the authors could provide instructions for reproducing the experimental results in the README of their GitHub repository.
6. Reference 7 was accepted at TMLR.
7. Line 38, I would not expect the statistic that agreement rates among human labelers is typically 60-75% to be robust across all problem settings. It would be worth qualifying this statement.
8. The arXiv identifier for reference 18 is missing.
9. "Prompt-completion pairs" would potentially be a more appropriate terminology in place of "prompt-answer" pairs.
10. On line 105, S is not defined, this should presumably be \Chi.
11. Line 134 typo, "an unsupervised".
12. In Figure 4, it would be great to include in the caption the number of random trials for which the errorbars are computed.
13. In the abstract the authors do not mention that the figures of 33% and 68% apply to a random sampling baseline.
__REFERENCES__
[1] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L. and Desmaison, A., 2019. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32.
[2] Yang, A.X., Robeyns, M., Coste, T., Wang, J., Bou-Ammar, H. and Aitchison, L., 2024. Bayesian reward models for LLM alignment. arXiv preprint arXiv:2402.13210.
Technical Quality: 4
Clarity: 4
Questions for Authors:
1. In Section 4, the authors mention that their Bayesian preference model is constructed using an ensemble of adapters. What was the motivation for this choice over Laplace-LoRA [2] for example?
2. In Figure 7, random sampling outperforms BAL-PM for the 140b parameter model early in the active learning trace. Do the authors have an explanation for why this might be the case?
3. It would be interesting to see the full details of the validation procedure for choosing the hyperparameters, most notably the entropy term \Beta. How sensitive was performance to the choice of \Beta hyperparameter? What is the link to Section F of the appendix?
4. In terms of the results presented in Figure 9, how do the authors compute the aleatoric uncertainty for the datasets? Are these uncertainty estimates provided in the datasets?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations:
The biggest limitation I foresee with the current work is the potential for the performance of BAL-PM to carry over between the pool-based active learning setting and the non pool-based setting as described above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for sharing your concerns and questions. We address them below:
**Q8** What is the motivation for being sample-efficient in the pool-based active learning setting?
> **A8** We clarify that the goal of a pool-based setup is to mimic an open-ended data selection setup. Although we use logged data in the pool, **the experiment only reveals the labels when the active learning policy selects the data points**. Hence, for the Preference Model, the pool data is completely unlabeled. It is a standard experimentation protocol in Active Learning literature [1,2] and allows the use of real human feedback. For instance, Gleave [3] also adopts the same experimental setup.
> A non-pool-based setup would require either collecting human feedback for every batch in each experiment (impractical) or relying on a preference simulator that tries to mimic human behavior. This is also unrealistic, as accurately modeling the human behavior and the aleatoric uncertainty involved in the preference elicitation process is really challenging. Therefore, using data that leverages real human preferences is more realistic for Preference Modeling and enables purely offline experiments.
**Q9** How would BAL-PM perform in a non-pool-based setup?
> **A9** Given answer **A8**, the key thing here is to realize that the difference between a pool-based and a non-pool-based one is that the latter permits generating different answers for the prompts, while in the former the answers are fixed. As pointed out by previous work [3], this means that our setup may actually **underestimate** the benefit of Active Learners (BAL-PM) since we limit the stochastic policy to prompt selection. Therefore, in a non-pool setup, we believe that BAL-PM should work as well as, or even better, given another degree of freedom for data selection. Crucially, we designed BAL-PM to work in both settings without changes.
**W3** Minor Points
> **R3** We appreciate the detailed list. We added all points to our current draft, which should be reflected in the camera-ready version. For point 7, we qualified the statement by clarifying that this is the result observed in past preference modeling settings for LLM finetuning (as in the references).
**Q10** What is the motivation of using adapters over other methods such Laplace-LoRA?
> **A10** This is a great question that lies at the core of our method. The choice of adapters is computational tractability: they rely on LLMs solely for feature extraction. Hence, BAL-PM does not require training or finetuning LLMs during the active learning loops, considerably reducing the computational costs and allowing scaling for very large LMs. A method like Laplace-LoRA (or any LoRA method) requires finetuning the base LLM. Even if we assume that the finetuning is cheap (which is not for 70B+ models), updating the LLM requires generating new features for all the prompt-answer pairs in the pool at each training loop. This considerably increases the computational costs of performing active learning loops. For adapters, we can generate features once *before* the active learning experiment in inference-optimized settings (with quantization, for instance), and go beyond 100B models in a single A100 GPU.
**Q11** In Fig 7, random sampling outperforms BAL-PM for the 140b parameter model early in the active learning trace. Do you have an explanation for why this might be the case?
> **A11** We hypothesize that this may be due to 4-bit quantization affecting prompt representations for entropy estimates, which is crucial in the early stages of training for accelerating the Bayesian Preference Model learning. The quality of the features is one of the limitations of BAL-PM. Also, as the Bayesian Preference Model contains more parameters for larger base LMs, it may require more initial data to fit well and provide accurate epistemic uncertainty estimates.
**Q12** What is the validation procedure for hyperparameter selection? How sensitive BAL-PM is for the choice of $\beta$? What is the link with the Appendix F?
> **A12** The procedure was quite standard: given the search space in Table 2, we evaluated the candidates in the held-out validation set and selected the best-performing hyperparameter. We tuned the presented hyperparameters in isolation (a grid search would be too expensive).
> **We perform a $\beta$ robustness analysis in Fig. 15 (Rebuttal PDF)**, considering the values in the search space. The impact of the choice is more noticeable to values 100x greater/lower than the optimal choice. Values around 10x greater/lower still perform well, suggesting good room for choosing this hyperparameter. Furthermore, we employed the same value of $\beta$ across the different datasets and LLMs, suggesting robustness across different relevant dimensions.
> Crucially, $\beta$ trades off the contribution of two different terms. As such, it provides a spectrum of objectives and may recover the two extremes presented in the ablation of Fig. 8. Naturally, different choices of $\beta$ will change the uncertainty score ratio presented in Fig. 11 on Appendix F (i.e., the contribution of each term after convergence). Nevertheless, and most importantly, the behavior of the curves – the entropy contribution progressively reducing and converging and the relevance of epistemic uncertainty estimates increasing – should remain.
**Q13** How do you compute the aleatoric uncertainty? Is this provided in the datasets?
> **A13** We compute the predictive uncertainty as the entropy of the posterior predictive distribution (the first entropy term in the Equation 3). Similarly, we compute the epistemic uncertainty via Equation 3. By the Law of Total Variance, the Total (Predictive) Uncertainty in a Bayesian Model is the sum of the Epistemic Uncertainty and Aleatoric Uncertainty. Thus, we can estimate the Aleatoric Uncertainty based on that. These are only **estimates** from our model. Thus, they are not values provided in the dataset.
---
Rebuttal Comment 1.1:
Title: Many Thanks to the Authors for their Rebuttal
Comment:
Many thanks to the authors for their rebuttal.
1. In terms of the pool-based vs. non-pool-based setting, I think it would be beneficial to include a preference simulator for the non-pool-based setting which would a) emulate an open-ended active learning loop b) allow some simulated notion of ground truth aleatoric uncertainty to be present. I would encourage the authors to consider this for the camera-ready version of the paper.
2. Many thanks for the clarification on the use of adapters in place of Bayesian LoRA. The intuition regarding finetuning the base LLM makes a lot of sense and I can see why this would be an important factor to stabilize the surrogate in an active learning loop.
3. The intuition for the performance of BAL-PM relative to random sampling early in the trace makes a lot of sense.
4. Many thanks for the sensitivity analysis on the \Beta parameter.
Given the above, I am happy to raise my score but I would strongly encourage the authors to think about including an additional experiment emulating an open-ended active learning loop with a preference simulator. I think this would "complete" the paper in the sense that both a) the pool-based setting with realistic human preferences and b) the non-pool-based setting with synthetic preferences are considered.
---
Rebuttal 2:
Title: References
Comment: [1] Smith et. al. Prediction-Oriented Bayesian Active Learning. AISTATS, 2023.
[2] Imberg et. al. Optimal sampling in unbiased active learning. AISTATS, 2020.
[3] Gleave et. al. Uncertainty Estimation for Language Reward Models, 2022. | Summary: This paper proposed a novel framework to select the most informative preference data for training based on Bayesian active learning.
To collect the prompt-response pair (x,y), it firstly selects the prompt based on Bayesian Active Learning by Disagreement by maxmizing the information gain. Then, the selection of response considers both the preference model epistemic uncertainty estimation and entropy estimation for the acquired prompt distribution, which addresses the challenges of selecting diverse samples. When evaluated on Reddit and CNN/DM preference datasets, it saves 33% and 68% training samples by comparing to random sampling.
Strengths: 1. The research problem is important to data-efficient training.
2. The presented emprical results are impressive in saving utilised training data over listed baselines.
3. The empirical studies is comprehensive and verify the effectiveness across different settings.
Weaknesses: 1. Lack for diversity-based and uncertainty-based sample selection methods, such as [1,2,3,4], which makes the technical contributions less reliable,
2. As the method is based on feature extraction from existing LLMs, the computation cost in estimating the diversity and uncertainty is not clear, which inhits its pratical application.
References:
[1] What Makes Good In-Context Examples for GPT-3?
[2] An Information-theoretic Approach to Prompt Engineering Without Ground Truth Labels
[3] Demystifying Prompts in Language Models via Perplexity Estimation
[4] Diverse Demonstrations Improve In-context Compositional Generalization
Technical Quality: 3
Clarity: 3
Questions for Authors: See the details in Weakness.
1. How are the performances of existing other baselines.
2. What are the computation cost in estimate the utility score, diversity and entropy.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the authors mentioned the intrinsic limitation of the transformer's embedding, i.e., noise-TV. The impropriate selected samples can be toxic, unfair or not well-represented.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for highlighting the strengths of our work and for bringing up your concerns and questions. We aim to address them in this response.
**Q6** How does BAL-PM perform in comparison with other sampling methods [1, 2, 3, 4]:
> **A6** Thank you for the references. We highlight that these works focus on data sampling for In-Context Learning (ICL), while ours investigate Active Learning for Preference Modeling in LLMs. While both topics aim to improve sample efficiency, they operate in different setups: **ICL is a test-time procedure** that incorporates training points to condition the prompt in order to improve predictions of a test point), while **Active Learning is a training-time procedure** that selects training points for improving the model. Fundamentally, this may lead to different conclusions as to what is required for sample efficiency. Nonetheless, we adapted most of the suggested methods to our scenario to evaluate them and provide more evidence of our contribution. We also added other baselines from the Active Learning literature:
> - Entropy Minimizer: Inspired by [1], we consider an objective that, in addition to selecting points with high epistemic uncertainty, also selects points that are semantically similar to the current training points. This is equivalent to selecting points that increase the entropy of the prompt distribution **the least**, thus the name "Entropy Minimizer". It serves as a check for our central hypothesis that entropy maximization leads to better batch active learning.
> - Perplexity: Inspired by [3], we consider an objective that selects points based on the perplexity of the base LLM. We consider two versions: one that chooses points with lower perplexity (Low Perplexity), and another with higher perplexity (High Perplexity). This is an interesting baseline since perplexity is equivalent to the predictive entropy of the token distribution. Therefore, it helps to analyze how much the base LLM "knows what it does not know" in terms of preference modeling.
> - MC Dropout [5]: This method performs approximate Bayesian inference via executing dropout at test time to generate different parameter hypotheses. Therefore, it can express epistemic uncertainty, which is used to select the most informative points.
> - Latent Reward Uncertainty (LRU): This method computes a reward distribution over the data points by leveraging the latent reward model learned via the Bradley-Terry model. Then, it selects extreme points (too high or too low rewards) as a proxy for the uncertainty of the model.
>
> Figure 16 (Rebuttal PDF) reports performances for both test and OOD sets. In both cases, BAL-PM outperforms additional baselines. In the sequence, MC-Dropout works best, as the baseline that targets the epistemic uncertainty of a Bayesian model. Expectedly, Entropy Minimizer and Low Perplexity work worse since they target points with lower entropy. LRU presented mixed results, suggesting that the latent reward may not represent well the preference model's uncertainty. More interestingly, while these models can represent different uncertainties to seek informative points, they naturally cannot provide in-batch diversity - they suffer from the same challenges as BALD. In this perspective, the BAL-PM objective can also improve upon those methods, as we show in Figure 17: we combined MC-Dropout and LRU with our entropy term to provide in-batch diversity, which consistently improved both methods across the datasets.
> Lastly, we highlight that the suggested method in [2] is akin to BALD, as it focuses on the mutual information objective (described in Equation 3 in our paper). [4] focus on semantic parsing approaches to enable compositional generalization, which implicitly assumes key properties of the considered input: For instance, there is a mapping between the natural language sentences and formal queries. In preference modeling for LLMs, we do not assume the existence of such formal queries. Given this assumption, [4] builds diversity sampling on top of syntax trees, which is out of the scope of our work on preference modeling.
**Q7** What are the computation costs involved in BAL-PM?
> **A7** We kindly refer the reviewer to **Q2** in our global response, where we discuss in detail the costs of our method, contextualizing with the Active Learning domain.
**W2** Limitation: the impropriate selected samples can be toxic, unfair or not well-represented
> **R2** We provide some context around this potential limitation. In terms of toxicity, we highlight that selecting toxic prompts for getting preferences is actually a crucial step for detoxifying fine-tuned language models. If the reward model is not trained on such prompts, they may behave as out-of-distribution samples, leading to incorrect predictions that could reinforce toxic answers. Therefore, providing preference labels that consider safe answers over toxic ones is necessary for preference modeling. A good example is the work of Bai et. al. [5], where the employed dataset contains content that may be offensive or upsetting, to make LMs less harmful.
> Regarding representativity, we believe that having an entropy term that seeks diversity in the prompts will actually reinforce representativity over different prompt classes.
We hope that we addressed your questions and concerns. Let us know if there are any further points to be clarified - we genuinely appreciate your review.
[1] Liu et. al. What Makes Good In-Context Examples for GPT-3?. ACL DeeLIO, 2022
[2] Sorensen et. al. An Information-theoretic Approach to Prompt Engineering Without Ground Truth Labels. ACL, 2022
[3] Gonen et. al. Demystifying Prompts in Language Models via Perplexity Estimation. EMNLP Findings, 2023
[4] Levy et. al. Diverse Demonstrations Improve In-context Compositional Generalization. ACL, 2023.
[5] Bai et.al. (Anthropic Team). Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback, 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed responses, which largely address my concern. I will raise the score accordingly. | Summary: This paper presents BAL-PM, a Bayesian active learning framework for the training preference model. The authors propose an acquisition policy that seeks examples that have high epistemic uncertainty and can maximize training data’s entropy. Specifically, the epistemic uncertainty is estimated by the training preference model and entropy is estimated by the base LLM. Experiments on two preference datasets show that the proposed method requires fewer training data compared to previous methods (33% and 68% reduction on each dataset accordingly). The author also shows that their method is scalable to larger LLMs.
Strengths: - The problem is clearly formulated. The method is well derived and justified.
- The results are strong, showing significant improvement in sample efficiency.
- The authors test their method on models of various sizes, demonstrating its scalability.
- The paper is clearly written and easy to follow.
Weaknesses: - The evaluation metric is limited. The authors only show the log-likelihood of the test sets. This single metric has no guarantee that preference models trained by this method have a more accurate judgment (shown by pair preference accuracy) and lead to better fine-tuned LMs (shown by tuning with the trained preference model)
Technical Quality: 4
Clarity: 4
Questions for Authors: - How do the results compare to models trained on the whole dataset?
- Do the trained models lead to better LMs?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors have properly adequately the limitations of their work in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for appreciating our work strengths and for raising concerns and questions. We hope to clarify them in this response. Please see the answers below:
**W1** "The evaluation metric is limited and gives no guarantee of more accurate judgment or better fine-tuned LMs."
> **R1** We kindly refer the reviewer to **Q1** under the global rebuttal, where we clarify the Log Likelihood metric and shows that it provides more information about preference strength for ranking models than prediction accuracy. We also refer to **Q5** where we argue about the relationship between Average Log-Likelihood as a performance measure for preference models and the quality of the fine-tuned policy.
**Q4** How do the results compare to models trained on the whole dataset?
> **A4** In Figure 12 of the Rebuttal PDF, we show (in purple) the performance of a trained model in the full dataset (over five seeds). BAL-PM achieves on-par performance only requiring ~24000 points (the full dataset contains 92,858 points). This result is another interesting evidence of the sample efficiency of our method.
**Q5** Do the models evaluated by Average Log Likelihood (LL) lead to better finetuned policies?
> **A5** Despite our work strictly focusing on Preference Modeling (which has several applications in psychology [1], economy [2], and sociology [3] that goes beyond RLHF), we agree that fine-tuning LM policies is a very relevant downstream task. Our Preference Modeling optimization objective and model selection protocol follow exactly the prior influential work on the topic [4, 5], which provides evidence that better preference models (in terms of validation loss) lead to improved downstream policies. Thus, we expect our models to behave similarly under the same conditions.
>
> As additional evidence, we empirically illustrate the relationship between log-likelihood and policy performance on a simplified setup (see Figure 13 on Rebuttal PDF). Here, prompts $x$ and answers $y$ are real numbers in [0, 1]. The ground-truth reward function is given by a Gaussian density function $r(x, y) = \mathcal{N}(x + y \mid \mu = 1.0, \sigma = 0.4)$, and true preferences follow the Bradley-Terry model. In this setup, we progressively increase the training set size (the x-axis in Figure 13a) in which we train the preference models. This process generates different models with increasing levels of test-set average log-likelihood. Then, similar to [6], we optimize the base policy via a Best-of-N optimizer by leveraging each of these learned preference models. Finally, we report the rate in which the fine-tuned policy answer is preferable over the base policy answer according to the ground truth reward model ("win rate"). Although simple, this setting allows us to bypass several optimization and distributional challenges and solely focus on evaluating the relationship between average log-likelihood and the performance of the fine-tuned policy. Figure 13 (left) reports the log-likelihood (red) and the win rate against the base policy (blue). Figure 13 (right) directly plots both measures and fits a regression line. We observe a strong correlation, which aligns with our point: **a higher test-set average log-likelihood means that the preference model is better at predicting the ground truth preferences, assigning higher rewards for better answers, and, therefore, improving fine-tuned policies that maximize such reward scores.**
We hope we clarified your questions and concerns. Please let us know if there any any further points to be clarified - we genuinely appreciate your time reviewing our work.
**References**
[1] Kahneman et. al. Judgement under Uncertainty — Heuristics and Biases, 1981.
[2] Armstrong, W.E. A note on the theory of consumer’s behavior. Oxford Economics, 1950.
[3] Sen, A.K. Social choice theory. Handbook of Mathematical Economics, 1986.
[4] Ziegler et. al. Fine-tuning language models from human preferences, 2019.
[5] Stiennon et. al. Learning to summarize with human feedback. NeurIPS, 2021.
[6] Gao et. al. Scaling laws for reward model overoptimization. ICML, 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and the additional results. My concerns are addressed. I'll maintain my overall rating as I've already given an acceptance score. But I will increase my Soundness rating. | Rebuttal 1:
Rebuttal: We thank the reviewers for raising concerns and providing feedback to improve our work. We appreciate the acknowledgement that:
- **The paper is clear and well-written** (x99G, vibJ, tUWV);
- **The proposed method is principled and well-motivated** (all reviewers);
- **Empirical results are strong/impressive/compelling/effective** (all reviewers);
- **The method demonstrated scalability (x99G) and effectiveness across different settings** (aVyo).
We highlight the new empirical evidence in the Rebuttal PDF to clarify concerns and incorporate feedback. In detail:
- **The performance of preference models in the full dataset** (Fig. 12, **Q4**, x99G);
- **The relationship between a preference model test loglikelihood and the corresponding finetuned policy performance** (Fig. 13, **Q5**, x99G)
- **The objective's $\beta$ hyperparameter robustness analysis** (Fig. 14, **Q12**, vibJ)
- **Several additional baselines** (Figs. 15/16, **Q6**, aVyo)
We now clarify questions raised by different reviewers:
**Q1** Is Average Log Likelihood (LL) a proper performance measure for Active Preference Modeling? **(Reviewers x99G, tUWV)**
> **A1** We first clarify how the Average LL is computed. Given the test set $\mathcal{D}\_{test} = \\{(x, y_1, y_2, {y_1 \succ y_2})\\}^{N}$ and the learned preference model $p_{\boldsymbol{\theta}}(y_{1} \succ y_{2} \mid x, y_{1}, y_{2})$, the average Log Likelihood is given by $LL(\mathcal{D}\_{test}, \boldsymbol{\theta}) = \mathbb{E}\_{(x, y_1, y_2, y_1 \succ y_2) \sim \mathcal{D}\_{test}} [\log(p_{\boldsymbol{\theta}}(y_{1} \succ y_{2} \mid x, y_{1}, y_{2}))]$. It is exactly the objective maximized in standard binary classification (or, equally, the minimization of the negative log-likelihood loss) but computed over the test data. In other words, this is the negative "test loss".
>
> Average LL is a typical metric in the Active Learning and Uncertainty Quantification literature [1, 2, 3]. For Preference Modeling, it is very relevant as **LL directly accounts for the *preference strength* to rank models**: given a triple $(x, y_{1}, y_{2})$ where all raters agree that $y_{1}$ is preferable over $y_{2}$, LL allows us to measure that a model A predicting $p_{A} = 0.9$ ($LL = -0.1$) is better (in that data point) than another model B predicting $p_{B} = 0.6$ ($LL = -0.5$). Accuracy would provide an equal score for both models since it only accounts for the binarized prediction. LL provides a more "fine-grained" measure.
>
> Another crucial point is that **LL factors in the aleatoric uncertainty in the label-generating process**. For instance, in a scenario where only 70% of the raters agree that $y_{1}$ is preferable, LL better ranks models whose predictions are closer to p = 0.7, respecting the ground truth preference strength, which is not possible with accuracy.
> We also empirically illustrate in Figure 13 (Rebuttal PDF) in a simple problem setting that preference models with higher average LL lead to finetuned policies with higher win rates over the base policy (see **Q5**).
**Q2** What is the computational cost of BAL-PM? **(aVyo, tUWV)**
> **A2** We clarify this point since we argue that computational tractability is one of the main contributions of our method. First, some context: our work focuses on *(Bayesian) Active Learning*, which is naturally more computationally demanding than simply training predictive models. This is because **we require models that expresses epistemic uncertainty** to acquire informative labels for efficient training. This also **requires models to constantly update their uncertainties given the new data, via re-training**. The key is that **Active Learning reduces the number of labels required to train a better model, which considerably overcomes the additional computational cost**. The labeling process is considerably more expensive and laborious.
>
> As described in the Introduction, Preference Modeling in LLMs requires batch acquisition - it is impossible to request the label of a single point, re-train the model, and repeat this process. Still, tractable methods rely on these single-point acquisition objectives. Thus, **what BAL-PM does computationally is to replace $B - 1$ model re-trainings per acquired batch with computing entropy estimates** (considerably cheaper, as explained below). $B$ is the batch size, and $B = 320$ in our experiments.
>
> **BAL-PM does not require training or inference on LLMs during the active learning loops**. This considerably reduces the computational cost and allows us to scale up to 140b models in a single A100 GPU. To put it in perspective, fully fine-tuning a 7b model currently requires at least 4 A100s. LoRA methods also require new LLM inferences for every model update, while BAL-PM only requires once.
>
> The computation of BAL-PM has three pieces: offline processing (LLM inference and kNN computation), adapters update, and entropy estimation. LLM inference is done only once, prior to Active Learning. It is the bare minimum for LLM adoption. Furthermore, **we can compute the features used for the preference model and for entropy estimation in the same forward pass**: every prompt-answer input concatenates prompt/answer texts; thus, we can extract prompt features as the last layer embedding right after the last prompt token, and the prompt-answer features right after the answer's last token. Hence, there is no extra cost to extract features for entropy estimation.
> The cost of updating adapters is minimal, as they are MLPs with 2 hidden layers, reasonably cheap for LLM research. The entropy estimation only requires computing the di-gamma function (Equation 11) in the pool.
>
> Ultimately, our experiments show that BAL-PM can handle a challenging real-world preference dataset (used in the precursor of ChatGPT) with LLMs with up to 140b parameters, demonstrating its scalability (as highlighted by Reviewer x99G). We hope this clarifies its applicability in practical settings.
Pdf: /pdf/b213e2efaa329b25c21783e85238b9bfb8424a5c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable | Accept (poster) | Summary: This work proposes B-cosification, a method to transform a pre-trained network to a B-cos network [1,2]. Consequently, the transformed model can be finetuned for better explanations of the model behavior while retaining predictive performance. Experiments are done on various CNN and Transformer architectures, as well as a case study on CLIP.
[1] Böhle, Moritz, Mario Fritz, and Bernt Schiele. "B-cos networks: Alignment is all we need for interpretability." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[2] Böhle, Moritz, et al. "B-cos Alignment for Inherently Interpretable CNNs and Vision Transformers." IEEE Transactions on Pattern Analysis and Machine Intelligence (2024).
Strengths: * B-cos networks are an upcoming, interesting research direction. The need to train a network from scratch is commonly considered a limitation (e.g., see the Concept Bottleneck Model Literature [3,4]); therefore, alleviating this need is a valuable contribution to the research landscape.
* The paper is clearly structured and well-written. Only towards the end of experiments, it becomes a bit cramped with information, which is minor.
* Experiments are performed over a broad range of backbones. Also, I appreciate that the authors make the effort to actually measure the interpretability rather than just stating that it is interpretable.
[3] Yuksekgonul, Mert, Maggie Wang, and James Zou. "Post-hoc Concept Bottleneck Models." The Eleventh International Conference on Learning Representations.
[4] Marcinkevičs, Ričards, et al. "Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable?." arXiv preprint arXiv:2401.13544 (2024).
Weaknesses: * My biggest uncertainty is about the stated (inherent) interpretability. Firstly, to me, I see B-cos(ified) networks more as a post-hoc gradient-based explanation method, which additionally regularizes during training for more faithful explanations. However, there is no certainty upon seeing an explanation that the model actually acted upon these highlighted pixels, which to me makes B-cos networks not inherently interpretable. In other words, a regularizer does not enforce that the model truly depends on the given explanations, and therefore, the explanation given is not necessarily faithful to the underlying model. Thus, I also disagree with the Abstract that states that the explanations are "model-faithful by design".
* Followingly, such a method should be evaluated with respect to faithfulness to the underlying model. As such, I would be very interested in how B-cosified networks perform in some faithfulness sanity checks such as the Model Parameter Randomization Test of [5].
* Without any standard deviation, it is hard to assess the (statistical) significance of performance improvements. Also, multiple seeds (up to some degree) prevent the all-too-common hyperparameter optimization on the test set.
* The CLIP explanations when B-cosifying CLIP are on par with GradCAM on the pre-trained weights. To be better requires the introduction of a new, arbitrary aggregation of value vectors.
* If I understand it correctly, this new aggregation would also affect CLIP's performance. It is not explored how the model performance on CLIP Benchmark would be affected by this aggregation that appears to make CLIP more explainable.
* While this work has high significance in today's world of foundation models, the methodological novelty is limited, as most B-cosification tricks have already been introduced in [2].
* In my opinion. speedups of 2x are not necessarily big enough to warrant that it's a huge improvement over training from scratch.
* Page 1 appears to be missing some vspace after line 37.
[5] Adebayo, Julius, et al. "Sanity checks for saliency maps." Advances in neural information processing systems 31 (2018).
Technical Quality: 3
Clarity: 3
Questions for Authors: * In line 295, is "interpolate" really the right word (it confused me slightly in my read). That is, if it was interpolating, then shouldn't Fig. 5a B-cos FeatureCLIP for p=1 be equal to B-cos CLIP? (Also, there are some minor reference naming issues with respect to the figure in line 292-297, and the legend of Fig. 5a needs to be updated)
* While the Grid Pointing Game appears to be a good metric to measure how well the explanations are localized, it is not telling enough on a fine-grained level how informative the explanations are. While the qualitative figures 1 & 5b give some insights, into this, they're obviously cherry-picked (which I don't condone). What I like in papers with qualitative examples is that, in the Appendix, there are also some randomly chosen examples so that an interested reader can get a feeling for how the method actually performs. Thus, I would be interested in seeing some more qualitative examples that are (truly) randomly drawn.
* How well would the proposed method be transferrable to different input modalities such as text?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: One could maybe write one more sentence with respect to which models are *not* B-cosifiable. Apart from that, limitations are adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and valuable feedback. We appreciate your positive remarks on our work's significance and your constructive suggestions for improvement. We have carefully considered your comments and would like to address each point you raised.
- **Inherent Interpretability of B-cos:** Please note that B-cos explanations are not mere post-hoc explanations. Instead, the linear summaries W(x) of the models are in fact exact summaries (i.e., yielding the same outputs when applied to x) of the models’ computations and are optimized during training to align with task-relevant features, as described in [6], and have further been found to compare favorably to post-hoc explanations (Figure 6 in [6]). While we agree that the B-cos explanations of course constitute a simplification of the full network, previously reported results suggest that they indeed capture important aspects of the models’ computations, allowing them to e.g. localise class-specific features at a fine granularity. Our work builds on those prior findings and examines how to bring these previously reported benefits to pretrained conventional models.
- **Model Randomization test performance:** Note that the localisation scores give clear evidence that the explanations are highly model-dependent: if they were not, they would not allow for localizing specific classes with such high specificity (>90%, see Figure 6 in [6]). Further note that conversely, explanation methods that failed the randomisation tests, as proposed in Adebayo et al., notably score very low in the grid pointing game. We are happy to include results regarding the model randomisation tests in the final version of the paper. Lastly, as can be seen in the new figure we provide in the attached global response PDF (Figure 6, right), before fine-tuning the models for inherent interpretability, they give highly uninterpretable explanations (cf. ‘at start’).
- **Multiple seeds:** As we write in our global response, we now provide results over three runs corresponding to Tables 2, 3, and 4 in Tables 5, 6, and 7, respectively, with standard deviations. From tables 5-6, we find that the conclusions in Section 3.2.2 remain unchanged — the simplest approach of directly setting B=2 and removing the bias at the beginning performs as well as more involved strategies such as learning B or decaying bias. Also, from Table 7, we find that the results regarding classification and interpretability performance, as in Section 4.1, are supported by repeated experiments.
- **Aggregation of CLIP vectors:** We agree with the reviewer that the aggregation mechanism changes the model’s behavior and thus does not faithfully reflect the full model’s predictions. We introduce this aggregation mechanism to show that our B-cosification approach allows us to interpret intermediate features of the CLIP model at a high level of detail (clearly outperforming the localisation ability of both full models), as the features still maintain a higher level of spatial resolution. This highlights an additional aspect in which B-cosified models could help better understand the inner workings of foundational models.
- **Speedups in fine-tuning:** We find that the speedups are often much better than 2x for convolutional models (Table 4). We agree that the speedups are lower for ViTs, and believe this could be because our method could further be optimized for ViTs, e.g. see the discussion on ViT performance in response to reviewer DdYE.
- **Interpolation:** For aggregating and explaining the intermediate features of the CLIP model, many options exist - e.g., attention pooling as in the original (and B-cosified) CLIP model, average pooling, or max pooling. To better understand the implications of the feature aggregation, we propose a mechanism based on the cosine similarities that allow us to interpolate between average (p=1) and max pooling (p=inf); note that since both of them differ from attention pool (denoted by B-cos CLIP in the figure), neither of them give the same results as attention pool. These results also indicate that the attention pooling mechanism selects the embedding features in a more specific fashion than mere average pooling, but aggregates the features in a coarser fashion than selecting only the features with high cosine similarities (p$\gg$1).
- **Qualitative examples:** We provide additional randomly selected examples corresponding to both Figures 1 and 5b in Figure 7 of the attached global rebuttal PDF and will provide an extended set as a supplement to the final version of the paper.
- **Performance on text modalities:** We agree this would be a very interesting investigation. Since B-cos models were primarily used for image classification, we also focused on the same setting in this work. However, exploring how to extend to other modalities, such as text, would be a fruitful direction for future research.
- **Which models are not B-cosifiable?** Based on our experimental results, we believe this question does not have a yes or no answer - different models might benefit from a B-cosification to different degrees. For example, while the convolutional models we tested adhere closely to the original formulation of the B-cos models and thus lend themselves well to B-cosification with a high degree of localisation, our results regarding the feature aggregation mechanism (attention pooling vs. our proposed approach) suggest that some models might lack the necessary alignment pressure to exhibit equally strong localisation abilities. We appreciate the reviewer’s suggestion and will update our discussion accordingly.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response and especially appreciate their candor in presenting Fig.7.
I have increased my score. While the methods proposed are not the most novel, this work might pave the way for a broader applicability of B-cos-like networks, which I deem desirable.
---
Rebuttal 2:
Title: Publishing of code
Comment: I would like to remind the authors to publish their code.
The answer in the paper checklist says "... and [we] will make all our modifications available for ensuring full reproducibility of the reported results." Yet, the github repo https://github.com/shrebox/B-cosification remains empty.
My score was influenced by this belief, as I believe the additional usability of B-cos is a major positive point of this work.
As such, I encourage the authors to adhere to the statements made during the submission period in a timely manner. Also, it's bad for distributing your work, as other (like me) won't use this work if there is no codebase available.
Best regards | Summary: This work builds on the recent line of works aiming to design inherently interpretable models. In particular, it considers the recently proposed B-cos networks and shows how pre-trained conventional CNN/VIT models can be converted into a B-cos network through fine-tuning. The authors first discuss which parts of conventional DNNs can be converted into an equivalent B-cos model and then propose two simple mechanisms that address the two missing components (increasing B and removing the biases) through fine-tuning. Results illustrate that the proposed B-cosified models are able to obtain equivalent accuracy to both the original model as well as the standard B-cos network, while obtaining similar localization results to the B-cos network in a shorter time due to the re-use of the original weights.
Strengths: The work addresses an important problem of converting pre-trained black-box models into inherently interpretable models. Instead of requiring complete retraining, which often restricts the applicability of these models, only requiring fine-tuning is a first step into the right direction.
The paper is well structured and the different modifications required are presented in sufficient detail.
The experimental evaluation on both CNN and Transformers as well the case study on CLIP shows the potential of the proposed approach.
Weaknesses: It would be beneficial to include a plot on how accuracy and localization score vary as fine-tuning progresses for one/some of the standard models (CNN/VIT).
In Sec. 3.2.2, it is unclear how the bias is handled when performing the ablation study in Table 2. Also, the B-cosified model consistently outperforms the baseline in Table 2 and 3, do the authors have an hypothesis on why this fine-tuning process increases performance?
Minor:
It would be beneficial to also provide the number of fine-tuning epochs that were required in Table 4.
Figure reference missing in Line 141.
Technical Quality: 4
Clarity: 4
Questions for Authors: For the ViT results, there are some more cases where the B-cosified networks performance degrades slightly compared to the black-box and the localization is slightly lower than for the B-cos model. While these differences are small, do the authors have a hypothesis on their reason? Are these induced by the interpretation of the GeLU?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The limitations of the proposed method have been briefly but sufficiently discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We appreciate your recognition of the significance of our work and your constructive suggestions. We have carefully considered your comments and would like to address each of the points you raised.
- **How do localization scores vary with epochs?:** As we write in our global response, we now report these results in Figure 6 in the attached PDF. We find both quantitatively (left) and qualitatively (right) that the localization improves very quickly. For example, for ResNet-18, the localization improves from 21.5 to 82.0 after just one epoch of fine-tuning, which is close to 86.9 achieved by the B-cos model trained from scratch. On the other hand, modifications to the model architecture at the beginning of the B-cosification process lead to an initial drop in accuracy, from 69.8 for the standard model to 59.9 after one epoch. However, we recover the accuracy much quicker than training from scratch, with the B-cosified model needing just 29 epochs to reach the accuracy of a B-cos model trained from scratch for 90 epochs.
- **How is bias handled?** Apologies if this was not sufficiently clear. In the ablation studies, we remove the bias when varying B since that is the setup in B-cos as well as our B-cosification process (Table 3); for an additional discussion, also see ‘Completeness/bias’ in answer to reviewer Z7HS.
- **Performance of B-cosified models in Tables 2-3:** The baseline in Tables 2-3 refers to the standard and B-cos models trained for 90 epochs. During B-cosification, we start from the 90 epoch trained standard model and further fine-tune, which leads to performance gains. For a fairer comparison, we also report results when fine-tuning the baselines for an additional 90 epochs in Table 8 (blocks 3-4) in the global response PDF, and find that the B-cosified models perform similarly or better, while also being interpretable.
- **Speedup epochs:** In Table 4, we always fine tune for 90 epochs, however, the speedup column shows the point at which the B-cosified model reaches the performance of the corresponding B-cos model when trained from scratch. We now also report the epoch number at which this ‘overtake’ happens at in Table 8 ($t_\text{ovt}$) in global response PDF, e.g. for ResNet-18, we find that the B-cosified model reaches B-cos performance with just 29 epochs of fine-tuning.
- **ViT performance:** Please note that in comparison to many other post-hoc explanation methods, the localisation scores of both B-cos and B-cosified ViTs are relatively high, differing only by a few percentage points. We will analyse and discuss (e.g., impact of GELU) potential reasons for the differences between the ViT models in more detail in the final version.
- **Missing reference:** Thank you for pointing this out, we will correct the missing reference for the final version.
---
Rebuttal Comment 1.1:
Title: Reply to Author Response
Comment: I would like to thank the author for providing the additional clarifications as well as the additional results on how the localization scores change during fine-tuning. After reading the rebuttal as well as the other reviewer's comments, I will retain my 'accept’ score and believe that this will make a valuable contribution to the conference. | Summary: This work targets the interpretability of modern architectures via a process that the authors call B-cosification. Contrary to the original B-cos networks that are trained from scratch by architecturally enforcing alignment between inputs and weights, B-cosification constitutes a post-hoc method, aiming to convert/finetune exististing pre-trained models towards interpretability.
Strengths: This work aims to transform existing pre-trained models to "inherently interpretable" ones via a post-hoc b-cosification method. Contrary to other post-hoc approaches, this finetuning allows for essentially altering the properties of the network, bypassing some of the criticisms of post-hoc approaches.
Overall, this work constitutes an interesting investigation of how to transform standard architectures to B-cos like ones.
Weaknesses: Even though this is an interesting approach, its novelty is limited. The authors explore how to finetune the model to match the b-cos alignment pressure, by exploring some architectural changes like normalisation and bias decrease.
One of the main issues the proposed process is the complexity of the finetuning. Some of the ResNet architectures considered in the main text are trained for around 90 epochs, e.g., on ImageNet. That is a common number of epochs to train these models from scratch. Can the authors provide a summary/visualisation of the acc/localization with respect to the number of epochs?
What is the behaviour of the base models when finetuned the same way as in the proposed b-cosification process (without b-cos)?
Can the authors add another column to Table 4, denoting the accuracy of the standard trained from scratch b-cos networks?
The text needs to be proofread. There are several sentences that lack clarity, typos and other issues, e.g., line 141 "see also Fig. XX, whereas conventional models use 3 channels".
The authors are encouraged to remove the spacing altercations that do not follow the format of the conference.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the Weaknesses section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have a limitation paragraph in the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback on our work. We have carefully considered your comments and would like to address each point you raised.
- **Limited novelty**: To the best of our knowledge, we are the first to investigate how to transform existing uninterpretable models into inherently interpretable B-cos models. In particular, we find that we can effectively B-cosify existing models at a fraction of the cost of training them from scratch, which has the potential to make interpretable models far more accessible than they currently are. In order to be able to do this, we discuss in detail how the models differ and devise specific solutions to address those differences. In particular, we first show that it is possible to cast conventional models in the same framework as B-cos models without changing the function that they represent (i.e., we adapt the model $f_{org}$ to a B-cos version $f_{Bcos}$ with B=1 whilst maintaining functional equivalence, such that $f_{org}(x) = f_{Bcos}(x) $ $\forall$ $x$ ). Specifically, in this context we address the input encoding and normalization, activation functions, weight normalization, and incorporate the pre-trained weights. Thereafter, to increase interpretability, we increase the parameter B and decrease the models’ biases. This, of course, functionally changes the model and hence requires fine-tuning as a final step to adapt and recover the model’s performance to the new changes. We are unaware of previous work that explored this and would be grateful if the reviewer could point us to relevant related work that would limit the novelty of our contribution.
- **Complexity of fine-tuning**: We indeed train the models for a commonly used number of epochs (90) - however, to understand the efficiency gains by using pre-trained models, apart from accuracy and localisation scores, we also report the speed-up for reaching the same accuracy as randomly initialized B-cos models in the results (Table 4). We are very grateful for the reviewer’s suggestion to provide a visualization of accuracy and localisation scores over the training epochs, which we added in the attached PDF in Figure 6, as this helps clearly see how quickly the models’ explanations improve in their localisation (**localisation scores > 80% after the first epoch**). While the accuracy generally improves over the course of 90 epochs, we find that they tend to reach the performance of B-cos models trained from scratch very quickly (e.g. **ResNet-18 reaches the performance of the original B-cos models within 29 epochs**). This is our primary contribution.
- **Fine-tuning base models**: In Table 8 (column blocks 3-4) in the PDF attached, we report the results for the standard pre-trained and B-cos pre-trained models fine-tuned further for 90 epochs in the same way as proposed in the B-cosification process for two CNNs and two ViTs models. We find that even after fine-tuning the pre-trained models (standard and B-cos) further, B-cosified models perform competitively with conventional DNNs (column block 3) and outperform the B-cos DNNs (column block 4) consistent with the original findings mentioned in the main submission. We will provide full results in our revision.
- **Accuracy of B-cos trained from scratch**: In Table 8 (column 5) in the global response PDF, we now report this additional column for two CNNs and two ViTs models. We will provide full results in our revision.
- **Writing fixes**: Thank you for pointing these out; we will make the changes in our revision.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their thorough rebuttal to all the reviewers' questions. Having carefully read the clarifications and the other comments, I decided to increase my score. | Summary: The authors discuss the B-cosification of a pre-trained model.
The B-cosification involves changing the operations performed in the linear layers to one involving a cosine. Not all properties of B-cos models are in the end satisfied, yet performance in terms of accuracy and localization are on par with a fully trained B-cos model.
Strengths: - The method contributes to the current effort to turn pre-trained models into explainable ones without full retraining.
- The performance dropped are limited both against the original model and the equivalent B-cos model
Weaknesses: - The method still requires a full fine-tuning of the weights.
- The experimental setting is not very clear.
- The presentation is not always clear and sometimes verbose.
Technical Quality: 3
Clarity: 2
Questions for Authors: * P1 L33: which are increasingly popular—can cost millions of dollars.
What if I am not using dollars? Maybe time would be a better metric? Or energy?
* P2 L38: the recently proposed B-cos Networks.
Until here, B-cos was never defined. Since it is not a contribution of this paper, would you have a citation to guide the reader?
* P2 L42: What does "functionally equivalent" mean? What do you mean with "alignment pressure"?
Maybe instead of introducing these vague notions, why not use the opportunity given by this paragraph to explicitly say what B-cos consists of, namely (up to my understanding) replacing the linear operations of the hidden layers by a cosine-based function parametered with B? You can go further and say that for B=1, the operation is ("is" and not "is equivalent to") the classic linear matrix multiplication. "Linear" implies the absence of a bias term. Otherwise, that would be the "classic affine matrix multiplication". Nevertheless, one can stress this further.
* P2 L45: "significantly more interpretable explanations (Fig 1)"
What do you mean by "more interpretable explanations"? What I see in the third row of Fig1 is kind of a mix between the second row and an edge detector or an Integrated Gradient saliency map. Is capturing the shape or details of the "important part of the image" making an explanation more interpretable?
Following this discussion, the result of GradCam+CLIP on the first example is preferable from my point of view because one can see that the model uses the help of the grass to actually predict a sheep. The same applies to the "[...] photo of a boat."
* P2 L47: "On supervised settings, we find that B-cosified models often outperform"
How is performance measured here? Classification accuracy or caption prediction?
* P2 L53-57: I am a bit confused when you say "design[ing] inherently interpretable models". I do not understand your contribution as a new design but a conversion. This is a valuable and efficient contribution that aligns with current concerns, be it ecological, time efficient, or simply democratizing ML for the majority who do not have access to huge computational resources.
This work fits into the recent line of research investigating how to alter the architecture of trained models to make them "interpretable." Why not stress this aspect more?
* P2 L65-: I am unsure if the claimed results are that important to count as contributions.
Again, what you mean by "significantly interpretable" is not clear. Could you please clarify?
* Sec2: If you choose to attach your contribution to the line of "converting trained models" (you don't have to) to "interpretable ones", I suggest some works:
* Stalder, S., Perraudin, N., Achanta, R., Perez-Cruz, F., & Volpi, M. (2022). What you see is what you classify: Black box attributions. Advances in Neural Information Processing Systems, 35, 84-94.
* Aytekin, C. (2022). Neural networks are decision trees. arXiv preprint arXiv:2210.05189.
* Gautam, S., Boubekki, A., Höhne, M. M., & Kampffmeyer, M. (2024) Prototypical Self-Explainable Models Without Re-training. Transactions on Machine Learning Research.
* P3 L111: "Many common DNNs consist of a series of blocks of linear layers followed by non-linear ReLU activations [29], and are thus piece-wise linear functions1 : i.e., for every input x, they effectively compute a linear transformation of that input: y(x) = W(x)x + b(x)"
What are W and b? I assume the model's input and output are x and y, respetively.
If it is "effectively [...] a linear transformation", why are W and b functions of x?
* P3 L113: "dynamic linear"
Why do you need to introduce this notion? What is dynamic in an affine matrix multiplication?
* P3 L116: "This linear mapping W(x)". The same goes here: W cannot be a linear mapping/function if we are talking about "a piece-wise linear model".
* P3 L120: Why does a "complete explanation" requires "y(x)=W(x)x"? Two sentences later, you actually contradict this claim.
I fail to grasp why having a bias is so problematic if most of the backward operations are based on the gradient. Could you clarify?
* P4 L123: Can you clearly define "alignment pressure" and why it is necessary to be introduced?
From this paragraph, I understand that the conversion from line (B=1) to (B>1)-cos model has already been investigated in [6] and [7]. Has it?
If it has been, this jeopardizes the relevance of your main contribution.
* P4 Eq1: How do you compute $cos(x,w)$?
I understand that $x$ is a vector, $W$ is a matrix, and $\times$ is the element-wise multiplication.
* P4 L127: Why do we need the notion of "dynamic linear"?
If it has to do with W being a function of x: if B=1, W does not depend on x.
* P4 L141: Missing reference with Fix XX.
* P4 L146: "In particular, we show that a conventional model can be framed as a functionally equivalent B-cos model as in [7] with B =1, which additionally employs bias terms. Only upon modifying these two aspects, i.e., biases and B, does the model need to be fine-tuned to adapt the weights to those changes."
These two sentences are not clear.
* P4 L150: "As mentioned in Sec. 3.1, B-cos models use input representations with six color channels"
This was not mentioned in Sec 3.1. This seems to be an atavism from an older version of the paper. Please fix.
* P4 "Input Encoding and Normalisation": The operation is straightforward: B-cos models requires 6 channels: so inputs need to be transformed according to Eq2 and the first layer of the B-cosified models is discarded and replaced by one with 6 channels (implying a full training thereof).
* P5 "Activation Functions." You start the section by saying that activations are unnecessary, yet you introduce them. Why?
* P5 "...unit norm weights, ..., which the authors motivated by the fact that the only way any given neuron can achieve its maximal output is by increasing the weight-input alignment, which in turns[typo] leads to the improvements of the explanations."
What is "weight-input alignment"? I fail to see why normalized weights improves the explanations.
You could streamline this section by leaving the justification of the B-cos meaningfulness to an introductory paragraph and focusing on the changes required by your approach. There is no need to justify or make claims about the benefit of B-cos since it is not one of your contributions here.
* P5 Section 3.2.2
None of the tables present statistical tests, which makes it difficult to draw any conclusion, especially given that the results are quite similar from one setting to another.
* How many times the experiments were repeated? What are the standard deviation? Which result is significantly better?
* What is the localization score?
* P6 L204: "setting B = 2 and then fine-tuning, yields performance that is on par with
learnable B parameters,"
If this holds, it is also on par with Linear B 90 epochs.
* P6 Table 4: The increments are quite small. Have you considered running a t-test to identify significant differences?
Is the "speedup" with respect to the fully trained B-cos model? What about the "speedup" with respect to the vanilla models with classic matrix multiplication?
* P7 L245: "Specifically, we find that averaged across architectures, B-cosified models outperform B-cos models trained from scratch by 2.31 pp, with an average training speedup (to match performance) of 2.96x."
This is an important result. Why not highlight it in a separate section/ablation study?
* P8 Interpretability: In the text you refer to paper [6] while in the caption of Fig2 you refer to [7] (These papers share essentially the same figures). Fig2 would be better positioned closer to its text.
Finally, interpretability is defined in paper [6] (or rather in paper [6] of paper [6]) as the performance at the "grid pointing game". If I understand it, it is related to the sum of the GCAM attribution to each class of the grid cell. My understanding of gradient-based explainable methods is that the scale of the output saliency maps is inconsistent. Is it valid? If yes, how does it affect the score returned by the Pointing Game? How robust is this score against possible spurious relationships between grid cells? How does performing well in this game relate to the interpretability of the model's explanations?
* P8 Fig4: What are Natural/Specialized/Structured Data?
* P9 L284: "We find that the B-cosified models significantly outperform the Text2Concept approach and achieve accuracies that are more similar to the original CLIP’s zeroshot and linear probing accuracies."
Text2Concept is a quite different approach. In the original paper thereof, they report performance close to CLIP's. How do you explain this difference? What is different in your setting? Do you train supervised or self-supervised?
* Evaluating Model Interpretability.
In the image classification case, you compute the explanation based on a single label; the equivalent case here is thus to back-propagate using the
Isn't the average similarity equivalent to the similarity with an out-of-distribution point in the text embedding?
Why did you choose p=7 and p=19?
Overall I did not quite understand this section. I am not very familiar with CLIP, so it is not easy for me to follow what is happening.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed feedback and the suggestions to improve clarity; we will incorporate them as well as the following answers in our revision.
- **Full fine-tuning**: The research question we examine is how to leverage existing pre-trained uninterpretable models to make training more interpretable ones more cost-effective. To endow pre-trained models with _inherent interpretability_, it is indeed inherently necessary to fine-tune the full models, as we aim to fundamentally change how they operate - however, we find that this approach is significantly cheaper than training the corresponding interpretable models from scratch (up to 9 times speed-up, Tab. 4).
- **Multiple runs**: See global response.
- **Tab 4 small increments**: The increments in accuracy are indeed small but also not our core focus. Instead, we show that it is possible to _maintain the original models’ accuracy_ whilst significantly increasing their interpretability, at a significantly lower cost than when training from scratch as in [6].
- **B-cos more interpretable**: As reported in [6,7], B-cos explanations **explain the full model from input to output** and are significantly more localized, and allow to visualize the important features in color (Figs. 1,7 in [6]), while also allowing to understand misclassifications (Fig 9, [6]), making it easier for humans to see understand the model's predictions. Please note that in the examples in Fig. 1, the rows correspond to different models, and so are not directly comparable. I.e., the CLIP model might indeed use the grass to predict the sheep, while the B-cosified CLIP model might not.
- **Comparing to Text2Concept**: [28] shows that a linear transform can map an arbitrary model's features to CLIP features. Hence, training such a transform on an existing B-cos model serves as a good baseline for B-cosification. We follow [28] and use ImageNet to learn the linear map. However, other parameters are different: we use a B-cos encoder instead of a standard model, a CLIP ResNet-50 instead of ViT-B/16, and evaluate across a broader range of datasets (Fig. 4). Understanding the performance difference requires further investigation, but is orthogonal to our current work.
- **Dynamic linearity**: As discussed in prior work (cf. [18]), a linear layer followed by ReLU or MaxOut is ‘piece-wise linear’: for every input x, the prediction is effectively computed via a linear transformation of x, with the choice of the linear matrix W depending on x. Dynamic linearity (cf. [6, 7]) is a generalization of this concept, making the dependence of the linear matrices on the input more explicit. In particular, ‘dynamic linear models’ compute their output via an input-dependent linear transformation of the input y(x) = W(x) x. Finally, we agree with the reviewer that when bias terms are employed, “piecewise affine” is more accurate in contrast to commonly used nomenclature; given that the bias can be modeled via an additional input dimension, however, we opted to stick to standard nomenclature.
- **Completeness/bias**: The importance of biases has been discussed in prior work: [48] discusses how gradient-based explanations tend to neglect important contributions from biases (cf Eq. 1, [48]), which have been shown to play a significant role in the models’ outputs (Fig. 13 in [7]); completeness further constitutes one of the foundational axioms of IntGrad [49]. We will revise to better reflect these connections.
- **Novelty of conversion**: No, while [6] discussed and emphasized the differences between B-cos and conventional networks, we leverage the similarities and show that the existing weights of pre-trained DNNs can be used to efficiently obtain performant and interpretable DNNs, despite e.g. using a different number of input channels (3 vs. 6) and activation functions.
- **Input encoding**: No, we do not discard the first convolutional layer and train a new one from scratch; instead, as part of our contributions, we show how to construct a first layer and compute the corresponding weight that takes six channels as input and is nonetheless functionally equivalent to the original layer **with no additional training**.
- **Activation functions**: We keep ReLU since (i) we want to at first maintain functional equivalence with the original model, (ii) using ReLU does not affect the dynamic linearity of B-cos, (iii) the pre-trained weights were trained with them, so using ReLU keeps us closer to the original models.
- **Normalized weights**: Using normalized weights ensures that the output can be maximized only if the input and weight vectors align, i.e. if the cosine term in the dot product between them is high, see [6]. This helps interpretability, since weights being aligned to inputs better highlights the most important regions.
- **Localization score**: It refers to the GridPG metric used in [6,7].
- **Robustness of GridPG**: GridPG [6, 7] computes the ratio of attributions in a grid cell to the total attributions, and so is invariant to scaling To reduce the risk of spurious relationships, the metric uses grids where each image is of a distinct class and is classified correctly with high confidence by the original model, implying that it is unambiguous to the model.
- **CLIP Interpretability**: We show p=7 and p=19 for illustrative purposes, the full trend across p is shown in Fig 5a. We also provide a more detailed discussion on the CLIP experiments in our response to reviewer JC7z.
- **Natural/Specialized/Structured data**: We follow the CLIP benchmark [26] for zero-shot and linear probing results (Fig. 4) and categorize datasets based on the images they contain.
- Sec 3.1: We do mention it, see L139.
- **"Outperform" performance measure**: We report accuracy for image classification, and zero-shot performance for CLIP models.
- **Citations**: Thank you, we will add them.
- **Eq1**: We use row-wise dot products, following prior work (cf. Eq. 9 in [6]). We will revise for clarity.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their detailed answer.
I still have two comments:
- *Dynamic linearity* Your explanations comfort me in my understanding that it means "piece-wise linear". So I still don't understand why we need a new word to say that.
- *Normalized weights* I failed to find where the advantage of it is studied in [6], except in the limitation because of the computation overhead, and the visualization paragraph.
To maximize a cos, only the angle is important. Not the norm (I guess that's why $w$ doesn't have a hat in the cos of Eq3). the matrix $W$ appears in Eq3 outside the cos, so the norms of its rows influence the norm of the output. Again I don't see why it is problem, or where is the advantage. If it is just for visualization, I wonder if normalizing before visualization is not enough?
I follow my fellow reviewers and increase my score to accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comments. For the remaining questions, we elaborate below:
**Dynamic Linearity:** Apologies for not making it sufficiently clear, we will make sure to better clarify in the final paper. Dynamic linearity and piece-wise linearity are not the same. In particular, dynamic linearity describes the general notion of linearly transforming the input with an input-dependent matrix $ \mathbf W(x)$, whereas piece-wise linearity refers to one particular case of dynamic linearity. E.g., in B-cos transformations, the rows of a fixed matrix $\mathbf W_{static}$ with trainable parameters are scaled by an additional cosine factor (compare Eq. 11 in [6]) to give $ \mathbf W_i( \mathbf x) = \cos( \mathbf W_i, \mathbf x) \times \mathbf W_{static, i}$ for row $i$. In contrast, in piece-wise linear ReLU networks, $ \mathbf W_i( \mathbf x)$ is given as $ \mathbf W_i( \mathbf x) = [\mathbf {0}$ if $\cos( \mathbf x, \mathbf W_{i, static}) < 0$ else $ \mathbf W_i ]$. While both of them are dynamic linear, only the latter is piece-wise linear (i.e., the function consists of **two linear pieces**, one for $\cos( \mathbf x, \mathbf W_{i, static}) < 0$ and one for $\cos( \mathbf x, \mathbf W_{i, static}) \geq 0$).
Following [6], we believe this to be a useful distinction since for any dynamic linear model, $\mathbf W(\mathbf x)$ can be viewed as a faithful summary of the contribution of each component of $\mathbf x$ to the output $\mathbf y$. For standard piece-wise linear models without bias, such a $\mathbf W(\mathbf x)$ can be obtained by computing the gradients with respect to input, as done in X-DNNs [20]. However, X-DNN explanations are not as human interpretable as compared to B-cos because of lack of alignment pressure. This is addressed by B-cos models [6], which with their B-cos transform obtain dynamic linearity differently, without piece-wise linearity, as discussed in Sec 3.2.1 in [6].
**Normalized Weights:** Apologies for the overly short answer (due to space constraints) with respect to the question on weight normalisation, we gladly take this opportunity to elaborate in more detail. With our above answer, we tried to clarify on the motivation on using normalised weights as given in [6] — specifically, due to weight normalisation, a "B-cos neuron" (as defined by Eq. 3 in [6]) is bounded in its output strength and will produce its highest output if and only if w and x are co-linear, i.e., have maximal cosine-similarity; this property has been referred to as **weight-input alignment**. While this argument seems intuitive when considering B-cos layers in isolation, when combined with a subsequent normalisation layer (e.g., batch- or layer-normalisation), the model will become invariant to the usage of weight normalisation, as we describe in lines 170-183 in our submission. As a result, we find that B-cos networks without weight normalisation show similar properties as those reported in [6] and we thus agree with the reviewer that weight normalisation does indeed not seem necessary. We will revise our manuscript to make this point clearer.
We unfortunately did not fully understand the question about 'Eq. 3' in the comment — we would be grateful if the reviewer could clarify so that we can fully resolve the reviewer's concerns. Specifically, $w$ does have a hat in the cosine term in Eq. 3 of [6]. | Rebuttal 1:
Rebuttal: We thank the reviewers for their detailed comments and constructive feedback. We are encouraged to find that the reviewers appreciate that our proposed approach allows us to convert pre-trained models to be interpretable whilst maintaining performance (Z7HS, DdYE). We are further encouraged to find that the reviewers find our work to constitute an interesting research direction (71N9, HC7z), our submission to be well structured (DdYE, JC7z), and to have a broad experimental evaluation (JC7z, DdYE).
While we are happy to find our submission to be generally very positively received, we of course also highly appreciate the various suggestions by the reviewers to improve our work. In the following, we address some of the concerns shared by multiple reviewers, with more detailed responses to be found in the respective rebuttal sections:
- **Repeat experiments multiple times (Z7HS, JC7z):** We now provide results over three runs corresponding to Tables 2, 3, and 4 in Tables 5, 6, and 7, respectively, in the attached PDF. From Tables 5-6, we find that the conclusions in Section 3.2.2 remain unchanged — the simplest approach of directly setting B=2 and removing the bias at the beginning performs as well as more involved strategies such as learning B or decaying bias. Also, from Table 7, we find that the results regarding classification and interpretability performance, as in Section 4.1, are supported by repeated experiments.
- **Localization scores vs. epochs (71N9, DdYE):** Thank you for the suggestion. We now report these results in Figure 6 in the attached PDF. We find both quantitatively (left) and qualitatively (right) that the localization improves very quickly. For example, for ResNet-18, the localization improves from 21.5 to 82.0 after just one epoch of fine-tuning, which is close to 86.9 achieved by the B-cos model trained from scratch. On the other hand, modifications to the model architecture at the beginning of the B-cosification process lead to an initial drop in accuracy, from 69.8 for the standard model to 59.9 after one epoch. However, we recover the accuracy much quicker than training from scratch, with the B-cosified model needing just 29 epochs to reach the accuracy of a B-cos model trained from scratch for 90 epochs.
- Finally, we will carefully proofread the final version of the manuscript and incorporate the valuable suggestions regarding the clarity of the writing and experimental evaluation given by the reviewers.
Pdf: /pdf/c9726acf69436d673e6b55863dc3309113e0c45c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
HHD-GP: Incorporating Helmholtz-Hodge Decomposition into Gaussian Processes for Learning Dynamical Systems | Accept (poster) | Summary: This paper proposes a novel dimensionality reduction method.
The method relies on the Helmholz-Hodge decomposition to identify the dynamical system through a decomposition into a curl-free and a divergence-free part.
Furthermore, the method introduces a way to incorporate priors that constrain the identified vector fields based on symmetry constraints.
This method is applied to various toy models (mass-spring system, pendulum, Chua circuit) and to real world data (ocean current fields).
Strengths: This paper addresses an important problem in dynamical systems modeling.
This paper is the first to introduce the use of symmetries to construct unique GP models of dynamics that are decomposed into div- and curl-free vector fields.
This method is shown to be consistent in terms of convergence to the true parameter values as the amount of data increases.
The paper discusses the computational complexity of the proposed method.
The paper demonstrates that the presented framework is outperforms baselines on various toy models
Furthermore, the method is shown to be able to handle noisy data.
Finally, the method is applied to a real world dataset, which indicates practical significance.
Weaknesses: The overall applicability of this method is questionable due to not completely addressing the identifiability issue of the decomposition. The issue is only resolved for systems with known symmetries, see also Questions.
The paper mentions that the accurate prediction of energy demonstrates the interpretability of learned div-free features.
However, this is not well-explained. The paper claims that the curl- and div-free components are physically meaningful, but this should be better justified.
The paper is missing comparison to other existing approaches for modeling physical systems. See for example:
Guo, Q., Mandal, M. K., \& Li, M. Y. (2005). Efficient Hodge–Helmholtz decomposition of motion fields. Pattern Recognition Letters, 26(4), 493-501.
Compare to other dynamical systems reconstruction methods:
- Hess, F., Monfared, Z., Brenner, M., \& Durstewitz, D. (2023). Generalized teacher forcing for learning chaotic dynamics. arXiv preprint arXiv:2306.04406.
-I. Mac\^edo and R. Castro, “Learning Divergence-Free and Curl-Free Vector Fields with Matrix-Valued Kernels,” technical report, Instituto de Matema ́tica Pura e Aplicada, Rio de Janeiro, Brazil, 2010.
Finally, there is no code provided to show the method's working and not all hyperparameters are described (see also Questions).
Technical Quality: 3
Clarity: 3
Questions for Authors: What is the relevance of $\mathcal{G}$-equivariance and uniqueness/identifiability?
Appendix H: Uniqueness and Symmetries
- How do you know what priors to choose? Isn't an agnostic method better?
- How do you know which symmetries to enforce?
Could it be more fruitful to restrict yourself to a compact manifold with boundary so that the decomposition is unique?
# Implementation details
- How were the kernel parameters and noise variances initialized? Does training depend on initialization?
(Here you mention that these are hyperparameters, but typically one refers to non trained parameters as hyperparamters.)
-What were the ADAM optimizer paramters used for training?
# Choice of evaluation metrics
-Why did you mostly focus on performance of the vector field? If the point is learning the dynamics, metrics that look at the dynamics directly would be preferable.
Why not prediction of the future state/trajectory? Why prediction only measured through VPT?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors should comment on limitations and societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer KheP
We are very grateful for your valuable comments and acknowledgement of our main contributions. We sincerely appreciate your time and effort in reviewing our paper. Here is our response to your comments.
---
> **Question 1**: How do you know what priors to choose? Isn't an agnostic method better? How do you know which symmetries to enforce? Could it be more fruitful to restrict yourself to a compact manifold with boundary so that the decomposition is unique?
**Response**:
The current work assumes that the prior of symmetries is directly available, and focuses on developing the method of incorperating symmetries into a GP model. This assumption is reasonable for many dynamical systems, but we totally agree with your concern about the availability of symmetry. Although physical systems always adhere to symmetries, the symmetries of many complex phenomena are often considered to be partially known or unknown. Now that our model has been demonstrated to perform well when system symmetry is available, we look forward to developing a method for symmetry-agnostic scenarios, i.e. enhancing our model with the capabilities of automatedly learning symmetries. Several recent studies have explored the feasibility of learning hidden symmetries from data [1, 2]. Since our model is developed in a probabilistic framework, another promising direction is to incorporate approximate symmetry instead. Studying the relationship between approximate symmetry and model identifiability is definitely an interesting and promising but unexplored research problem.
We also agree with you that enforcing boundary condition is another effective way to make the decomposition unique. Our method of enforcing symmetry can also be used to impose the boundary condition, which is demonstrated by our experiments on learning ocean current fields (Section 6.3 of the manuscript). In this experiment, mirror symmetry was incorporated into the div-free kernel to enforce the divergence-free vector field to be parallel to the domain boundary, which is a sufficient condition for the decomposition to be unique [3]. The experimental results in Section 6.3 of the manuscript demonstrate that enforcing the boundary condition allows our model to provide more realistic ocean current predictions and divergence identification than the baselines.
> **Question 2**: How were the kernel parameters and noise variances initialized? Does training depend on initialization? What were the ADAM optimizer paramters used for training?
**Response**: For the experiments on simulated systems (damped mass-spring system, damped pendulum, and Chua circuit), we set the noise level to 0.05 (lines 275 and 355 in the main text). To explore the impact of noise level on model performance, in Appendix J.5 we provide experimental results with increasing noise in the training data (0.01, 0.05, 0.1, 0.2). For the experiments on a realistic system (ocean current field), we did not manually add noise to the training data since we used a realistic dataset. The kernel parameters are initialized randomly from a range of [0.1, 10]. In Table 1 of the manuscript we reported experimental results averaged over 10 independent experiments performed by resampling the kernel initial parameters, from which we observed no dependence of training on the initial parameters. As discussed in line 802, the GP-based models were trained by the ADAM optimizer, with a learning rate of 0.01 for 3000 gradient steps, and other parameters are the default values in pytorch.
> **Question 3**: Why did you mostly focus on performance of the vector field? If the point is learning the dynamics, metrics that look at the dynamics directly would be preferable. Why not prediction of the future state/trajectory? Why prediction only measured through VPT?
**Response**: In our experiments, we evaluate the performance of the models in predicting both vector fields and state trajectories, with the former measured by the root mean squared error (RMSE) and the latter measured by the valid prediction time (VPT). The reason why we chose VPT is that many recent studies in learning dynamical systems [4,5,6] suggested that the RMSEs of state trajectories over long time horizons can be misleading indicators. As an example in the pendulum dataset, a trajectory remaining stationary at its initial position may have lower RMSE compared to a trajectory recovering the oscillatory behavior correctly but having a slight shift in angular velocity. Therefore, VPT has become a popular metric for measuring a model's ability to predict state trajectories.
---
Thank you again for the constructive comments. We hope these explanations could address your concerns. Any further questions or suggestions would be greatly appreciated.
---
## References
[1] Liu, Ziming, and Max Tegmark. "Machine learning hidden symmetries." Physical Review Letters 128.18 (2022): 180201.
[2] Desai, Krish, Benjamin Nachman, and Jesse Thaler. "Symmetry discovery with deep learning." Physical Review D 105.9 (2022): 096031.
[3] Bhatia, Harsh, et al. "The Helmholtz-Hodge decomposition—a survey." IEEE Transactions on visualization and computer graphics 19.8 (2012): 1386-1404.
[4] Matsubara, Takashi, and Takaharu Yaguchi. "Finde: Neural differential equations for finding and preserving invariant quantities." arXiv preprint arXiv:2210.00272 (2022).
[5] Jin, Pengzhan, et al. "SympNets: Intrinsic structure-preserving symplectic networks for identifying Hamiltonian systems." Neural Networks 132 (2020): 166-179.
[6] Vlachas, Pantelis-Rafail, et al. "Backpropagation algorithms and reservoir computing in recurrent neural networks for the forecasting of complex spatiotemporal dynamics." Neural Networks 126 (2020): 191-217.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal. However, I still believe that the paper would benefit from comparisons to relevant baselines (besides GPs and HNNs). Specifically, including a comparison to other dynamical systems reconstruction methods would provide valuable context and help to clarify the advantages of your approach. Such comparisons could significantly strengthen the paper by demonstrating its effectiveness relative to existing techniques.
I will keep my score as is. | Summary: The authors tackle the problem of modelling dynamical systems in scenarios where it may not be possible to straightforward to determine the exact form of the ODEs governing the system and optimise their parameters directly. Whilst physics-informed Bayesian models which learn divergence-free vector fields are capable of extrapolating dynamics effectively in an interpretable manner, they cannot represent certain common real-world behaviours such as dissipation. Proposed in this work is a Gaussian process (GP) which combines the utility of the div-free vector field with a curl-free vector field, an approach which is theoretically motivated by the Helmholtz-Hodge decomposition which states that many vector fields may be decomposed into a sum of a div-free and curl-free field. This approach (termed the SPHHD-GP) allows for a much wider range of dynamical systems to be modelled without strict assumptions on the form of the system, and also permits a symmetry preserving representation which allows for enhanced interpretability of the learned dynamics. Experimental results are presented which show the efficacy of the approach on tasks such as learning Hamiltonian dynamics from noisy data, learning chaotic dynamics and modelling ocean currents.
Strengths: - The approach taken is novel, but not unnecessarily complex. The theoretical insight regarding the Helmholtz-Hodge decomposition is combined with a basic building block of GP-based modelling (additive combination of GPs) in order to yield a technique which consistently outperforms all the relevant baselines presented by the authors.
- The quality of the work carried out in general is very good; the experiments are thorough and well documented overall. The Hamiltonian dynamics experiments are effectively a standard benchmark in this area, the chaotic system presents a further challenge for the model, and the ocean current experiment is a real-world example of how this approach may be useful in a practical setting. The performance of the SPHHD-GP across all of these experiments is impressive.
- The clarity of presentation is also very good, and I believe that the work would be accessible both to researchers from a dynamical systems background without in-depth knowledge of GPs, and vice versa. The narrative of the paper is well crafted, from problem setting to initial formulation of the model, onto discussion of limitations regarding identifiability, and finally the implementation of symmetry-based constraints which address these limitations.
Weaknesses: - From the text it appears the ocean current experiment was performed using a sparse GP implementation, whereas the rest of the experiments were performed using exact inference, is this correct? This is only mentioned very briefly in passing and few details given; I know the details of the sparse GP formulation are probably a given to readers with a background in the area, but I think adding some detail to the appendix on how this was implemented would be useful to many readers.
- Several of the derivations, proofs and discussions in the appendix are of sufficient relevance that it would be great if they could be included in the main text, however I’m obviously aware that the authors are constrained by the page limit so I wouldn’t expect any alterations based on this. I think the amount of content more just speaks to the fact the work has been carried out in a thorough manner.
Technical Quality: 4
Clarity: 4
Questions for Authors: - Please address the point mentioned in the weaknesses section regarding adding some brief details about the sparse implementation into the appendix and make some reference to this in the main text (maybe where the sparse GP framework is mentioned in Section 6.3).
- Related to the above point, could you speak to how the sparse formulation performs empirically compared to the exact approach on problems for which both are computationally feasible (i.e., sections 6.1 and 6.2)?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors address the main limitation of the model in Appendix I which is the considerable computational complexity of the model, $\mathcal{O}(m^3 n^3)$, although this approach can also be implemented using sparse GPs instead to reduce this. The authors also briefly mention in Appendix K how the model has a wide range of applications, but from a societal impact perspective it is important that predictions and uncertainty estimates are rigorously evaluated. As I don’t think the work has significant potential for negative impact, I think this is adequate.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer X44d
Thank you very much for your thoughtful review and constructive comments. We are very pleased that you recognize the importance and contribution of our work. We have carefully gone through your comments and suggestions, and we believe addressing these points in the manuscript would indeed make the paper better.
---
> **Question 1**: Please address the point mentioned in the weaknesses section regarding adding some brief details about the sparse implementation into the appendix and make some reference to this in the main text (maybe where the sparse GP framework is mentioned in Section 6.3).
**Response**: Following your suggestion, we will add some brief details about the implementation of the sparse GP into the appendix. The key idea of this sparse GP framework is to introduce pseudo-inputs, which form a smaller set of data serving as a compact representation of the original data. These pseudo-inputs are not necessarily a subset of the training data but are optimized to capture the essential structure of the dataset. In our experiments in Section 6.3, 200 training data points are randomly selected as the initial pseudo-inputs, whose locations are learned jointly with kernel parameters through the ADAM optimizer. These pseudo-inputs greatly reduce the computational complexity while ensuring that the sparse GP model maintains a high prediction accuracy.
> **Question 2**: Related to the above point, could you speak to how the sparse formulation performs empirically compared to the exact approach on problems for which both are computationally feasible (i.e., sections 6.1 and 6.2)?
**Response**: Considering the computational feasibility, the experiments in Sections 6.1 and 6.2 were performed using exact inference. The sparse GP framework can achieve similar prediction accuracy to the exact inference as long as sufficient pseudo-inputs are employed.
---
Thank you again for the constructive comments. We hope these explanations could address your concerns. Any further questions or suggestions would be greatly appreciated.
---
Rebuttal Comment 1.1:
Title: Rebuttal response
Comment: Thank you very much to the authors for taking the time to clarify those points, I appreciate it's primarily details that will be apparent to people closely familiar with GPs, but I do think expanding on them will broaden the accessibility of the paper to those coming more from a physics background.
I keep my score unchanged and recommend acceptance. | Summary: The paper formulates a vector-valued GP model to infer unknown vector fields. The authors discuss how to impose symmetry-based constraints on the GP models to ensure physically meaningful decompositions. The paper provides theoretical proofs to support the construction of curl-free and divergence-free vector fields that preserve desired symmetries. The model's effectiveness is validated through experiments on various dynamical systems, including dissipative Hamiltonian dynamics and chaotic systems.
Strengths: - The curl-free and divergence-free kernel considering symmetry constraints is novel.
- The manuscript is generally easy to read.
- The model is evaluated on multiple dynamical systems.
Weaknesses: - There are missing citations for relevant existing methods.
- There is no discussion on computational complexity.
Technical Quality: 3
Clarity: 3
Questions for Authors: - In [R1], a Gaussian Process for dissipative Hamiltonian systems has been proposed. Can HHD-GP be considered a generalization of [R1]? What are the essential differences between SSGP and HHD-GP? While SPHHD-GP is clearly novel, it was unclear whether HHD-GP is new compared to [R1].
- Please add a discussion on computational complexity. Specifically, numerical integration is used to consider symmetry constraints, but could this be a computational bottleneck?
- In Figure 3, the error of HHD-GP appears to increase with the number of samples, which seems unnatural. Shouldn't this be analyzed in more detail?
[R1] Yusuke Tanaka, Tomoharu Iwata, Naonori Ueda, Symplectic Spectrum Gaussian Processes: Learning Hamiltonians from Noisy and Sparse Data, NeurIPS, 2022.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The differences between HHD-GP and existing methods (such as [R1]) should be thoroughly discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer w6Wk
Thank you very much for your constructive comments and for taking the time to review our manuscript. We've carefully read your comments and our response is as follows.
---
> **Question 1**: In [R1], a Gaussian Process for dissipative Hamiltonian systems has been proposed. Can HHD-GP be considered a generalization of [R1]? What are the essential differences between SSGP and HHD-GP? While SPHHD-GP is clearly novel, it was unclear whether HHD-GP is new compared to [R1].
**Response**: SSGP and our models (HHD-GP and SPHHD-GP) study the problem of learning dynamical systems from different perspectives and scopes, leading to essential differences in their construction of kernels. Specifically, we emphasize that our kernels are constructed from the perspective of satisfying certain differential invariants (either free of divergence or of curl), rather than from physically governing equations (e.g. the Hamiltonian equation used by SSGP). Therefore, HHD-GP has a wider scope of applicability, which is applicable to any physical system that can be described by a smooth vector field, but SSGP can only be used for dissipative Hamiltonian systems. For example, SSGP cannot be used to model the Chua circuit, a chaotic system used in our experiments, because this system cannot be described by the Hamiltonian equation. Hamiltonian vector fields are a specific type of divergence-free vector fields. From this perspective, HHD-GP can indeed be considered as a generalization of SSGP. Furthermore, although SSGP combines a Hamiltonian kernel with an additive dissipation kernel, it does not consider the non-identifibility problem prevalent in additive models, making the model struggle with learning system components from noisy data. In contrast, we solve the non-identifibility problem in HHD-GP by incorporating symmetries of underlying dynamics. We believe this is an original and valuable contribution to the field of machine learning for dynamical systems. Thank you for providing us with the opportunity to further elaborate on the contributions of our work, which we will include in the future manuscript as you suggest.
> **Question 2**: Please add a discussion on computational complexity. Specifically, numerical integration is used to consider symmetry constraints, but could this be a computational bottleneck?
**Response**: When the symmetry-preserving kernel admits no closed form, numerical integration is exploited to approximate the kernel and compute the covariance, whose computational complexity grows linearly with the number of discrete points. To investigate the impact of the number of discrete points on model performance, we performed experiments on learning with the kernel maintaining rotation symmetry approximated by varying amount of discrete rotations. The results are given in the table below, from which we can observe that the performance of the kernel approximation improves rapidly with increasing number of discrete rotations. Kernels with rotational symmetry were well approximated even with a small number of samples (e.g., n = 8), which ensures that the numerical integration could not be a computational bottleneck. The computational complexity of our method is dominated by the Cholesky decomposition to invert the covariance matrix, which grows cubically with the number of training data points and the input dimension.
| Number of rotation samples | $n=1$ | $n=4$ | $n=8$ | $n=36$ | $n=100$ |
|-----|-----|-----|-----|-----|-----|
| RMSE $(\times 10^{-2})$ | 5.76 | 3.12 | 1.96 | 1.74 | 1.67
> **Question 3**: In Figure 3, the error of HHD-GP appears to increase with the number of samples, which seems unnatural. Shouldn't this be analyzed in more detail?
**Response**: Figure 3 provides the experimental results for the RMSE of energy prediction as the number of training data increases, which demonstrate the non-identifiability problem suffered by HHD-GP. The covariance matrix equation ($K_* = K_{curl} + K_{div} + \\Sigma$, Eq.6 in the manuscript) shows that when the HHD-GP is used to learn the decomposition, the effects of different components can be treated as observation noise to each other. However, the non-uniqueness of HHD makes the HHD-GP model non-identifiable, making it struggle to distinguish between the different factors contributing to the data. Therefore, more uncertainty in model parameters is introduced with more data, leading to an increase in prediction error as the number of training data increases. This inability of the model parameters to converge to their true values as the amount of data increases indicates a lack of model consistency, which is a necessary condition for a model to be identifiable, as discussed from line 321 of the manuscript. We will add more details to explain the experimental results, following your suggestion.
---
Thank you again for the constructive comments. We hope these explanations could address your concerns. Any further questions or suggestions would be greatly appreciated.
---
Rebuttal 2:
Comment: Thank you for your reply. My concerns are now largely resolved, and the differences between the proposed model and SSGP are much clearer. I will raise my score from 5 to 6. | Summary: This paper derives a G-equivariant versions of the both curl (through the Haar Integration kernel) and divergence free (through GIM kernel) kernels. This is then used to define a prior over the Helmholtz decomposition and is identifiable wrt to relevant functions in the Euclidean group (translation, rotation, reflection etc). This is demonstrated on a range of examples.
Strengths: This is a well written, clear paper, that improves on recent work with convincing experiments.
Weaknesses: 1. The justification for how and why the curl and div free kernels should be invariant is not given until the experiments/the appendix. It would be nice if this was discussed in section 5.
2. The fact that the actual symmetries required is problem specific is not really clear in the main paper. In the appendix it is stated, but it would be good if this was made more explicit in the main paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is the proposed symmetry-preserving Helmholtz decomposition as expressive as non-symmetry preserving one? Ie any (sufficient smooth) field can be decomposed into curl and divergence free part. Is the same for the symmetric-preserving decomposition?
2. On line 328 it is stated that mirror symmetry is incorporated into the div-free kernel. Where is this kernel given?
3. Are there general symmetries that should be held or is this always going to be problem/application specific?
4. When the kernels do not admit a closed form how sensitive are they to approximations? And does this significantly affect the computational cost?
5. Is there any impact of using sparse GPs? Ie are the sparse models still guaranteed to be G-invariant?
6. Relevant GP citations:
o Equivariant Learning of Stochastic Fields: Gaussian Processes and Steerable Conditional Neural Processes, Holderrieth et al
o Vector-valued Gaussian Processes on Riemannian Manifolds via Gauge Equivariant Projected Kernels – Hutchinson et al
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer DJX1
We are very grateful for your valuable comments and acknowledgement of our main contributions. We greatly appreciate the time and effort you put into reviewing our paper. Below are our responses to your questions.
---
> **Question 1**: Is the proposed symmetry-preserving Helmholtz decomposition as expressive as non-symmetry preserving one? Ie any (sufficient smooth) field can be decomposed into curl and divergence free part. Is the same for the symmetric-preserving decomposition?
**Response**: The the Helmholtz-Hodge decomposition involves breaking down a vector field into its divergence-free and curl-free parts, which may not necessarily preserve the symmetries of the original vector field. This is because the decomposition process is based on mathematical properties (curl and divergence) rather than on the symmetries of the field. However, the symmetry-preserving Helmholtz-Hodge decomposition always exists for physical systems due to the physical relevance of their divergence-free and curl-free components. These physical relevance aligns well with the symmetries of underlying systems. The Helmholtz-Hodge decomposition respects these physical relevance, which naturally leads to the preservation of symmetries.
> **Question 2**: On line 328 it is stated that mirror symmetry is incorporated into the div-free kernel. Where is this kernel given?
**Response**: Do you mean the div-free kernel mentioned in line 378? We apologize that we did not provide details about this div-free kernel in the manuscript. To make our GP model identifiable, we constructed the div-free kernel by incorporating mirror symmetry with respect to the square boundary $[0, 1] \\times [0, 1]$. The symmetry group is given by
$$
G = \\left \\{
\\begin{bmatrix} 1 & 0 & 0\\\\ 0& 1 & 0\\\\ 0 & 0 & 1 \\end{bmatrix},
\\begin{bmatrix} 1 & 0 & 0\\\\ 0& -1 & 0\\\\ 0 & 0 & 1 \\end{bmatrix},
\\begin{bmatrix} -1 & 0 & 2\\\\ 0& -1 & 0\\\\ 0 & 0 & 1 \\end{bmatrix},
\\begin{bmatrix} -1 & 0 & 2\\\\ 0& 1 & 0\\\\ 0 & 0 & 1 \\end{bmatrix},
\\begin{bmatrix} -1 & 0 & 2\\\\ 0& -1 & 2\\\\ 0 & 0 & 1 \\end{bmatrix},
\\begin{bmatrix} 1 & 0 & 0\\\\ 0& -1 & 2\\\\ 0 & 0 & 1 \\end{bmatrix},
\\begin{bmatrix} -1 & 0 & 0\\\\ 0& -1 & 2\\\\ 0 & 0 & 1 \\end{bmatrix},
\\begin{bmatrix} -1 & 0 & 0\\\\ 0& 1 & 0\\\\ 0 & 0 & 1 \\end{bmatrix},
\\begin{bmatrix} -1 & 0 & 0\\\\ 0& -1 & 0\\\\ 0 & 0 & 1 \\end{bmatrix}
\\right \\},
$$
where the group elements are the mirror transformation matrices applied to homogeneous coordinates $(x, y, 1)$. Theorem 5.3 in our manuscript shows that a $G$-equivariant div-free vector field can be constructed from a vector potential with the same symmetry. So, we constructed the kernel of the vector potential by using Eq.17 to integrate a *squared exponential kernel* over the symmetry group above. This potential kernel was then substituted into Eq.18 to construct the div-free kernel. The partial derivatives involve in Eq.18 were calculated by automatic differentiation in PyTorch, so we did not need to derive the analytic expression of the div-free kernel. We will add these details in the future manuscript. Thank you very much for pointing out this issue.
> **Question 3**: Are there general symmetries that should be held or is this always going to be problem/application specific?
**Response**: There is no general symmetry that must be maintained. Although we have indicated in the appendix that the actual required symmetry is problem-specific, we will make this more explicit in the main paper based on your suggestion.
> **Question 4**: When the kernels do not admit a closed form, how sensitive are they to approximations? And does this significantly affect the computational cost?
**Response**: The kernel maintaining rotation symmetry is constructed by integrating over the continuous rotation group, and this kernel has no closed form (Eq.72). So we numerically approximated this kernel by sampling discrete rotations. To investigate the impact of this kernel approximation, we added experiments on learning with varying number of sampling rotations. The results are given in the table below, from which we can observe that the performance of the kernel approximation improves rapidly with increasing number of sampling rotations. Kernels with rotational symmetry were well approximated even with a small number of samples (e.g., n = 8), which ensures that approximating the kernel by numerical integration does not significantly improve the computational cost. (The computational complexity increases linearly with the number of discrete rotation samples.)
| Number of rotation samples | $n=1$ | $n=4$ | $n=8$ | $n=36$ | $n=100$ |
|-----|-----|-----|-----|-----|-----|
| RMSE $(\\times 10^{-2})$ | 5.76 | 3.12 | 1.96 | 1.74 | 1.67
> **Question 5**: Is there any impact of using sparse GPs? Ie are the sparse models still guaranteed to be G-invariant?
**Response**: In our method, the symmetry constraints are imposed to GP models by disigning suitable kernels. The sparse GP model we used reduces the computational complexity by using a smaller set of inducing points to compactly represent the original data, which do not alter the kernel's structure and properties. Therefore, the symmetries can still be respected in the sparse GP model.
> **Question 6**: Relevant GP citations: o Equivariant Learning of Stochastic Fields: Gaussian Processes and Steerable Conditional Neural Processes, Holderrieth et al o Vector-valued Gaussian Processes on Riemannian Manifolds via Gauge Equivariant Projected Kernels – Hutchinson et al
**Response**: Thank you for providing these two works, which will be cited in our future manuscript.
---
Thank you again for the constructive comments. We hope these explanations could address your concerns. Any further questions or suggestions would be greatly appreciated.
---
Rebuttal Comment 1.1:
Comment: Thank you for your thorough response. I think including the discussions above in the paper will improve an already well-written paper. I have no more questions and maintain my original score of acceptance. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Layer-Adaptive State Pruning for Deep State Space Models | Accept (poster) | Summary: This submission proposed to conduct pruning on Deep State Space Model (DSSM). Basically, it formulates the output distortion (energy loss) after pruning, the derive that the importance of state is related to $H_{\inf}$ norm, which is used as the pruning criteria. Besides, it proposes to use a greedy optimization by pruning a (entire) subsystem (with the remaining parts intact) to derive the importance of a subsystem (LAST), leading to pruning ratio for this subsystem.
Strengths: 1. It is interesting to see pruning in DSSMs and the leading method is coupled with the structure.
Weaknesses: 1. Since it is a pioneer work, more comparison method should be used, such as magnitude pruning.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. What is the difference between pruning states and parameters?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: N.A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and suggestions based on their insightful paper summary. We appreciate their assessment that our method, which is closely coupled with the structure of DSSMs, is particularly interesting.
### **Weakness:**
> Since it is a pioneer work, more comparison method should be used, such as magnitude pruning.
We appreciate the reviewer's suggestion to include more comparison methods, such as magnitude pruning. As a common point, this has been thoroughly discussed in Response to Reviewer y832. In summary, we have newly defined magnitude-based state pruning methods and presented the extended experiment results to show clearer effectiveness and contributions. Notably, LAST outperformed other methods by a large margin, and all $\mathcal{H}\_{\infty}$ pruning methods consistently showed superior results over magnitude pruning methods.
### **Question:**
> What is the difference between pruning states and parameters?
As in channel pruning (He et al. [2017]), the term of state pruning represents its pruning granularity. That is, the difference arises in the unit of parameters being pruned at once, and the act of pruning training parameters follows the same approach as traditional pruning methods. In a DSSM layer, how the input and output are associated with a state is represented by the training parameters, which include the continuous-time system matrices $\mathbf{\Lambda}$, $\mathbf{B}$, and $\mathbf{C}$, which are discretized into $\overline{\mathbf{\Lambda}}$, $\overline{\mathbf{B}}$, and $\mathbf{C}$. The implementation of state pruning incorporates the identification of insignificant states and masking all parameters corresponding to the states, i.e., masking $\lambda_i$ from $\mathbf{\Lambda}$, the row vector $\mathbf{B}_i$ from $\mathbf{B}$, and the column vector $\mathbf{C}_i$ from $\mathbf{C}$. As compared in Section 4, this concept originates from model reduction studies for single SSMs, and LAST extends this to the deep-layered architecture with nonlinear functions.
To explicitly demonstrate the necessity of state pruning in SSMs, we compared the performance of unstructured random pruning and (structured) random state pruning using the same experimental setup in Response to Reviewer y832. For unstructured random pruning, we pruned randomly selected elements from the system matrices, obtaining the following results:
| Method | Average pruning ratio | Average accuracy loss $\downarrow$ |
| - | - | - |
| Unstructured random | 33.00 (36.67) | 59.92 (66.58) |
| Structured random | 33.00 (36.67) | **29.53 (32.82)** |
Despite the same amount of parameters being removed, it can be observed that the model completely loses its performance when pruning is performed in an unstructured manner.
This is because unstructured pruning can disrupt the integrity of the model's learned dynamics by altering subsystems into significantly different ones. In contrast, state pruning maintains the functionality of subsystems, leading to less performance degradation. This highlights the importance of considering the structure and relationships within the model when applying pruning techniques.
---
Y. He et al. Channel pruning for accelerating very deep neural networks. In *The IEEE International Conference on Computer Vision*, 2017.
---
Rebuttal Comment 1.1:
Title: Response to Author
Comment: Thanks for your detailed response and appended experiments.
For my first concern on magnitude pruning, author has defined a similar experiment setting for DSSM. Though author claimed that the parameters pruning and DSSM pruning is quite different in rebuttal to y832, I did not catch the main idea of the difference.
Besides, author further claimed that the `there is a fundamental disparity between unstructured and state pruning methods in SSMs, making it unfair to compare them` in discussion with xVFN, which is me. I fail to see the disparity explanation, is that the anwser to my question?
Similar situation happens in the anwser to the question, I fail to understand the explanation. Overall, I keep my score for I can not fully understand DSSM and how it is different with normal deep nerual network, especially in pruning.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to read our rebuttal. Due to space constraints, we had to distribute our responses across several rebuttals. We appreciate your thorough review of all the details.
### **Correction of misunderstanding**
We apologize for any confusion caused by our use of 'disparity' and for not clearly referencing the relevant response to the reviewer's question.
To clarify, we used 'disparity' to describe the performance gap between unstructured pruning and state pruning (a type of structured pruning), which is evident in our response.
As shown in the previous rebuttal, structured pruning (state pruning) outperforms unstructured pruning even in randomly pruned cases, which clearly shows the effect of using state pruning granularity by excluding the other factors that might affect the performance. Thus, what we intended to deliver to Reviewer y832 was that **the presented table contains only state pruning**, omitting unstructured pruning due to the significant performance difference between unstructured/structured versions of every pruning method.
In another way, our intention may be better conveyed by replacing 'fundamental' with 'trivial.'
Therefore, that part does not conflict with the explanation that state pruning is a type of neural network pruning but is tailored for SSM.
### **Additional comments to the previous rebuttal**
The equations used in our explanation are based on the standard form of system matrices in SSM, consistent with the notation in our submission. To assist the reviewer's understanding, we define key terminology and a detailed illustration of state pruning.
Pruning granularity refers to the specific pattern or structure of parameters to be pruned, such as individual weights, entire neurons, channels, or states. For example, channel pruning prunes all parameters corresponding to an insignificant channel. Likewise, state pruning prunes all parameters corresponding to an insignificant state.
For a DSSM layer's training parameters $\mathbf{\Lambda}$, $\mathbf{B}$, and $\mathbf{C}$, such that $x_{k+1}=\mathbf{\Lambda}x_k+\mathbf{B}u_k$ and $y_k=\mathbf{C}x_k$, the specific parameters for a state $i$ are the $i$th diagonal element $\lambda_i$ of $\mathbf{\Lambda}$, the $i$th row vector $\mathbf{B}_i$ of $\mathbf{B}$, and the $i$th column vector $\mathbf{C}_i$ of $\mathbf{C}$. For instance, if the 2nd state is identified as insignificant in a DSSM layer with the state dimension of 4, by masking the insignificant parameters to 0, the system matrices before and after pruning would be as follows:
$x_{k+1}=\begin{bmatrix} \*&&& \\\ &0&& \\\ &&\*& \\\ &&&\*\end{bmatrix}x_k + \begin{bmatrix} \*&\*&\*&\* \\\ 0&0&0&0 \\\ \*&\*&\*&\* \\\ \*&\*&\*&\* \end{bmatrix} u_k$
$y_{k}=\begin{bmatrix} \*&0&\*&\* \\\ \*&0&\*&\* \\\ \*&0&\*&\* \\\ \*&0&\*&\* \end{bmatrix}x_k$
In this sense, we would like to reiterate that we responded to the question 'What is the difference between pruning states and parameters?' by 'As in channel pruning (He et al. [2017]), the term of state pruning represents its pruning granularity. That is, the difference arises in the unit of parameters being pruned at once, and the act of pruning training parameters follows the same approach as traditional pruning methods'.
---
Reply to Comment 1.1.2:
Title: Toy example of the proposed method
Comment: We understand that DSSM might be relatively new to some reviewers, having been proposed only a few years ago and primarily applied to sequential data. However, it is a well-established attention-drawing network rooted in linear system theory, demonstrating superior performance on sequences with continuous properties. Similar to how other works have developed structured pruning for specific models like the diffusion model (G. Fang et al. [2023]), our work offers a network optimization method tailored to DSSMs.
To illustrate DSSM and its pruning, we present an explicit time-domain toy example evaluating state importance. Consider a simple single-input single-output (SISO) system with a four-dimensional state vector parameterized and trained in a DSSM layer. The numerical values in the matrices represent the layer’s training parameters, totaling 12 in this example.
**Example system of a DSSM layer**
$x_{k+1}=\begin{bmatrix}0.9&&& \\\ &0.7&& \\\ &&0.5& \\\ &&&0.3\end{bmatrix}x_k+\begin{bmatrix}1 \\\ 1 \\\ 1 \\\ 1 \end{bmatrix}u_k$
$y_{k}=\begin{bmatrix}1&1&1&1\end{bmatrix}x_k$
Assuming the input consistently comes in as 1 and $x_{-1}=[0,0,0,0]$, the state and output of the given system would be as follows:
| | Input $u_k$ | State $x_k$ | Output $y_k$ |
| - | - | - | - |
| $k=0$ | $1$ | $[1,1,1,1]$ | $4$ |
| $k=1$ | $1$ | $[1.9,1.7,1.5,1.3]$ | $6.4$ |
| $k=2$ | $1$ | $[2.71,2.19,1.75,1.39]$ | $8.04$ |
As the state-output matrix is $[1,1,1,1]$, the state directly contributes to the output. Although the diagonal terms $\lambda_i$ are linear, e.g., 0.9, 0.7, 0.5, 0.3, their contributions to the output show nonlinear differences, e.g., 2.71, 2.19, 1.75, 1.39. This nonlinearity occurs because each state component, influenced by the diagonal term, decays at different rates, leading to varying impacts on the output.
Thus, we learn that the magnitude of $\lambda_i$ alone cannot fully determine the relative importance of each state. When the $B$ and $C$ matrices vary from this simple example, their effects must also be considered. Moreover, since DSSM processes sequence data, not one-shot data, the analysis of how the model processes data can be more intuitive and comprehensive in the frequency domain. As explained in the Response to Reviewer y832, the $\mathcal{H}\_{\infty}$ norm measures the maximum frequency-domain gain of each subsystem, and the proposed method even considers the contribution of each layer to the entire model.
Specifically, $\mathcal{H}\_{\infty}$ and LAST scores can evaluate the importance of each state (subsystem) in the previous example system, as follows:
| State | $\mathcal{H}\_{\infty}$ score | LAST score |
| - | - | - |
| $\lambda_0$ | $(1\cdot 1)/(1-0.9)^2=100$ | $100/100=1$ |
| $\lambda_1$ | $(1\cdot 1)/(1-0.7)^2=11.11$ | $11.11/(11.11+100)=0.099$ |
| $\lambda_2$ | $(1\cdot 1)/(1-0.5)^2=4$ | $4/(4+11.11+100)=0.034$ |
| $\lambda_3$ | $(1\cdot 1)/(1-0.3)^2=2.04$ | $2.04/(2.04+4+11.11+100)=0.017$ |
For both methods, insignificant states can be identified by their smaller scores, allowing us to prune the corresponding parameters. Although not explicitly demonstrated in this example, the LAST score is particularly useful when dealing with multiple DSSM layers, as it enables the comparison of states across different layers by evaluating the relative contribution of each subsystem to the entire model.
Consequently, by pruning the two insignificant states (i.e., pruning 6 corresponding parameters), the resulting pruned DSSM reduces the total number of parameters by 6, as shown below.
$x_{k+1}=\begin{bmatrix}0.9& \\\ &0.7\end{bmatrix}x_k+\begin{bmatrix}1 \\\ 1\end{bmatrix}u_k$
$y_{k}=\begin{bmatrix}1&1\end{bmatrix}x_k$
---
G. Fang et al. Structural pruning for diffusion models. In *Advances in Neural Information Processing Systems*, 2023. | Summary: In this paper, the authors propose Layer-Adaptive $\mathcal{H}{\infty}$ STate pruning (LAST), a deep state space model (SSM) pruning method that optimizes the state dimension of a deep diagonal SSM in terms of model-level energy loss. Experimental results on different tasks demonstrate that the proposed method performs better than Uniform $\mathcal{H}{\infty}$ and Global $\mathcal{H}{\infty}$ methods.
Strengths: The paper proposes a novel method to prune deep state space models. Experimental results on different tasks demonstrate the effectiveness of the proposed methods.
Weaknesses: In the experiments, the work only compares the proposed method with Uniform $\mathcal{H}{\infty}$ and Global $\mathcal{H}{\infty}$ methods. It would be good if more weight pruning methods could be applied to deep state space models for fair comparisons.
Technical Quality: 3
Clarity: 3
Questions for Authors: Could the structured weight pruning methods on the traditional DNNs be applied to the pruning of deep state space models?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad the reviewer appreciated the novelty and effectiveness of the proposed method. As the reviewer suggested, we compare our method to three magnitude pruning methods that use magnitude-based criteria from traditional DNNs and state pruning granularity as in the proposed method.
State pruning is structured pruning, where all training parameters corresponding to the insignificant states are pruned based on a state importance metric. In traditional DNN, such as MLP, the transfer function of a layer is a weight matrix, $f(u)=Wu$, where each element $W_{ij}$ is a weight parameter. This allows evaluating the training parameters by their magnitude, $|W_{ij}|$, as the magnitude controls the input's influence on the output. Applying this to SSMs could involve unstructured magnitude pruning, which masks the high-magnitude elements of each system matrix according to the pruning ratio.
However, as discussed in Response to Reviewer xVFN, there is a fundamental disparity between unstructured and state pruning methods in SSMs, making it unfair to compare them. Thus, we define magnitude-based state pruning methods for SSMs and evaluate their **ability to distinguish significant and insignificant states**, thereby preserving overall model performance.
- **Uniform Magnitude**
Every layer is pruned to have an identical pruning ratio by evaluating the importance of each state $i$ as $|\overline{\lambda}_i|||\mathbf{\overline{B}}_i||||\mathbf{C}_i||$. The pruning ratio is applied uniformly across all layers. While any $L_p$ norm can be used, we present the results using the $L_2$ norm as an example.
- **Global Magnitude**
The same state importance metric as in Uniform Magnitude is used, but the comparison group is extended from intra-layer to inter-layer, ensuring the pruning ratio is met globally across the entire network.
- **Layer-Adaptive Magnitude-based Pruning (LAMP)**
This method employs a metric of $\frac{|\overline{\lambda}_i|^2||\mathbf{\overline{B}}_i||^2||\mathbf{C}_i||^2}{\sum\_{j\geq i}|\overline{\lambda}_j|^2||\mathbf{\overline{B}}_j||^2||\mathbf{C}_j||^2}$ adapted from Lee et al. [2021], which originally used $\frac{W_i^2}{\sum\_{j\geq i} W_j^2}$ for a real-valued weight parameter $W$. The state indices in the denominator are assumed to be ordered based on their evaluation using the basic magnitude criteria.
Extending Table 1, we report newly conducted JAX experiments with 3 different seeds on an NVIDIA RTX 3090 as follows:
| Method | Average pruning ratio | Average accuracy loss $\downarrow$ | State importance |
| - | - | - | - |
| Random | 33.00 (36.67) | 29.53 (32.82) | - |
| Uniform magnitude | 33.00 (36.67) | 22.03 (24.48) | $\vert\overline{\lambda}_i\vert\Vert\mathbf{\overline{B}}_i\Vert\Vert\mathbf{C}_i\Vert$ |
| Global magnitude | 33.00 (36.67) | 17.49 (19.43) | $\vert\overline{\lambda}_i\vert\Vert\mathbf{\overline{B}}_i\Vert\Vert\mathbf{C}_i\Vert$ |
| LAMP | 33.00 (36.67) | 18.07 (20.07) | $\frac{\vert\overline{\lambda}_i\vert^2\Vert\mathbf{\overline{B}}_i\Vert^2\Vert\mathbf{C}_i\Vert^2}{\sum\_{j\geq i}\vert\overline{\lambda}_j\vert^2\Vert\mathbf{\overline{B}}_j\Vert^2\Vert\mathbf{C}_j\Vert^2}$ |
| Uniform $\mathcal{H}\_{\infty}$ | 33.00 (36.67) | 4.32 (4.80) | $\frac{\Vert\mathbf{C}_i\Vert^2\Vert\mathbf{\overline{B}}_i\Vert^2}{(1-\vert\overline{\lambda}_i\vert)^2}$ |
| Global $\mathcal{H}\_{\infty}$ | 33.00 (36.67) | 7.51 (8.35) | $\frac{\Vert\mathbf{C}_i\Vert^2\Vert\mathbf{\overline{B}}_i\Vert^2}{(1-\vert\overline{\lambda}_i\vert)^2}$ |
| LAST | 33.00 (36.67) | **0.52 (0.58)** | $\frac{\frac{\Vert\mathbf{C}_i\Vert^2\Vert\mathbf{\overline{B}}_i\Vert^2}{(1-\vert\overline{\lambda}_i\vert)^2}}{\sum\_{j\geq i}\frac{\Vert\mathbf{C}_j\Vert^2\Vert\mathbf{\overline{B}}_j\Vert^2}{(1-\vert\overline{\lambda}_j\vert)^2}}$ |
The table lists the accuracy loss of pruning methods for S5 models (Smith et al. [2023]), averaged across all 10 tasks. Values in parentheses exclude non-compressible tasks with zero pruning ratios and accuracy loss.
At the same pruning ratio, LAST significantly outperforms other methods by exhibiting the least accuracy loss.
Random state pruning results show that all methods can identify insignificant states, but all $\mathcal{H}\_{\infty}$ pruning methods surpass magnitude pruning methods. This implies the suitability of $\mathcal{H}\_{\infty}$ pruning for SSMs, which can be explained by the transfer functions of SSMs. In an SSM layer, the multiplicative mapping from input to output in the frequency domain is given by $\mathbf{Y}(z)=\mathbf{C}(zI-\mathbf{\overline{\Lambda}})^{-1}\mathbf{\overline{B}}\mathbf{U}(z)$, where $\mathbf{U}$ and $\mathbf{Y}$ are the $\mathcal{Z}$-transforms of input and output sequences $\mathbf{u}$ and $\mathbf{y}$, and $\mathbf{C}(zI-\mathbf{\overline{\Lambda}})^{-1}\mathbf{\overline{B}}$ is the transfer function matrix.
The importance of training parameters in SSMs can be evaluated by their influence in the frequency domain. Using the $\mathcal{H}\_{\infty}$ norm, which measures the maximum gain of the transfer function in the frequency domain, $\mathcal{H}\_{\infty}$ pruning optimizes the system by **minimizing the maximum gain alteration**, leading to better performance than magnitude pruning. Specifically, $\overline{\lambda}_i$'s importance in $\overline{\mathbf{\Lambda}}$ is measured by $(1-|\overline{\lambda}_i|)^{-1}$ in $\mathcal{H}\_{\infty}$ pruning, while magnitude pruning measures $|\overline{\lambda}_i|$. Additionally, LAST normalizes this importance by the total energy transmission of the layer, preventing extensive pruning of high energy-conveying layers and enabling stable global comparison and pruning.
---
J. T. Smith et al. Simplified state space layers for sequence modeling. In *The International Conference on Learning Representations*, 2023.
J. Lee et al. Layer-adaptive sparsity for the magnitude-based pruning. In *The International Conference on Learning Representations*, 2021.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed responses and extra experimental comparisons. However, the comparisons with magnitude-based weight pruning methods cannot fully address my concern as magnitude-based pruning is a very straightforward approach. I highly suggest that the authors compare the proposed method with more well-designed approaches. In conclusion, I would not oppose to accept this paper. I would like to keep my rating as 5: Borderline accept.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's careful review and further feedback on our rebuttal. We agree that comparing the proposed method with other pruning methods beyond magnitude-based pruning could provide a clearer validation of our work.
While magnitude-based pruning is straightforward, it is an effective and strong baseline, as it continues to be widely used in studies on iterative pruning, pruning schedules, or pruning on large language models (Sun et al. [2024], K. Xu et al. [2023], M. Gupta [2024]). Notably, LAST and $\mathcal{H}\_{\infty}$ pruning significantly outperform it, highlighting its necessity in SSM. Additionally, the previous comparison can be viewed as one for one-shot pruning methods, which are particularly useful when dealing with scaled models compared to iterative methods.
In response to the remaining reviewer's concern, we present the following results for a scaling-based pruning method using batch normalization statistics as another baseline.
**Uniform Scaling-based Pruning** (Liu et al. [2017])
This method uses the scale $\gamma^{(l)}\in\mathbb{R}^h$ trained in the $l$th batch normalization layer as the criterion for channel pruning out of $h$ channels. Specifically, we used $\gamma^{(l-1)}$ to prune $\mathbf{B}^{(l)}$ and $\gamma^{(l)}$ to prune $\mathbf{C}^{(l)}$ in consideration of the input-output processing scheme of SSM.
| Method | Average pruning ratio | Average accuracy loss $\downarrow$ | Importance |
| - | - | - | - |
| Uniform scaling-based | 33.00 (36.67) | 25.84 (28.71) | $\gamma_k^{(l-1)}$ and $\gamma_k^{(l)}$ for channel $k$ |
Due to time constraints, we could not test pruning methods that involve retraining, such as using a designed loss function or tracking gradient flow. We promise to include comparison experiments with other well-designed pruning methods in the final version of the paper.
Additionally, although we considered recent feature statistics-based pruning methods, we noticed that further research is needed, especially in observing specific patterns in feature vectors within SSM. As we are the first to explore pruning in SSM, designing new criteria for state pruning is akin to suggesting a new algorithm. We hope the reviewer considers our work a foundational study that suggests appropriate pruning criteria and directions for SSM.
---
M. Sun et al. A simple and effective pruning approach for large language models. In *The International Conference on Learning Representations*, 2024.
K. Xu et al. Efficient joint optimization of layer-adaptive weight pruning in deep neural networks. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, 2023.
M. Gupta, et al. Is complexity required for neural network pruning? a case study on global magnitude pruning. In IEEE Conference on Artificial Intelligence, 2024.
Z. Liu et al. Learning efficient convolutional networks through network slimming. In *Proceedings of the IEEE international conference on computer vision*, 2017. | Summary: Inspired by the traditional layer-adaptive neural network pruning, this paper develops and verifies a layer-adaptive model order reduction (MOR) method to reduce the state dimension in DSSM models. The proposed method reveals state importance and prunes insignificant subsystems for a desired compression level, reducing unnecessary computations while bounding the output energy loss.
Strengths: 1. Proposes to use layer-adaptive pruning methods in new areas DSSM models.
2. Extensive experiments among 10 tasks show the robustness and effectiveness of the methods. Besides, it conducts ablation studies on hyper-parameters in Appendix.
Weaknesses: Since I am not an expert in DSSMs area, I just doubt the impact of the state dimension to the computational cost, it might be better to provide some statistics to show the impact. Besides, if this state dimension is very significant, do we have other some related works but not pruning to reduce the state dimension? Can you elaborate more on related works on reducing state dimension?
Technical Quality: 3
Clarity: 3
Questions for Authors: As mentioned in the weakness,
1) please use some statistics to show the importance of state dimension?
2) if the state dimension is significant, can you provide more related works on reducing the state dimension, maybe those related works are not from the perspective of pruning to reduce the state dimension, but i think it is meaningful to discuss them.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's assessment that the paper contributes to new areas of DSSMs and the experiments are thorough to show the robustness and effectiveness of the proposed method. The reviewer's concerns were the practical impacts and related works of reducing state dimension.
> Please use some statistics to show the importance of state dimension.
We present the average evaluation step speed and peak GPU memory usage when the state dimension is reduced to the maximum pruning ratio that resulted in less than 1% accuracy loss using our proposed method. The evaluation was conducted by implementing an actual decrease in model size on an NVIDIA RTX 3090. While there is some variance depending on the channel size of each model per task, reducing the state dimension improves efficiency in both computational and memory costs.
| | ListOps | Text | Retrieval | Image | Pathfinder | Path-X |
| - | - | - | - | - | - | - |
| Pruning ratio | 0% | 60% | 50% | 30% | 30% | 30% |
| Inference speed $\uparrow$ | 1.0$\times$ | 1.6$\times$ | 1.7$\times$ | 1.2$\times$ | 1.1$\times$ | 1.3$\times$ |
| GPU memory usage $\downarrow$ | 1.0$\times$ | 0.9$\times$ | 0.6$\times$ | 1.0$\times$ | 0.8$\times$ | 0.8$\times$ |
> If the state dimension is significant, can you provide more related works on reducing the state dimension, maybe those related works are not from the perspective of pruning to reduce the state dimension, but I think it is meaningful to discuss them.
To alleviate the issue of large state dimensions, related research has evolved with two primary approaches: architectural and algorithmic.
**1. Architectural approaches**
One architectural approach is modulating states for parallelized inference. Specifically, an $h\cdot n_{SISO}$-th order system is modulated by $h$ systems, each with $n_{SISO}$ states, where $h$ is the number of channels. This means each $n_{SISO}$-th order system handles information from a specific channel. To address the inability to integrate information across channels, a channel mixing layer is introduced. This is referred to as, in our paper, the multi-SISO architecture (Gu et al. [2021], Gu et al. [2022a]), allowing for training $h\cdot n_{SISO}$ effective state spaces with reduced computational burden by parallelizing into $n_{SISO}$, thus enabling the development of many subsequent SSMs (Gupta et al. [2022], Gu et al. [2022b]).
In comparison, another architecture proposed by Smith et al. [2023], referred to as MIMO architecture, uses $n_{MIMO}\ll h\cdot n_{SISO}$ states to determine the output for a channel based on information from all input channels, making the mixing layer unnecessary. This parameterization allows for high performance with much smaller state dimensions than equivalent block systems in multi-SISO layers. For instance, in the Path-X task, which involves the longest sequences among LRA tasks, this model shows state-of-the-art performance. However, both architectures lack optimization methods for model size, leading to computational inefficiencies when the task does not require such a high model capacity.
- **Relation to the pruning approach**
Our state pruning method can be applied to all these architectures, effectively reducing the model size to fit task or resource requirements while preserving the performance. It identifies and adaptively prunes the states with the least impact on performance based on layer-wise energy transmission. In multi-SISO architectures, since each state is assigned to a specific channel, the difference in importance among states may not be significant. However, our method still shows superior retention compared to other methods. In MIMO architectures, the difference in state importance becomes more pronounced, and the accuracy loss caused by pruning varies significantly depending on how a method evaluates the state importance. Consequently, our method effectively preserves the most significant states and demonstrates superior performance compared to other pruning methods, as shown in the table in Response to Reviewer y832.
**2. Algorithmic approaches**
One recent study has shifted from learning the element of decomposition of the transfer function in DSSMs to directly learning the coefficients of the transfer function (Parnichkun et al. [2024]). The motivation behind this approach is also to address the computational load caused by large state dimensions. The primary contribution of this approach is making computations even independent of the state dimension. However, as discussed in Appendix A, this approach has a weakness of only guaranteeing stability at initialization, unlike DSSMs, which can consistently guarantee stability due to their direct parameterization for the poles of the transfer function. Therefore, as a branch of DSSM research, our work contributes to enhancing the efficiency of DSSMs, which is dependent on the state dimension, by minimizing the state dimension.
---
A. Gu et al. Combining recurrent, convolutional, and continuous-time models with linear state space layers. In *Advances in neural information processing systems*, 2021.
A. Gu et al. Efficiently modeling long sequences with structured state spaces. In *The International Conference on Learning Representations*, 2022a.
A. Gupta et al. Diagonal state spaces are as effective as structured state spaces. In *Advances in Neural Information Processing Systems*, 2022.
A. Gu et al. On the parameterization and initialization of diagonal state space models. In *Advances in Neural Information Processing Systems*, 2022b.
J. T. Smith et al. Simplified state space layers for sequence modeling. In *The International Conference on Learning Representations*, 2023.
R. N. Parnichkun et al. State-free inference of state-space models: The transfer function approach, In *International Conference on Machine Learning*, 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' response and address my concerns. I would like to keep my positive score. | null | null | Rebuttal 1:
Rebuttal: We thank all reviewers and ACs for their efforts in reviewing our paper. We are glad that the reviewers found the proposed method novel and noted that the experiments support its effectiveness. We sincerely appreciate the insightful and constructive feedback, and we have carefully responded to all comments with individual replies.
**Highlights of rebuttal**
- Clearer descriptions of the motivation to reduce state dimension, with comparisons to related works
- Deeper explanations for why the proposed state importance metric works well for state space models
- Extended experiments with magnitude pruning
- Detailed comparison between unstructured pruning and state pruning
- Demonstrations showing the computational cost reduction by LAST | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Federated Online Prediction from Experts with Differential Privacy: Separations and Regret Speed-ups | Accept (poster) | Summary: The paper addresses the problem of online prediction from experts (OPE) under differential privacy (DP) constraints in a federated setting. OPE operates in a set of rounds, and consists on choosing at each round the expert that minimizes the regret over observations of data. The selection of the expert at each round is based on previous adversarially chosen observations of data. Differentially private OPE (DP-OPE) ensures that the selection of experts is not significantly different (up to $\epsilon$ and $\delta$ parameters) if one observation of the data changes.
The current protocol studies private OPE in the federated setting (Fed-DP-OPE), where the expert selection is done by a set of clients that collaborate via a server. Under local DP (i.e. all messages between server and clients are part of the output of the protocol) and $m$ clients, the current work provides fundamental bounds of over the impact of the federated collaboration on the regret. First, it proposes a protocol that achieves an order-wise $\sqrt{m}$ multiplicative regret reduction with respect to DP-OPE when the adversarial observations are chosen randomly. Second, it shows that the communication cost is logarithmic in the number of expert selection rounds. Third, it shows that under an oblivious adversary that chooses the the observations arbitrarily in advance, it is not possible to obtain an order-wise improvement regret in Fed-DP-OPE over DP-OPE. Finally, it shows that improvement is possible in the federated setting for a relaxation of the oblivious adversary.
Strengths: The paper is in general clearly written.
The contributions provide a significant impact, showing that federated collaboration can improve accuracy under privacy constraints without incurring in large communication cost (i.e. the cost is logarithmic in $T$ for the stochastic adversary setting). The $\sqrt{m}$ factor speedup is significant. The relaxation of the oblivious adversary seems realistic, therefore improvements in this setting seem also important.
The paper also proposes novel techniques. Although the natural adaptation to federated setting and the use tree based private aggregation have been widely studied in the past, the paper also proposes novel ways to reduce (i) communication in the stochastic adversary setting, (ii) regret and communication in the (relaxed) oblivious adversary setting and (iii) a novel proof technique for the lower bounds on the classical oblivious case.
The work is of good quality, appropriately backing it's claims with proofs and empirically illustrating the impact of the results with reproducible experiments.
Weaknesses: There are certain non-major weaknesses in motivating the problem and clarity at preliminaries:
1- The implications of accurate OPE could be better motivated in the introduction.
2- The notion of DP in the online setting could be better clarified. First, a link between the chosen sequence of loss functions and possibly sensitive data is missing. Second, the definition specifies "DP against an adversary". The same terminology for DP adversary and OPE adversary in charge of selecting observations over the sensitive dataset is confusing.
Technical Quality: 4
Clarity: 3
Questions for Authors: Could you please clarify what is the link between sensitive data and the notion of neighboring input used in the paper?
For instance, it would seem reasonable that the sequence $\mathcal{S}$ of loss functions chosen by the adversary depend on a set of sensitive data points. Then, it could be possible that more than one loss function changes if a single entry of this dataset changes. However, your setting is a bit different and I am wondering where could it be applicable.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The limitations of the work have been appropriately addressed by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful reading and thoughtful comments. We address the reviewer's questions in the following and will revise our paper accordingly. We hope the responses below address the reviewer's concerns.
**Q1:** There are certain non-major weaknesses in motivating the problem and clarity at preliminaries: The implications of accurate OPE could be better motivated in the introduction.
**A1:** Thank you for your helpful comment. We will clarify the implications of accurate OPE in the introduction as follows:
OPE is an important problem in machine learning with various applications. For example, in personalized healthcare, a patient's wearable device (player) collects and processes health data to provide personalized health treatment (experts). OPE helps select the most effective treatments based on historical data, improving patients' outcomes.
We will include this to better highlight the implications of OPE in our revised introduction.
**Q2:** The notion of DP in the online setting could be better clarified. First, a link between the chosen sequence of loss functions and possibly sensitive data is missing. Second, the definition specifies "DP against an adversary". The same terminology for DP adversary and OPE adversary in charge of selecting observations over the sensitive dataset is confusing.
**A2:** Thank you for your insightful comments. We will clarify the notion of DP in the online setting in the revision.
1. Link Between Loss Functions and Sensitive Data: The sequence of loss functions $\mathcal{S}=\left(l_{1,1}, \ldots, l_{m,T}\right)$ represents the sensitive dataset since each loss function $l_{i,t}$ reflects the performance of selecting different experts based on sensitive information about client $i$ at time step $t$.
2. Terminology Confusion: To clarify, the OPE adversary refers to the entity selecting the sequence of loss functions to challenge the online algorithm, while the DP adversary represents a potential attacker trying to infer sensitive information from the algorithm's outputs. We will revise the preliminary section to distinguish these roles clearly, referring to the OPE adversary as the "game adversary" and the DP adversary as the "privacy adversary".
**Q3:** Could you please clarify what is the link between sensitive data and the notion of neighboring input used in the paper? For instance, it would seem reasonable that the sequence of loss functions chosen by the adversary depend on a set of sensitive data points. Then, it could be possible that more than one loss function changes if a single entry of this dataset changes. However, your setting is a bit different and I am wondering where could it be applicable.
**A3:** Thank you for the question. In our setting, the sequence of loss functions $\mathcal{S}=\left(l_{1,1}, \ldots, l_{m,T}\right)$ is considered the sensitive dataset and we define neighboring datasets as sequences that differ by a single loss function. Our setting is applicable when protecting individual loss functions is sufficient. For example, in a personalized recommendation system, each user's interaction with content (e.g. articles) generates a loss function. Our setting ensures that changing a single interaction does not significantly influence the system's recommendations, protecting individual user interactions. This is suitable for applications like personalized content recommendations, where protecting individual interactions is essential while still allowing for effective personalization.
We understand that the reviewer's example describes a stronger DP setting, where one sensitive data point may affect multiple loss functions, and all of those loss functions need to be protected simultaneously. In some sense, we can view one data point as a user, and each user may contribute multiple loss functions. Thus, to protect individual users, it is necessary to protect all loss functions that each user contributes. This essentially gives rise to **user-level DP** [1]. We note that user-level DP in general is much more challenging to handle [2-6] compared with instance-level DP discussed in this work, and we leave it as one of the important future directions to explore.
[1] Levy D, Sun Z, Amin K, et al. Learning with user-level privacy[J]. Advances in Neural Information Processing Systems, 2021, 34: 12466-12479.
[2] Liu Y, Suresh A T, Yu F X X, et al. Learning discrete distributions: user vs item-level privacy[J]. Advances in Neural Information Processing Systems, 2020, 33: 20965-20976.
[3] Acharya J, Liu Y, Sun Z. Discrete distribution estimation under user-level local differential privacy[C]//International Conference on Artificial Intelligence and Statistics. PMLR, 2023: 8561-8585.
[4] Cummings R, Feldman V, McMillan A, et al. Mean estimation with user-level privacy under data heterogeneity[J]. Advances in Neural Information Processing Systems, 2022, 35: 29139-29151.
[5] Ghazi B, Kumar R, Manurangsi P. User-level differentially private learning via correlated sampling[J]. Advances in Neural Information Processing Systems, 2021, 34: 20172-20184.
[6] Huang R, Zhang H, Melis L, et al. Federated linear contextual bandits with user-level differential privacy[C]//International Conference on Machine Learning. PMLR, 2023: 14060-14095.
-----
We thank the reviewer again for the helpful comments and suggestions for our work. We are more than happy to address any further questions that you may have.
---
Rebuttal 2:
Comment: Dear Reviewer hhm9,
We've taken your initial feedback into careful consideration in our response. Could you please check whether our responses have properly addressed your concerns? We are more than happy to answer your further questions.
Thank you for your time and effort in reviewing our work!
Best Regards,
Authors
---
Rebuttal Comment 2.1:
Comment: Dear authors,
Thank you for successfully addressing my (non-major) concerns in your rebuttal.
I have no further questions and my score is likely to remain unchanged.
---
Reply to Comment 2.1.1:
Comment: Thank you so much for your positive feedback! Your insightful comments have helped us make our work better. We really appreciate it. | Summary: The paper studies the problem of online federated expert selection. In order to make the proposed algorithm robust against adversaries, the paper proposes algorithms with differential privacy guarantees. Both stochastic and oblivious adversaries are investigated by the paper. The paper provides theoretical guarantees for the proposed algorithms. The paper provides experimental results to verify theoretical findings.
Strengths: 1. The paper proposes differentially private online federated learning algorithm which is an important and unexplored research area.
2. The paper provides regret analysis with differential privacy guarantees for the proposed algorithms. Although I am not expert at differential privacy, theoretical results looks convincing to me.
3. The paper is well-written and clear.
Weaknesses: 1. The paper can benefit from extending its experimental study. It would be great if authors can add more datasets and also do more ablation studies to strengthen the contribution of the paper.
2. The paper can add more explanations about applicability of the study. Usually federated learning is used to train a model. However, reading the paper, the practical applications of the paper is not obvious.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. I do not understand the statement in the paper that "*In the federated setting, it is infeasible for the central server to have full access to past loss functions due to the high communication cost.*" I think in these cases the server can collect losses over time and store them in such a way that clients can afford communication cost. I believe in these cases probably storage can be more important bottleneck than communication cost. Furthermore, I do not think sending a loss would be bottleneck for the system in both communication and storage aspects even if the number of clients is large. Can you explain more about this?
2. I suggest extending the literature review of the paper by including the work "Personalized online federated learning with multiple kernels. *Advances in Neural Information Processing Systems (NeurIPS)*, 35:33316–33329, 2022.". This work studies online model selection for communication and prediction efficiency in online federated learning.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper discusses the limitations adequately. For example, the paper clearly presents the assumption in section 3 that the loss functions are assumed to be convex and smooth with respect to $||\cdot||_1$.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful reading and thoughtful comments. We address the reviewer's questions in the following. We hope the responses below address the reviewer's concerns.
**Q1:** The paper can benefit from extending its experimental study.
**A1:** Thanks for the helpful suggestion. We have performed experiments on a real-world dataset. Please refer to our Author Rebuttal.
**Q2:** The paper can add more explanations about applicability of the study.
**A2:** Thanks for the insightful suggestion. In this work, we focus on differentially private federated online prediction, where the objective is to minimize the average regret on $m$ clients working in parallel while maintaining the privacy of users at each client. Differentially private federated online prediction has many important real-world applications. We provide three examples below, and will add the discussions in our revision.
**Personalized Healthcare**: Consider a federated online prediction setting where patients' wearable devices collect and process health data locally, and the central server aggregates privacy-preserving updates from devices to provide health recommendations or alerts. DP federated online prediction can speed up the learning process and improve the prediction accuracy without exposing individual's health data, thus ensuring patient privacy.
**Financial Fraud Detection**: DP federated online prediction can also enhance fraud detection systems across banking and financial services. Each client device (e.g. PC) locally analyzes transaction patterns and flags potential fraud without revealing sensitive transaction details to the central server. The server's role is to collect privacy-preserving updates from these clients to improve the global fraud detection model. This method ensures that the financial company can dynamically adapt to new fraudulent tactics, improving detection rates while safeguarding customers' financial privacy.
**Personalized Recommender Systems**: Each client (e.g. smartphone) can personalize content recommendations by analyzing user interactions and preferences locally. The central server (e.g. company) aggregates privacy-preserving updates from all clients to refine the recommendation model. Thus, DP federated online prediction improves the whole recommender system performance while maintaining each client's privacy.
**Q3:** Can you explain more about the statement that "In the federated setting, it is infeasible for the central server to have full access to past loss functions due to the high communication cost"?
**A3:** Thank you for your insightful comments. In federated learning, training data is distributed across an incredibly large number of devices, each potentially having limited communication bandwidth to the central server [1]. Consider the hospitals within an area as the set of clients, each aiming to make accurate predictions (e.g. diagnostic) for its own patients. Then, the loss function for each client depends on the healthcare history of all patients in the hospital. Obviously, transmitting patients' healthcare histories from all hospitals to a central server frequently requires a lot of communication bandwidth and is often time-consuming. One major motivation for federated learning is to avoid transmitting raw data (e.g., loss functions) to the central server, while maintaining learning performances comparable with the centralized setting [2-5].
Moreover, keeping raw data (e.g., loss functions) at local clients in federated learning is also beneficial for **privacy protection**. Intuitively, to protect the loss function transmitted every round, we need to apply DP mechanism to privatize each loss function (e.g. Laplacian noise $\text{Lap}(\frac{1}{\varepsilon})$), as exposing sensitive patient data is a concern. Such a noise level will compromise the data accuracy at the server, and leads to significant learning performance degradation. In contrast, under an efficient communication protocol of federated learning, the noise level added during each communication round can be significantly reduced, since we don't have to protect each loss function but certain information extracted from multiple loss functions. For example, in Fed-DP-OPE-Stoch, each client adds Laplacian noise with standard deviation $O(\frac{1}{2^{p-1}\varepsilon})$ to the estimated gradients before transmission in phase $p\in [\log T]$. Such reduced noise level has minimum impact on the aggregated gradient estimate at the server, and results in near-optimal privacy-utility trade-off.
In summary, due to both communication cost and privacy considerations, designing communication-efficient learning algorithms is of critical importance in the federated setting. In this work, we follow the standard federated learning principle to keep raw data (loss functions) at the clients, and design communication-efficient federated online prediction algorithms. We thank the reviewer again for the comment and will revise the statement in the paper to avoid possible confusion.
**Q4:** I suggest extending the literature review of the paper by including the work "Personalized online federated learning with multiple kernels".
**A4:** Thank you for the suggestion. We will include the recommended work and provide a literature review on (federated) online model selection in Appendix A as follows.
**(Federated) Online Model Selection:** Online model selection when models are kernels has been extensively studied [6-9]. In the federated setting, [10,11] have explored scenarios where each client learns a kernel-based model, utilizing the specific characteristics of kernel functions. Moreover, [12] proposes an online federated model selection framework, where clients interact with a server with sufficient memory. Additionally, research on online learning with feedback graphs, a generalization of sequential decision-making with bandit or full information feedback, has been explored in [13-19].
---
Rebuttal Comment 1.1:
Comment: Thank you for responding to my comments. I read the rebuttal and it addressed part of my concerns. However, I believe the experimental study of the paper can still be extended and improved. I would maintain my initial rating which is in overall in favor of accepting the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to review our rebuttal. We’re glad to hear that we addressed some of your concerns and that your overall assessment is in favor of accepting the paper.
We understand your point regarding the experimental study. Due to time constraints, we were only able to conduct experiments on MovieLens dataset during rebuttal. We will certainly include additional experimental results in the revision.
Thank you again for your feedback, and we are happy to address any further questions you might have.
---
Rebuttal 2:
Title: References
Comment: [1] Li X, Huang K, Yang W, et al. On the convergence of fedavg on non-iid data[J]. arXiv preprint arXiv:1907.02189, 2019.
[2] Shahid O, Pouriyeh S, Parizi R M, et al. Communication efficiency in federated learning: Achievements and challenges[J]. arXiv preprint arXiv:2107.10996, 2021.
[3] McMahan B, Moore E, Ramage D, et al. Communication-efficient learning of deep networks from decentralized data[C]//Artificial intelligence and statistics. PMLR, 2017: 1273-1282.
[4] Smith V, Chiang C K, Sanjabi M, et al. Federated multi-task learning[J]. Advances in neural information processing systems, 2017, 30.
[5] Sattler F, Wiedemann S, Müller K R, et al. Robust and communication-efficient federated learning from non-iid data[J]. IEEE transactions on neural networks and learning systems, 2019, 31(9): 3400-3413.
[6] Yang T, Mahdavi M, Jin R, et al. Online kernel selection: Algorithms and evaluations[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2012, 26(1): 1197-1203.
[7] Zhang X, Liao S, Xu J, et al. Regret bounds for online kernel selection in continuous kernel space[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2021, 35(12): 10931-10938.
[8] Ghari P M, Shen Y. Graph-aided online multi-kernel learning[J]. Journal of Machine Learning Research, 2023, 24(21): 1-44.
[9] Ghari P M, Shen Y. Online multi-kernel learning with graph-structured feedback[C]//International Conference on Machine Learning. PMLR, 2020: 3474-3483.
[10] Hong S, Chae J. Communication-efficient randomized algorithm for multi-kernel online federated learning[J]. IEEE transactions on pattern analysis and machine intelligence, 2021, 44(12): 9872-9886.
[11] M Ghari P, Shen Y. Personalized online federated learning with multiple kernels[J]. Advances in Neural Information Processing Systems, 2022, 35: 33316-33329.
[12] Ghari P M, Shen Y. Budgeted Online Model Selection and Fine-Tuning via Federated Learning[J]. arXiv preprint arXiv:2401.10478, 2024.
[13] Mannor S, Shamir O. From bandits to experts: On the value of side-observations[J]. Advances in Neural Information Processing Systems, 2011, 24.
[14] Cohen A, Hazan T, Koren T. Online learning with feedback graphs without the graphs[C]//International Conference on Machine Learning. PMLR, 2016: 811-819.
[15] Alon N, Cesa-Bianchi N, Dekel O, et al. Online learning with feedback graphs: Beyond bandits[C]//Conference on Learning Theory. PMLR, 2015: 23-35.
[16] Cortes C, DeSalvo G, Gentile C, et al. Online learning with dependent stochastic feedback graphs[C]//International Conference on Machine Learning. PMLR, 2020: 2154-2163.
[17] Ghari P M, Shen Y. Online learning with uncertain feedback graphs[J]. IEEE Transactions on Neural Networks and Learning Systems, 2023.
[18] Esposito E, Fusco F, van der Hoeven D, et al. Learning on the edge: Online learning with stochastic feedback graphs[J]. Advances in Neural Information Processing Systems, 2022, 35: 34776-34788.
[19] Ghari P M, Shen Y. Online learning with probabilistic feedback[C]//ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022: 4183-4187.
---
Rebuttal 3:
Title: Thank You for Your Helpful Feedback
Comment: We thank the reviewer again for the helpful comments and suggestions for our work. If our response resolves your concerns to a satisfactory level, we kindly ask the reviewer to consider raising the rating of our work. Certainly, we are more than happy to address any further questions that you may have.
---
Rebuttal 4:
Comment: Dear Reviewer RFE1,
We've taken your initial feedback into careful consideration in our response. Could you please check whether our responses have properly addressed your concerns? If so, could you please kindly consider increasing your initial score accordingly? Certainly, we are more than happy to answer your further questions.
Thank you for your time and effort in reviewing our work!
Best Regards,
Authors | Summary: This paper studies differentially private federated online prediction from experts against stochastic and oblivious adversaries. The goal is to minimize average regret across clients over time with privacy guarantees. For stochastic adversaries, the proposed Fed-DP-OPE-Stoch algorithm achieves regret improvement over single-player counterparts with logarithmic communication costs. For oblivious adversaries with a low-loss expert, the new Fed-SVT algorithm demonstrates an m-fold regret speed-up under pure and approximate differential privacy, nearly optimal up to logarithmic factors according to established lower bounds. Simulation experiments have been done.
Strengths: 1, The paper is well written, presented: The background, related previous work, problem formulation and key concepts (e.g., federated online prediction, differential privacy, stochastic/oblivious adversaries) are clearly explained. The algorithms are described in detail.
2, The theoretical analysis appears rigorous, with the authors establishing regret bounds for their proposed algorithms and deriving lower bounds for the oblivious adversary case. The Fed-SVT algorithm achieves near-optimal regret performance (up to logarithmic factors) in the special case of oblivious adversaries with a low-loss expert, demonstrating the quality of the proposed solution.
Weaknesses: 1, Lack of Real-World Evaluation: The lack of experiments on real-world dataset scenarios is a significant limitation. The authors should address how their proposed algorithms would perform and be applicable to practical problems in domains like recommender systems or healthcare. Without evaluations on real-world datasets, it is challenging to judge the practical utility and potential challenges of their methods.
2, Novelty Concerns: After given a tree-based method is used for private aggregation of the gradients of loss functions (Asi et al., 2021b), what's the challenge for the theoretical analysis of proposed algorithms given Asi et al., 2022b and Asi et al., 2023?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1, Please add at least one experiment with real-world dataset. So reader can have better sense regarding the real-world application and how practical of the proposed algorithms.
2, Please clarify the challenges in theoretical analysis given existing methods.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful reading and thoughtful comments. We address the reviewer's questions in the following and will revise our paper accordingly. We hope the responses below address the reviewer's concerns.
**Q1:** The lack of experiments on real-world dataset scenarios is a significant limitation. The authors should address how their proposed algorithms would perform and be applicable to practical problems in domains like recommender systems or healthcare. Without evaluations on real-world datasets, it is challenging to judge the practical utility and potential challenges of their methods.
**A1:** Thank you for the insightful suggestion. We have performed experiments on a real-world dataset. Please refer to our Author Rebuttal.
**Q2:** After given a tree-based method is used for private aggregation of the gradients of loss functions (Asi et al., 2021b), what's the challenge for the theoretical analysis of proposed algorithms given Asi et al., 2022b and Asi et al., 2023?
**A2:** The novelty of our theoretical analysis in Theorem 1 is in addressing the unique challenges of a multi-client federated setting, which is not covered in previous works like Asi et al. (2021b, 2022b). While these prior works use a tree-based method for private aggregation of gradients in a single-player context, our approach involves multiple clients who add noise to their gradient estimates locally. This introduces new challenges in establishing the regret upper bound due to the aggregated effect of noisy updates from all clients.
Specifically, in Fed-DP-OPE-Stoch, each client sends $\\{\langle c\_n,v\_{i,j,s} \rangle +\xi\_{i,n}\\}\_{n\in [d]}$, where $\xi\_{i,n}\sim \text{Lap}(\lambda\_{i,j,s})$, and $\lambda\_{i,j,s}= \frac{4\alpha2^j}{b\varepsilon}$. The server aggregates these to predict a new expert, i.e., $\bar{w}\_{j,s}=\underset{c\_n: 1\leq n\leq d}{\arg\min}\left[\frac{1}{m}\sum_{i=1}^m \big( \langle c\_n,v\_{i,j,s} \rangle + \xi\_{i,n} \big) \right]$.
The key novelty and challenge are in handling the noisy estimates aggregated from multiple clients ($\frac{1}{m}\sum_{i=1}^m \big( \langle c_n,v_{i,j,s} \rangle + \xi_{i,n} \big)$) and analyzing its impact on the regret upper bound. To handle the impact of the average estimates $\frac{1}{m}\sum_{i=1}^m \langle c_n,v_{i,j,s} \rangle$, we show in Lemma 2 that every index of the $d$-dimensional vector $\nabla L\_{t}(x\_{i,p,k})-\bar{v}\_{p,k}$ is $O(\frac{1}{bm})$-sub-Gaussian by induction on the depth of vertex. This enables us to achieve $\sqrt{m}$-fold regret speed-up compared with Asi et al. (2022b). To deal with the challenge introduced by the aggregated noise $\frac{1}{m}\sum_{i=1}^m \xi_{i,n}$, we introduce Lemma 4, which quantifies the bound on the sum of $m$ IID Laplace random variables. This differentiates our analysis from prior single-client studies (Asi et al. 2021b, 2022b).
Additionally, the federated setting poses challenges in reducing communication costs while maintaining privacy. Our Fed-DP-OPE-Stoch algorithm introduces a strategic communication protocol where communication is triggered only when the DP-FW subroutine, used for private gradient aggregation at each client, reaches a leaf vertex in the binary tree structure (Line 7-8 in Algorithm 1). This approach contributes to the overall reduction in communication costs, achieving logarithmic communication costs (Corollary 1). This aspect of Fed-DP-OPE-Stoch is another novel contribution, addressing the critical challenge of efficient communication in the federated setting. In comparison, Asi et al. (2021b, 2022b) focus on single-player settings and do not involve methods to control the communication cost.
**Q3:** Please add at least one experiment with real-world dataset. So reader can have better sense regarding the real-world application and how practical of the proposed algorithms.
**A3:** Thanks for the suggestion. We include it in our Author Rebuttal.
**Q4:** Please clarify the challenges in theoretical analysis given existing methods.
**A4:** We present the following challenges and contributions in the theoretical analysis.
1. Regret Upper Bound for Stochastic Adversaries: The challenges in the theoretical analysis of Fed-DP-OPE-Stoch and our contributions are detailed in **A2**.
2. Novel Lower Bounds for Oblivious Adversaries: We establish new lower bounds for federated OPE with oblivious adversaries, showing that collaboration among clients does not lead to speed-up in regret minimization (Theorem 2). The key challenge is formulating an instance of oblivious adversaries where the collaborative nature of federated learning does not result in the expected improvements in regret minimization. To address this, we propose a novel *policy reduction approach in FL*, representing a technical breakthrough (highlighted in the proof sketch of Theorem 2). Specifically, by defining an "average policy" among all clients against a uniform loss function generated by an oblivious adversary, we reduce the federated problem to a single-player one, showing the equivalence of per-client and single-player regret. To the best of our knowledge, our lower bounds represent the **first** of their kind for differentially private federated OPE problems.
3. Lower Bound for Oblivious Adversaries under Realizability Assumption: We establish a new lower bound in this setting (Theorem 3). Our analysis considers a specific oblivious adversary (detailed in the proof sketch of Theorem 3), which differentiates it from single-player scenarios (Asi et al., 2023). Our lower bound indicates that Fed-SVT is nearly optimal up to logarithmic factors (Remark 6).
---
We thank the reviewer again for the helpful comments and suggestions for our work. If our response resolves your concerns to a satisfactory level, we kindly ask the reviewer to consider raising the rating of our work. Certainly, we are more than happy to address any further questions that you may have.
---
Rebuttal 2:
Comment: Dear Reviewer NyWa,
We've taken your initial feedback into careful consideration in our response. Could you please check whether our responses have properly addressed your concerns? If so, could you please kindly consider increasing your initial score accordingly? Certainly, we are more than happy to answer your further questions.
Thank you for your time and effort in reviewing our work!
Best Regards,
Authors
---
Rebuttal Comment 2.1:
Comment: Thank you for responding. I read the rebuttal and it addressed my second concern regarding novelty. I increased my score from 5 to 6.
---
Reply to Comment 2.1.1:
Comment: Thank you very much for your positive feedback! Your valuable input has helped us improve the quality of our work significantly. We really appreciate it. | Summary: The paper studies the problems of differentially private federated online prediction from experts against both stochastic adversaries and oblivious adversaries. The main contributions are three-fold. First, for stochastic adversaries, the paper proposes a differentially private mixture of experts algorithms and provide theoretical guarantees along with. Besides, for oblivious adversaries, the paper first shows a pessimistic result by providing a lower bound showing that federated learning cannot improve over the single machine learning in general. Furthermore, the paper shows that under special realizable case, however, federated learning can benefit from more machines. The paper proposes an algorithm that achieves that benefit with theoretical guarantees.
Strengths: Both the theoretical analysis and algorithm design seems original and non-trivial. The technical contribution seems solid. The paper is also clear in its problem setup, algorithm description and theoretical justification. The problem solved by the paper is a concrete theoretical problem, and the paper does a good job in solving the problem overall.
Weaknesses: My only concern of the paper is on the application side. While the problem studied by the paper makes perfect sense from a theoretical perspective, it does not seem very clear to me where it can have applications. With that being said, I think it is fine for authors to focus on a theoretical problem and leave its potential applications for future work.
Technical Quality: 3
Clarity: 3
Questions for Authors: Would you please list some potential applications of the proposed algorithms? It will be better to have some specific applications for readers to keep in mind.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Like admitted by the authors and pointed out in the weakness section, the main limitation of the paper is the lack of real-world data experiments and applications. But given the theoretical nature of the paper, I do not think this limitation is unacceptable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful reading and thoughtful comments. We address the reviewer's questions in the following and will revise our paper accordingly. We hope the responses below address the reviewer's concerns.
**Q1:** My only concern of the paper is on the application side. While the problem studied by the paper makes perfect sense from a theoretical perspective, it does not seem very clear to me where it can have applications. With that being said, I think it is fine for authors to focus on a theoretical problem and leave its potential applications for future work. Would you please list some potential applications of the proposed algorithms? It will be better to have some specific applications for readers to keep in mind.
**A1:** Thank you for the helpful comment. Differentially private federated online prediction has many important real-world applications. We provide three examples below, and will add the discussions in our revision.
**Personalized Healthcare**: Consider a federated online prediction setting where patients' wearable devices collect and process health data locally, and the central server aggregates privacy-preserving updates from devices to provide health recommendations or alerts. DP federated online prediction can speed up the learning process and improve prediction accuracy without exposing individual users' health data, thus ensuring patient privacy.
**Financial Fraud Detection**: DP federated online prediction can also enhance fraud detection systems across banking and financial services. Each client device (e.g. PC) locally analyzes transaction patterns and flags potential fraud without revealing sensitive transaction details to the central server. The server's role is to collect privacy-preserving updates from these clients to improve the global fraud detection model. This method ensures that the financial company can dynamically adapt to new fraudulent tactics, improving detection rates while safeguarding customers' financial privacy.
**Personalized Recommender Systems**: Each client (e.g. smartphone) can personalize content recommendations by analyzing user interactions and preferences locally. The central server (e.g. company) aggregates privacy-preserving updates from all clients to refine the recommendation model. Thus, DP federated online prediction improves the whole recommender system performance while maintaining each client's privacy.
**Q2:** Like admitted by the authors and pointed out in the weakness section, the main limitation of the paper is the lack of real-world data experiments and applications. But given the theoretical nature of the paper, I do not think this limitation is unacceptable.
**A2:** Thank you for valuable comment. We have performed experiments on a real-world dataset. Please refer to our Author Rebuttal.
-----
We thank the reviewer again for the helpful comments and suggestions for our work. We are more than happy to address any further questions that you may have.
---
Rebuttal 2:
Comment: Dear Reviewer H7Yw,
We've taken your initial feedback into careful consideration in our response. Could you please check whether our responses have properly addressed your concerns? We are more than happy to answer your further questions.
Thank you for your time and effort in reviewing our work!
Best Regards,
Authors | Rebuttal 1:
Rebuttal: We thank all reviewers for their feedback, which has greatly improved our paper. We are glad that our work is recognized for studying "an important and unexplored research area" (Reviewer RFE1) and developing Fed-DP-OPE-Stoch with "novel techniques" (Reviewer hhm9) to handle stochastic adversaries, as well as a "novel proof technique" (Reviewer hhm9) for lower bounds in the classical oblivious case. Our Fed-SVT algorithm achieves near-optimal regret in a near-realizable case of oblivious adversaries, "demonstrating the quality of the proposed solution" (Reviewer NyWa). Our theoretical analysis and algorithm design are "original and non-trivial" (Reviewer H7Yw) and we "empirically illustrate the impact of the results with reproducible experiments" (Reviewer hhm9). Below, we address a common question the reviewers have.
**Could you perform experiments on real-world dataset scenarios? (Reviewers: H7Yw, NyWa, RFE1)**
We use the MovieLens-1M dataset [1] to evaluate the performances of Fed-SVT, comparing it with the single-player model Sparse-Vector [2]. We first compute the rating matrix of 6040 users to 18 movie genres (experts) $R = [r\_{u,g}] \in \mathbb{R}^{6040\times 18}$, and then calculate $L = [\max (0,r\_{u,g^\star}-r\_{u,g})]\in \mathbb{R}^{6040\times 18}$ where $g^\star = \arg \max\_g \left( \frac{1}{6040} \sum_{u=1}^{6040} r\_{u,g} \right)$. We generate the sequence of loss functions $\\{l\_u\\}\_{u\in [6040]}$ where $l\_u = L\_{u,:}$. In our experiments, we set $m=10$, $T = 604$, $\varepsilon = 10$, $\delta = 0$ and run 10 trials. In Fed-SVT, we experiment with communication intervals $N =1, 30, 50$, where communication cost scales in $O(mdT/N)$. The per-client cumulative regret as a function of $T$ is plotted in Figure 1 in our uploaded PDF file. Our results show that Fed-SVT significantly outperforms Sparse-Vector [2] with low communication costs (notably in the $N=50$ case). These results demonstrate the effectiveness of our algorithm in real-world applications. We will perform more experiments to thoroughly validate the performance of both Fed-SVT and Fed-DP-OPE-Stoch, and include the results in the next version of this paper.
References
[1] Harper F M, Konstan J A. The movielens datasets: History and context[J]. Acm transactions on interactive intelligent systems (tiis), 2015, 5(4): 1-19.
[2] Asi H, Feldman V, Koren T, et al. Near-optimal algorithms for private online optimization in the realizable regime[C]//International Conference on Machine Learning. PMLR, 2023: 1107-1120.
Pdf: /pdf/a36f75f0525ecac00b72510d600b9850c6c8407e.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Conformal Prediction for Class-wise Coverage via Augmented Label Rank Calibration | Accept (poster) | Summary: This paper introduces the Rank Calibrated Class-conditional Conformal Prediction (RC3P) algorithm, designed to address the issue of large prediction sets in conformal prediction (CP), especially in imbalanced classification tasks. The RC3P algorithm enhances class-conditional coverage by selectively applying class-wise thresholding based on a label rank calibration strategy. This method reduces the size of prediction sets compared to the standard class-conditional CP (CCP) method while maintaining valid class-wise coverage. The authors demonstrate that RC3P achieves significant reductions in prediction set sizes across multiple real-world datasets.
Strengths: 1) The introduction of a rank calibration strategy to the class-conditional CP framework is a novel contribution. It offers a practical solution to the challenge of large prediction sets in CP, especially for imbalanced datasets.
2) The paper provides comprehensive experimental results across various datasets and imbalance types, showing a significant reduction in prediction set sizes while maintaining valid class-wise coverage.
3) The authors provide rigorous theoretical guarantees for the class-wise coverage and improved predictive efficiency of the RC3P algorithm, demonstrating its robustness and general applicability.
4) The paper is well-structured and clearly written, with detailed explanations of the algorithm, theoretical analyses, and experimental setups.
Weaknesses: 1) While the paper provides theoretical guarantees for coverage, the theoretical analysis of the predictive efficiency could be expanded. Specifically, a deeper exploration of the conditions under which RC3P outperforms CCP in terms of predictive efficiency would strengthen the paper.
2) Although the experiments are thorough, additional experiments on more diverse and larger-scale datasets could further validate the generalizability of the RC3P method.
3) One additional comment is to verify the predictive efficiency using approximate conditional coverage metrics suggested in the literature, for example, by "Cauchois, M., Gupta, S., and Duchi, J. (2021). Knowing what you know: valid and validated confidence sets in multiclass and multilabel prediction. Journal of Machine Learning Research, 22(81):1–42" to ensure that approximate conditional coverage is not negatively impacted.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) Can the authors provide more details on the computational complexity of the RC3P algorithm compared to the CCP and Cluster-CP methods?
2) How does the RC3P algorithm perform on larger and more diverse datasets beyond the ones tested in this paper?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback. Below we provide our rebuttal for the key questions from the reviewer.
**Q1: The theoretical analysis of the predictive efficiency could be expanded.**
A1: We agree that a deeper exploration of the conditions under which RC3P outperforms CCP would strengthen our paper. However, predictive efficiency analysis is challenging and there is no existing theoretical framework for thoroughly analyzing the predictive efficiency of CP methods (most of them focus on coverage guarantees), especially for the class-wise coverage setting. In an attempt to fill this gap in the current state of knowledge, we provide Lemma 4.2 and Theorem 4.3 to show under what conditions of the model, the predictive efficiency can be improved (See also **GR2**).
We highlight that **Lemma 4.2** is not our main intellectual contribution for the improved predictive efficiency of RC3P. Instead, it showcases a condition number $\sigma_y$ that **parametrizes** whether a target event $\mathcal E$ “the predictive efficiency is improved by RC3P’’ happens or not, i.e., $\sigma_y \leq 1 \Rightarrow \mathcal E$.
Then **Theorem 4.3** further analyzes the **transition** from the parameter $\epsilon_y^{\widehat k(y)}$ of RC3P to the parameter $\sigma_y$ of the target event to indicate how to configure RC3P to improve predictive efficiency. Specifically, as pointed out in its Remark, if we set $\alpha - \epsilon_y^{\widehat k(y)}$ as small as possible but still guarantee $\epsilon_y^{\widehat k(y)} < \alpha$ (required by the model-agnostic coverage, Theorem 4.1) in RC3P, then we have a higher probability to guarantee $\sigma_y \leq 1$. Therefore, the overall **transition** of parameters is expressed by:
$$\text{Setting RC3P with (i) } \epsilon_y^{\widehat k(y)} < \alpha, \text{and (ii) } \epsilon_y^{\widehat k(y)} \text{ as small as possible } ~~~ \Longrightarrow ~~~ \sigma_y \leq 1 ~~~ \Longrightarrow ~~~ \mathcal E,$$
which is equivalent to (7) in Remark and helps us configure RC3P to most likely improve the predictive efficiency over baseline CCP method.
In addition to serving as a guideline for configuring RC3P to improve the predictive efficiency, we can further investigate how to interpret the involved condition, e.g., the defined terms D and B involve properties of the underlying model. Further analyzing how to ensure a large $B-D$ can help guarantee the improved predictive efficiency of RC3P over CCP. For instance, according to the definition in Theorem 4.3, a small $D$ requires a high top-k accuracy. Given the challenging nature of analyzing predictive efficiency, we will continue to expand the theoretical analysis as part of immediate future work.
**Q2: Additional experiments on more diverse and larger-scale datasets**
A2:
**We add balanced experiment on diverse datasets (CIFAR-100, Places365, iNaturalist, ImageNet) in Experiment (1) of GR1** (see Table1 in the attached PDF). The models are pre-trained. UCR is controlled to $\leq 0.05$. RC3P significantly outperforms the best baseline with $32.826\\%$ reduction in APSS ($\downarrow$ better) on average, with $10.539\\%$ on iNaturalist and $59.393\\%$ on ImageNet. We get similar improvements with APS and THR scoring functions.
In addition to the large-scale experiments on balanced data in Experiment (1) of GR1, in **Experiment (2) Large-scale imbalanced classification experiment** (Table 2 in PDF), we also tested baselines and RC3P on a large-scale dataset (ImageNet-LT) with a deep model trained from scratch using the imbalanced training method LDAM [r1]. UCR is controlled to $\leq 0.12$. RC3P significantly outperforms the best baseline with $28.523\\%$ reduction in APSS ($\downarrow$ better) on average.
**Q3: Adding approximate conditional coverage metrics?**
A3: We highlight that our goal is class conditional coverage, which is a parallel notion with the conditional coverage. Class-conditional coverage that is conditioned on (output space) each class, i.e., $Y = y$ while conditional coverage conditioned on the (input space) features $X \in G$, i.e., $X$ belongs to “pre-specified groups”. We will add other approximate conditional coverage metrics in the revised version based on the recent work from Gibbs and Candes (2024).
**Q4: The computational complexity of the RC3P, CCP and Cluster-CP**
A4: Suppose $n_y = O(n/K)$, where $n_y$ is the number of calibration samples in each class $y$, $n$ is the total number of calibration samples, and $K$ is the number of candidate class labels. CCP needs to sort the conformity scores of each class to compute class-wise quantiles. The computational complexity for CCP is $O(n \cdot \log(n/K))$
Compared with CCP, the additional process in RC3P is to find the rank threshold $\hat k(y)$ for each class, where the computational complexity is either $O(n K)$ with brute-force search or $O(n \log(K))$ with binary search over classes. Therefore, by combining the computation costs from rank calibration and score calibration in RC3P, we get the total complexity as $O(n \cdot \log(n/K) + n K)$ or $O(n \cdot \log(n/K) + n \log(K))$.
Cluster-CP first runs $k$-means clustering algorithm where $k=M$ is the number of clusters. The computational complexity for the clustering algorithm is $O(M T n)$, where $T$ is the number of iterations. Then, Cluster-CP computes cluster-wise quantiles with time complexity as $O(n \cdot \log(n/M))$ where we assume the number of samples in each cluster is in $O(n/M)$. Therefore, the total computation complexity for Cluster-CP is $O(M T n + n \log(n/M))$
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed response. i maintain my positive score of weak accept. | Summary: This paper introduces the Rank Calibrated Class-conditional CP (RC3P) algorithm, which reduces prediction set sizes while ensuring valid class-conditional coverage by selectively applying class-wise thresholding. The RC3P algorithm achieves class-wise coverage regardless of the classifier and data distribution, and show improvement empirically.
Strengths: The idea is simple but effective, supported by solid theoretical and empirical evidence.
Weaknesses: - Some notions, especially the Y and y, are confusing. Similarly, k is for label class, $\hat{k}(y)$ is for a different notion of label rank, but both uses k.
- More extra illustrations may be helpful for understanding. For example, regarding the tony example in para 1 of section 4, try give a concrete figure example. Also, give a table of important dataset statistics, e.g., total sample size, number of class, imbalance rate.
Technical Quality: 3
Clarity: 2
Questions for Authors: - For fairness evaluation, especially for many classes, what's the worst under coverage ratio? Only the mean is provided.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback. Below we provide our rebuttal for the key questions from the reviewer.
**Q1: Some notions, especially the Y and y, are confusing. Similarly, k is for label class, $\hat k(y)$ is for a different notion of label rank, but both uses k.**
A1: Our notations are consistent, but we will try to make them more clearer. We denote $Y$ as the ground-truth label of data sample $(X, Y)$ (see Line 103) and $y$ as a realized class label (see Line 107). $K$ is the number of candidate classes, where the maximum rank is still $K$ (see Line 103). $k$ is used to describe the rank threshold of top-$k$ class-wise error $\epsilon^k_y$ (see Line 108). Thus, we use $\hat k(y)$ as the rank threshold of class $y$.
**Q2: Try to give a concrete figure example.**
A2: We plot the **class-wise score distribution as a concrete example of the explanation in Section 4.1** (Table 6 in PDF).
The true labels of all data from class $1$ are ranked within $\\{1, …, 4\\}$, which indicates low uncertainty in class $1$. However, nearly half of APS scores with high ranks (i.e., rank 2, 3) are larger than class quantile (the dashed straight line), and the corresponding label will not be included in prediction sets through CCP. On the contrary, the maximum true label ranks of data in class $5$ are 9, which indicates high uncertainty in the model's predictions, but most of the APS scores in class $5$ are smaller than the class quantile (the dashed straight line). Thus, the uniform class-wise iteration strategy of CCP includes more uncertain labels in the prediction sets and degenerates the predictive efficiency.
**Q3: Give a table of important dataset statistics?**
A3: We show the description of datasets in the following table. Thank you for your suggestion.e will add it in the revised paper.
| Dataset | CIFAR-10 | CIFAR-100 | mini-ImageNet | FOOD-101|
| --------------------- | ------------------- |-------------|----------- |-----------|
| Number of training samples | 50000 | 50000 | 30000 | 75750 |
| Number of classes | 10 | 100 | 100 | 101 |
| Each class calibration samples | 500 |50 |150 |125 |
| Imbalanced ratios | \{ 0.5, 0.4, 0.3, 0.2, 0.1\} | \{ 0.5, 0.4, 0.3, 0.2, 0.1\} | \{ 0.5, 0.4, 0.3, 0.2, 0.1\} | \{ 0.5, 0.4, 0.3, 0.2, 0.1\} |
**Q4: What's the worst under coverage ratio?**
A4:
We add the experiments to report the **worst class-conditional coverage (WCCC) as a new metric** (Table 5 in PDF) on imbalanced mini-ImageNet datasets under the same setting from Table 16 in the Appendix. In WCCC, RC3P significantly outperforms CCP and Cluster-CP with $4.29\\%$ improvement. Similar improvements of RC3P can be found on other datasets.
---
Rebuttal Comment 1.1:
Comment: Thank you for response. I will keep my positive rating. | Summary: This paper aims to reduce the prediction set sizes of conformal predictors while achieving class-conditional coverage. The authors identify that class-wise conformal prediction (CCP) scans all labels uniformly, resulting in large prediction sets. To address this issue, they propose Rank Calibrated Class-conditional Conformal Prediction (RC3P), which augments the label rank calibration strategy within CCP. Theoretical analysis and experimental results demonstrate that RC3P outperforms CCP.
Strengths: 1. The proposed method is well-motivated, and the paper is easy to follow. The authors find that CCP scans all labels uniformly, resulting in large prediction sets.
2. The proposed method introduces a novel conformal predictor for class-conditional coverage, achieving smaller set sizes than other classical methods on imbalanced datasets.
Weaknesses: 1. The proposed methods and experiments appear logically incoherent. The authors provide an explanation for the large prediction sets of CCP, but the interesting conclusion is not related to imbalanced data. Why do the authors only consider imbalanced datasets? Additionally, could the authors report the performance of different methods on balanced data?
2. The datasets used in the experiments are small, which is impractical for real-world applications. As shown in Section 5, the number of classes in the four datasets is fewer than 101. Moreover, Cluster-CP was proposed for datasets with many classes, so it may be unfair to compare these methods only on small-scale datasets. Furthermore, the classical method THR[1] should be considered in the experiments.
3. The contribution of Lemma 4.2 is limited. The assumption in Lemma 4.2 is very strong, which can directly infer the final results.
[1] Mauricio Sadinle, Jing Lei, and Larry Wasserman. Least ambiguous set-valued classifiers with bounded error levels. Journal of the American Statistical Association
Technical Quality: 2
Clarity: 3
Questions for Authors: - What is the ratio of training set, calibration set and test set? Does RC3P still outperform other methods when the size of calibration set is small? I think an ablation experiment about the size of calibration set is better.
- How do you control the UCR of RC3P? Please provide some details.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: They are adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback. Below we provide our rebuttal for the key questions from the reviewer.
**Q1: Why only consider imbalanced datasets? Report the performance on balanced data?**
A1:
RC3P is a general class-conditional CP method and works for the models trained on both imbalanced and balanced classification data. We did imbalanced experiments originally in the main paper, since the imbalanced setting is more common and challenging in practice, which can also differentiate the performance of RC3P with baselines.
The example in Section 4.1 only gives an intuitive motivation, but it is valid for both balanced and imbalanced settings. Additionally, since the standard CCP iterates over all class-wise quantiles, the prediction set sizes can be more diverse and exhibit large variance under the imbalanced setting, which further degenerates the predictive efficiency.
**We add balanced experiment on diverse datasets (CIFAR-100, Places365, iNaturalist, ImageNet) in Experiment (1) of GR1** (see Table1 in the attached PDF). The models are pre-trained. UCR is controlled to $\leq 0.05$. RC3P significantly outperforms the best baseline with $32.826\\%$ reduction in APSS ($\downarrow$ better) on average, with $10.539\%$ on iNaturalist and $59.393\%$ on ImageNet. We get similar improvements with APS and THR scoring functions.
**Q2: It may be unfair to compare these methods only on small-scale datasets.**
A2:
In addition to the large-scale experiments on balanced data in Experiment (1) of GR1, in **Experiment (2) Large-scale imbalanced classification experiment** (Table 2 in PDF), we also tested baselines and RC3P on a large-scale dataset (ImageNet-LT) with a deep model trained from scratch using the imbalanced training method LDAM [r1]. UCR is controlled to $\leq 0.12$. RC3P significantly outperforms the best baseline with $28.523\\%$ reduction in APSS ($\downarrow$ better) on average.
**Q3: Furthermore, the classical method THR[1] should be considered in the experiments.**
A3: We have added the experiments with **the THR scoring function [r2] in Experiment (4) of GR1** (Table 4 in PDF) for baselines and RC3P. UCR is controlled to $\leq 0.16$ on CIFAR-10 and $\leq 0.03$ on other datasets. RC3P significantly outperforms best baselines with $26.9\\%$ on mini-ImageNet reduction compared with the best baseline. Similar improvements of RC3P can be found on other datasets.
**Q4: The contribution of Lemma 4.2 is limited. The assumption in Lemma 4.2 is very strong, which can directly infer the final results.**
A4: (See also **GR2**)
We highlight that **Lemma 4.2** is not our main intellectual contribution for the improved predictive efficiency of RC3P. Instead, it showcases a condition number $\sigma_y$ that **parametrizes** whether a target event $\mathcal E$ “the predictive efficiency is improved by RC3P’’ happens or not, i.e., $\sigma_y \leq 1 \Rightarrow \mathcal E$.
Then **Theorem 4.3** further analyzes the **transition** from the parameter $\epsilon_y^{\widehat k(y)}$ of RC3P to the parameter $\sigma_y$ of the target event to indicate how to configure RC3P to improve predictive efficiency. Specifically, as pointed out in its Remark, if we set $\alpha - \epsilon_y^{\widehat k(y)}$ as small as possible but still guarantee $\epsilon_y^{\widehat k(y)} < \alpha$ (required by the model-agnostic coverage, Theorem 4.1) in RC3P, then we have a higher probability to guarantee $\sigma_y \leq 1$. Therefore, the overall **transition** of parameters is expressed by:
$$\text{Setting RC3P with (i) } \epsilon_y^{\widehat k(y)} < \alpha, \text{and (ii) } \epsilon_y^{\widehat k(y)} \text{ as small as possible } ~~~ \Longrightarrow ~~~ \sigma_y \leq 1 ~~~ \Longrightarrow ~~~ \mathcal E,$$
which is equivalent to (7) in Remark and helps us configure RC3P to most likely improve the predictive efficiency over baseline CCP method.
**Q5: What is the ratio of training set, calibration set and test set?**
A5: We show the description of datasets in the following table. Thank you for your suggestion. We will add it in the revised paper.
|Dataset|CIFAR-10|CIFAR-100| mini-ImageNet|FOOD-101|
|-|-|-|-|-|
|Number of training samples| 50000| 50000 | 30000|75750|
|Number of classes| 10|100 | 100 | 101|
|Each class calibration samples| 500 |50 |150|125 |
|Imbalanced ratios| \{ 0.5, 0.4, 0.3, 0.2, 0.1\} | \{ 0.5, 0.4, 0.3, 0.2, 0.1\} | \{ 0.5, 0.4, 0.3, 0.2, 0.1\} | \{ 0.5, 0.4, 0.3, 0.2, 0.1\}|
**Q6: Does RC3P still outperform other methods when the size of calibration set is small?**
A6: We have added the experiments with **various sizes for the calibration sets in Experiment (3) of GR1** (Table 3 in PDF): Baselines and RC3P are calibrated with calibration sets of various numbers of samples. UCR is controlled to $\leq 0.12$ for CCP and $\leq 0.07$ for Cluster-CP and RC3P. Following Cluster-CP, we set each class calibration samples $\in \\{20,50,75 \\}$. RC3P significantly outperforms the best baseline with $29.68\\%$ reduction in APSS ($\downarrow$ better) on mini-ImageNet. Similar improvements of RC3P can be found on other datasets.
**Q7: How do you control the UCR of RC3P?**
A7: (See also **GR3***)
\
[r1] Sadinle et al., 2019. Least ambiguous set-valued classifiers with bounded error levels.
[r2] Ding et al., 2024. Class-conditional conformal prediction with many classes.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for your response. I still have concerns about the controlled UCR. I think tunning the $g$ on the calibration data breaks the basic assumption of CP, i.e., exchangeability. Tunning the hyper-parameters on fresh/hold-out data is reasonable.
---
Reply to Comment 1.1.1:
Comment: **FQ1: “I think tunning the $g$ on the calibration data breaks the basic assumption of CP, i.e., exchangeability. Tunning the hyper-parameters on fresh/hold-out data is reasonable.”**
FA1: Thank you for your feedback. In the following response, we
(1) clarify that tuning the $g$ will not violate the exchangeability assumption.
(2) elaborate our experiment setting and show additional experimental results, where RC3P significantly outperforms the best baseline with $30.627\\%$ (four datasets) or $40.8\\%$ (excluding CIFAR-10) reduction in APSS ($\downarrow$ better). It is similar to the improvements of RC3P that we report in our main paper ($26.25\\%$ (four datasets) or $35\\%$ (excluding CIFAR-10) reduction).
(3) explain that limited calibration samples may result in inaccurate quantile and thus inaccurate coverage/efficiency measures.
(4) elaborate the experiment where RC3P still outperforms baselines without tuning $g$ in Table 7 of rebuttal PDF. Conditioned on similar APSS of all methods, RC3P significantly outperforms the best baselines with $35.18\\%$ reduction in UCG on average.
**(1) Adding inflation on calibration dataset will not violate the exchangeability assumption**:
The exchangeability is defined [fr1] by the invariance of the joint distribution of the variables $\\{ Z_1, \cdots, Z_n\\}$ to every permutation $\pi$ for $\\{1,…,n\\}$, i.e., $P(Z_1, …, Z_n) \\overset{d}{=} P(Z_{\pi(1)}, …, Z_{\pi(n)})$.
The exchangeability is the property of distribution. In our experiments, adding inflation on nominal coverage is an algorithm design and will not change the data distribution, where calibration and testing datasets still consist of i.i.d. samples form the same distribution. The same algorithm design (i.e., inflating the coverage during calibration) is applied in previous CP papers, such as ARCP (see Theorem 2, Equation 8 in [fr2]) that uses an inflated nominal coverage.
**(2) New experiment on hold-out data for tuning $g$: RC3P achieves the same level of improvement in the reduction of APSS ($30.627\\% \downarrow$ from the best baseline).**
Below we report the experiments where $g$ is tuned on hold-out samples on four datasets with the APS scoring function and imbalance EXP, 0.1. We first split calibration dataset into calibration and hold-out samples (50-50 split), where hold-out samples are used to tune $g$ and calibration samples are used to compute class-wise quantiles. UCR is controlled to $\leq 0.10$ on CIFAR-10 and $0.08$ on other datasets.
|Dataset|CIFAR-10|CIFAR-100| mini-ImageNet|FOOD-101|
|-|-|-|-|-|
|\# hold-out samples per class| 250 |25 |75|62 |
|CCP| 1.940 $\pm$ 0.020 | 38.937 $\pm$ 0.335 | 32.249 $\pm$ 0.419 |33.253 $\pm$ 0.367|
|Cluster-CP| 2.212$\pm$ 0.019 |34.125 $\pm$ 0.990 | 36.372 $\pm$0.270 | 40.762 $\pm$ 0.343|
|RC3P| 1.940$\pm$ 0.020 |19.668 $\pm$ 0.005 |17.736 $\pm$ 0.002 |21.565 $\pm$ 0.007|
These results shows that RC3P significantly outperforms the best baseline with $30.627\\%$ (four datasets) or $40.8\\%$ (excluding CIFAR-10) reduction in APSS ($\downarrow$ better). Similar order of improvements of RC3P can be found in our main paper ($26.25\\%$ on four datasets or $35\\%$ excluding CIFAR-10 APSS reduction from the best baseline, Table 1).
**(3) Limited calibration samples may result in inaccurate quantile and thus inaccurate coverage/efficiency measures**:
There are few training data samples for the tail classes and a limited number of calibration samples (e.g., 50 calibration samples of each class in CIFAR-100, see A5 for Q5) in our setting.
If we split the calibration datasets into calibration and hold-out samples, the limited samples may cause inaccurate class-wise quantiles compared to the true quantiles, and thus inaccurate coverage and efficiency measures.
In practice, our experiment in the main paper used as many samples as possible for calibration.
For instance, comparing the results in the above table with Table 1 in our main paper, there are minor perturbations in APSS measures.
**(4) Without tuning $g$, RC3P still outperforms than baselines ($35.18\\%$ reduction in UCG on average)**: We add the experiments without controlling UCR in Experiment (1) of GR1 (Table 7 in PDF) under the same setting with the main paper.
The model is trained from scratch using LDAM [fr3]. UCR is not controlled. We then use the total under coverage gap (UCG, $\downarrow$ better) between class conditional coverage and target coverage $1-\alpha$ of all under-covered classes.
We choose UCG as the fine-grained metric to differentiate the coverage performance in our experiment setting. Conditioned on similar APSS of all methods, RC3P significantly outperforms the best baselines with $35.18\\%$ reduction in UCG on average.
[fr1] Shafer, G. and Vovk, V., 2008. A tutorial on conformal prediction
[fr2] Gendler et al., 2021. Adversarially robust conformal prediction
[fr3] Cao et al, 2019. Learning imbalanced datasets with label-distribution-aware margin loss | Summary: The paper proposes a new algorithm called Rank Calibrated Class-conditional CP (RC3P) that augments the label rank calibration to conformal classification calibration step. It theoretically proves it
Strengths: - Overall, the idea is clearly presented, and the motivation behind the problem—improving efficiency in CCP for imbalanced data—is compelling.
- The idea of only using top-k classes in the conformal calibration is intuitive.
- The experimental results are adequate, covering different datasets/settings.
Weaknesses: - R3CP heavily relies on the ranking of candidate class labels. If the classifier's ranking is not reliable, the result could be more conservative.
- Lemma 4.2 is not very convincing, since the assumption is basically the conclusion itself. Theorem 4.3 tries to give a condition on when Lemma 4.2 holds, but it is still not very informative. It's not clear to me that R3CP is better than CCP.
Technical Quality: 3
Clarity: 3
Questions for Authors: In the simulation result, you "set the UCR of RC3P the same as or smaller (more restrictive) than that of other methods under 0.16 on CIFAR-10 and 0.03 on other datasets". I though UCR is a metric and why would you be able to do that.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback. Below we provide our rebuttal for the key questions from the reviewer.
**Q1: R3CP heavily relies on the ranking of candidate class labels?**
A1:
RC3P does not heavily rely on model’s label ranking to guarantee the class-conditional coverage, and is instead agnostic to the model’s ranking performance, as shown in Theorem 4.1.
The improved predictive efficiency of RC3P relies on certain conditions of the model, as specified in Lemma 4.2 and Theorem 4.3. However, these conditions, e.g., the parameterized condition number $\sigma_y \leq 1$, very likely holds in practice: we empirically verify it and it holds for all experiments. See Figure 3 in the main paper and Figures 15-25 in Appendix.
**Q2: Lemma 4.2 not convincing and Theorem 4.3 not informative. Not clear why R3CP is better than CCP**
A2: (See also **GR2**)
We highlight that **Lemma 4.2** is not our main intellectual contribution for the improved predictive efficiency of RC3P. Instead, it showcases a condition number $\sigma_y$ that **parametrizes** whether a target event $\mathcal E$ “the predictive efficiency is improved by RC3P’’ happens or not, i.e., $\sigma_y \leq 1 \Rightarrow \mathcal E$.
Then **Theorem 4.3** further analyzes the **transition** from the parameter $\epsilon_y^{\widehat k(y)}$ of RC3P to the parameter $\sigma_y$ of the target event to indicate how to configure RC3P to improve predictive efficiency. Specifically, as pointed out in its Remark, if we set $\alpha - \epsilon_y^{\widehat k(y)}$ as small as possible but still guarantee $\epsilon_y^{\widehat k(y)} < \alpha$ (required by the model-agnostic coverage, Theorem 4.1) in RC3P, then we have a higher probability to guarantee $\sigma_y \leq 1$. Therefore, the overall **transition** of parameters is expressed by:
$$\text{Setting RC3P with (i) } \epsilon_y^{\widehat k(y)} < \alpha, \text{and (ii) } \epsilon_y^{\widehat k(y)} \text{ as small as possible } ~~~ \Longrightarrow ~~~ \sigma_y \leq 1 ~~~ \Longrightarrow ~~~ \mathcal E,$$
which is equivalent to (7) in Remark and helps us configure RC3P to most likely improve the predictive efficiency over baseline CCP method.
Practical Implications for the analysis of predictive efficiency in experiments:
We conducted extensive experiments across multiple datasets to compare the predictive efficiency of each method and verify the conditional number $\sigma_y$. See Figure 3 in the main paper and Figures 15-25 in Appendix.
Our empirical results consistently show that RC3P achieves better predictive efficiency than CCP and the condition $\sigma_y \leq 1 \forall y$ holds across all experimental settings. This empirical validation demonstrates that the practical implementation of RC3P aligns with the theoretical improvements shown in Lemma 4.2 and Theorem 4.3.
**Q3: UCR is a metric and why would you be able to set it.**
A3: (See also **GR3***)
Why: While UCR is a coverage metric, fixing it to a certain value (e.g., 0) across methods allows us to ensure fair comparisons to make valid conclusions about the predictive efficiency of all CP methods.
Because coverage and predictive efficiency are two competing metrics in CP, e.g., achieving better coverage (resp. predictive efficiency) degenerates predictive efficiency (resp. coverage).
If we do not fix one of them, then we cannot conclude which method has more predictive efficiency by comparison. We add the experiments on balanced CIFAR-100 datasets without controlling the UCR. See the table below.
|Methods|CCP|Cluster-CP|RC3P|
|-|-|-|-|
|UCR|0.444|0.446|0.228|
|APSS|11.376|9.7865|10.712|
As shown, RC3P achieves the best coverage metric (UCR), while Cluster-CP achieves the best metric for prediction set sizes (APSS). So this comparison does not conclude which one is more efficient.
Therefore, we fix UCR to first guarantee the class-conditional coverage, conditioned on which we can perform a meaningful comparison of predictive efficiency. This strategy is also discussed in [r1].
How:
We add an inflation quantity to the nominal coverage, i.e., $1-\alpha+g/\sqrt{n_y}$, where $g \in \\{0.25,0.5,0.75,1\\}$ is a tunable hyper-parameter and $n_y$ represents the number of calibration samples in each class $y$ (for CCP and RC3P) or each cluster (for Cluster CP). The structure $g/\sqrt{n_y}$ follows the format of generalization error when setting the empirical quantile, as in [r2]. We select the minimal value from the above range for $g$ such that UCR is under 0.16 on CIFAR-10 and 0.03 on other datasets.
We add the experiments **without controlling UCR in Experiment (1) of GR1** (Table 7 in PDF) under the same setting with the main paper
The model is trained from scratch using LDAM [r1]. UCR is not controlled. We then use the total under coverage gap (UCG, $\downarrow$ better) between class conditional coverage and target coverage $1-\alpha$ of all under covered classes.
We choose UCG as the fine-grained metric to differentiate the coverage performance in our experiment setting. Conditioned on similar APSS of all methods, RC3P significantly outperforms the best baselines with 35.18% reduction in UCG on average.
[r1] Fontana et al., 2023. Conformal prediction: a unified review of theory and new challenges
[r2] Vladimir Vovk, 2012. Conditional validity of inductive conformal predictors
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed rebuttal. I acknowledge that I have read it and it addresses most of my concerns. I'll maintain my score for now. | Rebuttal 1:
Rebuttal: We thank all reviewers for their constructive feedback. Below we provide a summary of our rebuttal for key questions from reviewers’ as a global response.
\
**GR1. We added 7 new experiments for all CP baselines and RC3P** (We train the model from scratch in imbalanced settings by [r1])
**(1) Balanced experiment on diverse datasets (CIFAR-100, Places365, iNaturalist, ImageNet)** (Table 1 in PDF), as suggested by Reviewer Bzsm.
In the table, RC3P significantly outperforms the best baseline with $32.826\\%$ reduction in APSS ($\downarrow$ better) on average, with $10.539\\%$ on iNaturalist and $59.393\\%$ on ImageNet. We get similar improvements with APS and THR scoring functions.
**(2) Large-scale imbalanced classification experiment** (Table 2 in PDF), as suggested by Reviewer Bzsm and 2LUb.
RC3P significantly outperforms the best baseline with $28.523\\%$ reduction in APSS ($\downarrow$ better) on average.
**(3) Various sizes for the calibration sets** (Table 3 in PDF), as suggested by Reviewer Bzsm.
RC3P significantly outperforms the best baseline with $29.68\\%$ reduction in APSS ($\downarrow$ better) on mini-ImageNet. Similar improvements of RC3P can be found on other datasets.
**(4) Calibration with the THR scoring function** [r2] (Table 4 in PDF) for baselines and RC3P, as suggested by reviewer Bzsm.
RC3P significantly outperforms best baselines with $26.9\\%$ on mini-ImageNet reduction compared with the best baseline. Similar improvements of RC3P can be found on other datasets.
**(5) Worst class-conditional coverage (WCCC) as a new metric under the same setting from Table 16 in the Appendix** (Table 5 in PDF) on imbalanced mini-ImageNet, as suggested by Reviewer vcMw.
In WCCC, RC3P significantly outperforms CCP and Cluster-CP with $4.29\\%$ improvement. Similar improvements of RC3P can be found on other datasets.
**(6) Class-wise score distribution as a concrete example of the explanation in Section 4.1** (Table 6 in PDF), as suggested by Reviewer vcMw.
**(7) Comparison without controlling UCR under the same setting with the main paper** (Table 7 in PDF), as suggested by Reviewer yaTg and Bzsm
We then use the total under coverage gap (UCG, $\downarrow$ better) as fine-grained metric for coverage.
Conditioned on similar APSS of all methods, RC3P significantly outperforms the best baselines with 35.18% reduction in UCG on average.
\
**GR2. Theoretical significance of Lemma 4.2 and Theorem 4.3?**
We highlight that **Lemma 4.2** is not our main intellectual contribution for the improved predictive efficiency of RC3P. Instead, it showcases a condition number $\sigma_y$ that **parametrizes** whether a target event $\mathcal E$ “the predictive efficiency is improved by RC3P’’ happens or not, i.e., $\sigma_y \leq 1 \Rightarrow \mathcal E$.
Then **Theorem 4.3** further analyzes the **transition** from the parameter $\epsilon_y^{\widehat k(y)}$ of RC3P to the parameter $\sigma_y$ of the target event to indicate how to configure RC3P to improve predictive efficiency. Specifically, as pointed out in its Remark, if we set $\alpha - \epsilon_y^{\widehat k(y)}$ as small as possible but still guarantee $\epsilon_y^{\widehat k(y)} < \alpha$ (required by the model-agnostic coverage, Theorem 4.1) in RC3P, then we have a higher probability to guarantee $\sigma_y \leq 1$. Therefore, the overall **transition** of parameters is expressed by:
$$\text{Setting RC3P with (i) } \epsilon_y^{\widehat k(y)} < \alpha, \text{and (ii) } \epsilon_y^{\widehat k(y)} \text{ as small as possible } ~~~ \Longrightarrow ~~~ \sigma_y \leq 1 ~~~ \Longrightarrow ~~~ \mathcal E,$$
which is equivalent to (7) in Remark and helps us configure RC3P to most likely improve the predictive efficiency over baseline CCP method.
\
**GR3. Why control UCR? How to control UCR?**
Why: While UCR is a coverage metric, fixing it to a certain value (e.g., 0) across methods allows us to ensure fair comparisons to get valid conclusions about the predictive efficiency of all CP methods.
Because coverage and predictive efficiency are two competing metrics in CP, e.g., achieving better coverage (resp. predictive efficiency) degenerates predictive efficiency (resp. coverage).
If we do not fix one of them, then we cannot conclude which CP method has better predictive efficiency by comparison. We add the experiments on balanced CIFAR-100 datasets without controlling the UCR. See the table below.
|Methods|CCP|Cluster-CP|RC3P|
|-|-|-|-|
|UCR|0.444|0.446|0.228|
|APSS|11.376|9.7865|10.712|
As shown, RC3P achieves the best coverage metric (UCR), while Cluster-CP achieves the best metric for prediction set sizes (APSS), so this comparison does not conclude which one has higher predictive efficiency. Therefore, we fix UCR to first guarantee the class-conditional coverage, conditioned on which we can perform a meaningful comparison of predictive efficiency. This strategy is also discussed in [r3].
How:
We add an inflation quantity to the nominal coverage, i.e., $1-\alpha+g/\sqrt{n_y}$, where $g \in \\{0.25,0.5,0.75,1\\}$ is a tunable hyper-parameter and $n_y$ represents the number of calibration samples in each class $y$ (for CCP and RC3P) or each cluster (for Cluster CP). The structure $g/\sqrt{n_y}$ follows the format of generalization error when setting the empirical quantile, as in [r4]. We select the minimal value from the above range for $g$ such that UCR is under 0.16 on CIFAR-10 and 0.03 on other datasets due to the variability of data.
\
[r1] Cao et al, 2019. Learning imbalanced datasets with label-distribution-aware margin loss
[r2] Sadinle et al., 2019. Least ambiguous set-valued classifiers with bounded error levels
[r3] Fontana et al., 2023. Conformal prediction: a unified review of theory and new challenges
[r4] Vladimir Vovk, 2012. Conditional validity of inductive conformal predictors
Pdf: /pdf/2dff1090a8187042f26f5f7d530bdbb14c3eae20.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
OPERA: Automatic Offline Policy Evaluation with Re-weighted Aggregates of Multiple Estimators | Accept (poster) | Summary: The paper addresses the challenge of evaluating new sequential decision-making policies using OPE techniques. The authors propose a new algorithm, OPERA, which adaptively blends multiple OPE estimators to improve the accuracy of policy performance estimates without relying on explicit selection methods. This approach leverages bootstrapping to estimate the mean squared error of different estimator weightings and optimizes it as a convex problem. The proposed method is shown to be consistent and demonstrates superior performance in selecting higher-performing policies in healthcare and robotics compared to existing approaches. This contributes to a more general-purpose, estimator-agnostic framework for offline reinforcement learning.
Strengths: The paper formulates an interesting problem of how to aggregate different OPE estimators to achieve better(lower) MSE in a data-driven way
The paper provides interesting empirical investigations regarding the tuned alpha in Figure 1, which is interesting and helpful to understand how the method works
The paper provides promising empirical results on both contextual bandit and reinforcement learning environments, showing that, by coming multiple estimators, we can achieve better estimations in a range of situations
Weaknesses: There still remains an important question of how to construct an appropriate set of estimators before performing the proposed ensemble algorithm
I still do not get an intuition of if the proposed method solves fundamentally an easier problem than standard OPE. I mean, if we know the MSE, then it is really straightforward that the proposed algorithm works well, but we need to estimate it somehow, which includes the true policy value, which is the estimand of OPE. Therefore, intuitively, MSE estimation has the same difficulty as OPE, but the proposed algorithm works better than OPE. I’d like to know in what sense the proposed algorithm makes OPE easier.
In theorem 1, it seems that the error rate of \alpha estimation is given, which is not guaranteed, if my understanding is correct.
Technical Quality: 4
Clarity: 3
Questions for Authors: How should we construct the set of estimators to perform the proposed algorithm? Can the authors propose any general guideline?
What would be the intuition of the better effectiveness of the proposed method against standard OPE? Some reasonable accuracy of the proposed method means that we can estimate the MSE somewhat accurately, meaning that we can do OPE similarly accurately, in my intuition.
In theorem 1, the error rate of \alpha estimation is not guaranteed, right?
When combining different estimators, can we consider more complex functions such as polynomials or more general function classes rather than just linear combinations as done in the paper?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful review.
> How should we construct the set of estimators to perform the proposed algorithm? Can the authors propose any general guideline?
Thank you for the suggestion. We are adding guidelines as a section in the appendix, and we summarize them here:
1. The estimators should be selected based on the properties of the domain well. For example, if the task horizon is short, then IS estimators would be great, but if the task horizon is long, we should not include IS estimators.
2. If there are estimators known to under or over-estimate the policy performance, then selecting a balanced set of such estimators can allow OPERA to cancel out the bias of these estimators (see the discussion in Sec 4.1 Interpretability).
3. Choosing a reasonable number of estimators (starting with K=3) and only include more when the dataset size grows.
> What would be the intuition of the better effectiveness of the proposed method against standard OPE?
Thank you for the good question. The reviewer is correct that OPERA uses estimates of the MSE, which can seem intuitively to be the same hardness as OPE estimation. However, OPERA fundamentally is an ensemble method (similar to stacking in statistics) that combines multiple estimators. For OPERA to offer an improvement over some of the input OPE estimators, we do not have to have a perfect estimate of their MSE, but a good enough estimate that we can combine across them. Note that OPERA with an estimated MSE per estimator is not guaranteed to be as good as picking the (unknown) estimator with the true best MSE. However, this estimator is unknown because, as the reviewer notes, we do not know the true MSE. We find empirically that the bootstrap estimates provide a reasonable enough approximation that we can find a decent weighting using OPERA that yields an estimator that often (see e.g. Table 1) improves over all individual input estimators, as Figure 1 (and the last part of Section 4) discusses should be possible in some cases given the true (unknown) MSE. We are happy to add additional discussion around this point in the paper. We also note that we think further work on better MSE estimation could further improve OPERA.
> In theorem 1, the error rate of \alpha estimation is not guaranteed, right?
The reviewer is correct, and we will make sure this is clear. The MSE of OPERA has a factor of $\lambda$.
> When combining different estimators, can we consider more complex functions such as polynomials or more general function classes rather than just linear combinations as done in the paper?
Yes, we completely agree this would be an interesting direction for future work. We choose to focus initially on a linearly weighted estimator, which allows us to decompose OPERA’s MSE as a function of the underlying estimators’ MSE: see the derivation in Remark 1. This also allowed the optimization objective to be a convex quadratic program. Other choices of combination might result in different non-convex optimization objectives which, depending on the solution techniques, might introduce additional approximations in OPERA’s policy value estimate. Still, we agree more complicated functions would be worth investigating since they give additional modeling flexibility.
Did this answer your questions? We are also happy to answer more questions if they arise.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for the useful clarification. I will maintain my positive assessment. | Summary: The paper "OPERA: Automatic Offline Policy Evaluation with Re-weighted Aggregates of Multiple Estimators" deals with the challenge of evaluating new decision-making policies using past data, which is vital in areas like healthcare and education where mistakes can be costly.
The authors introduce OPERA, a new algorithm that improves the accuracy of policy evaluation by combining several existing estimators instead of relying on just one. This approach makes the evaluations more reliable.
They provide a solid theoretical basis for OPERA, showing that it effectively minimizes errors and meets key evaluation criteria. A standout feature is using bootstrapping to estimate the mean squared error (MSE) of each estimator, which helps in finding the best way to combine them.
The paper demonstrates OPERA's effectiveness through tests in various fields, such as healthcare and robotics. It consistently outperforms other methods by selecting better policies and achieving lower errors.
One of the key contributions is OPERA's flexibility. It can be used in many applications without needing strict conditions on the input estimators, making it a versatile tool for different offline reinforcement learning tasks.
In summary, the paper presents a new method for policy evaluation that combines multiple estimators to enhance accuracy and reliability, backed by strong theoretical and practical evidence.
Strengths: Originality:
This paper brings an innovative approach to offline policy evaluation with the OPERA algorithm. Instead of relying on a single estimator, OPERA cleverly combines multiple estimators to improve accuracy. This innovation taps into the strengths of different estimators, solving a major limitation of existing methods. The creative use of bootstrapping to estimate mean squared error (MSE) adds to the algorithm's robustness, making it a standout contribution.
Quality:
The quality of this paper is excellent. The authors do a fantastic job of defining the problem of offline policy evaluation and explaining why current methods fall short. They clearly describe their proposed solution, the OPERA algorithm, and walk the reader through each step in a logical and detailed manner.
The theoretical groundwork is solid. The authors back up their claims with rigorous proofs, showing that OPERA effectively minimizes mean squared error (MSE) and meets important criteria for policy evaluation. They also anticipate potential issues with existing methods, like bias and variance, and introduce innovative solutions to tackle these challenges using bootstrapping.
The empirical validation is thorough and convincing. The authors test OPERA across various fields, including healthcare (like sepsis treatment) and robotics, demonstrating its effectiveness. They detail their experimental setup, the datasets used, and the baseline methods for comparison. The results are clear: OPERA consistently outperforms other methods by selecting better policies and achieving lower errors, which strongly supports the practical value of their algorithm.
Clarity:
The paper is well-structured and clearly written. The flow is logical and easy to follow, from problem definition to solution proposal and validation.
Significance:
This paper makes a significant impact in offline reinforcement learning and policy evaluation. By combining multiple estimators, the authors solve a crucial problem, especially in critical fields like healthcare and education. The results show that OPERA consistently outperforms existing methods, proving its value for both researchers and practical applications.
Weaknesses: While the theoretical and empirical aspects are well-covered, the paper could really use a more detailed discussion on how to implement OPERA in practice. Offering guidelines or best practices for using OPERA in different situations would be very helpful for practitioners.
Technical Quality: 4
Clarity: 4
Questions for Authors: The paper primarily compares OPERA with traditional methods. Have you considered comparing OPERA with more recent state-of-the-art models? Including such comparisons could provide a clearer picture of OPERA's competitive edge.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors have done a great job addressing the limitations of their work. They recognize the challenges of combining multiple estimators and provide a solid theoretical foundation to back up their approach. They also discuss using bootstrapping to tackle issues like bias and variance, which is a key part of their method.
However, it would be helpful to add a separate section that dives into potential edge cases and limitations in more detail. This could cover situations where OPERA might have difficulties, such as dealing with very noisy or incomplete data, and practical challenges users might face during implementation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful review.
> the paper could really use a more detailed discussion on how to implement OPERA in practice. Offering guidelines or best practices for using OPERA in different situations would be very helpful for practitioners.
Thank you for the suggestion. We are adding guidelines as a section in the appendix, and we summarize them here:
1. The estimators should be selected based on the properties of the domain well. For example, if the task horizon is short, then IS estimators would be great; but if the task horizon is long, we should not include IS estimators.
2. If there are estimators known to under or over-estimate the policy performance, then selecting a balanced set of such estimators can allow OPERA to cancel out the bias of these estimators (see the discussion in Sec 4.1 Interpretability).
3. Choosing a reasonable number of estimators (starting with K=3) and only include more when the dataset size grows.
> The paper primarily compares OPERA with traditional methods. Have you considered comparing OPERA with more recent state-of-the-art models?
Thank you for the suggestion. We do compare with methods such as SLOPE [1] and BVFT [2] which to our knowledge are state of the art. On the contextual bandit domain, we show that OPERA outperforms SLOPE across 180 conditions when the dataset size is larger than 300. For more realistic robotic control tasks (D4RL), OPERA outperforms BVFT on three tasks: Hopper, HalfCheetah, and Walker2d. If there are other additional algorithms the reviewer was thinking of, please let us know as we’d be happy to look into these.
[1] Su, Y., Srinath, P., and Krishnamurthy, A. (2020). Adaptive estimator selection for off-policy evaluation. In International Conference on Machine Learning, pages 9196–9205. PMLR.
[2] Xie, T. and Jiang, N. (2021). Batch value-function approximation with only realizability. In International Conference on Machine Learning, pages 11404–11413. PMLR.
Thank you for the review. Did this answer your questions? We are also happy to answer more questions if they arise.
---
Rebuttal Comment 1.1:
Comment: I have read the Rebuttal, thanks for answering my question. | Summary: The authors propose a novel offline policy evaluation algorithm, that linearly blends the estimates from many OPE estimators to produce a combined estimate that achieves a lower MSE.
Strengths: The paper is very well written, complete, and easy to read.
The experiments are well executed, the methods are evaluated in numerous different domains and compared against other relevant baselines in each of them.
I specifically like the discussion on different MSE estimation strategies in Section 6, which shows that the authors really thought about the problem and various choices made in their method.
Weaknesses: The authors propose to estimate MSE of \hat{V}_i by estimating \pi using bootstrapped D_n and calculating the squared error from an estimate of \pi done by \hat{V}_i using the original D_n dataset. I think this underestimates bias. For example, if \hat{V}_i returns a constant for any dataset and policy, then its MSE would be 0, and I assume \hat{\alpha}_i would be 1, meaning the ensemble reward would correspond to this constant. I appreciate the variants of OPERA presented in 6.2 that partially address this.
Minor:
* [L114] D_n is defined both as 1 to n and 0 to n
* [L125] \theta_* is undefined
* [L136] Eq. 4 is referred although it is not labeled when defined (and many others)
* [L169] \theta_* is still undefined
* [L189] The word equation is repeated twice, I recommend using the LaTeX package cleveref
Technical Quality: 3
Clarity: 4
Questions for Authors: Considering the first paragraph in weaknesses, can you discuss whether OPERA systematically favors biased estimators?
Recent work of Cief et al. (2024) provides a new way of estimating MSE, which can potentially improve OPERA. As this was published after the NeurIPS deadline, I do not consider it an issue.
Cief, Matej, Michal Kompan, and Branislav Kveton. “Cross-Validated Off-Policy Evaluation.” arXiv, May 24, 2024. http://arxiv.org/abs/2405.15332.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors provide a thorough discussion on the limitations in Section 6 and Appendix A.7.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful review and pointing out minor errors. We have corrected the inconsistent symbols and clerical mistakes. Much appreciated!
> The authors propose to estimate MSE of \hat{V}_i by estimating \pi using bootstrapped D_n and calculating the squared error from an estimate of \pi done by \hat{V}_i using the original D_n dataset. I think this underestimates bias. For example, if \hat{V}_i returns a constant for any dataset and policy, then its MSE would be 0… I appreciate the variants of OPERA presented in 6.2 that partially address this.
> …Considering the first paragraph in weaknesses, can you discuss whether OPERA systematically favors biased estimators?
Thank you for asking about this. We don’t believe that OPERA systematically favors biased estimators— for example, if there is an estimator which always returns -10^6 or 10^6 with 50/50 probability, it would have a high variance (in addition to being biased) and OPERA would likely place low weight on this estimator. We do completely agree that the method (prior to 6.2) can underestimate the bias. Indeed, using Bootstrap to estimate MSE reliably assumes that the estimator will approach the true target asymptomatically. Therefore, an estimator that only produces a constant value invalidates the Bootstrap procedure, and its MSE cannot be reliably estimated with Bootstrap. One way to address this is by using estimators that have known convergence guarantees.
We agree that empirically estimators that have very low variance but are potentially very biased (i.e., model-based estimators or FQEs) might be favored by OPERA. As the reviewer notes, this helped motivate the modifications we present in Section 6.2, where we use a consistent estimator as the centering variable/estimator.
We will expand our discussion of these issues in the paper.
> Recent work of Cief et al. (2024) provides a new way of estimating MSE, which can potentially improve OPERA. As this was published after the NeurIPS deadline, I do not consider it an issue.
Thank you for suggesting this paper– we look forward to going through it in more detail, though, as the reviewer notes, it was released after the deadline.
Thank you for the review. Did this answer your questions? We are also happy to answer more questions if they arise.
---
Rebuttal Comment 1.1:
Comment: Thank you for answering my question. After reviewing other reviews and their discussions, I decided to keep my score and vote for the paper's acceptance. | Summary: The paper introduces OPERA, an algorithm for offline policy evaluation (OPE) in reinforcement learning (RL). OPERA addresses the challenge of selecting the best OPE estimator by adaptively combining multiple estimators using a statistical procedure that optimizes their weights to minimize mean squared error (MSE). The authors prove OPERA's theoretical consistency and desirable properties, ensuring it is at least as accurate as any individual estimator. The algorithm employs a bootstrapping method to estimate MSE, circumventing the need for direct access to true performance measures.
Strengths: 1. The paper presents a novel approach to offline policy evaluation by introducing OPERA, a method that adaptively blends multiple OPE estimators using a statistically optimized weighting mechanism. This approach is innovative as it leverages the strengths of various estimators without requiring explicit selection. The use of bootstrapping to estimate mean squared error (MSE) and optimize the estimator weights is a simple yet creative combination of existing statistical techniques in a new context.
2. The paper is well-written and clearly structured. The problem statement, methodology, theoretical analysis, and experimental results are presented in a logical and coherent manner. The use of mathematical formulations and proofs is precise, aiding in the clear communication of complex concepts. Additionally, the inclusion of diagrams and pseudocode for the OPERA algorithm enhances understanding and provides readers with a clear roadmap of the proposed method.
Weaknesses: **Assumption of Consistent Estimators**
The theoretical guarantees provided for OPERA rely on the assumption that at least one of the base estimators is consistent. However, in practice, this assumption may not always hold, especially in complex or noisy environments where all available estimators could be biased or inconsistent. The paper could benefit from a discussion on how OPERA performs under such conditions and whether there are ways to relax this assumption while still maintaining acceptable performance. For example, could the authors design an experiment where the state, action and/or reward are severely imbalanced, which leads to insufficient data coverage and approximation error to behaviour policy? A practical reflection can be found in a critical study of real-world ICU treatment [1].
[1] Luo, Zhiyao, et al. "Position: Reinforcement Learning in Dynamic Treatment Regimes Needs Critical Reexamination." Forty-first International Conference on Machine Learning.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1.The theoretical guarantees of OPERA rely on the assumption that at least one base estimator is consistent. How does OPERA perform when this assumption does not hold? Are there any mechanisms within OPERA to detect and handle inconsistent base estimators?
2. What practical considerations should be taken into account when implementing OPERA in real-world scenarios? Are there specific guidelines for choosing the initial set of estimators or criteria for including new estimators in the ensemble?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: This paper seems to lack a section for limitation. I recommend adding 1 short paragraph to the conclusion section to summarize the border limitations and social impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful feedback! We incorporated these discussions into the paper but respond to them individually here.
> OPERA rely on the assumption that at least one of the base estimators is consistent. However, in practice, this assumption may not always hold. The paper could benefit from a discussion on how OPERA performs under such conditions and whether there are ways to relax this assumption while still maintaining acceptable performance. For example, could the authors design an experiment where the state, action and/or reward are severely imbalanced, which leads to insufficient data coverage.
> How does OPERA perform when this assumption does not hold? Are there any mechanisms within OPERA to detect and handle inconsistent base estimators?
Thanks for raising this important issue. In general, if there is good coverage over states and actions, then including an IS estimator is sufficient to ensure that a consistent base estimator exists. However, if there is not good coverage, that will result in poor inconsistent estimators. OPERA is fundamentally an ensemble method (similar to stacking in statistics) that combines multiple estimators. As long as we can accurately estimate the MSE of the estimators in the reward/action imbalanced conditions, then OPERA will be able to improve upon base estimates.
As for mechanisms, we can introduce ideas such as KL divergence between the behavior/sampling policy and the evaluation policy, which will indicate when state/action/reward imbalance might occur. If this is the case, most OPE methods, in general, will have issues returning an accurate estimate. We would recommend collecting additional data before the evaluation or including estimators that can produce reliable OPE estimates in this case.
> What practical considerations should be taken into account when implementing OPERA in real-world scenarios? Are there specific guidelines for choosing the initial set of estimators or criteria for including new estimators in the ensemble?
Thank you for this question! We are adding guidelines as a section in the appendix, and we summarize them here:
1. The estimators should be selected based on the properties of the domain well. For example, if the task horizon is short, then IS estimators are often useful to include.
2. If there are estimators known to under or over-estimate the policy performance, then selecting a balanced set of such estimators can allow OPERA to cancel out the bias of these estimators (see the discussion in Sec 4.1 Interpretability).
3. Choosing a reasonable number of estimators (starting with K=3) and only include more when the dataset size grows.
> This paper seems to lack a section for limitation. I recommend adding 1 short paragraph to the conclusion section to summarize the border limitations and social impact.
We have added text in the conclusion to address the limitation. Thank you for the suggestion!
Thank you for the review. Did this answer your questions? We are also happy to answer more questions if they arise. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Monte Carlo Tree Search based Space Transfer for Black Box Optimization | Accept (spotlight) | Summary: This paper propose a search space transfer learning method based on Monte Carlo tree search, called MCTS-transfer, to iteratively divide, select, and optimize in a learned subspace. It can provide a well-performing search space for warm start for the target problem based on the source problems. It adaptively identify and leverage the information of similar source tasks to reconstruct the search space during the optimization process. Experiments on many situations have demonstrated the effectiveness of the algorithm.
Strengths: 1. The paper is well written, and is easily to understand.
2. The experiments are conducted extensively and demonstrate the effectiveness of the algorithm across numerous application scenarios.
3. Introducing Monte-Carlo tree search into the search space transfer problem is quite novel, and this paper modifies some operations of MCTS based on its own application scenarios to make it work better.
Weaknesses: 1. It would be beneficial to analyze the algorithm's running time and computational cost if possible.
2. In MCTS of AlphaZero, state value $v$ is used to predict the expected future reward from the current state to the end. In this paper, the evaluation of current state $p_m$, which can be treated as a summary of historical iterative path from the root to the current, is used as $v$. Can you discuss the differences between the two? What are the potential impacts to use historical $p_m$ to represent the future expected $v$?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weakness part.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please refer to the weakness part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable and constructive comments. Below please find our responses.
### Q1: Running time and computational cost analysis
Thanks for your valuable suggestions. Please refer to Q1 in general response.
### Q2: Discussion between the state value in MCTS of AlphaZero and MCTS-Transfer
Good points! Thank you very much for your interesting and insightful questions. AlphaZero [1] is designed to master complex games through self-play without relying on human knowledge or guidance. MCTS plays a crucial role in AlphaZero's decision-making process, whose state value is used to predict the expected future reward from the current state to the end, rather than the historical information in our paper. Different from that a state's future reward of AlphaZero can be obtained through multi-step simulations, i.e., alternating decisions through self-play, evaluation values in BBO can only be obtained through actual evaluations. Consequently, we utilize historical information in our paper. We have observed that some recent look-ahead BO works [2-3] have been used to predict the expected value of future steps in BBO problems, which have the potential to be applied in MCTS-Transfer as estimates for state values to further improve the performance. We will incorporate this discussion into our revised paper. Thank you for bringing this to our attention.
[1] A general reinforcement learning algorithm that masters chess, shogi and Go through self-play. Science, 2018.
[2] Practical Two-Step Look-Ahead Bayesian Optimization. NeurIPS, 2019.
[3] Accelerating Look-ahead in Bayesian Optimization: Multilevel Monte Carlo is All you Need. ICML, 2024. | Summary: This paper proposes a search space transfer learning method based on Monte Carlo tree search (MCTS) called MCTS-transfer, which aims to accelerate the optimization process in computationally expensive black-box optimization problems.
Strengths: - Originality: The integration of MCTS with search space transfer learning is a novel approach, which addresses the need for improved convergence in black-box optimization problems.
- Clarity: The paper is generally well-written and structured, making the methodology and results comprehensible.
Weaknesses: - Technical Depth: The paper lacks sufficient technical depth in explaining the underlying mechanics of MCTS-transfer. For instance, the specifics of how MCTS iteratively divides and selects subspaces are not clearly detailed.
- Experimental Rigor: The experimental validation, although covering various scenarios, does not delve deeply into comparative baselines. The choice of baselines is limited and more recent advancements in the field should be included.
- Adaptability and Scalability: There is insufficient discussion on the adaptability of MCTS-transfer to different problem domains and its scalability to very large search spaces. The potential computational overhead and limitations in such scenarios are not adequately addressed.
- Theoretical Analysis: The theoretical analysis supporting the method's efficacy is minimal. It would be better if more rigorous proofs or detailed theoretical justifications would strengthen the paper.
- Reproducibility: While the results are promising, more details should be presented to ensure reproducibility. Key implementation details, parameter settings, and the codebase are missing, which are crucial for the validation of the proposed method.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How does MCTS-transfer handle the computational overhead introduced by the MCTS component, especially in large and complex search spaces?
2. Can the authors provide more specific examples or case studies where MCTS-transfer significantly outperforms other methods?
3. What measures have been taken to ensure the reproducibility of the results? Is the code and dataset publicly available?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors should address the potential computational overhead and provide a more comprehensive discussion on the limitations of their method. Additionally, the impact of the method on different types of black-box optimization problems should be more thoroughly explored.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable and constructive comments. Below please find our responses.
### Q1: How does MCTS iteratively divides and selects subspaces?
We're sorry that we didn't make this part clear.
- How does MCTS iteratively divide subspaces? As described in Section 3.1, the division of space corresponds to the expansion of nodes. Starting from the ROOT node, if the node is splittable, we cluster the samples into two clusters and divide them apart, leading to the birth of two child nodes, where the node with better cluster is the left child node. Then, following the sequence of depth-first-search, we repeatedly try to expand the node and visit the child nodes if it's splittable, iteratively dividing the space.
- How does MCTS iteratively select subspaces? We select the subspace by the guidance of UCB. Starting from the ROOT node, if the node has two child nodes, we will choose the node with higher UCB as the next node to visit. Following the sequence, we will finally locate the target leaf node and find the subspace.
### Q2: Comparison with more advanced baselines
Thanks for your comments. However, we believe that our compared baselines are comprehensive. We include non-transfer BO (basic GP, LA-MCTS), existing search space transfer methods (Box-GP, Ellipsoid-GP, Supervised-GP), and state-of-the-art algorithm (PFN). If you have some specific suggestions of additional baselines, we are happy to discuss it.
### Q3: Discuss the adaptability of MCTS-transfer to different problem domains and its scalability to very large search spaces
Thanks for your valuable comments.
- Adaptability to different problem domains. We add three new complex real-world problems from Design-Bench to show the adaptability of MCTS-transfer to different problem domains.
- Scalability to very large search spaces. MCTS-transfer is suitable for solving large search spaces, because the subspaces with high potential can be gradually discovered and extracted, thus improving the optimization efficiency. The three new real-world problems from Design-Bench are relatively high-dimensional, among which Superconductor has 86 dimensions, Ant morphology has 60 dimensions, and D’Kitty Morphology has 56 dimensions. We believe the results can reflect the performance of MCTS-transfer in large search spaces.
Please see Q2 in general response for details.
### Q4: Lack of theoretical analysis
Thank you very much for your suggestions. We fully agree that theoretical analysis of MCTS-transfer is a very interesting topic. As also suggested by Reviewer x4CE, we provide two potential perspectives for theoretical analysis:
- One point of analysis could be the transfer efficiency in transfer learning. A possible approach is to analyze the space size covered by MCTS partitions at the optimal points [1]. The expected conclusion is that, compared to constructing LaMCTS from scratch, MCTS partitions cover optimal points more efficiently with the same number of partitions.
- Another point that can be theoretically analyzed is the regret bound. A possible approach is to use the error bounds of Gaussian process regression [2] and the characteristics of MCTS space partitions to bound the instantaneous regret at each step.
We will add this discussion into our main paper and leave it as our future work. Thank you.
[1] Multi-Objective Optimization by Learning Space Partitions. ICLR, 2022.
[2] Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design. ICML, 2010.
### Q5: Reproducibility issue
Our code has been provided in the supplemental materials. All the implementation details can be found in it. Besides, the parameter settings and data collection methods are provided in appendix A.1 and A.3, respectively.
### Q6: How does MCTS-transfer handle the computational overhead introduced by the MCTS component, especially in large and complex search spaces?
We acknowledge that building and maintaining MCTS in high-dimensional and complex search spaces will bring additional computational overhead. However, as shown in Figure 1 in the PDF file, the time of tree backpropagation and reconstruction seems minor compared to evaluation time. Additionally, if one wants to further reduce the time cost on MCTS, we can set the leaf size $\theta$ higher to reduce the tree depth, or choose a fast binary classifier such as LinearRegression to divide the space.
### Q7: Provide more specific examples or case studies where MCTS-transfer significantly outperforms other methods
MCTS-transfer is suitable for expensive BBO, especially when we are uncertain about which tasks are similar to the target task. Our method can automatically identify the most relevant tasks and give more considerations on them when constructing the subspaces. Even if none of the source tasks are considered similar, MCTS-transfer can still correct the search direction by relying more on the collected target task data just as the motivating case in section 4.1 demonstrated. Hyperparameter optimization is a specific example in practice, where we don't know any properties in advance. However, we can collect other optimization trajectories on the same domain, and MCTS-transfer will automatically identify the relevant part to accelerate optimization.
### Q8: Provide a more comprehensive discussion on the limitations of their method
Apart from lacking theory analysis and accurate task similarity measures, the limitations of our work include that it cannot handle search space transfer tasks for problems with different domains or with different dimensions. We will further improve the algorithm in this way by learning embedding space in future work. We will add this part in the new version.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed response. My concerns have been addressed. I have raised the score and confidence accordingly.
---
Reply to Comment 1.1.1:
Comment: Thanks for your feedback! We are glad to hear that your concerns have been addressed. We will make sure to include the added results and discussion in the final version. Thank you. | Summary: This paper proposes a new space transfer method for black-box optimization by using MCTS. The search space is divided by MCTS, and the data from source tasks are used to help evaluate the value of each node of the tree. The similarity between the source and target tasks is considered and adjusted dynamically. The authors performed experiments on various problems, and compared the propose method with state-of-the-art methods.
Strengths: The idea of the proposed method so-called MCTS-transfer is interesting and also natural. It extends the LA-MCTS method by using transfer learning: The data points of the source tasks are used to help evaluate the value of a node in the tree with weights related to their similarity to the target task. The similarity is updated using new sampled points from the target task and will become more accurate gradually.
In the initialization stage, the data of source tasks are used to construct a tree by clustering and binary classification. Particularly, the data points in a node are clustered into two clusters, where the cluster with better average objective value is treaded as “good” and the other cluster is “bad”. A binary classifier is then used to divide the space represented by the node into two parts. In the optimization stage, the proposed method uses MCTS to select one leaf node, and optimizes within the search space the leaf represents. The newly sampled points are further used to update the leaf node, e.g., expand the leaf.
The most interesting thing is the calculation of the potential value of each node, which is a weighted sum of objective values of sampled points from both source and target tasks. The weights consider the similarity between the source and target tasks; more similar, larger the weights. The authors calculated the weights by using the distance between the best points of the tasks, which will be updated after new data points of the target task are sampled. This makes the weights (or the similarity between the source and target tasks) adjustable during the optimization process.
The experiments are extensive. The authors compared state-of-the-art methods of transferring the search space, and some other recent related methods. The problems considered include BBOB, hyper-parameter optimization, and real-world problems. The results generally show the superior performance of MCTS-transfer. The authors also did various sensitivity analyses. The paper is overall well written, and easy to follow. The codes are provided.
Overall, I think this work can provide a good complement to the space transfer method for black-box optimization.
Weaknesses: The authors considered the number of evaluations in the experiments. This is OK. But I also want to see the running time comparison of each iteration, which will be useful in practice. It seems that the proposed method will cost more time, as it will use the procedure of Treeify to check the feasibility of the tree, i.e., whether a right child node has a larger value than the left one.
In the right subfigure of Figure 1(b), no blue line?
In the right subfigure of Figure 2(b), the PFN method is better than the proposed MCTS-transfer. I’d like to see some discussion. As MCTS-transfer can be equipped with any BO algorithm, can it achieve better results by combining advanced BO algorithms?
Lines 346-347, I cannot understand “we can still see the obvious strength in exploring the optimal solution at later stage,” can you give some explanation?
I suggest moving the pseudo-code of Algorithm 1 (Treeify) to appendix. Instead, you can include more experimental results in the main paper.
For the hyper-parameters \gamma and \alpha in equation 3 and 4, how to set them in practice?
Though the paper is overall well written, there are still some types.
-- line 310: 3 search space transfer algorithms -> three search space transfer algorithms
-- line 311: figure 1 -> Figure 1; line 322: figure 1 -> Figure 1; Please check throughout the paper.
-- line 330: The sentence “The detailed experimental results …” is redundant.
-- line 334: mcts-transfer -> MCTS-transfer
-- line 336: “equal” -> “reach”
-- line 337: “surprising performance”, I suggest using “superior performance”. The results are good, but not surprising.
-- line 339: “We test” -> “we test”
-- line 344: “in the RobotPush” -> “in RobotPush”
-- line 354: “close final and random final”?
-- line 360-361: in D -> in Appendix D; but it can -> it can
-- line 600: missing the period
-- line 621: \alpha -> and \alpha
-- line 643-645: the sentence is not well written.
-- line 641, 646: check the missing and redundant blank
-- Caption of Figure 13: Real-World Problem -> Real-World Problems
Technical Quality: 4
Clarity: 3
Questions for Authors: See weakness.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for carefully reviewing our paper and providing constructive comments, which have helped improve the work a lot. We are very glad that you appreciate our work. Below please find our responses.
### Q1: The running time comparison of each iteration
Thank you for your valuable suggestions. Please refer to Q1 in general response.
### Q2: Explain the lack of blue line in the right subfigure of Figure 1(b)
In Sphere2D, we consider mixed settings and dissimilar settings. As demonstrated in line 307-309, we use $D\_{(-5,-5)},D\_{(5,-5)},D\_{(5,5)}$ as source datasets in mixed setting, and remove the most similar task $D\_{(5,5)}$ in dissimilar setting. Figure 1(b) shows the curve of source task weight changes during the optimization process. As there are only two source tasks in dissimilar setting, only two lines are shown in right subfigure of Figure 1(b).
### Q3: In Figure 2(b), PFN is better than MCTS-transfer; Combining MCTS-transfer with advanced BO algorithms to achieve better performance.
Thanks for your comments, but there are some misunderstandings that need to be clarified. In Figure 2(b), PFN just converges faster in the early stages, but performs worse than our MCTS-transfer after about 70 iterations. Our algorithm finally achieved better results as well.
We appreciate your suggestion that combining MCTS-transfer with other advanced BO algorithms, which is an interesting idea and can further strengthen our work. According to your suggestion, we combine MCTS-transfer with PFN and compare it with other algorithms in Figure 2(b). The results can be found in the right subfigure of Figure 2 in the PDF file. We find that MCTS-transfer-PFN makes further improvements compared to MCTS-transfer-GP and PFN, which shows the versatility of MCTS-transfer. We will include this experiment in our revised paper. Thank you very much.
### Q4: Explain line 346,347
We are sorry for not expressing it clearly.
In RobotPush, MCTS-transfer has no advantage in the initial stage, but it can still effectively divide the search space and speed up optimization, as clearly shown in similar settings. In the later stage of optimization in similar setting, the speed of MCTS-transfer for finding the optimal solution exceeds all baselines, which may come from the efficient utilization of source task data by reasonable node potential evaluation, node expansion, and tree reconstruction.
### Q5: How to set $\gamma$ and $\alpha$ in practice?
$\gamma$ is a decay factor controlling the weight decay of source tasks, and $\alpha$ is to determine the source task ratio with high weights. We set $\gamma=0.99$ and $\alpha=0.5$ by default. Generally, if the source tasks are very relevant to the target task, i.e., very similar or important, you can set $\gamma$ and $\alpha$ higher; if the source tasks are diverse, $\alpha$ should be set lower to prevent disturbance from unimportant tasks.
### Q6: Remove treeify to appendix; Some typos in paper
Thank you very much for carefully pointing out the typos and providing your valuable suggestions on the paper organization. We will revise the paper carefully according to your suggestions.
---
Rebuttal Comment 1.1:
Comment: The responses have answered my questions and further confirmed my rating. Thank you. | Summary: The paper proposes a tree-based search space division to enable transfer across different instances of related optimisation problems.
The authors propose both a scheme to divide the search space in a hierarchical fashion based on training task samples as well as a way of weighing the resulting subspaces against each other to acquire new evaluations for the task at hand. Since the similarity between the training tasks and the current task is updated as samples are acquired, the tree rankings have to be continuously updated as well.
The derived algorithm is evaluated on both an illustrating toy example and several more challenging benchmarks. The appendix contains further ablations regarding design choices and hyperparameter selections.
Strengths: I found the paper very well-presented and easy to follow. The main ideas were clearly laid out and the overall structure made sense to me.
To my knowledge, the proposed approach is novel. Given that Bayesian Optimization is usually applied in tasks with limited evaluation budgets, search space transfer seems like a promising avenue of allocating these limited budgets more efficiently. It also circumvents the scaling issues that approaches based on synthetic data points have.
The empirical evaluation is sound and there are detailed ablation studies supporting the authors claims.
Weaknesses: Arguable the biggest weakness of the paper is the lack of theoretical analysis of the provided algorithm. However, given the depth of the empirical evaluation, I believe this can be left as future work.
I also feel that a comment on the runtime of the proposed framework would be helpful. MCTS-transfer requires the (re-)construction and update of an entire search tree as well as the training of several classifiers.
Beyond this, some minor points for consideration are:
- From the description it is unclear, when the search space clustering (i.e. the subspace classifier) is updated. Does this only happen during tree reconstruction or during back-propagation as well?
- I would not name LunarLander, RobotPush, and Rover as real-world problems. While the search spaces are higher dimensional, the problems themselves are relatively simple
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. How often is reconstruction of the search tree required? Would it be possible to add an analysis of how frequently the tree has to be rebuild as this presumably is a large factor in the runtime of the algorithm itself?
2. Could the authors comment on how the search-space division is done for conditional search spaces (e.g. in the hyperparameter optimization settings)?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors point towards future work and more accurate task similarity measures as future work but beyond this do not discuss the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for carefully reviewing our paper and providing constructive comments, which have helped improve the work a lot. Below please find our responses.
### Q1: Lack of theory
Thank you very much for your suggestions. We fully agree that theoretical analysis of MCTS-transfer is a very interesting topic. Here, we provide two potential perspectives for theoretical analysis:
- One point of analysis could be the transfer efficiency in transfer learning. A possible approach is to analyze the space size covered by MCTS partitions at the optimal points [1]. The expected conclusion is that, compared to constructing LaMCTS from scratch, MCTS partitions cover optimal points more efficiently with the same number of partitions.
- Another point that can be theoretically analyzed is the regret bound. A possible approach is to use the error bounds of Gaussian process regression [2] and the characteristics of MCTS space partitions to bound the instantaneous regret at each step.
We will add this discussion into our main paper and leave it as our future work. Thank you.
[1] Multi-Objective Optimization by Learning Space Partitions. ICLR, 2022.
[2] Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design. ICML, 2010.
### Q2: Lack of runtime analysis and reconstruction frequency analysis
Thank you for your valuable suggestions. Please refer to Q1 in general response.
### Q3: When is the search space clustering updated?
We are sorry for not making it clear. The update of tree clustering happens both in backpropagation and reconstruction stage. In backpropagation stage, after we get a new sample $(\boldsymbol{x}\_t, f(\boldsymbol{x}\_t))$ in node $m$, we will update the node status along the path from $m$ to ROOT node, including the data contained in the node, the number of visits, and whether it is splittable. To check whether the node $m$ is splittable, we need to update the clustering of the search space $\Omega\_m$. If $m$ contains more than $\theta$ samples and the samples can be clustered apart, we will expand $m$ into 2 child nodes. In reconstruction stage, we will prune the unqualified subtrees and reconstruct them. During the subtree reconstruction, the clustering of the search space will also be updated. We will revise to make it clear. Thank you.
### Q4: The real-world problems are relatively simple
Thanks for pointing this out. We will rename these problems as non-synthetic tasks. To further demonstrate the performance of MCTS-transfer in complex real-world problems, we add three new problems from Design-Bench [1]. The results still show the superiority of MCTS-transfer. Please refer to Q2 in general response.
[1] Design-Bench: Benchmarks for Data-Driven Offline Model-Based Optimization. ICML, 2022
### Q5: How to do the search-space division for conditional search spaces?
Thanks for your question. To apply our method to the conditional search space, we may apply the following process. For conditional optimization, we consider the problem $\min \_{\boldsymbol{x} \in \mathcal{X} \subset \mathbb{R}^{d}} f(\boldsymbol{x})$. Specifically, the search space is tree-structured, formulated as $\mathcal{T}=\\{V,E\\}$, where $v\in V$ is a node representing subspace and $e\in E$ is an edge representing condition. The objective function is also defined based on $\mathcal{T}$, formulated as $f\_{\mathcal{T}}(\boldsymbol{x}):=f\_{p\_{j},\mathcal{T}}(\boldsymbol{x}|{l\_{j}})$, where $p\_j$ is a condition and $\boldsymbol{x}|l\_j$
is the restriction of $\boldsymbol{x}$ to $l\_j$ [1]. In the pre-learning stage, it builds subtrees for each $v\in V$ and generates the MCTS model $\mathcal{T'}$ based on $\mathcal{T}$. In each iteration, followed by UCB value, it finds the target node $m$ located in the subtree of $v$ with condition $p\_i$, optimizes in $\Omega\_m$, selects and evaluates the candidate using $f\_{p\_{i},\mathcal{T}}(\boldsymbol{x}|{l\_{i}})$. After that, it updates the task weights and node potential in the whole tree $\mathcal{T'}$ and tries to reconstruct the tree. Note that the tree reconstruction only happens in each subtree of $v\in V$. We will revise to add some discussion.
[1] Additive Tree-Structured Covariance Function for Conditional Parameter Spaces in Bayesian Optimization. AISTATS, 2020.
### Q6: The limitations of the work
Thanks for your suggestions. Apart from lacking theoretical analysis and accurate task similarity measures, the limitations of our work include that it cannot handle search space transfer tasks for problems with different domains or with different dimensions. We will further improve the algorithm in this way by learning embedding space in future work. We will add this to our revised paper. Thank you.
---
Rebuttal 2:
Comment: I thank the authors for their detailed response which have answered my questions and confirmed my rating. | Rebuttal 1:
Rebuttal: We are very grateful to the reviewers for carefully reviewing our paper and providing constructive comments and suggestions. Our response to individual reviewers can be found in the personal replies, but we also would like to make a brief summary of revisions about writing, discussion, and experiments for your convenience.
Writing:
- We have revised some typos and improved some expressions.
Discussions:
- We discuss the theoretical analysis of MCTS-Transfer, according to the suggestions of Reviewers x4CE and jvcN.
- We discuss how to apply MCTS-Transfer to the conditional search space, according to the suggestions of Reviewer x4CE.
- We discuss the differences in MCTS between AlphaZero and MCTS-transfer, according to the suggestions of Reviewer Tjsz.
Experiments:
- We analyze the runtime of MCTS-transfer, according to the suggestions of Reviewers x4CE, ygQe, and Tjsz.
- We add three new real-world problems from Design-Bench to compare the performance of algorithms in complex and high-dimensional real-world problems, according to the suggestions of Reviewers x4CE and jvcN.
- We equip MCTS-transfer with PFN on mixed real-world problems, demonstrating the versatility of MCTS-transfer combining with other advanced BO algorithms, according to the suggestions of Reviewer ygQe.
For the important questions many reviewers concerned about, we make general responses here.
### Q1 Runtime analysis
We conduct a comprehensive analysis of the runtime proportion of each component of MCTS-transfer and quantify the subtree reconstruction times on BBOB benchmark and three non-synthetic tasks (i.e., LunarLander, RobotPush, and Rover). We divide MCTS-VS into three main components: evaluation, backpropogation and reconstruction.
- The evaluation component includes the time required for surrogate model fitting, candidate solution selection and evaluation. This component is common to all the compared optimization algorithms.
- Backpropagation and reconstruction constitute the two principal modules specific to MCTS-transfer.
As illustrated in Figure 1 of the PDF file, the additional computational burden introduced by MCTS-Transfer (i.e., backpropagation and reconstruction) represents a relatively minor fraction of the total runtime, particularly in the three non-synthetic scenarios. These scenarios precisely exemplify the computationally intensive cases that transfer BO is designed to address, wherein MCTS-Transfer demonstrates small additional computational overhead.
Furthermore, our result reveals that the average frequency of tree reconstructions is low, with the corresponding reconstruction time being almost negligible when compared to the evaluation time. We will include this discussion in our revised paper. Thank you.
### Q2 More problem domains/complex real-world problems
To further verify the performance of MCTS-transfer in complex real-world problems, we add the following three problems from Design-Bench [1], which is a suite of diverse and realistic tasks derived from real-world optimization problems.
- Superconductor: critical temperature maximization for superconducting materials. This task is taken from the domain of materials science, where the goal is to design the chemical formula for a superconducting material that has a high critical temperature. The search space is a continuous space with 86 dimensions.
- Ant morphology: robot morphology optimization. The goal is to optimize the morphological structure of Ant from OpenAI Gym to make this quadruped robot to run as fast as possible. The search space is a continuous space with 60 dimensions.
- D’Kitty Morphology: robot morphology optimization. The goal is to optimize the morphology of D’Kitty robot to navigate the robot to a fixed location. The search space is a continuous space with 56 dimensions.
The data collection methods are consistent with existing real-world tasks in our paper, and the parameters remain unchanged. The results shown in the left sub-figure of Figure 2 in the PDF validate the significant advantages of MCTS-transfer in these complex, high-dimensional, and realistic tasks.
[1] Design-Bench: Benchmarks for Data-Driven Offline Model-Based Optimization. ICML, 2022.
**We will include all these results into the revision of our paper, and will revise the paper carefully according to your comments and suggestions. We hope that our response has addressed your concerns, but if we missed anything please let us know.**
Pdf: /pdf/819fa02d4b34f6cd839ef2c0fcb48538332658c0.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Simplifying Constraint Inference with Inverse Reinforcement Learning | Accept (poster) | Summary: This paper proposes a way to reduce the tri-level stucture of ICRL to bi-level, and uses solid experiment results to validate that this bi-level reformulation achieves better expirical results. The authors also intuitively explain that this is due to the fact that the tri-level optimization has a more complicated optimization landscape. In general, this paper clearly delivers its idea, and is easy to follow. However, the major weakness is that this simplification of tri-level to bi-level is trivial, and can be further exploited.
Strengths: This paper provides solid empirical results to evaluate its idea. This paper answers a fundamental question for ICRL, i.e., there is no fundamental difference between IRL and ICRL. In other words, we can use IRL algorithms to solve ICRL problems.
Weaknesses: The main weakness lies in the core contribution, i.e., the simplification of tri-level to bi-level formulation. This simplication comes from the observation that we can always learn a cost function $c'$ that captures both the dual variable $\lambda$ and the original cost function $c$, i.e., $c'=\lambda c$. Therefore, we can learn a single cost function $c'$ to replace $\lambda c$.
This idea is correct, and I agree that learning a single cost function $c$ is enough. However, this idea is trivial and can be futher exploited. In fact, I have seen similar trick in an ICRL literature [1]. In reference [1], the authors also remove the dual variable $\lambda$ and only learn a single cost function $c$. Indeed, the reference [1] does not highlight this modification as their novelty, so that it is totally fine for this paper to highlight this trick as a core contribution.
However, this "c'=\lambda c" idea is more like an engineering trick, and more contribution is needed to support this trick. For example, as mentioned in the paper, compared to the tri-level formulation, the bi-level reduction has a simpler optimization landscape and thus is expected to have better optimization result. It will be great if the authors can theoretically support this claim, i.e., theoretically prove that this bi-level formulation is easier to achieve better result than the tri-level formulation.
[1] Liu \& Zhu, "Meta inverse constrained reinforcement learning: convergence guarantee and generalization analysis".
Technical Quality: 2
Clarity: 2
Questions for Authors: Please see weakness.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The limitation is discussed in the paper and is reasonable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your thoughtful consideration of our paper. We understand that your primary concern with the paper is that our principal claim is trivial. While we understand that the derivation is relatively straightforward, we do not think the reduction is necessarily trivial, as evidenced by the number of peer-reviewed papers which propose complex methods for performing ICRL (Malik et al 2021, Liu et al 2023, Kim et al 2023).. Hence, our work would benefit the community by providing a reference and experimental justification for bypassing more complicated ICRL methods in favor of IRL methods for constraint inference.
Regarding your suggestion, “It will be great if the authors can theoretically support this claim, i.e., theoretically prove that this bi-level formulation is easier to achieve better result than the tri-level formulation” - we believe that this is outside the scope of this paper to prove this generally. However, we note that there is already evidence for this in the literature. In particular, GANs are notoriously difficult to train due to the dynamics induced by the adversarial game (Goodfellow et al 2014). Generally, iterates of gradient descent do not converge to saddle points (Freund and Schapire 1997). Some solutions exist for bi-level optimizations / two-player games (Rakhlin and Sridharan 2012, Moskovitz et al. 2023), but we are unaware of solutions to the tri-level optimization problem.
Generative Adversarial Networks, Goodfellow et al. 2014, Training GANs with Optimism, Daskalakis et al. 2017
A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting, Freund and Schapire 1997
Online Learning with Predictable Sequences, Rakhlin and Sridharan 2012
ReLOAD: Reinforcement Learning with Optimistic Ascent-Descent for Last-Iterate Convergence in Constrained MDPs, Moskovitz et al. 2023
G. Liu, Y. Luo, A. Gaurav, K. Rezaee, and P. Poupart. Benchmarking constraint inference in inverse reinforcement learning. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023.
S. Malik, U. Anwar, A. Aghasi, and A. Ahmed. Inverse constrained reinforcement learning. In
International conference on machine learning, pages 7390–7399. PMLR, 2021
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I will keep the current rating.
---
Reply to Comment 1.1.1:
Title: Continued discussion
Comment: Dear Reviewer,
Thank you for your continued discussion of our work and assistance in improving the research. We believe our paper updates and responses have addressed your concerns for the paper. If not please describe why our response is not sufficient and we would be happy to make further improvements. We look forward to further discussion. | Summary: The paper proposes a novel inverse constraint learning approach that leverages inverse reinforcement learning (IRL). Prior work by Kim et al. and Malik et al. proposed a game-theoretic approach to the constraint learning problem, where the resulting optimization problem is a tri-level optimization problem involving the policy, the constraint function, and the Lagrange multiplier. The algorithm is an iterated optimization that alternates between constrained RL and constraint function learning. In contrast, the present work leverages the equivalence between the original problem and a simpler IRL problem, which was overlooked in prior work. This reduction removes the Lagrange multiplier from the decision variables and enables the use of an existing IRL algorithm for the purpose of simultaneous constraint learning and constrained policy optimization. Simulation results in the Mujoco environment suggest the feasibility of the IRL approach.
Strengths: * The present work provides a key insight that the inverse constraint learning problem of Kim et al. is equivalent to inverse reinforcement learning, which was overlooked in the prior work. Although the equivalence requires an additional assumption that the set of constraint functions must be closed under multiplication by positive scalars, we can apply game-theoretic IRL algorithms to solve the constraint learning problem as long as we conform to the constraint function class.
Weaknesses: Although the present work provides an interesting insight into constraint learning problems in RL, the paper in its current form possesses multiple issues.
### Major Issues
The most concerning issue is the inconclusive and inconsistent nature of the analysis of experimental results, as detailed below.
* The feasible rewards and the violation rate are two competing evaluation metrics employed in the paper, which are equally important considering the natural trade-off between performance and safety. Nevertheless, Figure 1 only focuses primarily on the feasible rewards to compare different methods; most notably, it reports information on the violation rate only when it is worse than the proposed vanilla IRL method. Such a partial reporting can easily induce bias in the interpretation of the results, since it effectively ignores circumstances where the violation rate is improved for baselines. The authors should fully report both the feasible rewards and the violation rate, and investigate whether any method pareto-dominates the others in the two evaluation criteria. No definitive conclusion can be drawn without this analysis.
* No detailed analysis of Figure 2 is presented in Section 5.1. In fact, the authors did not reference Figure 2 in the main body of Section 5, and the caption does not provide any analysis either. By looking at Figure 2, it is not clear whether any conclusion can be drawn about the relative performance between the proposed algorithms and the two baselines (i.e. MECL and GACL).
* Speaking of the performance of MECL and GACL, It is not clear what “a best estimate” means in the footnote on page 6. In the first place, the authors should either reproduce the prior work with PPO or report the missing results as they are, with preference given to the former; they should never estimate missing values. If reproduction is impossible, I would contact the authors and ask for raw data.
* Figure 3 is incomplete. Specifically, the left plot on Feasible Reward is missing GACL 80% and GACL 50%. Similarly, the right plot on Violation Rate is missing MECL 50%.
* The paper examines whether various techniques for stabilizing constraint learning actually help the IRL approach, and provides some analysis in Figure 4 and Section 5.2. Unfortunately, its credibility is highly questionable given the small sample size (with only 3 seeds) and the fact that the error bars are largely overlapping in Figure 4. Rather than reporting standard deviation, the authors should report more appropriate uncertainty information, such as the standard error of the mean or a confidence interval. If time permits, please also conduct the experiment with more seeds, which may help separate the error bars and aid for more comprehensible results. (Note that the standard error reduces as the sample size is increased, as opposed to the standard deviation.)
* The conclusion is partly self-conflicting. Specifically, in Section 5.2 the authors mention batch normalization (BN) and reward normalization (RN). They state that “we find these modifications are generally more harmful than beneficial. Including batch normalization, reward normalization or both, … tended to hurt performance over basic IRL in a majority of environments.” On the contrary, in Section 6 the authors write “certain combinations of additional simple regularization such as batch normalization and reward normalization can produce significantly better results in several environments.”
### Minor Issues
#### Problem Formulation
* In equation (6), the Lagrange dual variable $\lambda$ should always be the outer optimization variable. I believe that it should be max min, not min max (unless the minimax theorem holds.)
* Equation (8) and its subsequent analysis are central to the development of this paper, and hence may be worth stated as a proposition or a theorem.
#### Related Work
* The authors seem to consider general Imitation Learning (IL) and IRL as two separate problems. A common view is that there are several categories in IL, of which behavioral cloning (BC) and IRL are among the most popular [1][2].
* There is one reference that is missing on Page 3, line 98.
* The authors state in Section 1 that “we would like to extract safety constraints from the data based on the expert behavior, which can then be used downstream to constrain task-specific learning.” Besides the RL approaches, there is a thread of prior work in learning-based control that considers this problem through the use of barrier functions and fully decoupling downstream task-specific learning from constraint learning (e.g. [3][4][5]), which is missing from the literature review.
[1] Zare, Maryam, Parham M. Kebria, Abbas Khosravi, and Saeid Nahavandi. "A survey of imitation learning: Algorithms, recent developments, and challenges." arXiv preprint arXiv:2309.02473 (2023).
[2] Osa, Takayuki, Joni Pajarinen, Gerhard Neumann, J. Andrew Bagnell, Pieter Abbeel, and Jan Peters. "An algorithmic perspective on imitation learning." Foundations and Trends® in Robotics 7, no. 1-2 (2018): 1-179.
[3] Robey, Alexander, Haimin Hu, Lars Lindemann, Hanwen Zhang, Dimos V. Dimarogonas, Stephen Tu, and Nikolai Matni. "Learning control barrier functions from expert demonstrations." In 2020 59th IEEE Conference on Decision and Control (CDC), pp. 3717-3724. IEEE, 2020.
[4] Lindemann, Lars, Alexander Robey, Lejun Jiang, Satyajeet Das, Stephen Tu, and Nikolai Matni. "Learning robust output control barrier functions from safe expert demonstrations." IEEE Open Journal of Control Systems (2024).
[5] Castaneda, Fernando, Haruki Nishimura, Rowan Thomas McAllister, Koushil Sreenath, and Adrien Gaidon. "In-distribution barrier functions: Self-supervised policy filters that avoid out-of-distribution states." In Learning for Dynamics and Control Conference, pp. 286-299. PMLR, 2023.
Technical Quality: 2
Clarity: 2
Questions for Authors: * In equation (4), do we need the outer optimization in $\lambda$ to be over non-negative real values, not the entire real line (because of the inequality constraint)?
* In the IRL formulation, the class of constraint functions $\mathcal{F}_c$ is more restrictive than the original formulation of Kim et al. (which is convex and compact). Can the authors comment on possible negative implications of this restriction?
* I did not fully follow the reward scaling part in Section 4.2. I wonder if the authors can elaborate on what they meant by “the constraint function is learned independently of the reward scale and hence may be more robust to different reward scales.”
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: * The limitations are properly discussed in Section 6, but the inconclusive nature of the analysis can be possibly alleviated in the revision, as discussed above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your thoughtful consideration of our paper and the helpful comments and suggestions. We would hope that the improvements we have made and highlighted in the overall response will alleviate some of your concerns, but we would like to address each of your concerns here specifically:
- “The authors should either reproduce the prior work with PPO or report the missing results as they are”
- The performance of MECL and GACL reported in Fig 2 and Fig 4 are taken directly from the tables reported in Appendix D of Liu et al. 2023. However, the results of MECL and GACL for Fig 3 were not reported in a table and hence we estimated them based off the plots in Liu et al 2023. We agree that this is insufficient and instead have rerun the author’s code to generate all baseline performance estimates. We provide some of these results in Fig 1 and 2 of the rebuttal PDF which shows IQM on final performance of our proposed modifications versus MECL.
- “Figure 1 only focuses primarily on the feasible rewards to compare different methods; most notably, it reports information on the violation rate only when it is worse”
- You are correct that Fig 1 primarily focuses on feasible rewards and only partially reports violation rate, however, we have included both violation rate and feasible rewards in full, both Fig 2 and Fig 3. We have replaced Fig 2 with the IQM plots in Fig 1 in the attached PDF, which will help make the reporting of the violation rate more clear. Additionally, we would like to point out that feasible rewards is a metric that attempts to capture both safety and performance in one metric, as feasible rewards are the rewards obtained in a trajectory only up to the first constraint violation. This is the metric that is proposed for benchmarking ICRL methods in Liu et al 2023.
- “No detailed analysis of Figure 2 is presented in Section 5.1”.
-Fig 2 is a reduced version of Fig 4 which shows only the best overall and best per-environment modifications, versus the baselines. We did this to make comparison easier, however, we recognize that this may seem redundant and be confusing. Following the suggestion of another reviewer we have moved Fig 4 to the Appendix and replaced it with Fig 2 in the PDF (IQM plots of final performance).
- “Figure 3 is incomplete”
-Thank you for pointing this out. The issue is that some of the curves are overlapping, which makes them difficult to see. We will clarify this in the revision.
- "Unfortunately, its credibility is highly questionable given the small sample size (with only 3 seeds) and the fact that the error bars are largely overlapping"
- We understand your criticism of the statistical analysis, particularly the small number of seeds and reporting of standard deviation. We hope that we have addressed these concerns in our overall response and by including additional random seeds.
- “The conclusion is partly self-conflicting” [re the impact of batch norm and reward normalization].
-Thank you for pointing out the confusing wording here. What we meant to say is that overall (across all environments) BN and RN are not generally beneficial. However, in specific environments (eg. Cheetah and Walker) we do see benefits. So they are helpful, but only on an environment-specific basis. We will make this more clear in the revision.
---
Rebuttal 2:
Title: Request for Further Clarification
Comment: Dear authors,
I sincerely appreciate your time and effort in preparing the rebuttal. I also thank that the authors have re-run the evaluation with more seeds and performed more extensive baseline comparison, as well as the statistical post-processing.
Although these new results are more interpretable and encouraging, the authors seem to have only responded to my "major issues" in this individual thread. I would highly appreciate your individual responses to my "minor issues" as well as "questions" so I can make a more informed recommendation.
Best,
Reviewer xRd9
---
Rebuttal 3:
Title: Additional responses
Comment: Dear Reviewer,
Thank you for your continued discussion of our work and assistance in improving the research. We are glad that you found our additional results more interpretable. Thank you also for pointing out that our response to your review was incomplete. While we had addressed your questions and minor issues, we had erroneously neglected to copy these into our final response here. We sincerely apologize for this oversight and address these issues below:
- “In the IRL formulation, the class of constraint functions 𝐹𝑐 is more restrictive than the original formulation of Kim et al. (which is convex and compact). Can the authors comment on possible negative implications of this restriction?”
- Thank you for this question. In fact, the class of constraint function $\mathcal{F}_c$ that we consider is less restrictive than that of Kim et al. because we do not assume compactness (a convex cone violates compactness condition). This condition is required to prove regret bounds for inverse constrained RL given by Kim et al. 2023. However, we note that in large scale applications that approximately optimize a constraint model represented by a deep neural network, one is already in the setting in which these regret bounds do not hold. We make this point more clearly in the revision.
- “I did not fully follow the reward scaling part in Section 4.2. I wonder if the authors can elaborate on what they meant”
- The reasoning for using reward scaling in this case is to constrain the rewards from a possibly unbounded range to a smaller range of values, in order to facilitate learning the constraint function. In particular, we are seeking a simple approach to constraint inference that does not require significant environment-specific hyperparameter tuning as ICRL approaches do. We hypothesized that normalizing rewards could lead to a more consistent optimization across environments with different reward scales
- "Equation (8) and its subsequent analysis are central to the development of this paper, and hence may be worth stated as a proposition or a theorem."
- Thank you for this suggestion. We hope that Theorem 1 and its proof from our general response has addressed this concern. We have included Theorem 1 in the main paper of our revision and added the proof to the Appendix.
- “In equation (6), the Lagrange dual variable 𝜆 should always be the outer optimization variable”
- We believe the order of optimization is correct in Equation (6). Please see Kim et al 2023 Eq. 3 or refer to Theorem 3.6 in Altman 1999
- “The authors seem to consider general Imitation Learning (IL) and IRL as two separate problems”.
- Indeed we do consider the two to be separate problems, although they are of course highly related, as you point out. In our opinion, the two are primarily distinguished by the recovery of a reward function or, in this case, constraint function. This is important when we consider the motivation for ICRL which we outline in the introduction, i.e. to learn a constraint function that could be transferred to new tasks to facilitate safer learning. For this reason we consider the distinction between IRL and IL to be important in this case.
- “In equation (4), do we need the outer optimization in 𝜆 to be over non-negative real values, not the entire real line (because of the inequality constraint)?”
- Yes, thank you for pointing this out. We will correct it in the revision.
- "Besides the RL approaches, there is a thread of prior work in learning-based control that considers this problem through the use of barrier functions"
- Thank you very much for pointing us towards this prior work. We agree that this would be relevant to include as related work and will do so in the final revision.
- "There is one reference that is missing on Page 3, line 98."
- Thank you for pointing this out, we will correct it in the final revision.
K. Kim, G. Swamy, Z. Liu, D. Zhao, S. Choudhury, and S. Z. Wu. Learning shared safety constraints from multi-task demonstrations. Advances in Neural Information Processing Systems, 36, 2023.
Altman, E. 1999. Constrained Markov decision processes.
We believe these comments have addressed your remaining concerns with the work. If there are any outstanding issues we would be happy to continue discussion.
---
Rebuttal 4:
Title: Response to Authors
Comment: Thank you for further clarification. Most of the concerns and questions have been resolved thanks to your detailed responses. Thus, I will raise my score. However, I still respectfully disagree with the authors on the taxonomy of IL and IRL. Following prior literature as listed in my initial review, IRL is a particular approach to formulate the broader problem of Imitation Learning (IL). Behavior Cloning (BC) is an altenative approach to IL, but IRL distinguishes itself from BC by learning a reward function in addition to the expert behavior $\pi(a | s)$.
---
Rebuttal Comment 4.1:
Comment: Dear reviewer, we agree with your characterization of IRL as a class of BC which recovers the reward function. Thank you again for your valuable suggestions which have greatly helped us to improve our paper. We are glad our additional results and response have addresed your concerns and thank you for increasing your score. We will do our best to further improve our paper. | Summary: The paper explains how inverse constrained RL (ICRL) - the task of recovering (safety) constraints from expert behaviour respecting those constraints - is equivalent to (straight) inverse RL (IRL) - the task of recovering a reward function from expert behaviour approximately optimizing that reward. This allows the ICRL problem to be solved using the wide array of techniques available in IRL. The authors then go on to empirically show superior performance of using a basic IRL algorithm on the IRL formulation relative to specialized ICRL algorithms.
Strengths: - The paper is rising an important point: ICRL has been arising as a subarea of ML in the past few years with its own methods. Showing that it's equivalent to an existing, more developed area of research (IRL), is certainly useful since it (1) unlocks the use of IRL methods in ICRL (2) puts into question the need for ICRL to exist as a separate subarea and certainly puts more burden on ICRL researchers to show that their methods indeed solve the problem better than vanilla IRL and why. The results from this paper suggest that currently that's not the case.
- The paper is clear and easy to understand.
- The paper is a prime illustration of the maxim that a machine learning conference paper should ideally make a single crisp point.
- Experiments support the claims made in the paper.
Weaknesses: - In some places, the paper is a bit too wordy and the same thing could be said as clearly with a shorter sentence (an LLM will surely be happy to provide suggestions, of which at least some may be good).
- the statistical methodology is somewhat underwhelming in Section 5, but not a deal-breaker for me. Fig 1 could give confidence intervals on the mean. Similarly, it'd be useful to have a confidence interval on the mean in the other two figures and ideally across more seeds than 3 (since supposedly the hypothesis we're testing in is whether the mean is better than the baselines). Especially results in 5.2 are somewhat messy. Maybe estimating something like Shapley values may be helpful here? Also, normalizing the results and then aggregating across the environments could help paint a clearer picture (you could work with a normalized effect size relative to the IRL baseline as the main quantity). I definitely find mostly the final performance interesting, so I wouldn't be against moving the training curves into the appendix if you want to free up space.
Technical Quality: 3
Clarity: 4
Questions for Authors: Suggestions for improvement:
- There is often missing punctuation after equations (e.g. full stop at the end of line 150 or commas after equations 1 and 6). In general, equations are part of the sentence structure and should be punctuated as such (and don't necessarily need to be preceded by a colon).
- Weakly held opinion: equations 5 and 6 would feel more natural to me in negated form, i.e. finding a policy that maximizes the return subject to reward that minimizes the return of the policy relative to the expert. (yes, I'm taking into consideration also that this then leads to equations 7,8). Like this I find them a bit confusing at first without explanation.
- The colour-coding of Figure 1 seems a bit unfortunate and doesn't help me much in reading the figure - could e.g. positive values be shades of green and negative ones shades of red with white at zero (not fussy about particular colors, but something that creates more contrast)? Please add x axis labels to Figures 2,3,4. Not all of the horizontal lines in Fig 3 are visible (maybe add a slight offset to prevent perfect overlap?). I'd also prefer seeing confidence interval on the mean across more than 3 seeds, rather than standard deviation.
Minor typos:
- l.3 "However" -> "however"
- l. 98 has a missing reference
- l. 126 has double space
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: I think the paper has a clear scope corresponding to the scope of previous work and doesn't introduce significant limitations beyond that.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your thoughtful consideration of our work. We are very glad that you found our work valuable and greatly appreciate your suggestions for improvement.
Regarding your main concern, “the statistical methodology is somewhat underwhelming in Section 5”, we have incorporated many of your suggestions in the overall response, namely (1) we report only final performance in Fig 2 in the rebuttal PDF, (2) We compute confidence intervals rather than standard deviation (3) we include 5 seeds rather than 3 and (4) We have included Fig 1 in the rebuttal PDF which aggregates across all the environments using an expert normalized score.
Regarding your minor suggestions:
- “Equations 5 and 6 would feel more natural to me in negated form”: Another reviewer was also confused by this - we will include equations in negated form in the final paper
- “The colour-coding of Figure 1 seems a bit unfortunate and doesn't help me much in reading the figure “ - We will replace Figure 1 with Figure 1 from the rebuttal PDF in the final report.
---
Rebuttal Comment 1.1:
Title: Follow up
Comment: Dear Reviewer,
We hope that you've had a chance to read our responses and clarification. As the end of the discussion period is approaching, we would greatly appreciate it if you could confirm that our updates have addressed your concerns.
---
Rebuttal Comment 1.2:
Comment: Thank you for your response. I appreciate all of the changes made, especially the inclusion of the confidence intervals. The figures now give a better sense of what's going on.
One last comment: in your main rebuttal and in the pdf, you claim that IRL methods Pareto-dominate. I think that claim would make me expect that (1) *each of the methods* considered performs at least as well as the baseline on (2) *each of the tasks* considered. I would probably to refrain from using the term when considering that the mean (across both tasks and IRL variants) is higher.
And a minor point: if you wanted to aim for plotting perfection, you could harmonize colour-coding between Figs 1 and 2, which would make comparison easier.
That said, I'm keeping the favourable score of "Accept".
---
Reply to Comment 1.2.1:
Comment: Dear reviewer, thank you very much for your feedback and favorable review. Your suggestions have been very helpful in improving our paper. We will incorporate both of these suggestions in our final revision as we continue to improve our paper. | Summary: The authors propose a method for learning the constraints from demonstrations. To achieve this goal they take note of the similarities between inverse reinforcement learning (IRL) and inverse constraint reinforcement learning (ICRL). The authors aim to reduce the tri-level optimization of constrained inverse reinforcement learning to a bi-level optimization using a special class of constraints.
Strengths: * The main claim of the paper with regard to reducing the ICRL to IRL under certain classes of constraint functions is original and can be quite impactful.
Weaknesses: * The authors' claim that the tri-level optimization can be reduced to a bi-level has a mathematical motivation but it not supported by a mathematical derivation. The authors aim to support the claim via experimental results but the results have high variance and are thus not fully convincing.
* The paper's main claims are tested using methods developed by prior work (MECL, GACL). The specifics of the modifications done by the authors are somewhat unclear however. The authors mention using SAC instead of PPO for the IRL implementation of the prior method which they compare with. Later it is mentioned that their results are directly taken from the prior works due to lower performance of their SAC-based implementation.
* The paper combines various modifications to prior work (batch and reward normalization, separate critics, policy reset) in order to examine their benefits and support the main claims. However due to large variance in results in limited locomotion environments, it is difficult to accept the results with high degree of certainty.
Technical Quality: 3
Clarity: 3
Questions for Authors: * The clarity of the paper can be significantly improved by adding diagrams or pseudocode describing their specific modifications to the previous IRL methods.
* Question mark instead of a citation on line 98 (Related Works -> Inverse Reinforcement learning).
* Eq. (1) describes the IRL objective. In its current form it aims to find a reward function that minimizes the difference between the return of the expert and the current policy while maximizing this difference. The objective should be the opposite where the policy aims to reduce the difference between the return on the expert and the current policy and the reward should aim to maximize the difference between the return of the expert and the current policy.
* Given that this paper focuses on learning policies that violate fewer constraints and given that the claims are supported mainly by experimental results, the experiments section should focus on a more diverse set of environments that require safety constraints or at least describe in more details the constraints present in the Mujoco environments used for the experiments.
* If the tri-level optimization can be reduced to a bi-level IRL problem, can it then be further reduced to a single level optimization using methods such as Inverse Q-Learning [1] or IQ-Learn [2]? How does the constraint violation rate of the proposed method compare to such algorithms?
[1] Kalweit, G., Huegle, M., Werling, M., & Boedecker, J. (2020). Deep Inverse Q-learning with Constraints. Advances in Neural Information Processing Systems, 33, 14291-14302.
[2] Garg, D., Chakraborty, S., Cundy, C., Song, J., & Ermon, S. (2021). IQ-Learn: Inverse soft-q learning for imitation. Advances in Neural Information Processing Systems, 34, 4028-4039.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: * High variance in the experimental results (acknowledged by the authors).
* The number of constrained environments and the discussion on how these constraints are present in each environment is lacking.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your thoughtful consideration of our work. We would hope that the improvements we have made and highlighted in the overall response will alleviate some of your concerns, but we would like to address each of your concerns here specifically:
- “The authors' claim that the tri-level optimization can be reduced to a bi-level has a mathematical motivation but it not supported by a mathematical derivation.”
- We hope the provided proof in the general response alleviates your concerns here.
- “The specifics of the modifications done by the authors are somewhat unclear”, “The clarity of the paper can be significantly improved by adding diagrams or pseudocode describing their specific modifications to the previous IRL methods.”
- We have added pseuocode for the separate critics IRL method for inferring constraints in the attached PDF. The additional modifications (policy reset, batch norm and reward normalization) are implemented as standard.
- “Later it is mentioned that their results are directly taken from the prior works due to lower performance of their SAC-based implementation.”
- Indeed, we used SAC in our implementation because the ICRL method we compare against is motivated by maximum-entropy RL. In their implementation, they use an entropy bonus with a PPO objective. Though we tried various learning rates for the ICRL method in SAC (see Appendix 1), we needed to tune all hyperparameters from scratch due to the change in algorithm. In the code of Liu et al 2023, it appears that hyperparameters were tuned quite specifically to each environment, and this full hyperparameter tuning was beyond the scope of our work. Hence, we compare directly to the reported results from Liu et al 2023, without reimplementing in SAC.
- “However due to large variance in results in limited locomotion environments, it is difficult to accept the results with high degree of certainty.”
- We hope that our general response has alleviated the concerns regarding the variance in the results. Regarding the diversity in environments, we use these environments because of their precedent in prior work on ICRL (Liu et al 2023, Malik et al 2021)
- “Eq. (1) The objective should be the opposite”
- We believe the equation is correct, however, another reviewer pointed out that it would be more intuitive to report the negation of these equations. We will do this in the final version.
- "If the tri-level optimization can be reduced to a bi-level IRL problem, can it then be further reduced to a single level optimization using methods such as Inverse Q-Learning [1] or IQ-Learn [2]? How does the constraint violation rate of the proposed method compare to such algorithms? "
- Thank you for raising this question. It should potentially be possible to do this for IL purposes, however, in ICRL the goal is generally to recover the constraint function. It is not entirely straightforward how to disambiguate rewards and constraints in a method like IQ-learn which learns the Q function directly. This may be an interesting direction for future work.
G. Liu, Y. Luo, A. Gaurav, K. Rezaee, and P. Poupart. Benchmarking constraint inference in inverse reinforcement learning. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023.
S. Malik, U. Anwar, A. Aghasi, and A. Ahmed. Inverse constrained reinforcement learning. In International conference on machine learning, pages 7390–7399. PMLR, 2021
---
Rebuttal Comment 1.1:
Title: Follow up
Comment: Dear Reviewer,
We hope that you've had a chance to read our responses and clarification. As the end of the discussion period is approaching, we would greatly appreciate it if you could confirm that our updates have addressed your concerns. | Rebuttal 1:
Rebuttal: We thank the reviewers for their thoughtful comments and suggestions. We agree with the reviewers that the main claim of our paper is that treating ICRL as a separate problem from IRL may not confer a particular benefit and, in fact, it should be beneficial to not segment the problem class because it means we can make use of the wide literature in IRL to solve ICRL problems. We are glad that most reviewers agree this is an interesting idea. We also provide empirical analysis to validate this claim and provide details on how to best use IRL for ICLR. We understand the primary concern of several reviewers is the statistical robustness of these results, including the high variance, the low number of seeds and the reporting of the statistical significance. We have taken the following steps to alleviate these concerns:
- We have rerun all experiments across 5 seeds to improve the robustness of our results. This is consistent with the number of seeds used by Liu et al 2023.
- We believe that the way we are currently reporting results may overstate the variability of the results. In Figs 2,3,4 we are showing error bars as standard deviation across both seeds and the time smoothing window. Hence, the error bars contain both aleatoric uncertainty (variation across episodes in a single agent) and epistemic uncertainty (variation across seeds). Following several reviewers suggestions we propose replacing Figs 1 and 4 with Figs 1 and 2 in the attached PDF, respectively. This makes the following adjustments:
- In both figures, we use the following procedure. We first compute the mean of the last 50 testing episodes. We then compute bootstrapped confidence intervals of the IQM (interquartile mean) across the 5 seeds using the methodology proposed in Agarwal et al 2021
- In Fig 1 in attached, we have additionally normalized by expert performance and reported an overall performance curve for all environments, as recommended in Agarwal et al 2021. This further increases the sample size to reduce variance. Since the baseline MECL is trained with PPO, and so has a different training time, we take only the IQM of final performance and report this as a sold dashed line (without CI) as a comparison. We believe this very clearly illustrates that IRL methods pareto-dominate the baseline ICRL method in these environments.
- In Fig 2 in attached, we now report only final performance on a per-environment basis, which we hope facilitates readability when comparing across modifications and environments.
- We have also rerun the author’s provided code for the baseline method MECL from Liu et al 2023 and included the same IQM with confidence intervals for this method, computed over 5 seeds in Fig 2 attached. This allows us to also compare the variance of our method to that of the baseline MECL. We note that the variance in the results of MECL are also very high as can be seen in Fig 2 in the attached PDF. In general, high variance can be an issue for adversarial IRL, however, we do not think this issue is unique to our method. We also note that upon re-running the author’s code, we find that the reported performance of ICRL is not reproducible with the code provided, using the final performance (the author's may have reported best performance). This further improves the relative performance of IRL over the ICRL baseline.
Finally, we understand that several reviewers would like a more rigorous proof of the equivalence between ICRL and IRL. We offer the following, which is added to the Appendix of the paper:
**Theorem**. Let $\Pi$ denote a class of policies and let $\pi_E\in \Pi$ and $r:\mathcal{X}\times\mathcal{A}\to\mathbb{R}$ be a fixed (expert) policy and reward function, respectively. For any class of constraint functions $\mathcal{F}\subset \mathbb{R}^{\mathcal{X}\times\mathcal{A}}$, we define the following objectives:
$\mathsf{OPTicrl}(\mathcal{F}) = \max_{c\in\mathcal{F}}\max_{\lambda \geq 0}\min_{\pi\in\Pi}\left[J(\pi_E, r - \lambda c) - J(\pi, r - \lambda c)\right]$
$\mathsf{OPTsimple}(\mathcal{F}) = \max_{c\in\mathcal{F}}\min_{\pi\in\Pi}\left[J(\pi_E, r - c) - J(\pi, r - c)\right]$
where $J(\pi, f)$ denotes the expected return (averaged over initial states) earned by the policy $\pi$ for the reward function $f$. Then, if $\mathcal{F}$ is a convex cone, it holds that $\mathsf{OPTicrl}(\mathcal{F}) = \mathsf{OPTsimple}(\mathcal{F})$.
**Proof**. Suppose $\mathcal{F}$ is a convex cone. We will first show that $\mathsf{OPTsimple}(\mathcal{F})\geq \mathsf{OPTicrl}(\mathcal{F})$.
Let $\mu^\pi$ denote the (discounted) occupancy measure of policy $\pi$. It is well known that $J(\pi, f) = \mu^\pi f$. Then, we have
$\mathsf{OPTsimple}(\mathcal{F}) = \max_{c\in\mathcal{F}}\min_{\pi\in\Pi}(\mu^{\pi_E} - \mu^\pi)(r - c)$
$\geq \max_{c\in\mathcal{F}}\min_{\pi\in\Pi}(\mu^{\pi_E} - \mu^\pi)(r - \lambda c) \forall \lambda > 0$
$\geq \max_{c\in\mathcal{F}}\max_{\lambda\geq 0}\min_{\pi\in\Pi}(\mu^{\pi_E} - \mu^\pi)(r - \lambda c)$
$= \mathsf{OPTicrl}(\mathcal{F})$
where the first inequality holds since $c\in\mathcal{F}\implies \lambda c\in\mathcal{F}$ by the hypothesis that $\mathcal{F}$ is a convex cone.
It remains to show that $\mathsf{OPTsimple}(\mathcal{F})\leq\mathsf{OPTicrl}(\mathcal{F})$. This is simply shown by
$\mathsf{OPTsimple}(\mathcal{F}) = \max_{c\in\mathcal{F}}\min_{\pi\in\Pi}(\mu^{\pi_E} - \mu^\pi)(r - 1\cdot c)$
$\leq \max_{c\in\mathcal{F}}\max_{\lambda\geq 0}\min_{\pi\in\Pi}(\mu^{\pi_E} - \mu^\pi)(r - \lambda c)$
$= \mathsf{OPTicrl}(\mathcal{F})$
We will address each reviewer’s particular concerns in our individual responses. Thank you again for your thoughtful consideration of our paper.
Agarwal, Rishabh, et al. "Deep reinforcement learning at the edge of the statistical precipice." Advances in neural information processing systems 34 (2021): 29304-29320.
Liu,, Guiliang et al: Benchmarking Constraint Inference in Inverse Reinforcement Learning. ICLR 2023
Pdf: /pdf/58670ab7e0e6a0885e6ba3d19bc1894e2108515a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Heavy-Tailed Class Imbalance and Why Adam Outperforms Gradient Descent on Language Models | Accept (spotlight) | Summary: This paper addresses the important problem of why Adam outperforms SGD for language tasks, proposing that heavy-tailed class imbalance in the training dataset is the key factor in this performance gap. A series of experiments demonstrate that Adam consistently outperforms SGD under heavy-tailed class imbalance because SGD shows slow or no progress on low-frequency classes, whereas Adam progresses independently of class frequencies. This finding holds true across different data types (language and image data), architectures (CNNs and Transformers), and the stochasticity of the training algorithm (mini-batch and full-batch).
The paper also theoretically investigates a linear softmax classification problem, proving that the convergence of gradient flow (as a proxy for GD) depends heavily on class frequencies, whereas the convergence of a continuous-time variant of sign descent (as a proxy for Adam) is independent of class frequencies. This theoretical result establishes the benefits of sign-based methods under heavy-tailed class imbalance for a linear model.
Strengths: - The finding that heavy-tailed class imbalance leads to the performance gap between Adam and SGD is a solid and novel contribution. This offers new insights into understanding the important question of why and when Adam outperforms SGD.
- The experiments are well-designed and robust, including thorough ablation studies that strongly support the hypothesis.
- I think it's great that the authors can rigorously establish the effect of class frequencies on the training speed of GD and sign descent in a simplified setting. Additionally, the investigation into the correlation between gradient and Hessian across coordinates provides valuable insights into the underlying mechanisms by which sign-based methods benefit from heavy-tailed class imbalance.
Weaknesses: - The linear model studied is an oversimplified setting and is designed to be biased towards sign descent. Therefore, it is unclear whether the insights gained from this model extend to practical settings, particularly in understanding why Adam benefits from a heavy-tailed class imbalance in real-world scenarios. Specifically, it remains uncertain whether the correlation between gradient, Hessian, and class frequencies observed in the linear model holds true in more complex, practical settings.
Technical Quality: 4
Clarity: 4
Questions for Authors: - In the gradient norm and Hessian trace plots in Figures 24-26 in Appendix G, how are the weight blocks $w_c$ defined? Do they correspond to the last layer of each architecture? If so, how does class imbalance affect other layers beyond the last layer?
- Could the authors please cite the relevant works by Zhang et al. (https://arxiv.org/abs/2402.16788) and Xie and Li (https://arxiv.org/abs/2404.04454) that also study the benefits of Adam? In particular, the block heterogeneity discussed by Zhang et al. seems closely related to the intuition in this paper. Additionally, Xie and Li observe that Adam outperforms GD when the loss function has better properties under $\ell_\infty$ geometry, which could relate to heavy-tailed class imbalance. Therefore, I wonder if the authors can discuss and reconcile these works with the findings on heavy-tailed class imbalance.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The paper discusses its limitations and future directions in detail. For additional potential limitations, please refer to the 'Weakness' section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weight blocks for the gradient-Hessian correlation plots**
> In the gradient norm and Hessian trace plots in Figures 24-26 in Appendix G, how are the weight blocks $w_c$ defined? Do they correspond to the last layer of each architecture?
This is correct. The plots show the last layer for each problem. The last layer is a $[D \times C]$ matrix, where $D$ is the dimension of the last hidden layer and $C$ is the number of classes, and each $w_c$ correspond to a column of this matrix. We will make it clear in Appendix G.
**Effect beyond last layer**
> If so, how does class imbalance affect other layers beyond the last layer?
This question is difficult to answer as there is no direct "mapping" between specific weights and classes beyond the last layer. It is likely that the slow performance of GD on the last layer leads to slower performance on earlier layers. For example, it might be that some feature relevant to a particular class can only really be learned once this class is sufficiently separated from the others. But formalizing this idea is far from trivial.
However, other ablation studies that appeared after our submission also provide evidence that one of the primary benefit of Adam over GD is to fix the effect of class imbalance in the last layer. We discuss those works below.
**Correlation appears on realistic models**
> The linear model studied is an oversimplified setting and is designed to be biased towards sign descent. Therefore, it is unclear whether the insights gained from this model extend to practical settings, particularly in understanding why Adam benefits from a heavy-tailed class imbalance in real-world scenarios. Specifically, it remains uncertain whether the correlation between gradient, Hessian, and class frequencies observed in the linear model holds true in more complex, practical settings.
While we agree that the toy model of §3.3 is not a realistic depiction of modern complex models, we do believe that the main take-away message, that the performance of GD suffers in heavy-tailed class imbalance, holds across models. Proposition 2 only describes a correlation between gradients and Hessians for rows of the last layer, but the performance gap between GD and Adam already appears when tuning only the last layer, keeping everything else frozen (see Figure 5). The effect described by Proposition 2 should still hold for complex models (assuming the inputs to the last layer do not change drastically to undo the effect of the scaling by the assignment property) and we observe this correlation beyond the linear model. Appendix G shows this effect on a small transformer on PTB, a CNN on the imbalanced MNIST dataset and a ResNet18 on the imbalanced ImageNet dataset.
We include an additional plot in [**the supplementary pdf for the response**](https://openreview.net/attachment?id=xsoeNVdrpL&name=pdf) which shows that the correlation between gradients and Hessian observed in Fig 7 also holds on GPT2/WikiText, using the same training procedure as Fig. 1.
**Related work**
> Could the authors please cite the relevant works by [Zhang et al.](https://arxiv.org/abs/2402.16788) and [Xie and Li](https://arxiv.org/abs/2404.04454) that also study the benefits of Adam?
We will include a discussion of those works. Follow-up works to [Zhang et al. (2024a)](https://arxiv.org/abs/2402.16788) from [Zhang et al. (2024b)](https://arxiv.org/abs/2406.16793) and [Zhao et al (2024)](https://arxiv.org/abs/2407.07972), (both of which where arXived after our submission) also give some insights on the benefits of Adam beyond the last layer, and we will include a discussion of those works.
In addition to their work showing that the Hessian of transformers has a block-diagonal structure ([Zhang et al., 2024a](https://arxiv.org/abs/2402.16788)), [Zhang et al. (2024b)](https://arxiv.org/abs/2406.16793) show in a follow-up ablation studies. Instead of looking for properties of the model, they study changes to the training procedure and show that the diagonal/sign-descent behavior of Adam is mostly redundant. Revisiting earlier layer-wise normalization approaches, they show empirically that most of the benefit of Adam can be recovered by using a learning rate per weight matrix rather than per coordinate (dividing by the average of the gradient magnitudes). They show that per-layer normalization achieves the same performance as Adam, as long as this scheme is not applied to the last layer. [Zhao et al (2024)](https://arxiv.org/abs/2407.07972) provide a similar ablation study, but further shows that most layers can be trained using unnormalized SGD updates, except for LayerNorm layers and the last layer.
Both the experiments of [Zhang et al. (2024b)](https://arxiv.org/abs/2406.16793) and [Zhao et al (2024)](https://arxiv.org/abs/2407.07972) indicate that the performance suffers unless the last layer is normalized independently for each class. Although neither paper specifically shows that this effect is due to class imbalance, their observations support our result that the primary benefit of Adam over SGD comes from the per-class normalization of the last layer to address class imbalance, and show that the impact of choosing an SGD or Adam-style update on the rest of the network has a smaller impact.
---
**References**
- [**Zhang et al. (2024a)**](https://arxiv.org/abs/2402.16788)
Y. Zhang, C. Chen, T. Ding, Z. Li, R. Sun, Z. Luo
Why Transformers Need Adam: A Hessian Perspective
https://arxiv.org/abs/2402.16788
- [**Zhang et al (2024b)**](https://arxiv.org/abs/2406.16793)
Y. Zhang, C. Chen, Z. Li, T. Ding, C. Wu, Y. Ye, Z. Luo, R. Sun
Adam-mini: Use Fewer Learning Rates To Gain More
https://arxiv.org/abs/2406.16793
- [**Zhao et al (2024)**](https://arxiv.org/abs/2407.07972)
R. Zhao, D. Morwani, D. Brandfonbrener, N. Vyas, S. Kakade
Deconstructing What Makes a Good Optimizer for Language Models
https://arxiv.org/abs/2407.07972
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed clarifications and additional ablation studies. I have no further questions. I appreciate the authors' efforts and am happy to increase my review score, voting for strong acceptance. I look forward to seeing the additional experiments and discussions incorporated into the revision. | Summary: This paper argues that heavy-tailed class imbalances in natural language datasets, which follow because some words are much more frequent than others, causes (or significantly underlies) the performance gap typically observed between Adam and SGD. To make this argument, the authors
1. reproduce the performance gap by training different language models on some standard datasets, like WikiText-103 and PTB;
2. empirically demonstrate that the gap between Adam and SGD can also be reproduced in vision tasks, in which it’s normally absent, by training common vision models on artificially created image datasets with heavy-tailed class imbalances;
3. introduce a simple linear model trained on synthetic uniform data, whose class frequencies follow a power law, and again reproduce the performance gap;
4. rule out batch noise as the reason underlying the performance gap, by demonstrating that the latter remains even when they train their models with full (i.e. deterministic) gradient descent; and
5. demonstrate that the performance gap can be reduced by upweighting the loss of low-frequency classes.
Having done these experimental investigations, the authors then attempt to provide some understanding and intuition as to why these class imbalances affect SGD. The authors study a collection of simplified models and training problem and argue that the class imbalance leads to correlations between gradients, Hessians and class probabilities, which appear to help Adam during training.
Strengths: - This paper studies an important problem in modern machine learning, viz. the performance gap of common training algorithms, specially when dealing with language models.
- The problem and hypothesis are very clearly stated.
- The paper is very well written and structured. Indeed, the authors first present their empirical evidence and only later attempt to provide some intuitive understanding of the phenomenon under investigation.
- As summarised above, the paper provides significant empirical evidence that supports their main hypothesis, namely that heavy-tailed class imbalances in the training datasets cause the performance gap between Adam and SGD.
- The paper also provides some theoreticall understanding as to why the performance gaps actually takes place.
In sum, this paper is very sound, tackles a relevant problem and, in my view, represents an important contribution to the machine learning community.
Weaknesses: The main weaknesses or limitations of the paper pertain to the simplified models and assumptions in the theoretical investigations. However, the authors explicitly acknowledge and discuss them.
Although the paper is very well written and structured, the following suggestions might improve readability:
- The authors could explicitly mention that they reproduce the performance gap between Adam and SGD in language tasks, for different datasets and models. This information is hidden in Appendix A and could escape the casual reader.
- Some captions in the figures are misleading. The authors perform experiments both with SGD and full GD and the way these are labelled, or referred to, in some captions is confusing. For example, the caption in figure 2 refers to SGD while the figure contains results for GD. Similarly the caption in figure 1 refers to GD but the figure contains results for SGD.
- Perhaps section 2.2 could be moved inside section 3?
Typos: lines 152, 160 and 225.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Do you know why SGD eventually finds the minimum in the imbalanced MNIST case but does not in the imbalanced ImageNet?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Yes, they authors addressed the limitations of their study.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Clarification**
> Do you know why SGD eventually finds the minimum in the imbalanced MNIST case but does not in the imbalanced ImageNet?
We think MNIST and ImageNet might have been flipped in this comment. GD and Adam are closer at the end of the given budget on imbalanced ImageNet (Fig. 3) than on on imbalanced MNIST (Fig. 2), where GD appears to stall. We assume the intended phrasing was as given below and will comment on this. Please let us know if we misunderstood your question.
> why SGD [does not] find the minimum in the imbalanced MNIST case but [eventually does] in the imbalanced ImageNet
**Does GD get stuck on Imbalanced MNIST (Fig. 2)**
GD does eventually find a good solution for MNIST, if run for much longer than Adam. See the longer training run in [**the supplementary pdf for the response**](https://openreview.net/attachment?id=xsoeNVdrpL&name=pdf).
The apparent "plateau" reached by GD is not due to being stuck in a local minimum, but rather due to the optimization progress being slow compared to Adam. This issue is less apparent on ImageNet, possibly because the problem is more complex and Adam cannot find a good solution in 100 steps (note that Fig. 2 (MNIST) shows 300 iterations while Fig. 3 (ImageNet) shows 1500).
We comment on this behaviour in Appendix C.2, showing an example on a linear model where GD appears to stall on a short horizon (100 steps) but converges if run for longer (10k steps). As this question is likely to come up for more readers, we will mention this point explicitly in the MNIST paragraph (L96-103), add a forward reference to §C.2, and add the longer MNIST run to the appendix.
**Inconsistency in references to SGD or GD between caption/figures**
Those are indeed typos, apologies for the confusion. We will double-check those references.
**Remaining comments**
We will fix the typos and add references to the appendix to highlight the results on other language tasks.
---
Rebuttal Comment 1.1:
Comment: Yes, I flipped MNIST with ImageNet, my mistake. I thank the authors for their rebuttal and maintain my score. | Summary: The paper shows that under the class imbalance setting which is natural in language tasks, Adam can be faster than SGD. Meanwhile, the authors investigate the linear model deeply showing the relationship between gradient and Hessian and the convergence speed of sign-gd and gd algorithm.
Strengths: 1. The authors explain that class imbalance is a reason that Adam can outperform SGD.
2. For linear models, the authors establish the relationship between hessian and gradient showing the "correctness" of Adam who approximates Lipschitz with gradients.
3. Further, the authors point out that sign-gd can converge faster than gd for linear models.
Weaknesses: 1. All of the results are based on that the parameters of different classes are separable.
2. Since the optimal solution of NN can not be infinity due to some generalization constraints (e.g. adding weight decay), will the sign algorithm still be faster than gd?
Technical Quality: 3
Clarity: 3
Questions for Authors: When the parameters are not separable, does the conclusion still hold?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Does the conclusion still hold when the parameters are not separable**
> When the parameters are not separable, does the conclusion still hold?
> Since the optimal solution of NN can not be infinity due to some generalization constraints (e.g. adding weight decay), will the sign algorithm still be faster than gd?
We assume you have in mind phenomena such as the one identified by Nacson et al. (2019), where GD can be slower than normalized methods to converge (in direction) as the gradients and Hessian goes to 0. While our experiments use expressive models that can reach good classification accuracy on all classes, the observed behaviour does not require separability nor that the weights go to $\infty$. The behaviour we identify is different, and appears at the start of training, well before the gradients and Hessian go to 0.
In [**the supplementary pdf for the response**](https://openreview.net/attachment?id=xsoeNVdrpL&name=pdf), we show that the distinction between Adam and GD still appears when the model is regularized. The plots show the dynamics of the softmax loss (without the regularization) on a linear softmax model on the small Random Heavy-Tailed Labels dataset with varying levels of $L_2$ regularization ($\lambda \in [10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}]$). For small values of $\lambda$, the weights of the model do not diverge to $\infty$ but the training dynamics are very similar to the unregularized case. Of course, if regularization is so large that the model cannot fit low-frequency classes, the loss per class after training reflects the class frequencies and the gap between GD and Adam is lessened. ($L_2$ penalty impacts performance on low-frequency more. Take a simple example with one class corresponding to $50\%$ of the data and $100$ classes corresponding to $.5\%$ of the data (each). Increasing $w_1$ can decrease the loss on ${\approx}50\%$ of the data. To achieve the same reduction on the remaining ${\approx}50\%$ of samples requires increasing $w_2, ..., w_{100}$, which is much more costly in $L_2$ penalty)
Those examples illustrate that while the observed behaviour does require the model to be expressive enough so that the model can fit low-frequency classes (otherwise, not fitting low-frequency classes will not be a problem), but it does not require separability nor that the weights diverge to $\infty$.
If the question regarding separability was aimed at the theoretical analysis, then we of course agree that the toy model of §3.3 relies on separability to obtain the result in Theorem 3. We believe that the conclusion that sign or per-class normalized methods can outperform GD holds more broadly. But finding a reasonable model where the computations can be carried out in closed form is far from trivial. We do not think that the gradient flow equations have a closed-form if we add an $L_2$ regularization term, and an analysis for discrete time gradient descent is even more complex.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I will keep my score. | Summary: This paper investigates the reason why Adam outperforms (S)GD by a large margin on language models when the performance gap is much smaller in other settings. The authors argue that language data often has a heavy-tailed class imbalance, where a large fraction of the data consists of several classes with a small number of words, which leads to slow improvement in the training loss of these classes for (S)GD, whereas Adam (or sign descent) is more robust to this type of imbalance. Since the fraction of classes with fewer samples is relatively large, this contributes to a difference in the rate of decrease of the average loss as well, in contrast to conventionally considered settings for binary classification with class imbalance. The authors show that the performance gap can be reproduced in other settings, such as when training CNNs on MNIST, ResNet on ImageNet, and even linear models on synthetic high-dimensional data, when the heavy-tailed class imbalance is introduced in these datasets. The authors also show that similar effects are observed without stochasticity and with a simpler algorithm (sign gradient descent), and present some theoretical results in simple settings showing a difference in the rate of decrease in the training loss between GD and sign GD.
Strengths: - The paper presentation is exceptional: it is very well-written with aesthetic plots that illustrate the results really clearly. The contribution, as well as the motivation for each of the experiments conducted in the paper, is discussed in detail; I especially like the last paragraph of Section 1.1.
- The experiments are designed well and thoroughly support the hypothesis that heavy-tailed class imbalance is a key reason for the performance gap between SGD and Adam on language models. Results showing that the gap can be reproduced in other settings when the heavy-tailed class imbalance is introduced are very convincing.
- The paper contributes to our understanding of when and why Adam performs better than GD, which is fundamental to ultimately improving optimization.
Weaknesses: There are no major weaknesses. The authors adequately acknowledge and discuss the limitations of the work. Some minor concerns are as follows.
- The theoretical results are in oversimplified settings. However, this seems justified since theory is not one of the primary contributions of the paper. It might be better to allocate less space to this part (e.g., Section 3.3).
- It would be nice to elaborate on lines 331-333 and include more discussion on this aspect in Section 3.
- Some corrections/suggestions:
- Missing ‘the’ in line 152. Extra ‘the’ in line 225. Missing ‘than’ in line 607. Extra ‘be’ in line 631.
- It would be good to add some description in lines 496-497.
Technical Quality: 4
Clarity: 4
Questions for Authors: For the Barcoded MNIST dataset, I think the number of new images should be $5\times 10\times (2^{10}-1)$ since one of the 10-bit patterns would be the same as the background. Can the authors clarify this?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: There are no potential negative impacts of this work. The authors discuss the limitations of their work at length in Section 4.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Clarification on barcoded MNIST**
> For the Barcoded MNIST dataset, I think the number of new images should be $5\times 10 \times (2^{10}-1)$ since one of the 10-bit patterns would be the same as the background. Can the authors clarify this?
Good catch, thanks! You are correct, the all-0 10-bit pattern would be indistinguishable from the background. This was a typo in the appendix, which should read "$10 \times (2^{10}-1)$ classes". The code did generate $2^{10}-1$ barcodes, excluding the all-0 string (`generate_combinations` in `code/src/optexp/datasets/barcoded_mnist.py:l27-33`). We will fix the appendix.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal and the clarification, I will maintain my score. | Rebuttal 1:
Rebuttal: We thank the reviewers for their careful readings and thoughtful comments.
We answer specific questions in individual replies:
- [**ujWn:** Clarification on barcoded MNIST](https://openreview.net/forum?id=T56j6aV8Oc¬eId=2QGkcNTqaR)
- [**HWpb:** Does the conclusion still hold when the parameters are not separable](https://openreview.net/forum?id=T56j6aV8Oc¬eId=ASSfu6V1bw)
- [**LhRX:** Does GD get stuck on Imbalanced MNIST (Fig. 2)](https://openreview.net/forum?id=T56j6aV8Oc¬eId=cKpTYeF9u4)
- [**CdGj:** Impact of class imbalance beyond the last layer](https://openreview.net/forum?id=T56j6aV8Oc¬eId=vrl5iFlM2d)
We will address the minor comments such as typos in a revision.
Pdf: /pdf/be7dc17e5fb412a8d4519429fee107a4cfaaab30.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Latent Paraphrasing: Perturbation on Layers Improves Knowledge Injection in Language Models | Accept (poster) | Summary: The paper proposes a new method named LaPael to enhance knowledge injection for large language models. Different from traditional data-level augmentations or noise-based methods, LaPael operates at the latent level, preserving the semantic integrity of the text while introducing meaningful variability. Experimental results on three question-answering datasets show that the proposed method outperforms the baselines.
Strengths: 1. The authors propose a latent perturbation for enhancing knowledge injection of LLMs, and the proposed method outperforms the baselines.
2. The paper is written-well, with a direct motivation and a well organized structure.
Weaknesses: 1. The difference between the method proposed in this paper and the previous perturbation or enhancement methods in the feature space is not explained clearly.
2. Why is the output distribution of Transformer assumed to be Gaussian in Training Part of Section 4.2? What is the effect of directly calculating the KL divergence of two distributions without considering the type of distribution? The author's explanation will help readers better understand the method.
3. The authors claim that the paraphrasing method requires a high computational cost, so they should compare the proposed method with the paraphrasing method on computational cost.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In Table 7, why was it not compared with the Fine Tuning (+para) method?
2. Will the proposed knowledge injection method have a negative impact on the knowledge already mastered by LLMs?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Please see above Weaknesses and Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your constructive and helpful comments. We initially address all your concerns and questions below.
`[4-1] Difference between the previous peturbation methods`
> W1. The difference between the method proposed in this paper and the previous perturbation or enhancement methods in the feature space is not explained clearly.
Thank you for giving us a chance to clarify the point. To be clear, we have included the following sentence after Line 109 of the main paper: "Compared to previous perturbation methods, our method significantly differs in terms of the training objective for a perturbation function and its application to the knowledge injection of LLMs. Specifically, previous methods use bi-level optimization to train the perturbation function, while we use KL divergence to optimize the perturbation function by directly matching the output distribution of the perturbed model with that of the model using paraphrased data."
---
`[4-2] Clarification on Gaussian used in the output distribution`
> W2. Why is the output distribution of the Transformer assumed to be Gaussian in the Training Part of Section 4.2? What is the effect of directly calculating the KL divergence of two distributions without considering the type of distribution? The author's explanation will help readers better understand the method.
We appreciate your valuable question. While it is possible to estimate with a more complex distribution, we chose to use a Gaussian distribution due to complexity issues. We found that even with this simplified modeling, our approach performs well.
---
`[4-3] Computational cost comparison against paraphrasing method`
> W3. The authors claim that the paraphrasing method requires a high computational cost, so they should compare the proposed method with the paraphrasing method on computational cost.
Thank you for highlighting this important point. Our method requires less computational cost than the paraphrasing method, as our method requires adding a tiny module on the top of the LLM. To compare computational costs, we calculate each method in FLOPs (Floating Point Operations), with fine-tuning the LLM costing $F_{LLM}$.
1. **Fine-tuning with paraphrased text**: This requires generating paraphrases, costing $F_{LLM} + F_{Paraphraser}$ FLOPs, where $F_{Paraphraser}$ is the cost of a forward pass on the paraphraser model.
2. **Fine-tuning with latent paraphraser**: This requires a forward pass on the latent paraphraser, costing $F_{LLM} + F_{LaPael}$ FLOPs, where $F_{LaPael}$ is the cost of a forward pass on the latent paraphrasers.
A single latent paraphraser has $3d^2 + 4d$ parameters. For a 7B LLM, this is about 50M parameters. Using 5 latent paraphrasers totals 250M parameters. Thus, using a latent paraphraser is **equivalent to a 250M parameter paraphraser**, which is impractical for generating high-quality paraphrases. Given that `gpt-3.5-turbo` (or a 7B LLM) is used as the paraphraser in experiments, our latent paraphraser is significantly cheaper in computational cost.
---
`[4-4] Additional experiments for Table 7`
> Q1. In Table 7, why was it not compared with the Fine Tuning (+para) method?
Thank you for pointing it out. We missed adding FreeLB and Fine-Tuning (+para) in Table 7. We conducted the experiments you mentioned and the results are as follows:
* Table 7 (with FreeLB and Fine-Tuning (+para))
| | EM | Recall | F1 |
| ----------- | ----- | ------ | ----- |
| No Adapt | 13.10 | 22.91 | 21.90 |
| Fine-Tuning | 24.10 | 38.49 | 34.78 |
| NEFTune | 22.30 | 39.07 | 33.90 |
| FreeLB | 22.30 | 39.95 | 34.31 |
| Ours | 27.20 | 46.66 | 39.66 |
| Fine-Tuning (+para)| 31.90 | 48.13 | 43.08 |
Experimental result shows that using data-level paraphrases is effective in the real-world data setting.
As we discussed in Section A Limitations, our latent paraphraser cannot resolve the reversal curse issue in the real-world data, while data-level paraphraser can. We have included the results and related discussions in Table 7 of the revision.
---
`[4-5] A negative impact on the knowledge retention`
> Q2. Will the proposed knowledge injection method have a negative impact on the knowledge already mastered by LLMs?
We appreciate your valuable and insightful suggestion. To evaluate the potential negative impact on the knowledge already mastered by the LLM, we used the EntityQuestions dataset [1], which asks simple questions about entities. Among these questions, we only used the simple `place-of-birth` questions from the frequent entities (e.g., Where was Leonardo da Vinci born?), with 988 questions in total.
We fine-tuned the Vicuna-7b-v1.5 model on a synthetic SQuAD document set ($D_K$) using each method and measured the QA performance on EntityQuestions as follows.
* QA Performance on EntityQuestions after fine-tuning LLM on $D_K$ of SQuAD
| | EM | Recall | F1 |
| ----------- | ----- | ------ | ----- |
| No Adapt | 59.00 | 64.38 | 63.46 |
| Fine-Tuning | 52.23 | 55.63 | 55.41 |
| NEFTune | 48.28 | 50.33 | 50.54 |
| FreeLB | 46.56 | 48.37 | 48.57 |
| Ours | 39.88 | 41.97 | 42.12 |
| Fine-Tuning (+para)| 33.50 | 35.18 | 35.28 |
The experimental results show that all fine-tuning approaches have a negative impact on the knowledge already mastered by the LLM. We also observe that the better the knowledge injection performance, the greater the negative impact on knowledge retention. As our work focuses on improving knowledge injection, knowledge retention is beyond its current scope. However, we believe discussing this point as a limitation is important as it can encourage future work in this direction. We have included this point and the experimental results in the limitations section of our revised manuscript.
---
**Reference**
[1] Sciavolino et al., Simple Entity-Centric Questions Challenge Dense Retrievers
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the detailed responses. After reading the other reviews and the rebuttals, I maintain the original rating. | Summary: This paper focusses on the ability of LLMs to learn new knowledge. Previous work has shown that paraphrasing techniques help learning this new knowledge. However, as the authors argue, explicit paraphrasing has a high computational cost, and the paraphrased data is of limited diversity. To circumvent this, this paper introduces latent paraphrasing, by adding a latent paraphrasing layer in the standard transformer module, in between the LayerNorm and the MLP. This layer is trained with paraphrased data, generated by an LLM. The optimization objective is to minimize the KL-divergence between the output of a “standard” transformer model with the paraphrases as input, and the output of a transformer including the paraphrasing layer with the original sentences as input.
The authors test their approach, called LaPael, on three datasets: SQuAD, StreamingQA, and ArchivalQA, and compare against multiple fine-tuning baselines. They show that LaPael mostly outperforms the baselines, even in cross-domain transfer experiments.
Strengths: * This paper presents a large number of ablation experiments, that show the benefit of each of the components of the proposed approach. These experiments are very insightful, as at first the method might sound a bit complicated. Moreover, the extensive ablation experiments can help steer future research in this direction.
* The proposed approach is effective, and outperforms the baselines with relatively large margins.
Weaknesses: * The objective of the paper is to explicitly focus on fine-tuning approaches, instead of approaches that rely on external memory. In that light, it makes sense to only compare against fine-tuning baselines. However, if one is just interested in what method performs best on the task, comparing against external memory based approaches would have been a good addition to the results section.
* As a related point, I am wondering to what extent the used datasets can really evaluate new knowledge injection. For example, SQuAD is based on Wikipedia, which the models have probably seen during pretraining. Also see my related question below.
* I also have a few other questions about the approach, which I have added in the question section below.
Technical Quality: 3
Clarity: 3
Questions for Authors: * The authors mention that they use Vicuna as the base model. It seems unlikely that the model has never seen the new knowledge represented in D_K during pre-training. At the same time, the authors show that the model without adaptation scores much lower than any of the other baselines. This makes me wonder to what extent the authors think that LaPael measures true new knowledge injection in the LLM?
Some clarifying questions:
* Line 177: regarding the gold mask for tokens that correspond to the named entity. How is determined what the named entities are in the sentence? Do the authors deploy some sort of NER tagger?
* Line 193: mentions “rephrasing each question of D_QA”. Should this be “rephrasing each document”?
* Line 217: mentions that “the LLM is fine-tuned on the test set”. To confirm — does this refer to the paraphrased version of the test set, D_K?
* Line 238: regarding “Combining LaPael and Paraphrases”. I understand the objective of this experiment, and I agree with the authors that it is insightful. I am not entirely sure if I understand the experimental setup here. Specifically, what does “number of paraphrases” mean in the LaPael setting in Figure 4? Is “ours” the default LaPael setup, and “fine-tuning” LaPael + fine-tuning?
* Table 7: Can the authors elaborate on how they adapt the baselines in this real-world data scenario?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations are addressed in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We do appreciate your positive feedback and valuable suggestions, and we hope our response fully addresses your concern.
---
`[3-1] Regarding new knowledge injection`
> W2. As a related point, I am wondering to what extent the used datasets can really evaluate new knowledge injection.
> Q1. It seems unlikely that the model has never seen the new knowledge represented in D_K during pre-training. At the same time, the authors show that the model without adaptation scores much lower than any of the other baselines. This makes me wonder to what extent the authors think that LaPael measures true new knowledge injection in the LLM?
Thank you for your valuable comment. We would like to clarify that we measure the degree of knowledge injection by assessing QA accuracy on the QA dataset ($D_{QA}$), which corresponds to the documents ($D_K$) as defined in Section 3. We consider the knowledge that the LLM fails to answer as new knowledge to the LLM, even if such knowledge was seen during pre-training. Previous works [1, 2] also support these observations, as referenced in Lines 25-29.
However, **we entirely agree with your opinion** on measuring true new knowledge injection. Therefore, we have provided additional experimental results on **two datasets** containing new knowledge to the LLM, with available raw documents (Please see Table A in the attached PDF file on the global comment). The results show that our method is effective with raw documents on new knowledge, confirming the same trend observed in our previous experiments.
We have included these results in Section 5 of the revised manuscript considering their significance.
> Details of the datasets:
> * Films 2024: A synthetic QA dataset based on raw Wikipedia articles under [2024 films](https://en.wikipedia.org/wiki/Category:2024_films). We generated QAs from wiki documents using GPT-4o following Jiang et al. [3].
> * Events 2024: A synthetic QA dataset based on raw Wikipedia articles under May, June, and July of [US 2024 events in the United States](https://en.wikipedia.org/wiki/Category:2024_events_in_the_United_States_by_month). We generated QAs from wiki documents using GPT-4o following Ovadia et al. [4].
---
`[3-2] Suggestion: Comparison with external memory-based approaches`
> W1. Comparing against external memory based approaches would have been a good addition to the results section.
As per your great suggestion, we have conducted experiments comparing our approach with Retrieval-Augmented Generation (RAG) on datasets with new knowledge, Films 2024 and Events 2024, as used in the previous answer `3-1`. We use [bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) model for the embedding.
* Films 2024
| | EM | Recall | F1 |
| ---- | ---- | --- | ---- |
| Fine-tuning | 13.39 | 30.03 | 28.84 |
| Ours | 16.29 | 35.04 | 32.56 |
| RAG | 48.31 | 72.45 | 67.13 |
* Events 2024
| | EM | Recall | F1 |
| ---- | ---- | --- | ---- |
| Fine-tuning | 10.98 | 43.76 | 39.62 |
| Ours | 15.26 | 56.70 | 46.45 |
| RAG | 27.17 | 64.02 | 55.71 |
Experimental results show that the memory-based approach (RAG) outperforms the fine-tuning approaches, as observed in Ovadia et al. [4]. However, our method can close the gap between the two approaches, indicating a potential pathway to further improve fine-tuning methods. We have added these experimental results to the revised manuscript to emphasize this point.
---
`[3-3] Answers on Questions`
Thank you for your thorough clarification. We have incorporated all clarifications into the revised manuscript. While addressing your questions, we realized that the knowledge injection pipeline with LaPael -- training latent paraphrasers on $D_{train}$ $\rightarrow$ fine-tuning LLMs on $D_{K}$ -- can be confusing. Therefore, we have added a supporting figure to clearly explain that in Section 4 of the revised manuscript.
> Line 177: Question about NER tagger
Yes, we use GPT-3.5-turbo as the NER tagger.
---
> Line 193: mentions “rephrasing each question of D_QA”. Should this be “rephrasing each document”?
We apologize for the confusion. We would like to clarify that we used **both question and answer** to generate the synthetic document. This is because we want to ensure that the synthetic document contains only relevant information to the question, as illustrated in Table 1.
---
> Line 217: mentions that “the LLM is fine-tuned on the test set”. To confirm — does this refer to the paraphrased version of the test set, D_K?
No, it refers to the documents $D_K$ defined in Section 3, not the paraphrased version of $D_K$. We used a paraphrased version of $D_K$ in experiments for Figure 4 and Fine-Tuning (+ para.) in Table 2.
---
> Line 238: regarding “Combining LaPael and Paraphrases”. Specifically, what does “number of paraphrases” mean in the LaPael setting in Figure 4? Is “ours” the default LaPael setup, and “fine-tuning” LaPael + fine-tuning?
The *number of paraphrases* in Figure 4 refers to the number of paraphrases for each document in the dataset used to fine-tune the LLM ($D_K$) for both "fine-tuning" and "ours."
---
> Table 7: Can the authors elaborate on how they adapt the baselines in this real-world data scenario?
In the real-world data scenario, baselines are also fine-tuned on the raw documents from the SQuAD test set ($D_K$). We would like to clarify that "ours" denotes fine-tuning the LLM on the raw documents from the SQuAD test set ($D_K$), with the latent paraphraser trained on synthetic documents from the SQuAD train set ($D_{train}$).
---
**References**
[1] Kandpal et al., Large Language Models Struggle to Learn Long-Tail Knowledge
[2] Allen-Zhu et al., Physics of Language Models: Part 3.1, Knowledge Storage and Extraction
[3] Jiang et al., Instruction-tuned Language Models are Better Knowledge Learners
[4] Ovadia et al., Fine-Tuning or Retrieval? Comparing Knowledge Injection in LLMs
---
Rebuttal 2:
Comment: Thank you for your response, and clarifications. They largely address my concerns, and I appreciate the additional results for the two new datasets.
I have one follow-up question about the results for the new RAG baseline. First of all, thank you for running these experiments, and including the numbers in the rebuttal, and for updating the manuscript with these numbers. Currently, the RAG numbers are quite far from the LaPael numbers. What do you think is the main advantage of using a method like LaPael over the retrieval based method?
---
Rebuttal Comment 2.1:
Comment: Thank the reviewer for your reply and follow-up question. We are happy to hear that your concerns are largely addressed by our response.
Regarding your follow-up question, the main advantage of fine-tuning approaches including ours, over retrieval-based approaches such as RAG lies in simplicity, inference efficiency, and computational cost. Fine-tuning approaches result in a self-contained model that *simplifies* the overall system architecture by eliminating the need for additional infrastructure for document retrieval and ranking during inference. This approach *avoids the extra computational costs* associated with query embedding and extended prompts with retrieved documents, making it more *cost-efficient*, especially in terms of GPU memory, and suitable for deployment in resource-constrained environments.
Due to the complexity-performance trade-off, the choice between two approaches depends on the deployment environment. Fine-tuning approaches are specifically suited for settings that require low-latency and resource-efficient deployment. We hope this explanation clarifies your question. If you have any further questions, please let us know. | Summary: This paper presents LaPael, a novel approach to injecting new knowledge into Large Language Models (LLMs). Previous works have shown that fine-tuning the model with data augmented by paraphrasing helps the model learn new knowledge. However, this requires high-quality paraphrased data each time new knowledge is injected, which is costly. The paper proposes a method that introduces a paraphraser layer within the LLMs, which acts as a latent paraphraser that paraphrases the input text in the latent space. The authors train the latent paraphraser using paraphrased data. Once the latent paraphraser is trained, it is used to fine-tune the LLM on the new knowledge. LaPael outperforms standard fine-tuning and paraphrasing approaches, showing improvements on question-answering benchmarks.
Strengths: 1. The paraphraser layer within LLMs introduces a novel method for incorporating new knowledge, reducing the need for high-quality paraphrased data and reducing the significant cost of fine-tuning the model.
2. The paper demonstrates that LaPael outperforms the supervised fine-tuning method on question-answering benchmarks.
Weaknesses: **The major weakness of the paper lies in the experiment section and the details provided regarding the experiment. Below are my concerns and questions related to this section:**
1. Line 190: "We use the subset of questions in the test set of each dataset for $D_{QA}$" -> Why is only a subset of questions chosen instead of the entire test set? No details about this are mentioned in the paper.
2. Creating a document out of question-and-answer pairs to inject knowledge into the LLM and then testing it on the same test set is inappropriate for evaluating knowledge injection. It would have been more suitable to use datasets with available documents, inject the knowledge from these documents, and then test the model on the QA. Additionally, the documents used should not have been seen by the pretrained LLM during its pretraining phase to quantify how much knowledge has been injected into the LLM.
3. There are no statistics available for the created documents, such as their size. Creating small paragraphs from question-and-answer pairs, fine-tuning the model on these paragraphs, and then testing it on the same question-and-answer pairs does not constitute a correct and fair evaluation.
4. Line 212: Why are in-context examples used instead of experimenting with a zero-shot setting to assess the extent of the model's learned knowledge?
5. Lines 184-185: It is not clear what the authors meant by "We sample N noise."
6. Line 182: "From Equation 10" should be "from Equation 15."
**The writing of the paper is also very weak. Here are a few examples:**
a. The acronym "LM" is used in the experiment section without being defined. The paper uses "LLM" for Large Language Models and sometimes "LM."
b. The paper uses "$D_{knowledge}$" and "$D_{K}$" interchangeably. Please ensure consistency.
c. There are missing details in the experiment section, and no correct reference is provided to indicate where in the appendix these details can be found.
Technical Quality: 1
Clarity: 1
Questions for Authors: Please refer to the weaknesses section.
Confidence: 5
Soundness: 1
Presentation: 1
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your valuable comments on our work. We understand your concerns and are certain that your comments will help improve the quality of our work. Should you have any unresolved concerns, please let us know. We are happy to discuss and do our best to address your concerns.
---
`[2-1] Experiments with available documents with new knowledge.`
> W2. It would have been more suitable to use datasets with available documents. Additionally, the documents used should not have been seen by the pretrained LLM during its pretraining phase to quantify how much knowledge has been injected into the LLM.
**Summary: We show that our method is still effective with raw documents and new knowledge by conducting supplementary experiments on four datasets. The experimental results are presented in Table A of the attachment.**
Thank you for your valuable suggestion. We agree that experiments on datasets with available raw documents and new knowledge are important. We acknowledge that the scope of existing experiments, including Table 7, is limited as they do not handle the new knowledge scenario.
To address this, we conducted new experiments on four datasets, including two with new knowledge, using available raw documents. Specifically, SQuAD-raw and StreamingQA-raw analyze knowledge injection with widely-used document-question datasets, while Films 2024 and Events 2024 test new knowledge injection, as Vicuna-7b-v1.5 is not pre-trained on them (see below for the details of the datasets).
Table A in the attachment on the global comment presents the experimental results, showing our method's effectiveness with available raw documents and new knowledge, consistent with previous synthetic document experiments in the paper. We have included these results in Section 5 of the revised manuscript considering their significance.
> Details of the datasets:
> * SQuAD-raw: The **entire set** of SQuAD test set with available raw documents.
> * StreamingQA-raw: The **entire set** of StreamingQA test set with available raw documents.
> * Films 2024: A synthetic QA dataset based on raw Wikipedia articles under [2024 films](https://en.wikipedia.org/wiki/Category:2024_films). We generated QAs from wiki documents using GPT-4o following Jiang et al. [1].
> * Events 2024: A synthetic QA dataset based on raw Wikipedia articles under May, June, and July of [US 2024 events in the United States](https://en.wikipedia.org/wiki/Category:2024_events_in_the_United_States_by_month). We generated QAs from wiki documents using GPT-4o following Ovadia et al. [2].
---
`[2-2] The explanations on why we create a synthetic document out of QA pairs to evaluate the knowledge injection`
> W1. Why is only a subset of questions chosen instead of the entire test set? No details about this are mentioned in the paper.
> W2. Creating a document out of question-and-answer pairs to inject knowledge into the LLM and then testing it on the same test set is inappropriate for evaluating knowledge injection.
We would like to clarify that we use datasets with synthetic documents to simplify the evaluation and analysis of knowledge injection, based on widely-used QA datasets. We used a subset of them to meet the budget limits on costs for `gpt-4-turbo` API calls and LLM fine-tuning. We constructed synthetic documents from existing question-and-answer pairs to ensure learning from these documents leads to knowledge injection, without considering variables like reversal curse and distracting contents, as mentioned in Line 193. We have clarified these points in the revision.
---
`[2-3] The statistics for the documents used in experiments.`
> W3. There are no statistics available for the created documents, such as their size.
Thank you for pointing it out. We missed referencing the statistics initially, although they were in Table 8 of the Appendix. We have corrected this in the revision.
We also supplemented Table 8 with dataset size measurements and presented these in Table B of the attachment in the global comment. Additionally, we measured token counts per document and QA pair for all datasets used, plotting the histogram in Figure A of the attachment. These statistics are now included in Appendix B.1 of the revised manuscript.
---
`[2-4] Clarification on in-context examples used.`
> W4. Line 212: Why are in-context examples used instead of experimenting with a zero-shot setting to assess the extent of the model's learned knowledge?
We use in-context examples to ensure that LLMs generate the answer in the desired format (phrase instead of sentence). We have clarified this point in Section 5.1 of the revision.
---
`[2-5] Regarding the noise sampling in lines 184-185.`
> W5. Lines 184-185: It is not clear what the authors meant by "We sample N noise."
We would like to clarify that "We sample N noise" means that we randomly sample N different $\alpha$ values from the Gaussian distribution with mean $\mu$ as defined in Equation 5. We have clarified this point in the revision.
---
`[2-6] Writing fixations.`
> W6. Line 182: "From Equation 10" should be "from Equation 15."
> Wa. The acronym "LM" is used in the experiment section without being defined. The paper uses "LLM" for Large Language Models and sometimes "LM."
> Wb. The paper uses "$D_{knowledge}$" and "$D_K$" interchangeably. Please ensure consistency.
> Wc. There are missing details in the experiment section, and no correct reference is provided to indicate where in the appendix these details can be found.
Thank you for pointing those out. We have ensured consistent use of the acronym "LLM" and the term "D_K" throughout the paper in the revision. We have also fixed the no-reference issue by adding the references to the correct subsection of the Appendix.
---
**References**
[1] Jiang et al., Instruction-tuned Language Models are Better Knowledge Learners
[2] Ovadia et al., Fine-Tuning or Retrieval? Comparing Knowledge Injection in LLMs
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their rebuttal. I have read the authors' responses and the comments from other reviewers.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer for their engagement during this phase. We hope we have addressed the issues raised. It would be very helpful if the reviewer could provide any further concerns or suggestions that remain.
Thank the reviewer once again for your time and effort in this process. | Summary: This paper proposed a latent paraphraser to generate paraphrased data which will be used as augmented data in LLMs' fine-tuning.
To tackle the challenges of repetitive external model interventions, the latent paraphraser (LaPael) is trained to add a small perturbation at the latent feature level of the LLM and eliminate the need for users to repeatedly paraphrase using LLMs once latent paraphrasers are trained. Specially, with the paired-data as input, i,e., original and paraphrased sentence, the training objective is to minimise the KL divergence between distributions of original sentence after the paraphraser transformation and the distribution of the paraphrased sentence. The experimental results show that LaPael outperforms existing noise-based paraphraser in generating better augmentation data.
Strengths: 1. The paper is well writen and easy-to-follow and the research topic of generating augmentation data is important.
2. The empirical results clearly show the superiority of the proposed method, LaPael.
Weaknesses: 1. The cost of training the latent paraphraser is not clear, as expensive training will impedes the efficiency of the proposed method.
2. Lack of qualitative analysis of show how the phrased sentences differ from the given paraphrased sentences from GPT3.5.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In table3, the cross-domain transfer: why the results trained on X evaluated on X, is worse than trained on X but evaluated on Y.
2. Any explainations or case study showing the generated paraphase sentences from LaPael?
3. The computation cost comparsion with noise-based paraphraser.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The trade-off between task performance and computation cost.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your constructive and helpful comments. We initially address all your concerns and questions below.
---
`[1-1] Regarding the comparative analysis of paraphrased sentences`
> W2. Lack of qualitative analysis of show how the phrased sentences differ from the given paraphrased sentences from GPT3.5.
> Q2. Any explanations or case studies showing the generated paraphrase sentences from LaPael?
Thank you for your feedback. We would like to clarify our approach as follows. Instead of generating paraphrased sentences, our proposed method perturbs the latent representations within the LLM, as denoted in Lines 45-46. These perturbed latent representations do not directly correspond to paraphrased sentences. Therefore, comparing our method to paraphrased sentences from GPT-3.5 is not applicable. We hope this clarification addresses your concern.
---
`[1-2] Regarding the cost of training the latent paraphraser`
> W1. The cost of training the latent paraphraser is not clear, as expensive training will impede the efficiency of the proposed method.
> Q3. The computation cost comparsion with noise-based paraphraser
Thank you for your valuable feedback and questions. Our proposed method does introduce an additional cost for training the latent paraphraser once prior to fine-tuning an LLM, compared to noise-based fine-tuning baselines. However, we would like to emphasize that this cost becomes negligible over multiple knowledge injections due to the following factors:
1. **Size and Parameter Cost**: The latent paraphraser has only 3.6% of the parameters of the main LLM (we used a 7B model), keeping the training cost low.
- **FLOPs Comparison**: A single forward step for the LLM costs $F$ FLOPs (Floating Point Operations). Training the five latent paraphrasers requires approximately $1.108F$ FLOPs, compared to the $3F$ FLOPs needed for updating the LLM.
2. **Training Dataset Size**: The latent paraphraser can be effectively trained with a small dataset. Training with 50-100 documents is sufficient to surpass baseline performance, reducing the number of training steps and data required, as shown in Figure 5(a).
3. **Transferability**: The latent paraphraser is trained only once and can be reused over multiple knowledge injections without additional retraining, as shown in Table 3. As more knowledge is injected, the amortized training cost approaches zero.
We have clearly stated the cost of training the latent paraphraser compared to noise-based fine-tuning baselines in the revised manuscript.
---
`[1-3] Regarding the cross-domain transfer`
> Q1. In table3, the cross-domain transfer: why the results trained on X evaluated on X, is worse than trained on X but evaluated on Y.
Thank you for pointing this out. We would like to clarify the document on which the latent paraphraser is trained ($D_{train}$) differs from the document on which the LLM is fine-tuned ($D_{K}$). Therefore, it is possible that the latent paraphraser trained on $D_{train}$ of X can show better performance on $D_K$ of Y than on $D_K$ of X. Moreover, the performance differences are marginal, indicating almost no significant difference. This highlights the transferability of our method, demonstrating that the training dataset for the latent paraphraser does not significantly affect its performance. This flexibility is shown in Table 3 and described in Lines 230-238.
---
Rebuttal Comment 1.1:
Comment: Thanks very much for your detailed clarification!
Follow up questions:
(1) Can you give a comparison here between noise-based paraphraser (such as FreeLB) and LaPael?
(2) I am curious, after adding such a latent Paraphraser layer, what are the inference performances changes on Unrelated but Basic Reasoning datasets? such as math reasoning, code generation. As the inplementation is expected to not degrader the basic model capacity.
---
Reply to Comment 1.1.1:
Comment: Thank you for reviewing our response and providing additional comments. We greatly appreciate your feedback and will address your questions below.
---
> Can you give a comparison here between noise-based paraphraser (such as FreeLB) and Lapael?
Here is the comparison between two noise-based baselines and Lapael. The noise-based baselines, NEFTune and FreeLB, do not require a preliminary step prior to fine-tuning LLMs:
* **NEFTune:** During fine-tuning, NEFTune adds random noise to token embeddings during fine-tuning to improve model robustness. [1]
* **FreeLB:** During fine-tuning, FreeLB introduces adversarial perturbations to token embeddings, aiming to minimize adversarial risk by optimizing the model's performances across various perturbations. [2]
* **LaPael (ours):** Uses an *input-dependent noise generator* (latent paraphrasers) that applies noise to latent features, learning the noise distribution directly from given paraphrases. LaPael optimizes its latent pararphrasers by aligning the output distributions between the perturbed model with the original sentence and the model with paraphrased sentences, as detailed in Section 4.2 of the main paper. This approach enables LaPael to generate noise that is more contextually appropriate and effective.
We would like to emphasize that while LaPael requires an additional step of training latent paraphrasers before fine-tuning the LLM, this overhead becomes negligible (previous response `1-2`) and significant performance gains can be achieved compared to the baselines (Tables 2 and 3). We have added this detailed comparison in addition to the content on Lines 105-107 of the main manuscript.
---
> I am curious, after adding such a latent Paraphraser layer, what are the inference performance changes on Unrelated but Basic Reasoning datasets? such as math reasoning, code generation. As the implementation is expected to not degrader the basic model capacity.
Thank you for the insightful question. Fine-tuning LLMs for knowledge injection can indeed degrade the model's capacity due to *catastrophic forgetting* induced by the fine-tuning process. To illustrate this, we provide the math reasoning performance of the LLM (fine-tuned on the synthetic documents of the SQuAD dataset) on the GSM8K [3] test set.
* Performance on GSM8K test set after knowledge injection
| GSM8K | Accuracy |
| -------- | -------- |
| No adapt.| 23.05 |
| Fine-Tuning| 18.88 |
| Ours (Fine-Tuning with LaPael) | 18.04 |
The results show that fine-tuning itself largely contributes to the degradation of math reasoning performance. As discussed in our response to Reviewer NFcZ (`4-5`), maintaining reasoning performance after fine-tuning is beyond the scope of our work, which primarily focuses on improving knowledge injection. Nonetheless, we acknowledge the importance of addressing this problem and its potential to guide future research. We have included this point and the associated results in the limitations section of our revised manuscript.
---
We hope that our response sufficiently addresses your questions. Should you have any further inquiries or comments, please don’t hesitate to let us know so that we can resolve them. If you find that most of your concerns have been addressed, we kindly ask that you consider adjusting the score accordingly. Thank you so much for your effort and time in reviewing our work.
**Reference**
[1] Jain et al., NEFTune: Noisy Embeddings Improve Instruction Fine-tuning.
[2] Zhu et al., FreeLB: Enhanced Adversarial Training for Natural Language Understanding.
[3] Cobbe et al., Training Verifiers to Solve Math Word Problems
---
Rebuttal 2:
Comment: Q1: I am asking for the **computation cost comparison** between noise-based and the proposed method, as the author mentioned "We have clearly stated the cost of training the latent paraphraser compared to noise-based fine-tuning baselines in the revised manuscript.", But I couldn't find it.
Q2: Thanks for your results and the acknowledgment of "**this degradation of math reasoning performance**".
I keep my original score mostly because the inefficiency in keeping the (other) basic reasoning capability, e.g., math of the LLMs after inserting such paraphrase layer, as one of the important principle of knowledge injecting is not to destroy the existing knowledge capacity.
---
Rebuttal Comment 2.1:
Comment: Thank the reviewer for clarifying the questions.
> Q1: I am asking for the computation cost comparison between noise-based and the proposed method, as the author mentioned "We have clearly stated the cost of training the latent paraphraser compared to noise-based fine-tuning baselines in the revised manuscript.", But I couldn't find it.
We apologize for any confusion. Regarding the revised manuscript, we have consistently addressed all comments in the revised version of the manuscript, which unfortunately we cannot upload at this moment.
Here, we provide the computational cost comparison between noise-based baselines and the proposed method in terms of per-step FLOPs for training:
### Noise-based baselines:
* Fine-tuning LLM: $3F$ FLOPs
### The proposed method:
* Fine-tuning LLM: $3.036F$ FLOPs
* Additional training for latent pararphasers: $1.108F$ FLOPs.
We believe that our method offers a viable option to **enhance the effectiveness of knowledge injection** into LLMs, despite requiring some initial training cost. This cost leads to superior results compared to noise-based baselines.
> Q2: Thanks for your results and the acknowledgment of "this degradation of math reasoning performance". I keep my original score mostly because the inefficiency in keeping the (other) basic reasoning capability, e.g., math of the LLMs after inserting such paraphrase layer, as one of the important principle of knowledge injecting is not to destroy the existing knowledge capacity.
Thank the reviewer for sharing valuable thoughts.
While we understand the concern regarding fine-tuning and catastrophic forgetting, we would like to emphasize that **this is a common challenge across all fine-tuning methods, not just our approach**, as evidenced by the previous work [1].
Our method does not significantly exacerbate this issue while improving the knowledge injection performance. From this perspective, we believe our method offers a valuable contribution.
We are very grateful for the opportunity to discuss these points with the reviewer. The reviewer's engagement in this discussion is highly valuable to us. Please let us know before the discussion period ends if there are any further concerns or questions.
**Reference**
[1] Jang et al., Towards Continual Knowledge Learning of Language Models | Rebuttal 1:
Rebuttal: (R1=R-Zuau, R2=R-bjHu, R3=R-WC5j, R4=R-NFcZ)
We sincerely thank the reviewers for their thoughtful and constructive feedback. We appreciate the acknowledgment that the paper is well-written and organized (R1, R4), the superiority of the proposed method (R1, R2, R3, R4), the insightfulness of the experiments (R3), and the novelty of the method (R2).
Regarding the concerns and questions raised, we believe that we have adequately addressed each one and provided detailed responses in line with each review. We have also revised the manuscript according to your valuable feedback and suggestions.
In the attachment, we include the following:
- Table A: Experimental results on four datasets with raw documents, referenced in the responses to R2 and R3.
- Figure A: The distributions of token counts in documents, questions, and answers for each dataset used in our experiments, referenced in the response to R2.
- Table B: The size of datasets used in our experiments, referenced in the response to R2.
Pdf: /pdf/5fbdb2668d1f4bf642a7f0fc688b0008f7753ea3.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
The ALCHEmist: Automated Labeling 500x CHEaper than LLM Data Annotators | Accept (spotlight) | Summary: The paper proposes a labeling workflow (Alchemist) using LLMs where instead of labeling each data point using a teacher model, we ask the teacher model to generate a program to label for the given task. Multiple such programs are generated and using an aggregation function, we get pseudo labels. These pseudo labeled points are then used as training data to train a student model. They show that such labeling mechanism is effective and less expensive compared to the teacher model labeling every data point. They also perform ablation studies for multi-modality, supplementary information during program generation, and diversity of programs.
Strengths: 1. Knowledge distillation using a larger LLM to label datasets is very prevalent now. It is an expensive process.
2. Alchemist is an interesting way to reduce costs.
3. The idea to generate weak labeling functions/programs is novel and as the weak supervision literature shows, it can be effective in certain cases.
4. The paper is clearly written, and the experiments section is organized to answer critical questions.
Weaknesses: As the results show, no performance degradation is not guaranteed (for e.g. table 1, Yelp dataset). For a new dataset, without comparing the approaches, it may be hard to understand if Alchemist works for it without impacting the performance.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why are the accuracy numbers for Alchemist with GPT-3.5 different in table 1 and table 3?
2. How does the performance compare if we use the generated programs for labeling the test set instead of training the student model?
3. Given that the performance gain/no loss is not consistent across datasets, what is the recommendation for a new dataset, i.e. how can one know whether this technique is going to work for that dataset? There are mentions that it works better for complex dataset, any other detailed characterization possible?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The main limitation is that it is hard to understand if Alchemist is going to work for your task without spending the money to label and compare. Performance loss when observed is not small to ignore. Generated programs may be reviewed by human experts, but they may not be straight-forward to interpret and gain confidence from.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to Reviewer b4or
We are grateful for the review and the positive assessment. Thank you for acknowledging Alchemist is novel, interesting, and our paper is well-written. We address your questions below.
* **On Performance Degradation.**
* In the majority of cases (six out of eight datasets), Alchemist **performs as well as or even better than model-based annotation, in addition to being orders of magnitude cheaper, more extensible, and easier to inspect**. In fact, our results suggest that users should prefer Alchemist over model-based annotation.
* In cases where a user seeks to improve Alchemist's base performance, we can simply add more generated programs---**at extremely low cost.** In Table 1 in our paper, we report results with 10 generated programs. Increasing the number of generated programs enhances performance, as we showed in Figure 5, where generating 20 labeling programs for Yelp and IMDb datasets results in **better labeling performance compared to zero-shot prompting.**
* **On Accuracy Performance in Table 1 and Table 3.**
* Results shown in Table 1 are for a model trained on Alchemist-created labels (we report performance on the testing dataset), while in Table 3, we present the label model's performance (i.e., just using the ensemble of programs that Alchemist generates) on the testing dataset. These two approaches correspond to *two stages in the weak supervision process*: first, create a label model that just uses the labeling functions (programs, in our case), and second, obtain labels from the label model and train an *end model*. Using just the label model and training an end model have both been extensively studied in the weak supervision literature. It is known that training an end model on weakly-labeled data enables generalizing beyond the label model in many cases. The reason for this is based on the observation that the label model may not be capable of capturing all labeling patterns (i.e., all possible features). It may instead only use certain simple ones. Training an end model permits learning many patterns, both simple and complex, which enables better generalization in many scenarios. **Our results are consistent with these notions.**
* **On Labeling the Test Set.**
* We described the difference between the label model and the end model above. We organized Alchemist's results in Table 5 in our attachment. We show the testing performance using the label model and the trained end model for eight datasets. **In most of the cases, the end model is preferable to the label model.**
* **On Techniques to Evaluate Generated Programs.**
* There are many ways to evaluate the quality of generated programs in advance. First, expert users can quickly determine whether key problem properties are being used by looking at the code. Besides human inspection, Alchemist includes several automated measurement tools to diagnose generated programs. We analyze program outputs and compute their coverage, polarity, conflict, and overlap (see [1] for definitions). For instance, coverage measures the fraction of data points with at least one label. If coverage is below a certain bar (e.g., 10%), we discard the program and ask users to generate a new one.
* Moreover, if a validation dataset is available, Alchemist can run diagnostics to empirically compare accuracy with ground truth, offering more insight into the program's reliability. **Notably, these tools are not typically accessible with model-based annotation methods.**
* **On Data Characteristics.**
* In general, we have found that Alchemist works well when the labeling logic involves a mixture of logical formulas over a set of salient primitives. This captures most cases of interest. Formalizing a definition of what data or task characteristics are relevant is a very interesting problem for future work; we foresee studying, for example, the expressivity of programs and understanding if they can meet most ML tasks. Kolmogorov complexity or similar notions will likely be helpful here.
* **On Framework Limitation.**
* This limitation is common across many annotation approaches. We often only know the results after completing the entire labeling process, which is also true for model-based annotation methods. However, Alchemist significantly reduces expenses, requiring a lower order of magnitude and enabling more efficient reruns of the labeling process. In other words, **Alchemist offers a much more tractable approach to iteration compared to model-based annotation**.
[1] https://snorkel.readthedocs.io/en/v0.9.3/packages/_autosummary/labeling/snorkel.labeling.LFAnalysis.html
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed responses to my questions and additional experiments.
---
Reply to Comment 1.1.1:
Comment: Thank you! Your suggestions helped us to improve the paper. Please let us know if you have any further questions; we are very happy to follow up. If there are no further concerns, we would appreciate it if you increase your score---we appreciate it! | Summary: The authors propose an innovative solution for high cost of APIs that using large pretrained models to generate programs that act as annotators. This idea helps replace or supplement crowdworkers and allows for distilling large models into smaller, more specialized ones. Traditional methods can be costly and produce static, hard-to-audit datasets due to expensive API calls. To tackle this, they introduce Alchemist, a system that tasks models to generate reusable labeling programs. These programs can be applied locally, significantly cutting costs while maintaining or even improving performance.
Strengths: 1. Generating programs that can produce labels is an interesting idea. This paper introduces a mechanism that can generate multiple labels, transforming the one-to-one mapping between samples and labels into a sustainable one-to-many or many-to-many relationship. This approach can significantly reduce API costs.
2. From a practical implementation standpoint, Alchemist's flexibility and reusability are major strengths. The system allows users to create simple prompts for generating labeling programs that can be stored, reused, and extended. This adaptability makes it a versatile tool suitable for a wide range of tasks and ensures that users can tailor it to their specific needs.
2. The code is straightforward and easy to read, making it accessible even for those who might not be deeply familiar with the underlying concepts.
Weaknesses: 1. One of the limitations of Alchemist is that the generated programs tend to handle only fixed tasks, often relying on threshold-based judgments. This means that for more complex or varied tasks, the system might still need to call specialist models via API, which somewhat limits its flexibility.
2. Another weakness is the lack of experimental evidence regarding the stability of the generated programs. For instance, the paper doesn't thoroughly explore how factors like the temperature variable in the code could affect the quality and consistency of the generated labeling programs.
3. The authors could also improve their literature review. There are several highly relevant papers that discuss various aspects of data annotation that weren't cited, such as [1] [2] [3] [4] [5].
[1] https://arxiv.org/abs/2310.04668
[2] https://arxiv.org/abs/2303.15056
[3] https://arxiv.org/abs/2306.04349
[4] https://dl.acm.org/doi/pdf/10.1145/3613904.3642834
[5] https://dl.acm.org/doi/pdf/10.1145/3594536.3595161
Technical Quality: 4
Clarity: 3
Questions for Authors: Refer to weaknesses.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to Reviewer d5ZD
Thank you for recognizing Alchemist’s flexibility, extensibility, and for your kind words about our paper! We address your questions below and include experimental results on more complex tasks and program stability.
* **On More Complex Tasks.**
* Alchemist is not limited to threshold-based simple tasks. It is capable of capturing more complex labeling logic, as we have observed in some of the generated programs. To further demonstrate this capability, we included two more challenging datasets: **ECG heartbeat classification** and **Census income classification.** Results are shown in Table 2 in our attachment and demonstrate that Alchemist can handle more complex modalities and perform well. Moreover, we compared Alchemist with human-crafted labeling functions from WRENCH. **Alchemist uses fewer labeling functions (programs) and reaches higher labeling performance.**
* Additionally, since submission, we have been working on an additional, highly timely, application of this work to LLM-as-a-judge. This is a crucial area since users and companies spend vast sums of money for evaluating model outputs using automated LLM-as-a-judge techniques. We have collaborated with a leading startup in the industry to use our technique in production environments to reduce costs. Our preliminary results in this area suggest that Alchemist can capture sufficiently complex logic to identify poor-quality responses from language model pipelines.
* **On the Stability of Generated Programs.**
* This is a great question. We conducted an experiment by varying the temperature in our query APIs and running Alchemist on four different datasets. We trained the end model five times with different random seeds and computed the average performance and the variance. Results are shown in Table 4 in our attachment. We observe **consistent labeling performance** across different temperatures (0.0, 0.5, and 1.0), demonstrating Alchemist’s stability. Additionally, the stability of generated programs highlights **the significance of including aggregation models** to handle noisy and diverse outputs, resolve conflicts, and produce final labels.
* **On Improving Literature Review.**
* Thank you for sharing these papers with us. We have included them in our related work section. **Key differences**: These papers prompt LLMs for labels **per-sample**, requiring a large number of API calls (scaling with the size of the dataset) and making it hard to inspect mistakes and revise labeling logic, unlike Alchemist.
---
Rebuttal Comment 1.1:
Comment: The authors have satisfactorily addressed most of my problems. Most concerns has been addressed and some senarios may out of scope of this paper. I have raised my score. Once again, I want to express my gratitude for your hard work and commitment.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response and valuable feedback. Your suggestions have improved our paper. If you have any further questions, feel free to ask—we’re happy to provide more information. Thanks again! | Summary: The paper presents a new method for creating data labels that leverage a Large Language Model and the weak supervision/data programming labeling paradigm. In this work, the LLM is used to generate labeling code, typically in the form of functions, which can then be used to create weak labels for weak supervision. The paper goes on to show this method’s utility across several benchmark text datasets and one image dataset. The paper also does an ablation study showing the impact of different elements of the method, like numbers of weak functions, and including examples in the prompt to create the functions.
Strengths: The paper is strong in its significance, clarity, and quality. For significance, the paper is attacking a significant problem. One of the great promises of large ML models, with multimodal or text only, is their ability to zero-shot label text. This ability allows LLMs to possibly overcome one of – if not the chief issue – with building machine learning models: labeled data. Thus, this paper is attacking a profoundly important problem for the application of ML to real-world problems. For clarity, the paper is well-written and the diagrams are very helpful. When combined with the appendices and supplementary material, I can not only easily see how to reproduce their results, but how to apply this method to my data labeling problems. In other words, I can easily see wide adoption and use of this method. Finally, the paper does a reasonably good job in its empirical testing to cover many variations on the application of the method (e.g., having both text and image datasets) and variations to the method (e.g., including examples, numbers of weak functions, etc.).
Weaknesses: The weakness of the paper is in its grounding/novelty and some of its methods, particularly when applied to other modalities. For the grounding, while the paper actually does capture many of the previous works that have done something similar, there are works like Cruickshank and Ng, “DIVERSE: Deciphering Internet Views on the U.S. Military Through Video Comment Stance Analysis, A Novel Benchmark Dataset for Stance Classification” where they used LLMs + weak supervision to create an actual new dataset. They did not, however, have the novel insight about creating the weak labeling functions that this work does.
For the method in the image or multimodal case, two works have done something very similar with CLIP. First, Adila et al. “Zero-shot Robustification of Zero-shot Models” use the same CLIP model and water birds dataset, but get substantial performance increases by “debiasing” the image embeddings, with text characteristics. This is very similar to what is done with the labeling functions where this paper tries to get labeling functions to classify aspects of the birds and ignore spurious contexts. Second, Bendou et al.’s “LLM meets Vision-Language Models for Zero-Shot One-Class Classification” presents a method for one-shot classification using VLMs, which uses an LLM to build negative classes around a positive class. This is similar to the insights in this paper around developing labeling functions to highlight important visual distinctions in the classes to improve the labels. Taken together, I think this paper might be able to incorporate the insights from these other papers to actually improve their results. For example, the debiasing in the first paper could be used with the proposed method.
Technical Quality: 3
Clarity: 4
Questions for Authors: I have no additional questions.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: One other societal limitation the work could address is how works like this are changing the data labeling industry. Data labeling is still a massive, multi-million-dollar industry and these works are changing the nature of that industry. Ideally, they are changing it positively, but it can still disrupt how the data labeling industry works and cost human labelers their income.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to Reviewer uwkF
We are grateful for your review and for describing our paper as significant, well-written, and of high quality. We address your questions below and provide additional experimental results incorporating Roboshot into Alchemist!
* **On Grounding.**
* Thank you for sharing this paper with us. We agree it relates to Alchemist, and we have included it in our updated related work. **Key differences** include: unlike Alchemist, which automatically generates programs, this work requires domain expertise and more human effort to craft heuristics and search for proper keywords.
* In Sec. 4.4 in our paper, we include an experiment comparing Alchemist to human-crafted labeling functions. For example, in the SMS dataset, WRENCH requires 73 manually crafted labeling functions to obtain high-quality labels, whereas **Alchemist only needs 10 generated programs to achieve comparable performance and higher coverage**. This significant reduction highlights Alchemist's capability to assist humans in generating label sources, making it more accessible to users without extensive domain expertise.
* **On Two Similar Works.**
* Thank you for pointing out these two papers. The first paper prompts LLMs for spurious and correct correlation features, then calibrates image embeddings by projecting them to reject or accept concepts. The second paper asks LLMs for confusing visual objects to perform zero-shot one-class classification with CLIP. We have included them in the draft.
* **On Incorporating Roboshot into Alchemist.**
* This is an interesting idea. We implemented this suggestion by incorporating Roboshot into Alchemist. We first used GPT-4o to identify spurious correlations for classifying waterbirds and landbirds, then projected image embeddings onto these concept embeddings to reject spurious correlations through subtraction. We computed cosine similarity using the calibrated embeddings to obtain score sets, which were then fed into Alchemist’s generated programs. The spurious correlations identified by GPT-4o were *{'water background,' 'land background', 'aquatic plants,' 'trees and bushes'}*. Results are displayed in Table 3 in the attachment. We observe that integrating Roboshot with Alchemist using GPT-4o **enhanced robustness to spurious correlations by improving accuracies.**
* **On Framework Limitations.**
* Our motivation is to address the **drawbacks of pretrained model-based annotation, which is expensive, lacks extensibility, and makes results hard to inspect**. We believe Alchemist serves as a better labeling tool by reducing costs and providing human labelers with more power to inspect and refine labeling results. In addition, generated programs can serve as templates for human labelers to rewrite, extend, or customize labeling logic. These benefits make the labeling process easier and more efficient for humans.
---
Rebuttal Comment 1.1:
Title: Reply to Rebuttal
Comment: The Authors have improved the paper further. I am particularly impressed to see the Roboshot idea incorporated and that it further enhanced the method. I stand by my rating of this being string accept.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer,
Thank you for your valuable suggestion and your support. We are excited about the new results with Roboshot as well. We appreciate it! Thank you for your time! | Summary: The paper proposes an automated way to label large quantities of data by leveraging large language models to generate labeling functions which can be used to label data using programmatic weak supervision. The paper demonstrates that this procedure can generate labeling functions which are more accurate than manually generated LFs and querying LLMs for labels directly at a fraction of cost. The authors also show that prompt tuning benefits their method, and that it can be used for other richer modalities.
Strengths: 1. The paper is well written, and easy to follow.
2. The experiments are well designed.
3. The idea is simple, intuitive and has promising performance.
4. Evaluation on richer data modalities is challenging, and I appreciate the authors' inclusion of image datasets.
Weaknesses: Most of these concerns are minor, and some of them can be readily fixed.
1. **Comparisons with related work:**: I believe that comparisons with empirical ScriptoriumWS and DataSculpt are necessary to demonstrate the benefits of the proposed approach. With that said, I found the experiments to be rigorous. I would also like the authors to catalog the differences between their approach and ScriptoriumWS, DataSculpt, and [1] to better highlight the novelty of their methodology and findings. I am aware that [1] was published very close to the NeurIPS deadline so I do not expect the authors to compare their methods with [1], but conceptual differences (if any) can still be highlighted.
2. **Missing references:**: The idea of using keywords and LLMs to automatically create LFs is not new, so I would encourage the authors to reference some prior work (e.g. [2]).
3. **Lack of reproduciblity and claims:** The techniques presented in the paper and the results are not reproducibile with the details given in the paper. I would encourage the authors to release their code for the same. The paper mentions that "any particular program may be inaccurate, fail to compile, or may otherwise be flawed, ...". I am curious how the authors ensure that the programs are not downright inaccurate (because snorkel would require the accuracy of LFs to be greater than random chance), or fail to compile? I believe that some kind of a data-driven feedback loop is important to ensure that the programs compile, and the generated LFs have coverage, and are accurate. Therefore, I am excited about the possibility to include keywords, dataset descriptions, example in the prompt, but I feel that there has to be a feedback loop after the label model labels the data or identifies inaccurate LFs.
4. **Richer modalities:** The claim on scaling to richer modalities should be softened, because the authors only evaluate their approach on a relatively simple image dataset. More challenging tasks may include labeling chest X-rays [3], datasets for which are publicly available, and other richer modalities such as time series data [4]. I do not expect the authors to conduct these experiments during the rebuttal, but such an extension would be very interesting and valuable, in my opinion.
### References
1. Smith, Ryan, et al. "Language models in the loop: Incorporating prompting into weak supervision." ACM/JMS Journal of Data Science 1.2 (2024): 1-30.
2. Gao, Chufan, et al. "Classifying unstructured clinical notes via automatic weak supervision." Machine Learning for Healthcare Conference. PMLR, 2022.
3. Irvin, Jeremy, et al. "Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison." Proceedings of the AAAI conference on artificial intelligence. Vol. 33. No. 01. 2019.
4. Goswami, Mononito, Benedikt Boecking, and Artur Dubrawski. "Weak supervision for affordable modeling of electrocardiogram data." AMIA Annual Symposium Proceedings. Vol. 2021. American Medical Informatics Association, 2021.
Technical Quality: 3
Clarity: 3
Questions for Authors: I wonder what the authors thoughts are on points 3 and 4 above in the weaknesses.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have discussed the limitation of their approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to Reviewer n36E
Thank you for recognizing our paper as well-written with well-designed experiments. We appreciate your acknowledgment of our simple, effective idea and the significance of including diverse modalities. We appreciate your thoughtful review!
* **On Comparisons with Related Work.**
* We compared Alchemist with related works in Table 1 in our attachment. **Alchemist uses fewer programs while achieving comparable labeling accuracy.** **Key differences** include:
* **Motivation:** Both ScriptoriumWS and DataSculpt address weaknesses in weak supervision, whereas Alchemist focuses on the downsides of model-based annotation.
* **Approach:** In ScriptoriumWS, supplementary information is manually crafted into prompts to generate code, while Alchemist uses LLMs to distill self-knowledge, enhancing labeling performance but using less human effort. DataSculpt employs *hundreds of programs for obtaining high-quality labels, but Alchemist achieves comparable accuracy with just 10 programs*.
* **Complex Modalities:** Alchemist handles complex modalities beyond text, while ScriptoriumWS and DataSculpt *do not*.
* PromptWS significantly differs from Alchemist. PromptWS prompts LLMs *multiple times for each sample*, **resulting in more API calls (even compared to zero-shot prompting).** While this method can improve performance, it incurs extremely high labeling costs, is harder to audit, and lacks extensibility. Alchemist, in contrast, addresses these issues.
* **On Suggested References.**
* Thank you for sharing this paper. We have updated our related work and included it. While this work shares a similar idea with Alchemist, Alchemist goes beyond keyword extraction by capturing more complex and diverse labeling logic, and organizing them into executable programs (see Figure 1 in the paper). Moreover, extending to non-text modalities like image classification is challenging, but Alchemist is capable of this.
* **On Reproducibility.**
* We attached our code in the supplementary material of our submission. We will release our code and a full set of outputs, including generated programs, after paper notification.
* **On the Data-Driven Feedback Loop.**
* This is a good question, and it is a crucial step. Alchemist includes several tools for diagnosing generated programs. First, we analyze program outputs to compute coverage, polarity, conflict, and overlap (see [1] for definitions). For example, if coverage (the fraction of data points with at least one label) is below 10%, we discard the program and ask users to generate a new one. Second, if a validation dataset is provided, Alchemist computes empirical accuracy by comparing outputs with ground truth. Such a data-driven feedback loop ensures tractable program generation. We have included this step's description in the updated draft.
* **On Richer Modalities.**
* Thank you for the suggestion! We included two additional modalities in Table 2: **time-series** (ECG heartbeat classification [2]) and **tabular** (Census income classification [3]). For ECG heartbeat classification, we generated 10 labeling programs from GPT-4o. For the Census income dataset, we generated program codes for each attribute (e.g., gender, education, age, race). We used Snorkel as our label model. The results demonstrate Alchemist's capability to handle more complex modalities and produce satisfactory performance. Moreover, we compared Alchemist with human-crafted labeling functions from WRENCH. Alchemist **uses fewer labeling functions (programs) and reaches higher labeling performance.** In general, Alchemist will work well with any of these modalities as long as we have access to any cheap local feature extractor. This includes medical imaging tasks: [4] showed manually-crafted simple labeling functions were able to identify heart problems in MRI sequences based on very simple primitives, which could act as the feature extractors for Alchemist.
[1] https://snorkel.readthedocs.io/en/v0.9.3/packages/_autosummary/labeling/snorkel.labeling.LFAnalysis.html \
[2] https://physionet.org/content/mitdb/1.0.0/ \
[3] https://archive.ics.uci.edu/dataset/20/census+income \
[4] Fries, J.A., Varma, P., Chen, V.S. et al. Weakly supervised classification of aortic valve malformations using unlabeled cardiac MRI sequences. Nat Commun 10, 3111 (2019).
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal!
Comment: Dear Authors,
Thank you so much for your rebuttal and putting in time and effort. I have raised my score to reflect my current assessment of the paper.
Below are some general thoughts based on the rebuttal:
1. In addition to data-driven feedback, I also think that output driven feedback is important. In some cases the LLM might hallucinate and not generate LF code grounded on the input prompts, in which case an output-driven feedback loop can ideally guide the model to generate correct programs (accuracy is the second step, the model must generate correct programs first). I foresee this to be a problem in non-text modalities where the model is having to use APIs or local feature extractors. I think some discussion on this and your empirical findings would be beneficial to the community.
2. "*Both ScriptoriumWS and DataSculpt address weaknesses in weak supervision, whereas Alchemist focuses on the downsides of model-based annotation.*" Can you explain this statement?
3. It seems that ScriptoriumWS and Alchemist are similar in terms of accuracy and the # of programs which are generated. It seems from your description that the difference lies in extensibility to other richer modalities. Can you explain what exactly prevents the core ScriptoriumWS methodology to model richer modalities? The answer can be as simple as, "the method *can* be extended with X, Y and Z, but this was not evaluated".
4. I would like to authors to provide some details about their time series classification experiment. The results are encouraging, and this is out of curiosity. I re-read the section on richer modalities in the paper, and the extension to time series doesn't seem trivial. I would like to know more about how the authors carried out this experiment.
---
Rebuttal 2:
Comment: Dear reviewer,
Thank you for your valuable feedback! Your suggestions have improved our paper. Appreciate it. We answer your questions below.
1. Thank you for your suggestion! We agree and have added additional descriptions to the paper addressing the output-driven feedback loop. One way to handle the challenge of making this work in non-text modality settings is to develop a set of simple unit tests that can be used to drive the feedback loop.
2. The difference comes down to what the goals of these techniques are. ScriptoriumWS and DataSculpt focus on automating weak source (labeling function) creation, addressing the limitations of human-crafted labeling functions, which require subject matter experts to implement by hand. The ideal outcome of these systems is to speed up the process of performing weak supervision by reducing the complexity of writing correct code. In contrast, Alchemist addresses the downsides of model-based annotation, such as high cost, lack of extensibility, and difficulty in auditing. Its goal is to efficiently distill model capabilities. While the mechanisms currently used to accomplish these goals have similarities, they are motivated by different challenges.
3. Indeed, ScriptoriumWS does not operate on non-text modalities at all; that technique is to directly prompt a language model to generate programs/labeling functions that perform a text task. The most straightforward way to extend the ScriptoriumWS methodology is to prompt a multimodal model like GPT4o to generate programs over the target modality (e.g., images). There are two downsides we have identified to this approach to extending ScriptoriumWS:
* It requires access to a powerful multimodal model, which may not exist for many modalities of interest,
* Even if such multimodal models exist, they may struggle with spurious correlations.
While we did not directly evaluate ScriptoriumWS with GPT4o, we did extend Alchemist itself in this fashion as a baseline. In Table 2 in our paper, we reported our findings, which suggest that the spurious correlation issue is indeed a problem. In contrast, Alchemist’s approach, based on obtaining primitives and using a cheap feature extractor, reduces the impact of these spurious correlations. It additionally also mitigates the other challenge, as such feature extractors are much easier to obtain compared to a powerful multimodal model.
4. In this setting, we did not use the two-stage extension method to generate programs. Instead, we included supplementary information, such as dataset and prediction class definitions, directly in the prompts. Following the basic recipe for our work, we generate 10 programs and use Snorkel as the aggregation method to produce predictions. We show two generated programs below as examples. They use peak count as their labeling logic. The first program achieved 69.5% accuracy, while another reached 81.5%.
```
def label_by_fft_peak(time_series):
""" Label based on the frequency domain characteristics """
fft_result = np.fft.fft(time_series)
fft_magnitude = np.abs(fft_result)
peak_freq = np.argmax(fft_magnitude[1:]) + 1 # Ignoring the DC component
if peak_freq < 15:
return 0 # 'N'
elif peak_freq < 30:
return 1 # 'S'
elif peak_freq < 45:
return 2 # 'V'
elif peak_freq < 60:
return 3 # 'F'
else:
return 4 # 'Q'
```
```
def label_by_peak_count(time_series):
""" Label based on the number of peaks in the signal """
from scipy.signal import find_peaks
peaks, _ = find_peaks(time_series, height=0)
peak_count = len(peaks)
if peak_count > 7:
return 0 # 'N'
elif peak_count > 5:
return 1 # 'S'
elif peak_count > 3:
return 2 # 'V'
elif peak_count > 1:
return 3 # 'F'
else:
return 4 # 'Q'
``` | Rebuttal 1:
Rebuttal: ### General Response
We are grateful for all the comments and constructive feedback on our work. Reviewers consistently found our paper to be well-written and easy to follow and described our work as novel and offering promising performance.
Reviewer **n36E** complimented the inclusion of complex modalities in the evaluation. Reviewer **uwkF** highlighted Alchemist's ease of implementation, foreseeing wide adoption. Reviewer **d5ZD** acknowledged Alchemist's flexibility and reusability, making it a versatile tool for diverse tasks. Reviewer **b4or** agreed Alchemist enables cost reduction. We have adopted suggested clarifications, improved our literature review, and conducted new experiments, leading to a much stronger draft.
### New Results Included:
* **Comparison to Other Works [Reviewer n36E]:** We compared Alchemist with other works, including ScriptoriumWS and DataSculpt, as shown in Table 1. Results indicate that **Alchemist uses fewer generated programs while achieving comparable performance.** Additionally, Alchemist addresses data modalities beyond text, unlike these works. We cataloged more detailed differences in the threads below.
* **New Modalities [Reviewer n36E, d5ZD]:** Table 2 includes datasets for ECG heartbeat classification and Census income classification, representing time-series and tabular modalities, respectively. Both illustrate Alchemist's ability to address these settings.
* **Incorporating RoboShot [Reviewer uwkF]:** We integrated RoboShot into Alchemist, using GPT-4o to identify and reject spurious correlations. As shown in Table 3, this integration improved average accuracy and worst-group performance, **enhancing robustness to spurious correlations.**
* **Stability of Generated Programs [Reviewer d5ZD]:** We varied the temperature in GPT-4 API calls, **showing consistent performance** across four datasets, confirming the stability of generated programs (see Table 4).
* **Labeling on Test Set [Reviewer b4or]:** Table 5 presents performance results using the resulting weak supervision label models and trained models (on the annotations from the label models) on eight testing datasets. We generated 10 programs from GPT-3.5-Turbo for each. Results demonstrate that in the majority of scenarios, **using the trained model (with Alchemist annotations) is preferable**.
We have addressed reviewers' questions and placed our comments in their respective threads below. Thank you again for your questions and thoughtful reviews!
Pdf: /pdf/170f06d630680c4a23c895664f2dfd8c35e7e893.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Universal Rates for Active Learning | Accept (poster) | Summary: This work provides a characterization of distribution-dependent learning rates for realizable active learning in terms of combinatorial complexity measures on the hypothesis class -- the main result is that any hypothesis class is universally learnable (i.e. learnable with rate $CR(cn)$ for distribution dependent constants $C$ and $c$) with rate either arbitrarily small, exponentially, linear, or arbitrarily large, depending on a complexity measure profile of the class.
Strengths: This work presents an important contribution to a recent line of work on universal rates which includes similar characterizations in the passive and interactive learning settings. It naturally generates comparisons to the "rate partitions" in the passive and interactive settings given in prior work. The authors introduce a new complexity measure which serves to distinguish hypothesis classes for which the more powerful interactive querying model admits faster universal rates than the standard active learning model, as well as to distinguish hypothesis classes which admit exponential universal rate speed-up over passive learning. Given that the search for settings where active strategies can yield an exponential speedups over passive learning has long been a primary focus in the field of active learning, this seems to me to be a noteworthy contribution.
Weaknesses: Pages 8-9 probably should be polished a bit before publication. I think the content of Appendix A.4, which compares the results to passive and interactive learning analogues of this paper and thus gives clear insight into the power of active queries, probably should take up some real estate in the main body in the next version of the paper.
Technical Quality: 4
Clarity: 4
Questions for Authors: The results of this paper state that the universal optimal rate of active learning is exponentially faster than that of passive learning when there does not exist an infinite star tree for $\mathbb{H}$. Do the authors feel that the concept of ``infinite star tree'' yields insight into the phenomenon of active learning algorithms tending not to outperform passive sampling in practice?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the Reviewer for their detailed feedback. Please find our answers to your questions below.
> Pages 8-9 probably should be polished a bit before publication. I think the content of Appendix A.4, which compares the results to passive and interactive learning analogues of this paper and thus gives clear insight into the power of active queries, probably should take up some real estate in the main body in the next version of the paper.
Thank you for the suggestions. If the paper gets accepted, we will certainly make use of the extra space to polish the presentation of pages 8-9 and bring the context of Appendix A.4 to the main body.
> The results of this paper state that the universal optimal rate of active learning is exponentially faster than that of passive learning when there does not exist an infinite star tree for $H. Do the authors feel that the concept of ``infinite star tree'' yields insight into the phenomenon of active learning algorithms tending not to outperform passive sampling in practice?
Thank you for your question. Our work aims to understand the fundamental capabilities and limitations of the active learning setting itself. We do not aim to explain phenomena which may have been observed for specific heuristic active learning methods which have been applied in practice. This is also an interesting subject, worthy of a separate study. | Summary: This paper deals with universal active learning of binary classes in the realizable setting, which continues important work in related universal settings (interactive, online, ...). The authors propose a star number based VCL-tree variant, which together with original VCL-trees characterize the 4 possible rates of universal active learning.
Strengths: A strong paper with deep results further continuing the line of work on universal learning, this time active learning. This work nicely complements the "universal rates of interactive learning" paper where general queries are studied, while here only more common label queries are allowed.
The star number based modification of VCL trees is nice and natural.
Weaknesses: Please address my questions below. Am more than happy to raise my score if my first question on $e^{-n}$ learning of intervals (vs $\Omega(1/\varepsilon)$ rate under the uniform distribution) is clarified.
--- rebuttal update ----
raised from 5 to 7.
Technical Quality: 3
Clarity: 3
Questions for Authors: Your example with learning intervals on the real line is indeed very surprising. For example, in the Balcan et al. [2010] paper you cite, the authors discuss that actively learning an interval $[a,b]\subseteq (0,1)$ requires $\Omega(1/\varepsilon)$ queries **under the uniform distirbution on $(0,1)$**. Please clarify this apparent contradiction to your $e^{-n}$ claim. Sorry if I am missing something trivial.
The $o(1/n)$ rates seem somewhat related to the rates achieved by Attias et al., [COLT 2024] for universal regression in the absolute loss case. Are there any connections?
Do high-probability bounds ($\geq 1-\delta$) make sense in this active universal setting? Hanneke and Yang [2015] managed to fully remove the dependence on $\delta$ for uniform active learning
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the Reviewer for their detailed feedback. Please find our answers to your questions below.
> Intervals and Balcan et al. [2010].
This is a good point to clarify. Balcan et al. (2010) distinguish between the *true* query complexity and the *verifiable* query complexity of a learning task. The $\Omega(1/\varepsilon)$ query lower for learning intervals under the uniform distribution you refer to holds for the *verifiable* query complexity, not for the *true* query complexity, which is the definition we consider in our work (and in prior works in universal rates).
The *verifiable* query complexity refers to the number of queries needed to both produce an $\varepsilon$-good classifier *and* prove that its error is no more than $\varepsilon$ (with high probability). The *true* query complexity refers to the number of queries needed to only output such an $\varepsilon$-good classifier.
More formally, for a given $\mathcal{H}$ and marginal distribution $\mathcal{D}$ over $\mathcal{X},$ a function $Q(\varepsilon, \delta, h^*)$ is a *verifiable* query complexity if there exists an algorithm $A(n,\delta)$ that outputs a classifier $h_{n,\delta}$ and a value $\hat \varepsilon_{n,\delta} \in (0,1)$ after making at most $n$ label queries, so that for any target labeling function $h^* \in \mathcal{H}, \varepsilon, \delta \in (0,1)$ and for any query budget $n$ it holds that $\Pr[\mathrm{er}(h_{n,\delta}) \leq \hat \varepsilon_{n,\delta}] \geq 1-\delta,$ and for any $n \geq Q(\varepsilon, \delta, h^*)$ it holds that $\Pr[\mathrm{er}(h_{n,\delta}) \leq \hat \varepsilon_{n,\delta} \leq \varepsilon] \geq 1-\delta.$
The definition of *true* query complexity reads the same way, except now the algorithm is *not* required to output $\hat{\varepsilon}_{n,\delta}$. To illustrate how this subtle difference can have a big impact on the query complexity, consider learning intervals over $(0,1)$ under the uniform marginal distribution, and let us ignore the dependence on $\delta$ for simplicity.
Consider two cases: first assume that the target interval is the empty one. Then, there is a learning algorithm that needs 0 queries to learn a $\varepsilon$-good classifier (in fact, a zero error classifier). On the other hand, if the target interval has width $w > 0$, there is a learning algorithm that learns an $\varepsilon$-good classifier needing $O(1/w + \log(1/\varepsilon)) = O(\log(1/\varepsilon)) $ queries using the following strategy: initially the algorithm queries points uniformly at random until it finds some point $x^*$ that has label +1 (this will take roughly $1/w$ many queries) and then does binary search on the two intervals $[0,x^*], [x^*,1]$ to find two $\varepsilon$-approximate endpoints of the target interval, which requires $O(\log(1/\varepsilon))$ many queries. Thus, overall the strategy to learn an $\varepsilon$-good classifier for all target intervals is the following two phase approach: start querying points uniformly at random until either a point with label 1 appears, or the query budget runs out. In the former case, proceed to the “binary search” phase, otherwise output the all zero classifier. The previous argument shows that the *true* query complexity of this algorithm is $O(\log(1/\varepsilon))$.
Let us now consider the *verifiable* query complexity. If the target interval is the empty one, then for any given query budget $n$, the previous algorithm (and in fact, any algorithm) can only guarantee that its error is at most $O(1/n)$ (with high probability). However, if we consider its learning curve for all different values of $n$, we will observe an exponentially fast decay no matter what the target interval is, which is exactly what the definition of universal rates is capturing.
For a more formal discussion of the two definitions of sample complexity, we kindly refer you to Definition 1, 2 of Balcan et al. (2010) and the subsequent discussion in their paper.
Please let us know if this clarifies your concern.
> Attias et al. (2024).
Attias et al. [COLT 2024] show that there *exists* a class for which $o(1/n)$ rates are tight in the setting of regression. In our work, we give a *complete characterization* of the classes for which $o(1/n)$ rates are tight in the context of active learning for binary classification. Moreover, their result shows that a class cannot be learned at a rate faster than $o(1/n)$ when it has an infinite (scaled version of the) Littlestone tree, whereas our result shows that a class cannot be learned at a rate faster than $o(1/n)$ when it has an infinite star tree.
Furthermore, the reason that the $o(1/n)$ rates appear in our work and the $o(1/n)$ rates appear in Attias et al. [COLT 2024] are fundamentally different. In a nutshell, we get $o(1/n)$ because when the class has only finite VCL trees the probability that our algorithm queries the label of an unlabeled point is decreasing as $n \rightarrow \infty$, and we can make correct inferences of the labels of the points we do not query. This allows us to get a set $S$ with $|S| = \omega(n)$ correctly labeled points, and then train a supervised learning algorithm on these points, which has error $O(1/|S|) = o(1/n)$. In the construction of Attias et al. [COLT 2024], they get $o(1/n)$ rates because the magnitude of the errors they make on unseen points decreases, in expectation, as $n \rightarrow \infty$.
We will add a discussion comparing the two works in the next version of our paper.
> High-prob. bounds.
Studying high-probability bounds in the universal rates setting is an interesting direction. So far, all the works on universal rates have focused on establishing bounds that hold in expectation, in order to provide a cleaner characterization of the landscape of the optimal learning rates. It is an interesting future direction to see for which regimes the dependence on $\delta$ can be fully removed; for instance, for arbitrarily fast rates this is immediate.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications! I have raised my score. | Summary: This paper studies active learning for binary classification. The authors provide a complete characterization of the optimal learning rates for non-adaptive active learning algorithms. The authors also develop an active learning algorithm for partial concept classes with exponential rates.
Strengths: The authors provide a complete characterization of the optimal learning rates for non-adaptive active learning algorithms: arbitrarily fast, exponential, o(1/n), and arbitrarily slow. These results answers an open question by Balcan et al. 2010.
Weaknesses: It seems that the analysis heavily relies on the assumption that the active learning algorithm is non-adaptive. For completeness, can authors provided concrete examples of (i) non-adaptive active learning algorithms, and (ii) methods to convert existing active learning algorithms to its non-adaptive counterpart without performance degradation?
Technical Quality: 3
Clarity: 3
Questions for Authors: Besides the comments above, can author provide analysis on the computational aspects of the developed/studied active learning algorithms?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the Reviewer for their detailed feedback. Please find our answers to your questions below.
> It seems that the analysis heavily relies on the assumption that the active learning algorithm is non-adaptive. For completeness, can authors provided concrete examples of (i) non-adaptive active learning algorithms, and (ii) methods to convert existing active learning algorithms to its non-adaptive counterpart without performance degradation?
We would like to clarify that the non-adaptivity assumption is on the number of *unlabeled* examples that the algorithm requests. In particular, at the beginning of the execution, the algorithm specifies an arbitrarily large number of unlabeled examples that it will use. For example, this number could be $e^{e^{e^n}}$, where $n$ is the number of label queries it has (or something even larger than that). The label requests that the algorithm makes are indeed adaptive, and can depend on the unlabeled examples as well as the answers to previous label requests. Moreover, the only part of our characterization that relies on that assumption is the lower bound for the $o(1/n)$ rates. We believe that this is a mild assumption to make, since it still allows for the *label requests* to be done *adaptively*, and we place no upper bound whatsoever on the number of unlabeled examples it can request at the beginning of the execution. Furthermore, we believe that a construction similar to the one we use in our lower bound can also be used to handle the case where we do not place any such restrictions on the type of access to the unlabeled examples, but the technical details get more involved.
To give some more concrete examples, the well-known CAL algorithm of Cohn, Atlas, Ladner [1] or the Activized Learning algorithm of Hanneke [2] can be slightly modified to fit within our model by specifying a sufficiently large number of *unlabeled* examples that they need at the beginning of the execution.
[1] Cohn, D., Atlas, L. and Ladner, R., 1994. Improving generalization with active learning. Machine learning, 15, pp.201-221.
[2] Hanneke, S., 2012. Activized learning: Transforming passive to active with improved label complexity. The Journal of Machine Learning Research, 13(1), pp.1469-15
> Besides the comments above, can author provide analysis on the computational aspects of the developed/studied active learning algorithms?
This is an interesting question. The computational complexity analysis depends on the type of access we have to the underlying hypothesis class. Our main goal in this work is to come up with approaches that give the optimal rates with respect to the sample complexity of this problem, which is always the first step to understanding the computational complexity. In their current form, our algorithms are not computationally efficient, but we hope and believe that they will inspire computationally efficient approaches that work for, potentially, restricted classes and data-generating distributions.
---
Rebuttal Comment 1.1:
Comment: I'd like to thank the authors for their explanations. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Frequency Adaptive Normalization For Non-stationary Time Series Forecasting | Accept (poster) | Summary: This paper introduces FAN, a novel instance normalization technique designed to address both dynamic trends and seasonal patterns. FAN is a model-agnostic method that can be integrated with various predictive models. It significantly enhances performance, achieving average MSE improvements of 7.76% to 37.90%.
Strengths: - Non-stationary time series forecasting has long been a highly challenging problem. Despite extensive research, there remains significant value in further exploration.
- This paper is well-written, precise in expression, and comprehensive in content. It offers a new perspective on mitigating non-stationarity issues in non-stationary time series forecasting.
- The experiments in this paper are relatively comprehensive.
Weaknesses: - There are some shortcomings in the experimental descriptions. For example, the calculation methods for Trend Variation and Seasonality Variation are not clearly explained, and the ADF test values are provided only after normalization, lacking a direct comparison with the values before normalization.
- The selection of the hyperparameter K plays a crucial role in the effectiveness of the model.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the weakness.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate all your suggestions and hope that the following answers have clarified your questions.
**Q1: lacking experimental descriptions, e.g. dataset metrics Trend and Seasonality Variation.**
Thank you for your valuable advice. Due to page limitations, we have briefly discussed the calculation methods for the data in the main text (line 185-187, 248). Based on your suggestion, additional details will be provided in the appendix to improve clarity and reproducibility. Furthermore, we have also included a Jupyter notebook demonstrating the calculation of selected K and other metrics, e.g. Trend/Seasonal Variation, ADF test values, available on [anonymous repo](https://github.com/icannotnamemyself/FAN/blob/main/notebooks/metrics.ipynb). Speciffically, tread/seasonal variations are calculated as follows:
**Trend Variation**:, given a timeseries dataset $\mathcal{X} \in \mathbb{R}^{N\times D}$, we first chronologically split it into $\mathcal{X}^{\text{train}}$, $\mathcal{X}^{\text{val}}$, and $\mathcal{X}^{\text{test}}$, representing the training, validation, and testing datasets, respectively. The trend variations are then computed as follows:
$$
\text{Trend Variation} = \left|\frac{ \operatorname{Mean}_N(\mathcal{X}^{\text{train}}) - \operatorname{Mean}_N(\mathcal{X}^{\text{val}, \text{test}})}{\operatorname{Mean}_N(\mathcal{X}^{\text{train}})}\right|
$$
where the subscripts indicate the dimension of mean, $| \cdot |$ denotes the absolute value operation, and $\mathcal{X}^{\text{val}, \text{test}}$ represents the concatenation of the validation and test sets. Note that, to obtain relative results across different datasets, the trend variation is normalized by dividing by the mean of the training dataset. We fetch the first dimension to be the value in main content Table 1.
**Sesonal Variation**:Given the inputs, $X \in \mathbb{R} ^ {N_i \times L \times D}$, where $N_i$ is the number of inputs, we first obtain the FFT results of all inputs, denoted as $Z \in \mathbb{C} ^ {N_i \times L \times D}$. Then, we calculate the variance across different inputs and normalize this variance by dividing it by the mean of each input, computed as:
$$
\text{Seasonal Variation} = \frac{\text{Var}_{N_i}[\text{Amp}(Z)]}{\text{Mean}_L(X)}
$$
where the subscripts indicate the dimension of the operation. We sum the results across all channels for the value in the main text, Table 1.
**Q2: lacking the ADF test values before normalization.**
Thanks for the suggestion, we will include the original values in Table 1 in the latest version. Since we perform the ADF test after normalization at the instance level and RevIN's can bot handle internal non-stationarity within the input, **the values before normalization and the values after applying RevIN are too similar to distinguish** in Fig 4(a), as shown in the [anonymous notebook](https://github.com/icannotnamemyself/FAN/blob/main/notebooks/metrics.ipynb). Therefore, we only plot the RevIN values for a clearer comparison. However, we will include the original values in Table 1 following your advice.
**Q3: The selection of the hyperparameter K.**
Although we introduced a hyperparameter K, its selection is relatively straightforward based on the distribution in the frequency domain of the dataset (Fig 7); moreover, to avoid selecting the parameter K, we provide a heuristic formula selecting frequencies greater than 10% of the maximum amplitude (line 207) which is applied across our experiments. We further illustrate the effectiveness of this selection rule in **attached PDF Fig 1**.
We hope our responses have adequately addressed your concerns, and we are eager to provide further insights into our study.
---
Rebuttal 2:
Comment: I appreciate the authors' detailed responses, which address my concerns and I will raise the rating of the paper to 7.
---
Rebuttal Comment 2.1:
Title: Thanks for your positive feedback
Comment: Dear Reviewer Cudr,
we sincerely value your feedback and the constructive suggestions you've provided for enhancing our paper. If you have any further questions or concerns, please feel free to let us know.
Authors | Summary: The paper introduces Frequency Adaptive Normalization (FAN) to improve time series forecasting by addressing non-stationary data with evolving trends and seasonal patterns. Unlike reversible instance normalization, which handles trends but not seasonal patterns, FAN employs the Fourier transform to identify and model predominant frequency components. This model-agnostic approach, applied to four forecasting models, shows significant performance improvements across eight benchmark datasets.
Strengths: 1. The paper is well-written and structured, with a well-motivated idea and promising experimental results.
2. The code is available for checking and reproducing the results.
Weaknesses: 1. I noticed a significant disparity between the results reported in Table 2 and those reported by SAN, despite using the same model configurations. This performance gap is unexpected and warrants further investigation.
2. A key question is whether FAN's capabilities fully encompass those of SAN. Specifically, can FAN handle changes in statistical properties, such as mean and variation, that characterize seasonal and trend patterns? A more detailed comparison between FAN and previous relevant methods would be beneficial.
3. I noticed that FAN consumes 0.24 million parameters, which is significantly more than FITS [1], an entirely frequency-based benchmark model. It would be helpful to clarify why such a large number of parameters is necessary to address non-stationarity.
4. Previous studies [1][2] have shown that purely using frequency-domain representation can achieve accurate future variation estimation with simple, lightweight models. The proposed method, however, relies on a backbone model in addition to FAN to handle specific parts of the evolution. Does this combination offer an advantage over methods based solely on frequency components?
5. Furthermore, can FAN enhance frequency-based methods in a similar manner to DLinear and FEDformer?
[1] Xu, Zhijian, Ailing Zeng, and Qiang Xu. "FITS: Modeling time series with $10 k $ parameters." arXiv preprint arXiv:2307.03756 (2023).
[2] Yi, Kun, et al. "Frequency-domain MLPs are more effective learners in time series forecasting." Advances in Neural Information Processing Systems 36 (2024).
Technical Quality: 3
Clarity: 3
Questions for Authors: Please address the weak points I raised above.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: There is no discussion on the limitations or social impacts of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your constructive suggestions as they can aid in enhancing our work. We hope that responses below have addressed your concerns:
**Q1: performance gap in Tab 2 and reported by SAN.**
Thanks for this question. We have rechecked the results, papers, and codes to identify the reasons for the discrepancy between our results and those of SAN. The main reason is the **different data splitting methods used.** In results of SAN, the split ratio is 6:2:2 for the ETT dataset and 7:1:2 for the other datasets, while in our experiments, we used a split ratio of 7:2:1 for all datasets (as stated on line 185). We opted for this setup to unify the experimental setup, thereby increasing the reliability of the results.
**Q2: whether FAN's capabilities fully encompass those of SAN, can FAN handle changes in statistical properties?**
FAN can encompass SAN if we ignore variance as both FAN and SAN can be adjusted to achieve a constant zero mean in distribution. We have provided a theoretical analysis of FAN's effect on temporal statistical properties in **Sec C.3**. Based on the conclusions from the analysis in Sec C.3, FAN can handle statistical changes, as the mean will be zero and the variance variations will be largely reduced.
**Q3: A more detailed comparison between FAN and previous relevant methods.**
A comprehensive comparison is already provided in **Sec 4.3** covering aspects such as prediction performance and showcases, stationarity after normalization, model efficiency, etc. Additional results for the synthetic dataset can also be found in **Sec E.1**.
To address your concern, we list the table below to summarize key metrics and give as many additional details as possible:
|Method|techique|trend|seasonality|space(M)|training iter time(ms)|performance|training strategy|modification on input statistics|
|---|------|-----|-----|--------|-------|----|-------|---|
|FAN|Fourier-based|✅|✅|✅0.249|✅0.32|✅|✅end-to-end|$\mu=0$, $\sigma<<\sigma_{raw}$|
|SAN(2024)|statistics-based|✅|❌|❌0.351|❌0.32x2|⛔|❌two-stages|$\mu=0$, $\sigma=1$|
|DishTS(2023)|statistics-based|✅|❌|⛔0.307|0.43⛔|❌|✅end-to-end|$\mu=0$, $\sigma=1$|
1. ✅: better ❌: worse ⛔: average, SAN and DishTS are two sotas in recent years.
2. The actual training iteration time should be doubled to justify the average iteration time of SAN which is trained on two-stages.
We hope this has met your expectation, and we would appreciate it if you could provide any other comparisons that might help improve our work.
**Q4: why FAN consumes 0.24 million parameters, which is significant compared to FITS.**
Thanks for the insightful question and for bringing up the issue of parameters. In Fig 4b, the parameters are actually consist of DLinear (0.140 M), FAN (0.109M). Why FAN consumes 109k which is large compared to FITS is as follows:
Firstly, while FITS reduces the number of parameters in the input and output layers by assuming a constant dominant frequency and using upsampling, **FAN aims to model varying dominant frequencies**; hence, we cannot reduce parameters by modeling only a fixed subset of dominant frequencies, as FITS does.
Secondly, FAN has tunable parameters and **reducing the parameters does not significantly impact performance**, but **we chose a rather high performance with acceptable parameters across our paper ([64,128,128])**, as in the table below:
|Method|MSE|Parameter|
|-|-|-|
|FAN-[4,8,8]|0.1421|7.2k|
|FAN-[8,16,16]|0.1413|14.6k|
|FAN-[16,32,32]|0.1406|28.9k|
|FAN-[32,64,64]|0.1395|58.1k|
|FAN-[64,128,128]|**0.1390**|109k|
|FAN-[128,256,256]|0.1392|255k|
|FITS|0.1423|**5.5k**|
|FreTS|0.1391|3516k|
- the experiment is conducted on ETTm2 with $H=720$ without backbone.
As in table, after reducing the parameters to 7.2k, FAN can still achieve better performance to FITS (5.5k), possiblely due to our instance-wise selection of dominant frequencies (**Sec 4.5**).
**Q5 : Does FAN's combination of a backbone offer an advantage over methods based solely on frequency components?**
In **Fig 9**, it can be seen that the changes in the dominant frequency are relatively small compared to the changes in the residual frequencies; furthermore, our experiments in **Tab 6** show that model more parameters to predict the dominant frequency did not yield better results, which may be due to the relatively small and stable variations in the dominant frequency.**Therefore, incorporating the backbone indeed provides some advantages, as we use a robust model to predict the relatively small changes in the dominant frequency and a rather complex model to predict the larger variations in the residual frequencies.**
Furthermore, although we did find that even without the backbone, FAN already demonstrates good predictive capabilities by just forecasting varying dominant frequencies. **This combination with the backbone indeed further improves performance and we have provided an ablation study in the main content Sec 4.5 Tab 4 to show the performance variations.**
**Q6: can FAN enhance frequency-based methods in a similar manner to DLinear and FEDformer?**
Thanks for this question. FEDformer is itself a frequency-based method. And we specifically chose this backbone to compare FAN's performance on frequency-based method. Futhermore, we discussed the results and analysis of frequency-based methods, FEDformer/TimesNet/Koopa on line 50, line 92-98, lines 293-304, and **Sec E.3**. Also, we provide results of FITS and FreTS at **attached PDF Tab 2**.
As in the result, FAN can also enhance FITS/FreTS even they are predicting based only frequency components. This is possibly due to our instance-wise selection of dominant frequencies, treating them as the primary non-stationary component and predicting the residual frequencies separately.
We hope our explanations have sufficiently answered your questions, and we value the opportunity to explain our work further.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer MHdq,
We noticed that you have increased our score to 6, and we sincerely appreciate your positive feedback. We highly value all your efforts during the rebuttal phase. If you have any further questions or concerns, please do not hesitate to let us know.
Authors | Summary: This paper proposes a new instance normalization solution called frequency adaptive normalization (FAN) to address non-stationary data in time series forecasting. This paper extends instance normalization to handle both dynamic trend and seasonal patterns by employing Fourier transform to identify predominant frequency components. This paper introduces a simple MLP model to predict the discrepancy of frequency components between inputs and outputs. Several experiments are conducted to prove the effectiveness of the proposed method.
Strengths: 1. this paper is easy to understand. The writing is good.
2. this paper is well-motivated. Since the current normalization techniques are not specifically focused on periodic patterns, the authors try to use frequency techniques to enhance them.
3. the experiments are well-sounded. Several experiments are conducted.
4. the overall framework is simple but effective, which is good for time series forecasting against distribution shifts.
Weaknesses: 1. Missing related work. Several frequency-focused or periodicity-focused works can be discussed [1-3].
2. Most adopted backbones seem to be earlier works of time series forecasting. Adding more backbones such as PatchTST can make the experiments better.
[1] DEPTS: Deep expansion learning for periodic time series forecasting. In ICLR.
[2] Frequency-domain MLPs are more effective learners in time series forecasting. In NeurIPS.
[3] Deep Frequency Derivative Learning for Non-stationary Time Series Forecasting. In IJCAI.
Technical Quality: 4
Clarity: 4
Questions for Authors: Can more related works be discussed?
Can more results be provided?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: See weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your positive feedback on our work, hope that the answers provided below have resolved your inquiries:
**Q1: missing related work. e.g. DEPTS (ICLR 2022), FreTS (NIPS 2024), DERITS(IJCAI 2024).**
Thank you for bringing these works to our attention! These studies indeed help us enrich the related work section. Specifically, DEPTS models periodic states as hidden states and uses an expansion module to model the relationship between cycles and future prediction steps; FreTS uses MLPs to model both the channel and temporal dimensions in the frequency domain; DERITS employs reverse transformation to map the time series to the whole spectrum through K branches FDT and uses iFDT to generate output.
We will include these works to provide a comprehensive review of frequency-focused or periodicity-focused works in the camera-ready version.
**Q2: more results, e.g. PatchTST.**
Thanks for the advice. Here is the result of using PatchTST as backbone where FAN can indeed enhance PatchTST on most metrics. More results can be found at **attached PDF Tab 2**.
| Methods | | PatchTST | | +FAN | |
| ------------ | --- | --------- | --------- | --------- | --------- |
| Metric | | MAE | MSE | MAE | MSE |
| ETTm2 | 96 | 0.202 | 0.079 | **0.199** | **0.078** |
| | 168 | 0.225 | 0.097 | **0.219** | **0.093** |
| | 336 | **0.239** | **0.112** | 0.242 | 0.114 |
| | 720 | 0.272 | 0.142 | **0.268** | **0.141** |
| Electricity | 96 | 0.263 | 0.180 | **0.254** | **0.153** |
| | 168 | 0.263 | 0.176 | **0.255** | **0.158** |
| | 336 | 0.282 | 0.189 | **0.275** | **0.169** |
| | 720 | 0.319 | 0.220 | **0.300** | **0.189** |
| ExchangeRate | 96 | 0.189 | 0.063 | **0.172** | **0.056** |
| | 168 | 0.237 | 0.102 | **0.225** | **0.097** |
| | 336 | 0.333 | 0.198 | **0.293** | **0.160** |
| | 720 | 0.470 | 0.355 | **0.428** | **0.324** |
| Traffic | 96 | 0.323 | 0.384 | **0.314** | **0.374** |
| | 168 | **0.330** | **0.406** | 0.334 | 0.414 |
| | 336 | **0.338** | **0.427** | 0.340 | 0.430 |
| | 720 | 0.378 | 0.460 | **0.373** | **0.454** |
| Weather | 96 | 0.222 | 0.173 | **0.220** | **0.170** |
| | 168 | 0.257 | 0.210 | **0.251** | **0.209** |
| | 336 | 0.305 | 0.283 | **0.301** | **0.278** |
| | 720 | 0.363 | 0.351 | **0.350** | **0.344** |
Based on your advice, we will try to include these results in the latest version.
We hope our explanations have sufficiently answered your questions, and we value the opportunity to explain our work further.
---
Rebuttal Comment 1.1:
Comment: Since the authors have addressed all my questions, I have raised my score.
---
Rebuttal 2:
Title: Thanks for your positive feedback
Comment: Dear Reviewer 3SM3,
We are grateful for your positive evaluation of our work. We greatly appreciate the time and effort you have invested in reviewing our work. If you have any further questions, concerns, or suggestions for improvement, please feel free to reach out to us.
Authors | Summary: This paper presents FAN - frequency adaptive normalization - as an alternative approach to de-trending seasonality in non-stationary data through Fourier transform decomposition. The method relies on dynamically identifying K instance-wise predominant frequency components; the evolution of these components is then modeled using a simple MLP approach rather than assuming they remain unchanged. The authors employ FAN for eight benchmark forecasting problems using four different ML backbones and assess the performance with and without FAN. The authors provide the code through a public GitHub repository.
Strengths: The paper addresses the problem of evolving seasonality in non-stationary data through utilizing instance-wise frequency components, rather than global frequency analysis, coupled with an MLP approach to capture non-linear relationships in the evolution of the predominant non-stationary components. The main contribution appears to be a dynamic selection of the K relevant frequencies, as opposed to previous methods that assume a fixed frequency set across inputs or select frequencies randomly, as well as modeling the frequency components’ evolution using an MLP approach rather than assuming they remain constant or through predicting the statistics.
Weaknesses: While the contributions appear novel and the paper presents both theoretical considerations and applied results, the robustness of the methodology is not well established by the authors. The authors state that the selection of K is done based on inspection of the data as “the average maximum amplitude within 10% of the training set” and is shown to vary between 2 and 30 for the benchmark datasets. However, the ablation study (which is presented only in the Appendix and showcases only a subset of the datasets) does not properly support the selection method - the selected K for the presented datasets were 3, 5, 2, and 30 while the sensitivity analysis only utilizes K = 4, 8, 12, and 24. It would have been more pertinent to do dataset-specific analysis as the hyperparameter is dataset-specific. Furthermore, there appears to be a prediction length dependency that is not properly investigated. In terms of establishing the performance of FAN, the authors present comparisons between the performance of different backbone models with and without FAN as the main result and relegate comparison to other normalization methods to averages over all prediction lengths in the main paper. While the premise of improving prediction performance through normalization is indeed established through the former comparison, that is not the scope of the paper. Furthermore, we are unable to reproduce the percentage MAE and MSE improvements stated for FAN in section 4.2, especially those used to claim improved performance with prediction length.
The paper showcases an overall logical organization and contains most necessary sections for properly presenting the work (relegating the limitations section to the Appendix is a questionable choice). However, there are several places where grammar and phrasing reduce clarity and make readability difficult. A few examples: the description of the selection of the hyperparameter K as “the average maximum amplitude within 10% of the training set” is not clear; FAN is presented as a normalization *method* that can be combined with any ML model, however, in Section E1, FAN is referred to as a *model*; the y-axis in Fig 4b is nonsensical and overestimates the difference between models; Fig 10 would be clearer if it showed difference between input and output frequencies rather than overlaying the two. Also, the figures throughout the paper and Appendix would benefit from increased font size (most are barely legible without significant zoom). Further, there are a few statements throughout the paper that indicate overconfidence in the results, or confusion in interpreting them. For example, Fig 4c is used to support FAN’s improved performance with increasing input length compared to other models. While MSE is indeed lower for FAN for longer input lengths, all models show the same trend in decreased MSE with increased input length. Similarly, Fig 12 is stated to demonstrate the higher convergence speed of FAN compared to other normalization methods. Again, the metric (loss as opposed to MSE in Fig 4c) is indeed lower for FAN, but the compared-to methods seem to plateau at earlier epochs than FAN, indicating the opposite of the claim of which methods converge faster.
While the proposed normalization method does present some novelty that can be built upon by the authors and others, the significance of the contribution is difficult to assess due to the aforementioned limitations in proven robustness of methodology. The results would also benefit from statistical analysis to properly elucidate the improved performance of the methods - simply indicating lower errors while not investigating if the error reduction is significant compared to other methods and using models without FAN does not properly support FAN’s performance.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. What are the assumptions for correlation between input dimensions?
2. How come you select a data normalization method (z-score) that does not handle non-stationary series when your data is inherently non-stationary? How does this affect your results?
3. How have you calculated the MAE and MSE improvements, specifically for Tab 2? We are not able to recreate the stated percentages, and cannot support the statement about improved performance with increased prediction length.
4. What is the comparison between on line 266?
5. In Section C4 it is stated that "the range of the distribution mean has decreased to 8" - decreased from what? How is this seen in Fig 10?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors do not present limitations in the main paper but have opted to relegate this section to the Appendix. While we understand that the page restrictions for the paper puts restraints on the presented content, adjustments should be made to ensure important information is presented in the main paper. The content of the limitation section is also not satisfactory and, in fact, enforces the aforementioned concerns about the contributions of the paper. Namely, the authors discuss their proposed methodology for selection of the hyperparameter K, which is presented as one of the novel contributions of their works, and state that their approach “may lead to incorrect K value selection”. It would have been advisory to focus the current study on elucidating the robustness of the proposed methodology, rather than simply presenting results of one implementation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer LAap for such a comprehensive review of our work. We hope we have addressed all your concerns as follows:
**Q1: the ablation study and sensitivity analysis are provided only in the appendix and only utilizes K = 4, 8, 12, 24, which should be dataset-specific and not all dataset results have been provided.**
In the main content, we do provide an ablation study in Sec 4.5 and sensitivity analysis in Sec 4.4, and K ranges from 1 to 32, containing all selected K across datasets.
Futhermore, as shown in table below, due to the large differences in spectrum distribution across datasets (Fig 7), it is hard to make cross-comparisons when using different ratios in experiments. Therefore, we chose a consistent range for K to better present the results.
|ratios|0.1|0.2|0.3|
|------|---|---|---|
|Exchange|3|2|1|
|Weather|3|2|1|
|Electricity|18|3|1|
|ETTm2|7|5|2|
|Traffic|49|30|16|
However, to address your concern, we provide results on all datasets at **attched PDF Fig 1**, with dataset-specific K ranging from 1 to 49. This will be included in the latest version.
**Q2: Reproducibility.**
we have double checked our code, we are sure you can reproduce the results in our paper. To assist your review, we add some model checkpoints and training logs for you to test performance and compare training process in the [anonymous repo](https://github.com/icannotnamemyself/FAN/tree/main/results/runs/DLinear/ExchangeRate). The docs is updated accordingly.
**Q3: several places where grammar and words reduce clarity and readability.**
Thank you for carefully reviewing our paper. We will recheck the grammar/wording and make appropriate changes based on your suggestions.
**Q4: statements indicate overconfidence: (1) Fig 12 use "convergence speed". (2) Fig 4c where all methods' MSE/MAE are decreasing.**
Thank you for bringing up these issues. (1) We acknowledge that the word "convergence speed" may lack rigor; we will update the term to "better convergence" in the latest version. (2) The conclusion derivation in Fig 4c is consistent with previous approaches, e.g. Table 5 in DishTS, Fig 4 in SAN, Fig 7 in RevIN. The mentioned issues also exist in these works. We will consider changing to relative improvement rather than absolute metric values in the latest version.
Nevertheless, we are certain that except these issues, the remaining conclusions of this paper are correctly stated based on objective results.
**Q5: What are the assumptions for correlation between input dimensions?**
We follow a channel independence assumption for input dimensions (mentioned in line 129).
**Q6: Why use z-score and its effect on results.**
As in lines 192, and in previous works, e.g. SAN, z-score can "scale the dataset to the same scale, facilitating results readibility". Furthermore, it can help model training, e.g. the loss might be too small without scaling transformation. To evaluate its effect, we conduct experiments on Traffic/ETTh1 with $H=720$ and use DLinear as backbone, the MSE results are:
Traffic:
|Method|z-score|w/o z-score|
|-|-|-|
|FAN|**0.472**|**0.00126**|
|w/o FAN|0.532|0.00249|
|IMP| 11.27%|49.40%|
ETTh1:
|Method|z-score|w/o z-score|
|-|-|-|
|FAN|**0.158**|**15.686**|
|w/o FAN|0.179|19.940|
|IMP|11.73%|21.33%|
As shown in tables above, without z-score, the metrics are difficult to read/present. **Futhermore, the improvements made by FAN without z-score are even more pronounced**. For reviewers to reproduce the result, we update the repo docs accordingly.
**Q7: How the improvements are calculated? specifically for Tab 2.**
Thanks for the valuable question! The formula of MAE/MSE improvements is $f^*_{D,M} (\frac{MSE_{base}-MSE_{our}}{MSE_{base}})$ except in Tab 2 where both MSE and MAE are presented; in Tab 2, the average MSE and MAE improvement is $\frac{\sum_{D,M} (MSE_{base}+MAE_{base})-\sum_{D,M} (MSE_{our}+MAE_{our})}{\sum_{D,M} (MSE_{our}+MAE_{our})}$, since we combine both MAE/MSE results to give an uniform result. We believe that the former is a standard algorithm for calculating improvements, whereas the latter is not. For clarity, we would use the former formula across our paper in the latest version and separately list the improvements in MSE and MAE results.
The improvements in line 218 regarding Tab 2 will be updated to 9.87%/7.49%, 18.87%/14.73%, 36.91%/25.20%, 16.26%/13.24%, and 20.05%/18.79% for MSE/MAE, respectively. Note that this improvements are still substantial, which highlights the overall performance of FAN.
- `*`: $D$ and $M$ denote datasets and models, $f$ is some operations, e.g. max, avg across dataset/model, MAE follows the same procedure.
**Q8: stated percentage improvements regarding the prediction length can not be reproduced.**
Thanks for your careful review! After revising the calculation formula (Q7) for improvements in Tab 2, despite the conclusion regarding prediction length remaining valid for Informer (line 223), it cannot be generalized to all models. Therefore, we will remove this conclusion in the latest version.
**Q9: the comparison on line 266.**
The comparison is MSE compared to second best model SAN. The MSE improvements over SAN, increases from 0.49%(L=48) to 4.37%(L=336). We will add more detail on this to increase clarity.
**Q10: Sec C4, what decreased to 8? How is this seen in Fig 10?**
As shown in Fig 10, numbers on the polar axis shows the mean amplitude values of each frequency across the inputs. So, the amplitude distribution mean range decreased from 80 (SAN), 70 (DishTS), 70 (RevIN) to 8 (FAN).
We hope our responses have addressed your concerns, and we appreciate the opportunity to clarify our work.
---
Rebuttal Comment 1.1:
Comment: Thank you for answering my questions. I still have concerns regarding the scientific rigor, quality of the methods, results, and conclusions.
Q1. Sec 4.5 is indeed an ablation study but not the one I was referring to in my comment. I am, admittedly, not an expert in this field and so there is a possibility that I have misunderstood details of the presented work. However, my understanding is that the contributions in the presented work are (1) a dynamic selection of K and (2) using MLP to model the freq components evolution rather than assuming they remain constant. As such, the manuscript should focus on the effects of these aspects compared to alternative normalization methods to properly elucidate the benefits of the proposed methodology. I find that this is not the case. Instead, the manuscript focuses on the performance of predictive models with and without FAN. While it is indeed of interest to establish that normalization with FAN does improve performance, this does not put FAN in relation to alternative normalization methods.
Thank you for providing the additional figure and tables. I suggest looking them over once more to ensure that values in Tab 1 are correct. “The green background corresponds to the selected K across our experiments” seems to be wrong for ETTh1, ETTh2, ETTm1, and ETTm2.
Q2. My comment on reproducibility was not on the full study but on the percentage MAE and MSE improvements for FAN in section 4.2, the same as for my question 3. You address my question below (Q8) and seemingly admit to there being an error in the calculation.
Q4. Regarding the convergence speed: “Better convergence” is a weak statement that is still incorrect. Convergence refers to the plateauing of loss, not the absolute value of the loss. In fact, for Syn-7 and Syn-9, the FAN appears to result in slower convergence than the alternative normalization methods.
Regarding the input length: It is not the statement that FAN shows a reduction in MSE with increased input lengths that is the issue (the statement appears true based on Fig 4c), rather the phrasing “compared to other models”. This phrasing is incorrect (all models show this behavior) and misleads the reader. Is the MSE significantly lower for FAN at longer inputs compared to other models? If yes, then this is a significant result that should be pointed out. If not, then this is not a significant result and should not be made out to be “Notably”.This goes back to my original comment about backing your results up with statistical tests to prove significance in your presented numbers.
Q6. I am not questioning scaling the data, this is indeed necessary for the model training. I am questioning the choice of scaling method. I see that z-score is used also in the SAN and RevIN papers. Did you assess the distribution of the datasets prior to opting for z-score or was the choice based on the use in the aforementioned papers? Did you consider dynamic scaling alternatives that would have been more suitable for non-stationary data?
Q7. Thank you for the clarification. MAE and MSE do not provide the same information and should hence not be combined.
Q10. Thank you for the clarification. I highly suggest adding a more descriptive caption to the figure to help the reader understand the plots. As stated in my original review, I would also suggest displaying the difference between input and output rather than overlaying the two or, alternatively, splitting them up and presenting them in separate plots.
---
Reply to Comment 1.1.1:
Comment: We appreciate your time in reviewing our paper and giving us the opportunity to clarify our work further. We understand that there may be some misunderstanding, as you are not specialized in this field; however, we hope the following response will fully address any misunderstandings you may have about our work.
Q1: Our contribution has been clearly illustrated in main content **Sec 1**, and you have acknowledged these contributions in the strength of the original comments: (1) instance-wise select top K main components, not "dynamic selection of K" (2) modeling varying main frequency through MLP. We believe there are some misunderstandings regarding how to demonstrate the validity of these contributions. However, **we do not focus solely on the performance of predictive models with or without FAN**, to address your issue, here is a list:
1. **contribution 1**: In Sec 4.5 (Fig 6, Tab 5), we analyze the distribution of the primary frequencies to illustrate the necessity of selecting the instance-wise primary frequencies, not by merely listing the results;
2. **contribution 2**: In Sec 4.4, 4.5 Tab 4, Sec B.3 and D.2, we explain that a simple MLP might achieve better performance by analyzing relative variation of main and residual frequency components across datasets.
3. **comparison with alternative normalization RevIN, DishTS, SAN**: In Sec 4.3, we provide a comprehensive comparison regarding stationarity test after normalization, model efficiency, etc.
4. **full data analysis**: In Sec B, we conducted a comprehensive analysis of frequency distribution, selection distribution, and other relevant factors to further validate the necessity of Contribution 1 and Contribution 2.
5. **Theoretical analysis**: In Sec C, we discuss the effect of FAN on Fourier and temporal distribution, theoretically.
Hence, **a significant portion of our paper is based on model comparison/data analysis/theoretical results rather than merely on results with or without FAN**.
We admit some typos of ETT datasets in material Tab 1 and thanks for pointing out, below is the updated version, others remain the same:
| |ETTh1 | ETTh2 | ETTm1 | ETTm2 |
|-------|-------|-------|-------|--|
| 0 | 49 | 49 | 49 | 49 |
| 0.05 | 16 | 11 | 11 | 7 |
| 0.1 | 4 | 3 | 11 | 5 |
| 0.2 | 3 | 3 | 3 | 2 |
| 0.3 | 3 | 1 | 3 | 2 |
| 0.4 | 3 | 1 | 3 | 1 |
| 0.5 | 2 | 1 | 2 | 1 |
We sincerely apologize for making this mistake, and are willing to provide any code/notebooks to prove the reproducibility of our work.
Q2/Q7: We combined the results, not miscalculated, and this does not affect the overall conclusion of the paper. As in the rebuttal Q7: "the improvements in line 218 regarding Tab 2 will be updated to 9.87%/7.49%, 18.87%/14.73%, 36.91%/25.20%, 16.26%/13.24%, and 20.05%/18.79% for MSE/MAE, respectively", and the improvements are still substantial.
Q4: Fig 4c on line 264 is just a description of the results. We do not claim this as a generalized conclusion, as whether larger input length will contain more varying seasonal patterns is dataset-specific. We will remove notably as it might be too strong according to your advice.
Q6: **Z-score normalization is used mainly to scale data for better readability and presentation**, and we adopt z-score following prior work, e.g. SAN. **And this is actually a convention in the time series domain, e.g. DLinear, FEDformer, SCINet**. Furthermore, FAN is a reversible instance normalization method for handling non-stationarity. **Addressing non-stationarity during the preprocessing stage would hinder the demonstration of our model's effectiveness** in processing non-stationarity.
Note that the reversible normalization were introduced specifically to address issues caused by dynamic normalization. For example, sliding window normalization might mitigate distributions shift, but they remove too much information, making it difficult for the backbone model to predict the removed content. we recommend you refer to a pioneering work in this area, RevIN, for more details. Therefore, we do not adopt these dynamic normalization methods.
Q10: We will try our best to clarify the captions and adjust the tables/figures based on your valuable advice.
We hope above responses have addressed all your concerns. We noticed that Reviewer MHdq and Reviewer Cudr increased the score from 5 to 6 and 6 to 7 respectively, and you decreased the score from 4 to 3. If we may ask, could you please clarify the reason for lowering the score from 4 to 3? This will help us clarify any misunderstandings and improve our work further.
We sincerely look forward to your reply. | Rebuttal 1:
Rebuttal: Dear Reviewers, ACs and the SAC:
We thank you all for the review and valuable comments. We'll clarify them in the final version to address all relevant questions and suggestions.
To address the common concerns regarding our selection of K (Reviewer LAap, Reviewer Cudr) and our model effectiveness on more backbones (Reviewer 3SM3, Reviewer MHdq), we provide explanations and additional results in the **attached PDF**.
We are grateful for your helpful advice, as it can support the advancement of our work. We hope the those responses have satisfied your queries.
Pdf: /pdf/72a6370088391963e90aaf31ae5706ac96cc0295.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Differentially Private Reinforcement Learning with Self-Play | Accept (poster) | Summary: The paper studied two-player zero-sum episodic Markov Games under JDP and LDP. The authors designed DP-Nash-VI algorithm for the problems and derives both upper bounds and lower bounds.
Strengths: 1. The paper investigated interesting problem of two-player zero-sum episodic Markov Games under JDP and LDP.
2. It is good to derive the the best known regret for non-private multi-agent RL as byproduct.
3. The authors give algorithms design and solid proofs of upper bounds and lower bounds for the problems.
Weaknesses: There is no experimental result to verify their theoretical findings.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. In this paper, you consider bounded reward case. Can your method extend to heavy-tailed reward case in [1]?
2. Is it possible to further improve the gaps between your upper bounds and lower bounds?
[1] Yulian Wu, Xingyu Zhou, Sayak Ray Chowdhury, and Di Wang. Differentially private episodic reinforcement learning with heavy-tailed rewards. arXiv preprint arXiv:2306.01121, 2023b
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: There is no experiment but I understand the main contributions of paper is on theoretical side.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your high quality review and the positive score. Below we will reply to your comments.
**There is no experimental result to verify their theoretical findings.**
Thanks for the comment, we will conduct some experiments in the next version.
**In this paper, you consider bounded reward case. Can your method extend to heavy-tailed reward case in [1]?**
This is a very good question. [1] handled the heavy-tailed reward using a truncation step. Then a privatization step is followed. We are not sure whether this can be extended to the multi-player setting, while we believe incorporating the truncation step to deal with heavy-tailed rewards is an interesting future direction.
**Is it possible to further improve the gaps between your upper bounds and lower bounds?**
The gap for the JDP case is mostly on the lower order term, while there is a gap on the main term under the LDP case. Since our algorithm generalizes the best-known result under the single-agent RL case, we believe the practical way to close the gap is to improve the regret bound under the single-agent RL case. [2] argues that the extra dependency on the parameters may be inherent to model-based algorithms due to the explicit estimation of private rewards and transitions. Therefore, a possible direction is to privatize model-free algorithms, which is still an open problem in the literature.
[2] Evrard Garcelon, Vianney Perchet, Ciara Pike-Burke, and Matteo Pirotta. Local differential privacy for regret minimization in reinforcement learning.
Thanks again for the high-quality review. We hope our response could address your main concerns and we are happy to answer any further questions.
---
Rebuttal Comment 1.1:
Comment: Thank you for the reply. I keep the score. Good luck! | Summary: The authors address multi-agent self-play reinforcement learning (RL) with differential privacy (DP) constraints to protect sensitive data. They propose an efficient algorithm that meets JDP and LDP requirements, and its regret bounds generalize the best-known results for single-agent RL, marking the first study of trajectory-wise privacy in multi-agent RL.
Strengths: 1. The proposed algorithm demonstrates a statistically tight regret bound, supported by the authors' derived lower bound.
2. The differential privacy analysis is comprehensive, encompassing both Joint Differential Privacy (JDP) and Local Differential Privacy (LDP).
3. The writing is generally clear and concise.
Weaknesses: 1. The overall technical contribution seems limited. Could the authors emphasize their primary technical contributions? Specifically, is it feasible to address the problem setting using existing algorithms augmented with the Laplacian mechanism?
2. The absence of an experimental study, even a simple one, is noticeable. Conducting experiments is crucial to validate the efficacy of the proposed algorithm, especially given the authors' claims of its efficiency.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The technical convenience of using Definition 2.2 instead of Definition 2.1 is not clear. Can the authors confirm if their proposed algorithm also satisfies the differential privacy requirements of Definition 2.1? If it does not, could you provide an intuitive explanation?
2. Given that the reward value is known and determined, why does the agent still require reward feedback from the user (as mentioned in line 135)?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your high quality review and the positive score. Below we will reply to your comments.
**The overall technical contribution seems limited. Could the authors emphasize their primary technical contributions? Specifically, is it feasible to address the problem setting using existing algorithms augmented with the Laplacian mechanism?**
First of all, almost all the algorithms in the DP-RL literature are based on some well-known non-private algorithms. For instance, Private-UCB-VI and DP-UCBVI (in our Table 1) are both based on the famous UCBVI algorithm. Therefore, we choose the non-private algorithm Nash-VI as a base algorithm for designing private self-play algorithms. In addition, we only apply the technique for privatizing visitation counts from [Qiao and Wang, 2023], the construction of private bonus here is different. In the two player setting, we need to handle the other player, and the upper (lower) bound is for the Q value of the current policy when facing best responses. Therefore, the bonus is more complex compared to the single-agent setting. We manage to design a new private bonus term for the two-player setting, prove the validity of optimism and pessimism, and derive a near-optimal regret bound. These are the main techinical contributions of the paper.
**The absence of an experimental study, even a simple one, is noticeable. Conducting experiments is crucial to validate the efficacy of the proposed algorithm, especially given the authors' claims of its efficiency.**
Thanks for the comment, we will conduct some experiments in the next version.
**The technical convenience of using Definition 2.2 instead of Definition 2.1 is not clear. Can the authors confirm if their proposed algorithm also satisfies the differential privacy requirements of Definition 2.1? If it does not, could you provide an intuitive explanation?**
Our algorithm satisfies Def 2.2 but could not satisfy Def 2.1. Indeed, Def 2.1 is not consistent with a sublinear regret bound, even for the simpler single-agent RL setting and contextual bandit setting. An intuitive explanation is that Def 2.1 requires the agent to privately recommend an action to the user while protecting her own state, where a constant regret is inevitable in each episode in the worst case. Therefore, our algorithm with sublinear regret bound could not satisfy Def 2.1.
**Given that the reward value is known and determined, why does the agent still require reward feedback from the user (as mentioned in line 135)?**
The RL protocol we introduced is for the general case where the reward can be stochastic. The DP guarantees are also defined for the general setting. Actually our techniques can be easily extended to handle the setting with stochastic rewards, and the assumption of known rewards is only for the ease of presentation.
Thanks again for the high-quality review. We hope our response could address your main concerns and we are happy to answer any further questions.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the response. I will keep my score. | Summary: This paper explores multi-agent reinforcement learning (RL) with differential privacy (DP) constraints. The authors extend the concepts of Joint DP (JDP) and Local DP (LDP) to two-player zero-sum episodic Markov Games. They develop a provably efficient algorithm that combines optimistic Nash value iteration with the privatization of Bernstein-type bonuses, ensuring satisfaction of JDP and LDP requirements with appropriate privacy mechanisms. The algorithm achieves a regret bound that generalizes the best-known results in single-agent RL and could reduce to the best-known results in multi-agent RL without privacy constraints.
Strengths: 1. This paper extends the concepts of Joint DP (JDP) and Local DP (LDP) to two-player zero-sum episodic Markov Games, which is important and inspiring for future studies on differential privacy in multi-agent RL.
2. The proposed DP-Nash-VI algorithm could satisfy either Joint DP (JDP) and Local DP (LDP)constraints with corresponding regret guarantees. Their regret bounds strictly generalize the best-known results under DP single-agent RL, and their results could be reduced to the best-known results in multi-agent RL without privacy constraints.
Weaknesses: Though this is a purely theoretical paper, it may be better to include some simulation results to validate the theoretical results.
Technical Quality: 4
Clarity: 4
Questions for Authors: Could the authors highlight the key technical challenges involved in combining the techniques from Liu et al. [2021] and Qiao and Wang [2023]?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: No negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your high quality review and the positive score. Below we will reply to your comments.
**Though this is a purely theoretical paper, it may be better to include some simulation results to validate the theoretical results.**
Thanks for the comment, we will conduct some experiments in the next version.
**Could the authors highlight the key technical challenges involved in combining the techniques from Liu et al. [2021] and Qiao and Wang [2023]?**
First of all, almost all the algorithms in the DP-RL literature are based on some well-known non-private algorithms. For instance, Private-UCB-VI and DP-UCBVI (in our Table 1) are both based on the famous UCBVI algorithm. Therefore, we choose the non-private algorithm Nash-VI as a base algorithm for designing private self-play algorithms. In addition, we only apply the technique for privatizing visitation counts from [Qiao and Wang, 2023], the construction of private bonus here is different. In the two player setting, we need to handle the other player, and the upper (lower) bound is for the Q value of the current policy when facing best responses. Therefore, the bonus is more complex compared to the single-agent setting. We manage to design a new private bonus term for the two-player setting, prove the validity of optimism and pessimism, and derive a near-optimal regret bound. These are the main techinical contributions of the paper.
Thanks again for the high-quality review. We hope our response could address your main concerns and we are happy to answer any further questions.
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply. I keep my score. Good luck! | Summary: This paper gives an algorithm for differentially private reinforcement learning in two-player zero-sum games. The paper considers a standard model for differential privacy already established for single-agent RL. In this model, in each episode a unique user follows a policy $\pi$ recommended by the RL agent i.e. the user encounters states $s$, takes actions $a$ sampled from $\pi(s)$, and receives rewards $r$. The goal is for the RL agent to learn optimal policy recommendations, without revealing the private information of each user consisting of the trajectory of states, actions, and rewards. The paper considers two standard models for privacy in RL, joint differential privacy (JDP) and local differential privacy (LDP). The proposed algorithm achieves nearly optimal (in certain parameter regimes) regret in both of these privacy constrained settings.
Strengths: Differential privacy in multi-agent reinforcement learning is an important problem, and achieving this in two-player zero-sum games could be step towards the general multi-agent case.
Weaknesses: - The algorithm proposed is a straightforward combination of prior work [1] achieving privacy for single-agent RL and [2] achieving low-regret learning for self-play in zero-sum games. It is not clear from the paper what new ideas, if any, are needed, beyond directly applying the private counts and bonuses from [1] to compute the upper and lower confidence bounds used in [2].
- While privacy for multi-agent RL seems quite relevant and interesting, privacy for two-player zero-sum games seems much less well-motivated. For example, in the autonomous driving case, general multi-agent RL corresponds to learning in a setting where there are many autonomous vehicles on the road and one wants to keep the information of each one private. Two-player zero-sum markov games instead correspond to the setting where there are exactly two autonomous vehicles on the road in each episode, and somehow they are in direct zero-sum competition (e.g. a one-on-one race). In fact, the only reason differential privacy makes sense in this setting is that the paper assumes that a different pair of users competes in each episode, and it is privacy across these different pairs that is preserved. In general, it really seems to me that the most important questions regarding privacy in RL relate to a large number of interacting agents, and that the setting of this paper was chosen specifically so that the techniques of [1] and [2] could be directly applied, rather than because the problem itself seemed important to solve.
Specific Issues:
- The text in Table 1 is too small to read.
[1] Qiao, Dan, and Yu-Xiang Wang. "Near-optimal differentially private reinforcement learning." International Conference on Artificial Intelligence and Statistics. PMLR, 2023.
[2] Liu, Qinghua, et al. "A sharp analysis of model-based reinforcement learning with self-play." International Conference on Machine Learning. PMLR, 2021.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Are there any technical challenges to overcome when combining the known algorithm for private single-agent RL with the algorithm for self-play in zero-sum games?
2. Why is privacy across episodes of two-player zero-sum games an interesting problem? Is there some natural approach to generalize to mutli-agent RL? Are there natural examples where one would want to preserve privacy in a sequence of direct competitions between two players?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your high quality review. Below we will reply to your comments.
**The algorithm proposed is a straightforward combination of prior work [1] achieving privacy for single-agent RL and [2] achieving low-regret learning for self-play in zero-sum games. It is not clear from the paper what new ideas, if any, are needed, beyond directly applying the private counts and bonuses from [1] to compute the upper and lower confidence bounds used in [2].**
First of all, almost all the algorithms in the DP-RL literature are based on some well-known non-private algorithms. For instance, Private-UCB-VI and DP-UCBVI (in our Table 1) are both based on the famous UCBVI algorithm. Therefore, we choose the non-private algorithm Nash-VI as a base algorithm for designing private self-play algorithms. In addition, we only apply the technique for privatizing visitation counts from [Qiao and Wang, 2023], the construction of private bonus here is different. In the two player setting, we need to handle the other player, and the upper (lower) bound is for the Q value of the current policy when facing best responses. Therefore, the bonus is more complex compared to the single-agent setting. We manage to design a new private bonus term for the two-player setting, prove the validity of optimism and pessimism, and derive a near-optimal regret bound. These are the main techinical contributions of the paper.
**While privacy for multi-agent RL seems quite relevant and interesting, privacy for two-player zero-sum games seems much less well-motivated. Is there some natural approach to generalize to mutli-agent RL?**
We agree that privacy for two-player zero-sum games is not as important as the multi-agent case, while we believe the progression of science is based on several small steps. This is the first paper considering trajectory-wise privacy protection in the multi-agent RL setting, and we believe this can be an important middle step towards understanding the role of privacy in multi-agent RL. Regarding extension to the case with more agents, some techniques in this paper can be readily applied. The privatization of visitation counts and the bonus can be combined with current model-based MARL algorithms, and a result like
Regret $\leq$ non-private regret + addition cost due to DP can be expected. The issue of such approach is that model-based approaches generally suffer from a regret with dependence $\Pi_i A_i$ ($A_i$ is the number of actions for the i-th player). To overcome such issue, we need to privatize model-free algorithms, which is still an open problem in the DP-RL literature, and we leave this as future work.
**The text in Table 1 is too small to read.**
Thanks for the comment, we will edit it to improve readability.
Thanks again for the high-quality review. We hope our response could address your main concerns and we are happy to answer any further questions. We would greatly appreciate it if you could consider raising your score.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I appreciate your points about having to modify the bonus, and about the obstructions regarding the use of model-based algorithms in multi-agent RL. After reviewing the discussion I will increase my score to 5.
---
Reply to Comment 1.1.1:
Comment: Thanks again for your high-quality review and your support.
---
Rebuttal 2:
Comment: Dear reviewer,
Since the discussion deadline is approaching, please let the authors know if their rebuttal has addressed your concerns. If you still have concerns, you could first acknowledge that you have read the rebuttal and discuss them in the remaining time (with the authors) or the next phase (with other reviewers).
Best,
Your AC | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Prompt Tuning Strikes Back: Customizing Foundation Models with Low-Rank Prompt Adaptation | Accept (poster) | Summary: This paper proposed a new prompt-tuning based approach called Low-Rank Prompt Adaptation (LOPA), which performs comparably to the state-of-the-art PEFT methods without the need for a server-based adapter. LOPA balances between sharing task-specific information across instances and customization for each instance to generate soft prompts, utilizing the low-rank decomposition for parameter efficiency. The effectiveness of proposed method is validated on multiple natural language understanding datasets.
Strengths: 1. The author considers a novel perspective to achieve fine-tuning for downstream tasks without manipulating the foundation model.
2. The method is simple, effective, and easy to implement.
3. The writing is clear and easy to understand.
Weaknesses: 1. The comparison with the PEFT methods only considered LoRA, other representative methods such as Adapter-tuning, P-Tuningv2, etc., were not taken into account. Additionally, methods mentioned in the related work such as LPT and SPT that may compete with LoPA were not compared in the experimental tables.
2. The ablation experiments are not comprehensive enough. The cost savings and performance sacrifices of using low-rank decomposition have not been discussed.
3. Would it be more accurate to replace "Foundation Models" in the title with "Language Models"? Although the proposed method appears to be a general approach, it was only validated on natural language datasets.
4. For the analysis of method principles, such as the offset subspace induced by LOPA, could some quantitative/visual verification be provided to demonstrate the changes brought about by the introduction of LOPA?
Technical Quality: 3
Clarity: 2
Questions for Authors: See the weaknesses above.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: See the weaknesses above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - *“The comparison with the PEFT methods only considered LoRA, other representative methods such as Adapter-tuning, P-Tuningv2, etc., were not taken into account. Additionally, methods mentioned in the related work such as LPT and SPT that may compete with LoPA were not compared in the experimental tables.”*
**Response:** Thank you for this feedback. We chose LoRA as a representative baseline because it requires storage of user-specific parameters on the server for LLM personalization. Additionally, we focused on soft-prompting methods (e.g., PT, IDPG) that enable model customization on the user side, without necessitating server-side modifications. Soft-prompting methods like P-TuningV2, LPT, and SPT require inserting prefix vectors within intermediate layers of the transformer network, necessitating server-side changes for every user query.
To address your concern, we showcase a comparison with P-tuning v2 (Liu et. al 2021), Prefix tuning (Li et. al 2021) and a recent parameter-efficient baseline DePT (Shi et. al 2023). We can observe that while DEPT and P-tuning v2 reduce parameters, they also lead to a significant performance drop (~21 and 16 pts average drop respectively) compared to LOPA. On the other hand, Prefix-tuning performs similarly to LOPA but does so at the cost of 16x more parameters.
| Approach | Params$\downarrow$ | RTE$\uparrow$ | MRPC$\uparrow$ | SST-2$\uparrow$ | QNLI$\uparrow$ | Average$\uparrow$ |
| -------- | ------- | ------- | ------- | ------- | ------- | ------- |
| LOPA | 1.6M | **83.39** | **91.09** | **95.99** | 93.74 | **91.05** |
| DEPT | 10.2K | 53.79 | 72.97 | 89.68 | 57.09 | 68.38 |
| P-tuning v2 | 0.49M | 53.43 | 70.18 | 89.91 | 85.21 | 74.68 |
| Prefix-tuning | 25.75M | 82.67 | 90.86 | 93.80 | **94.98** | 90.58 |
We will include this comparison and more task results in the paper.
- *“The ablation experiments are not comprehensive enough. The cost savings and performance sacrifices of using low-rank decomposition have not been discussed.”*
**Response:**
1. **Ablation study - cost vs. performance trade-off:** In the paper, Figure 4 studies this cost-performance tradeoff as a function of rank. Bar plots show the training costs in terms of number of trainable parameters and Line plots show performance.
| Approach | Params$\downarrow$ | RTE$\uparrow$ | MRPC$\uparrow$ | SST-2$\uparrow$ |
| -------- | ------- | ------- | ------- | ------- |
| LOPA(r=4) | 1.60M | **83.39** | **91.09** | **95.99** |
| LOPA_add(r=4) | 1.60M | 64.26 | 75.17 | 93.34 |
| IDPG+PHM (n=8) | 0.37M | 67.14 | 76.12 | 95.07 |
| IDPG+PHM (n=16) | 0.20M | 65.34 | 76.68 | 94.61 |
| IDPG+PHM (n=32) | 0.17M | 68.23 | 74.99 | 94.72 |
2. **Ablation study - PHM Layers vs. Low-Rank Decomposition:** We compare low-rank decomposition in LOPA with Parameterized Hypercomplex Multiplication (PHM) Layers implemented for IDPG baseline that serves as an alternate for parameter efficiency. $n$ representing the hyper-parameter balancing the parameter complexity and extent of factorisation in the Kronecker product $W = \sum_i^n A_i \bigotimes B_i$. Our study on three NLU tasks revealed that while PHM layers reduce parameter count, they also result in a significant performance drop (~15 points in RTE and MRPC), likely due to the structural constraints of Kronecker factorization limiting expressiveness (Zhang et al., 2021).
3. **Ablation study - Non-linear Composition of Z:** We also compared LOPA with LOPA_add, an additive approach for composing Z. The non-linear composition in LOPA, expressed as 𝑍 = 𝑍𝑆 ∘ 𝑔(𝑍𝐼), outperformed LOPA_add, indicating the importance of non-linear interaction for performance gains.
- *"Would it be more accurate to replace "Foundation Models" in the title with "Language Models"? Although the proposed method appears to be a general approach, it was only validated on natural language datasets.”*
**Response:** As the method is general, our preference is to use the term 'foundation model,' but the reviewer correctly points out the fact that the method was only evaluated with LLMs. We are happy to follow the reviewer's advice here.
The proposed approach has been validated on both natural language and code-generation datasets. Refer to Table 2 for evaluation of MBPP and CruXEVAL datasets.
- *“For the analysis of method principles, such as the offset subspace induced by LOPA, could some quantitative/visual verification be provided to demonstrate the changes brought about by the introduction of LOPA?”*
**Response:** This is an excellent suggestion, especially the development of an appropriate visizalization to illustrate the LOPA functionality. We plan to explore this idea and hope to add illustrative viz to an appendix.
*References:*
Zhang, Aston, et al. "Beyond fully-connected layers with quaternions: Parameterization of hypercomplex multiplications with $1/n $ parameters." arXiv preprint arXiv:2102.08597 (2021).
Shi, Zhengxiang, and Aldo Lipani. "Dept: Decomposed prompt tuning for parameter-efficient fine-tuning." arXiv preprint arXiv:2309.05173 (2023).
Liu, Xiao, et al. "P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks." arXiv preprint arXiv:2110.07602 (2021).
Li, Xiang Lisa, and Percy Liang. "Prefix-tuning: Optimizing continuous prompts for generation." arXiv preprint arXiv:2101.00190 (2021).
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. After reading the rebuttal and other reviewer’s comments, my concern has been addressed. | Summary: The paper introduces Low-Rank Prompt Adaptation (LOPA), an instance-aware prompt tuning-based approach. LOPA constructs soft prompts from a task-specific component (shared across samples) and an instance-specific component (unique to each sample), combining them using a gating function. It employs a low-rank decomposition of the instance-specific component to enhance parameter efficiency. Unlike Parameter-Efficient Fine-Tuning (PEFT) methods like LoRA, LOPA does not need to store adapter-like modules for each task as it is based on prompt tuning. The paper evaluates LOPA on natural language understanding and code tasks to demonstrate its effectiveness.
Strengths: 1. **Parameter Efficiency:** More efficient than traditional PEFT methods like LoRA.
2. **No Server-Side Changes:** Once trained, LOPA's soft prompts can be used as input prefixes without additional server-side computational cost.
Weaknesses: Major weaknesses of the paper is its lack of novelty.
1. The proposed approach is quite similar to the IDPG method referenced in [36]. The IDPG approach also uses both instance-specific and task-specific prompts. While it is true that in IDPG, updates to $Z_{S}$ and $Z_{I}$ are independent of each other, this issue can be addressed by adding a non-linearity after the second layer in their prompt generator network. Additionally, to reduce the parametric complexity of $Z_{I}$, the authors have used a low-rank decomposition of $Z_{I}$, similar to LoRA. Similarly, the IDPG paper uses Parameterized Hypercomplex Multiplication (PHM) Layers to reduce the complexity of $Z_{I}$. There is no analysis in the paper comparing the novelty and benefits of low-rank decomposition, as in LoRA, to PHM Layers.
2. Additionally, the performance of LoPA is inferior to LoRA on 6 out of 7 datasets in Table 1.
3. Missing important experimental details: The paper does not indicate how many epochs each of the methods was trained.
**Questions and Suggestions:**
a. Line 102: The paper states, "IDPG [36], which emphasizes an instance-specific prompt." Also, on Line 156: "existing instance-specific approaches [36]." In general, IDPG uses both instance-specific and task-specific prompts. Therefore, referring to it solely as an instance-specific approach may not be correct.
b. Lines 138-140: "However, encoding a matrix of size d×m can be expensive." Why not use a linear layer of dimension n×m as an encoding function f? This might not be as expensive.
c. How different and efficient is the low-rank decomposition of $Z_{I}$ compared to the Parameterized Hypercomplex Multiplication (PHM) Layers proposed in the IDPG paper (Section 3.2.1)? PHM layers also optimize the prompt generator network.
d. There is no discussion about the convergence of the proposed method, as prompt tuning is known for its slower convergence. It will good to show convergence of proposed method.
e. Line 122: $z_{k}$ should be $z^{k}$.
f. Line 197: "Evaluation.For" -> space is missing.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please refer to the weaknesses section.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - *"The proposed approach is quite similar ... decomposition, as in LoRA, to PHM Layers." "How different and efficient is the ...also optimize the prompt generator network."*
**Response:** Thank you for the detailed comparison. Here are the key distinctions and considerations:
1. **Encoding of $Z_S$ and $Z_I$:** The encoding of $Z_S$ and $Z_I$ differs fundamentally between IDPG and LOPA. In IDPG, $Z_S$ is the bias term from the last layer of the prompt generator, constructing $Z = Z_S + Z_I$. In LOPA, $Z$ is constructed as $Z = Z_S \circ g(Z_I)$, making $Z_S$ and $Z_I$ co-dependent through a gating function $g(.)$. We experimentally found this non-linear interaction crucial for the performance gains observed with LOPA, which was absent in existing soft-prompt-based learning approaches like IDPG, PT, etc.
2. **PHM Layers vs. Low-Rank Decomposition:** We acknowledge that PHM layers can reduce parameter complexity, similar to low-rank decomposition in LOPA. We have carried out an ablation study using PHM in IDPG to construct $Z_I$ with $n$ representing the hyper-parameter balancing the parameter complexity and extent of factorisation in the Kronecker product $W = \sum_i^n A_i \bigotimes B_i$. Our ablation study on three NLU tasks shows that while PHM layers reduce parameters, they also lead to a significant performance drop (~15pt in RTE and MRPC) compared to LOPA. This drop may be due to the structural constraints of PHM layers imposed by Kronecker factorisation, which could limit expressiveness (Zhang et al 2021).
| Approach | Params$\downarrow$ | RTE$\uparrow$ | MRPC$\uparrow$ | SST-2$\uparrow$ |
| -------- | ------- | ------- | ------- | ------- |
| LOPA(r=4) | 1.60M | **83.39** | **91.09** | **95.99** |
| IDPG+FC | 2.89M | 77.26 | 78.60 | 95.30 |
| IDPG+PHM (n=8) | 0.37M | 67.14 | 76.12 | 95.07 |
| IDPG+PHM (n=16) | 0.20M | 65.34 | 76.68 | 94.61 |
| IDPG+PHM (n=32) | 0.17M | 68.23 | 74.99 | 94.72 |
3. **Future Work:** We want to point out that PHM can be used in conjunction with low-rank decomposition in LoRA to reduce trainable parameters further. See below for a comparison of trainable parameter complexity. We can observe that while using either PHM or FC layers, LOPA can still be more parameter efficient by a factor of $(\frac{r}{d}+\frac{r}{m})$ (refer to Sect. 3.3 for notations). This is an interesting experiment that we leave for future work.
- IDPG + FC : $\mathcal{O}(hdm)$
- LOPA + FC : $\mathcal{O}(hdm(\frac{r}{d}+\frac{r}{m}))$
- IDPG + PHM : $\mathcal{O}(n^3 + \frac{hdm}{n})$
- LOPA + PHM : $\mathcal{O}(n^3 + \frac{hrm}{n} + n^3 + \frac{hrd}{n}) = \mathcal{O}(n^3 + \frac{hdm}{n}(\frac{r}{d}+\frac{r}{m}))$
- *"Additionally, the performance of LoPA is inferior to LoRA on 6 out of 7 datasets in Table 1."*
**Response:** This is true. However, the performance difference in Table 1 is marginal (< 1% in 5 of 6 tasks where LoRA is better). Given the additional 18 cases in Table 2, it is even less clear that there is any meaningful advantage to LoRA in terms of accuracy, as LOPA outperformed LoRA in 11 out of 24 cases. Given the other advantages of LOPA (parameter efficiency, no need for deployment on the server), we argue that the method has significant value.
- *“Missing important experimental details: The paper does not indicate how many epochs each of the methods was trained.”*
**Response:** Thank you for pointing this out. Here are the training details. In NLU Tasks, FFT and LoRA were trained for 10 epochs, while prompt-tuning approaches were trained for 20 epochs. In MBPP, all methods were trained for 10 epochs across all foundation model (FM) backbones. In CruxEval Tasks, for FM backbones under 7B, PEFT approaches were trained for 20 epochs, while larger FMs (≥7B) were trained for 10 epochs. FFT on CruxEval tasks for FM backbones under 7B was trained for 5 epochs. We will include these details in the final manuscript.
- *“Lines 138-140: "However, encoding a matrix of size d×m can be expensive." Why not use a linear layer of dimension n×m as an encoding function f? This might not be as expensive.”*
**Response:** The IDPG baseline considered in the paper indeed uses a linear layer of dimension $d \times m$ as an encoding function $f$. Resultantly, it is more computationally expensive than LOPA’s encoding of $f$. Consider the following - If the input features have dimension h, the parameter complexity of $f$ in IDPG is $\mathcal{O}(hdm)$. In comparison, LOPA uses two low-rank factors encoded with linear layers of sizes $d \times r$ and $m \times r$, reducing the parameter complexity to $\mathcal{O}(hdm(\frac{r}{d}+\frac{r}{m}))$ by a factor of $\frac{r}{d}+\frac{r}{m} < 1$.
- *“There is no discussion about the convergence of the proposed method, as prompt tuning is known for its slower convergence. It will good to show convergence of proposed method.”*
**Response:** Thank you for highlighting this aspect. Refer to the enclosed pdf for the training plots. Faster convergence is indeed a key benefit of LOPA. We present enclosed plots comparing the training loss and performance on NLU tasks (QQP, QNLI, MNLI) for Prompt Tuning (PT), IDPG, and LOPA. The results show that instance-dependent methods like IDPG and LOPA converge faster than traditional prompt tuning. Moreover, LOPA converges faster and achieves higher accuracy or F1 scores compared to IDPG. We appreciate the suggestion and will include this analysis in the paper.
*References:*
Zhang, Aston, et al. "Beyond fully-connected layers with quaternions: Parameterization of hypercomplex multiplications with $1/n $ parameters." arXiv preprint arXiv:2102.08597 (2021).
---
Rebuttal 2:
Title: Question to Authors
Comment: Thank you for your detailed response.
I have a question:
Is it not correct that by adding a non-linearity after $Z_{I}$ in IDPG and using the Hadamard product instead of addition between $Z_{S}$ and $Z_{I}$, we can achieve the same effect as LOPA?
---
Rebuttal Comment 2.1:
Comment: There are three differences. First, as the reviewer suggests, use non-linearity and the Hadamard product. Second (and most crucially) modify the prompt only on the input, not after each transformer block, so there is no special server-side computation required for the adaption (IDPG performs computation at every layer). Finally, drop the server-side classifier head used by IDPG and use the transformer output directly. | Summary: The paper introduces Low-Rank Prompt Adaptation (LOPA), a novel parameter-efficient fine-tuning (PEFT) approach that improves soft prompt tuning, delivering performance on par with LoRA and full fine-tuning methods. LOPA addresses scalability issues of traditional PEFT methods by using a low-rank decomposition for instance-specific soft prompts. This technique combines task-specific and instance-specific prompts through a gating function, offering a balance between customization and parameter efficiency. The paper demonstrates LOPA's effectiveness across various natural language understanding and code generation tasks, positioning it as a competitive alternative to adapter-based methods.
Strengths: * The paper is well-written.
* The proposed method is simple and the authors demonstrate its effectiveness across various natural language understanding and code generation tasks.
* The authors provide a thorough analysis to understand the relative importance of several aspects of their approach.
Weaknesses: I felt there were several obvious questions left unexplored, noted below, which raise concerns regarding the significance of the paper's contributions.
* The authors only experimented with a rather small model, i.e., 355M RoBERTa, for classification tasks while trying much larger models (up to 8B) for code generation tasks, which raises concerns about whether the proposed method works with larger models for classification tasks.
* The authors focused solely on a few classification tasks and two code generation tasks. This raises concerns about the proposed approach's effectiveness for other tasks, like open-ended generation, where prompt tuning often underperforms (An et al., 2022).
* Finally, I am concerned about the practical adoption of the proposed approach since it is unclear whether the proposed approach performs better than LoRA generally.
References:
An et al., 2022: https://arxiv.org/pdf/2203.03131
Technical Quality: 3
Clarity: 3
Questions for Authors: LOPA constructs the sort prompt as $𝑍 = 𝑍_𝑆∘𝑔(𝑍_𝐼)$. Did you try an additive approach by concatenating the two vectors instead?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The author discussed several limitations of their approaches, including the effectiveness of LOPA on practical tasks, the assumed positioning of the learned soft prompt, and the need for further exploration of LOPA as a conditional auto-encoder.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - *“The authors only experimented with a rather small model, i.e., 355M RoBERTa, for classification tasks while trying much larger models (up to 8B) for code generation tasks, which raises concerns about whether the proposed method works with larger models for classification tasks.”*
**Response:** Thank you for the observation. For natural language tasks, it is established that prompt tuning becomes competitive and comparable to fine-tuning with large models (>1B) (Lester et al., 2021; Liu et al., 2021). Therefore, similar to other recent works (Liu et al., 2021; Wu et al., 2022; Zhu et al., 2023), our focus is improving prompt tuning efficacy for medium-sized models (100M to 1B). Conversely, we experimented with much larger models (up to 8B) for code generation tasks, as the impact of prompt tuning methods in this area has not been extensively studied.
- *“The authors focused solely on a few classification tasks and two code generation tasks. This raises concerns about the proposed approach's effectiveness for other tasks, like open-ended generation, where prompt tuning often underperforms (An et al., 2022).”*
**Response:** To address this concern, we conducted experiments on the standard E2E and WebNLG benchmarks for open-ended natural language generation. We followed the hyper-parameter setup of Hu et al. (2021) and fine-tuned GPT2-medium with LoRA, Prompt Tuning (PT), and our approach.
| | | | E2E | | |
| -------- | ------- | ------- | ------- | ------- | ------- |
| **Approach** | **BLEU $\uparrow$** | **NIST $\uparrow$** | **METEOR $\uparrow$** | **ROUGE-L $\uparrow$** | **CIDEr $\uparrow$** |
| LoRA | 68.78 | 8.81 | 46.52 | 71.36 | 2.49 |
| PT(m=100) | 32.98 | 0.65 | 27.54 | 57.04 | 0.76 |
| Ours | 65.85 | 8.39 | 43.10 | 68.65 | 2.27 |
| | | | WebNLG | | | |
| -------- | ------- | ------- | ------- | ------- | ------- | ------- |
| **Approach** | **BLEU-U $\uparrow$** | **BLEU-S $\uparrow$** | **BLEU-A $\uparrow$** | **TER-U $\downarrow$** | **TER-S $\downarrow$** | **TER-A $\downarrow$** |
| LoRA | **46.89** | **63.27** | **55.85** | 0.45 | **0.33** | **0.39** |
| PT(m=100) | 29.59 | 31.98 | 30.94 | 0.54 | 0.54 | 0.54 |
| Ours | 44.78 | 55.46 | 50.65 | **0.44** | 0.37 | 0.40 |
The results show that while standard prompt tuning underperforms, our method significantly outperforms PT and closely matches LoRA's performance on both benchmarks. For WebNLG, we further report results across seen(S), unseen(U), and all(A) categories. Our approach demonstrates strong extrapolation performance on unseen WebNLG categories (see BLEU-U and TER-U), indicating its ability to handle diverse domains in the data without server-side personalization of the foundation model. This suggests that our method is also effective in open-ended generation scenarios. We will include this benchmark comparison and more baseline results in the paper.
- *“Finally, I am concerned about the practical adoption of the proposed approach since it is unclear whether the proposed approach performs better than LoRA generally.”*
**Response:** True, we find no significant difference between LoRA and LOPA in terms of accuracy. However, LOPA has two key advantages. First, all other things being equal, a purely prompt-based method (such as LOPA) is preferable to one that requires integrating an adaptor with the model at the server (such as LoRA). LOPA allows the model to be specialized at the client (or via the use of a middleware), without modification at the server. LOPA does not require that any use-case-specific parameters be stored on the server, which can be costly, especially if the number of specializations is large. Second, we find that LOPA is more parameter-efficient than LoRA.
- *“LOPA constructs the sort prompt as 𝑍=𝑍𝑆∘𝑔(𝑍𝐼). Did you try an additive approach by concatenating the two vectors instead?”*
**Response:** We experimentally found that the non-linear composition of $Z$ via $𝑍=𝑍_𝑆∘𝑔(𝑍_𝐼)$ is crucial for the performance gains observed with LOPA. See the following ablation study on a subset of NLU tasks, where we observe LOPA_add that opts for an additive approach underperforms.
| Approach | Params$\downarrow$ | RTE$\uparrow$ | MRPC$\uparrow$ | SST-2$\uparrow$ |
| -------- | ------- | ------- | ------- | ------- |
| LOPA(r=4) | 1.60M | **83.39** | **91.09** | **95.99** |
| LOPA_add(r=4) | 1.60M | 64.26 | 75.17 | 93.34 |
We appreciate your feedback and will include these comparisons and numbers on the remaining tasks in the final manuscript.
*References:*
Lester, Brian, Rami Al-Rfou, and Noah Constant. "The power of scale for parameter-efficient prompt tuning." arXiv preprint arXiv:2104.08691 (2021).
Liu, Xiao, et al. "P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks." arXiv preprint arXiv:2110.07602 (2021).
Wu, Zhuofeng, et al. "Idpg: An instance-dependent prompt generation method." arXiv preprint arXiv:2204.04497 (2022).
Zhu, Wei, and Ming Tan. "SPT: learning to selectively insert prompts for better prompt tuning." Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 2023.
Hu, Edward J., et al. "Lora: Low-rank adaptation of large language models." arXiv preprint arXiv:2106.09685 (2021). | null | null | Rebuttal 1:
Rebuttal: Thank you for the insightful comments. We enclose the convergence plots of the prompt-tuning based baselines and the proposed approach on a subset of NLU tasks.
Pdf: /pdf/b943f0cd04415453ce4c022555c3e65f230cb6d9.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Are Graph Neural Networks Optimal Approximation Algorithms? | Accept (spotlight) | Summary: This paper draws analogy between the optimization of, e.g., Max-Cut and Max-SAT problems, and the message-passing algorithm, and accordingly constructs OptGNN to implement the message-passing algorithms towards the problems. The generated optimal solutions are shown with provable bounds. Finally, empirical studies show the effectiveness and efficiency of the proposed OptGNN.
Strengths: 1. The logic of the paper in writing/presentation are absolutely easy to read and understand, and the project in design are reasonable.
2. The proposal is presented with theoretical guarantees, i.e. the bounds of the optimal solutions are shown.
3. The proposed framework OptGNN is easy to implement and the empirical studies are consistent and demonstrate the effectiveness of the proposal.
4. The discussion of related work is extensive and satisfactory.
Weaknesses: 1. Take the example of Max-Cut problem as an example, how to show the optimization problem are the same in equation (2) and the message-passing diagram shown in equation (3)? Same for other optimization problems discussed in the paper.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. In Table 2, what's the usual setting for the clause? Since the ratio is relatively high. What's the relationship between the total computational complexity and this ratio?
Confidence: 2
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Based on my review, this part is not well addressed by the authors. But since myself is not an expert in optimization approximation algorithms, please the authors/ACs kindly refer to the weaknesses and questions raised by pother reviewers.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback. We are glad that the reviewer finds that the paper is well-presented and explained and that they value both the empirical and theoretical aspects of this work!
Next, we address the weaknesses/questions in order:
>Take the example of Max-Cut problem as an example, how to show the optimization problem are the same in equation (2) and the message-passing diagram shown in equation (3)? Same for other optimization problems discussed in the paper.
Let's take the Max-Cut problem as an example (equation 1). To update the embedding $v_i$ corresponding to node $i$ we take the gradient of the Lagrangian $\frac{\partial{\mathcal{L}(\mathbf{v})}}{\partial v_i}$. The update will be as follows:
$$ v_i' = v_i - \eta \frac{\partial{\mathcal{L}(\mathbf{v})}}{\partial v_i}.$$
For the Max-Cut problem, we can enforce the constraint by normalizing the embeddings. So we just need derivatives with respect to the objective from equation 1. It is hopefully easy to see that computing $\frac{\partial{\mathcal{L}(\mathbf{v})}}{\partial v_i}$ of the objective in equation 1 leads to the expression in the parenthesis in equation 2. The normalization for equation 2 enforces the constraint in equation 1. The central observation is that this kind of approach to minimizing the Lagrangian using gradient steps will lead to message-passing steps on the constraint graph for constraint satisfaction problems. In equation 3 we show a simple way to parameterize this message passing by adding learnable matrices.
For the question:
>In Table 2, what's the usual setting for the clause? Since the ratio is relatively high. What's the relationship between the total computational complexity and this ratio?
The ratio is the clauses-to-variables ratio. This means that the higher the ratio the more constrained the instance will be, which will typically make it harder or potentially impossible to solve. More specifically, it is known that there is a phase transition point at a ratio of 4.26. Formulae with higher ratios than that become increasingly more likely to be unsatisfiable. Formulae with ratios at this phase transition point are typically considered to be hard.
>Limitations:
Based on my review, this part is not well addressed by the authors. But since myself is not an expert in optimization approximation algorithms, please the authors/ACs kindly refer to the weaknesses and questions raised by pother reviewers.
The main limitation of this work has to do with its applicability to different combinatorial optimization problems. The theoretical connection to optimal approximability results only holds for Max-CSPs. Of course, our method can be applied to essentially any SDP/polynomial optimization problem in practice. The main practical challenge in that case is enforcing the feasibility of the solutions. For example, consider the Travelling Salesperson Problem (TSP). In that case, our model requires a way of enforcing that the solutions produced by the model are valid tours. That would have to be solved with a post-processing step. Covering such cases is no trivial task and would require additional arguments and empirical work to be done in a clean and mathematically coherent way. We believe this is certainly a promising avenue for future work.
Again, we thank the reviewer for the comments and we will make sure to include an extensive discussion of the limitations in the final version of the paper. We would also like to encourage the reviewer to have a look at our responses to the other reviewers as well, in case that helps further clarify some of their concerns. We hope our response addresses the concerns of the reviewer! If the reviewer finds our answers satisfactory, we would be grateful if they could further increase their score.
---
Rebuttal Comment 1.1:
Title: Appreciate the authors' responses
Comment: I believe my questions are well addressed by the response, and I am happy to increase my score to 7 accordingly.
---
Rebuttal 2:
Title: minor remark
Comment: Regarding the question:
>In Table 2, what's the usual setting for the clause? Since the ratio is relatively high. What's the relationship between the total computational complexity and this ratio?
We wanted to mention that higher ratios means larger constraint graphs (because the number of clauses increases) which implies also larger memory costs. As we said in our response to reviewer WeYu, for a graph with vertices $|V|$ and edges $|E|$ , for embedding dimension
$d$, and depth $L$, the total runtime of OptGNN is $O(Ld^\omega |V| + Ld|E|)$,
where $\omega$
is the matrix multiplication constant. | Summary: The papers propose graph neural architectures that can be used to capture optimal approximation algorithms for a large class of combinatorial optimization problems.
Strengths: - The paper for the most part is well written with clear motivation.
- The contributions made in the paper are manifold across various optimization problems like Max-cut, Min-vertex cover etc. and have shown commendable performance.
- The paper also showcases a good theoretical basis.
Weaknesses: No significant weakness as such.
Fig1. and Fig. 2 the visibility could be improved.
Technical Quality: 4
Clarity: 4
Questions for Authors: NA.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Yes, the authors has provided limitations section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and for appreciating both the empirical and theoretical elements of the paper's contributions. | Summary: The paper establishes that polynomial-sized message-passing GNNs, can learn and replicate the optimal approximation capabilities of traditional algorithms based on SDP relaxations for Max-CSP under the assumption of the UGC. The authors propose OptGNN, which effectively integrates the theoretical framework of SDP to produce high-quality solutions for combinatorial problems such as Max-Cut, Min-Vertex-Cover, and Max-3-SAT.
Strengths: The paper explores integrating semidefinite programming into GNNs under the Unique Games Conjecture to optimally approximate combinatorial optimization problems. It introduces the OptGNN, a model designed to utilize polynomial-time message-passing algorithms and low-rank SDP relaxations to effectively solve problems such as Max-Cut and Max-3-SAT. The paper establishes OptGNN's potential through PAC learning arguments and empirical validation against traditional solvers and neural baselines. The paper is well-structured and mathematically sound.
Weaknesses: - The paper lacks a comprehensive analysis of how well OptGNN scales with increasing graph sizes and complexity.
- Although the out-of-distribution generalization of OptGNN has been tested, an analysis of its generalization over problem size is missing.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could you clarify how the proposed approach scales with increasingly large problem instances, particularly regarding computational complexity and performance stability?
2. What are the generalization capabilities over problem scale of the proposed approach?
3. What are the limitations of OptGNN? please explain in the text.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Authors need to discuss the limitation of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the detailed feedback. We appreciate their positive comments on the soundness and the structure of the paper!
To address the weaknesses:
>The paper lacks a comprehensive analysis of how well OptGNN scales with increasing graph sizes and complexity.
The cost of OptGNN scales linearly with the size of the graph (number of nodes/edges) as it is a message passing GNN. Empirically, we show that OptGNN runs on graphs of several thousands of nodes (GSET and 3-SAT experiments) without scalability issues.
>Although the out-of-distribution generalization of OptGNN has been tested, an analysis of its generalization over problem size is missing.
Our Max-Cut GSET results act also as both size and OOD generalization tests since the training was done on ER graphs of 500 nodes with a fixed edge density, and the testing was done on the GSET instances which can have tens of thousands of nodes and are not from the same ER distribution. The model performs competitively on up to an order of magnitude larger graphs than the ones it was trained on.
>Could you clarify how the proposed approach scales with increasingly large problem instances, particularly regarding computational complexity and performance stability?
Please see answers above and our response to reviewer WeYu.
>What are the generalization capabilities over problem scale of the proposed approach?
It is not quite clear what is meant by this question. Is this is about generalization to other problems, e.g., training on Max-Cut and testing on Vertex Cover? If that is what is meant by the reviewer, OptGNN in that case will typically not be able to transfer its performance to different problems without further training. It is certainly an interesting question to explore whether pre-training on certain problems and fine-tuning on others can impact performance in a positive way.
>What are the limitations of OptGNN? please explain in the text.
Thank you for pointing this out! We will make sure to discuss limitations and scaling in more detail in the paper. Briefly, the main two limitations are that the applicability of our theoretical result only covers CSPs and not all CO problems. Empirically, there is also the issue of "rounding" a convex relaxation to a integral solution, which is typically undertaken by a branch and bound tree search (i.e for producing a valid tour in the TSP). Tackling this issue is an important avenue for future work and would require non-trivial additions to our framework. Please also see our response to reviewer Ubhy for the same question.
Again, we thank the reviewer for their comments. If this answer has addressed your concerns please consider raising your score. In any case, we will gladly respond to any further questions!
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. Please update the paper in the camera-ready version with these explanations. Your response meets my expectations, and I have increased my score to 7. | Summary: This work presents a significant advancement in the field of combinatorial optimization by developing a graph neural network (GNN) architecture named OptGNN. The authors demonstrate that OptGNN can capture optimal approximation algorithms for a broad class of combinatorial optimization problems, leveraging the power of semidefinite programming (SDP) and the Unique Games Conjecture (UGC). They prove that polynomial-sized message-passing GNNs can learn the most powerful polynomial-time algorithms for Max Constraint Satisfaction Problems, resulting in high-quality approximate solutions for challenges such as Max-Cut, Min-Vertex-Cover, and Max-3-SAT. Additionally, OptGNN provides a method for generating provable bounds on the optimal solution from the learned embeddings. The empirical results show that OptGNN outperforms classical heuristics, solvers, and state-of-the-art neural baselines across various datasets, making it a robust tool for efficient and optimal combinatorial optimization.
Strengths: 1. The paper is theoretically solid. It builds on well-established concepts in approximation algorithms and semidefinite programming, providing a strong theoretical basis for its claims.
2. The development of OptGNN, a graph neural network that can capture optimal message-passing algorithms for combinatorial optimization problems, seems to be a significant innovation. This architecture leverages message-passing updates use SDP relaxations to prove its high-quality approximate solutions.
3. The paper includes a variety of evaluations, such as out-of-distribution tests and ablation studies, which help validate the practical utility and robustness of OptGNN.
Weaknesses: 1. Although OptGNN shows strong performance on several benchmarks, it does not outperform all state-of-the-art methods. This indicates room for improvement in the model’s optimization and training processes.
2. The theoretical guarantees provided by OptGNN rely on the truth of the UGC, which, while widely believed, remains unproven. This dependence could limit the certainty of the results. That being said, I consider this more like a limitation than weakness.
3. The writing needs to be improved, there are multiple typos and inconsist notations.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Can you elaborate on the scalability of OptGNN for very large graphs or more complex real-world problems? What are the practical limitations in terms of computational resources and time?
2. How adaptable is OptGNN to new combinatorial optimization problems that were not directly addressed in the paper? What modifications would be necessary to extend its applicability?
3. Can you also point out some future directions for this research? Both application and theoretical wise are good.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for all their detailed comments and review of our work! To address the
concerns in order.
1. For a graph $G$ with vertices $V$ and edges $E$, for embedding dimension $d$, and depth $L$, the total runtime of OptGNN is $O(L d^\omega |V| + Ld|E|)$ where $\omega$ is the matrix multiplication constant $2.37$. That is, the computational cost scales linearly in the size of the graph just as it does with message passing neural networks. In practice, the max cut GSET instances serve as a scalability test.
The GSET dataset is comprised of graphs with up to tens of thousands of nodes, whereas the OptGNN training set was comprised of graphs with up to 500 nodes. We observe the model maintains strong performance for instances that are up to 3000 nodes and subsequently tapers off when the number of nodes substantially exceeds 3000.
2. The OptGNN construction can be performed for any polynomial optimization subject to polynomial constraints (polynomial system solving). Any polynomial system, of which MaxCSP is a prominent example, admits a vector form SDP relaxation corresponding to the degree-2 Lasserre hierarchy relaxation. A prominent example of a problem that does not have a well known formulation as a polynomial optimization is the traveling salesman problem.
3. Promising avenues of research include constructing tighter bounds on optimality for the outputs of neural architectures/OptGNN. Tighter bounds directly translate to superior performance for branch and bound tree searches that require a certificate of optimality for termination. This would be a promising avenue for empirical investigation.
4. Weaknesses: Indeed, further empirical work is required to achieve state of the art across a wide variety of benchmarks. With respect to the UGC, although its truth is yet undetermined, another way to think about it is from the perspective of algorithms. OptGNN captures the algorithms with the best approximation ratios for Max CSP that are known in the literature.
We once again thank the reviewer for their questions and time in reviewing our work, and hope our response addresses their concerns!
---
Rebuttal 2:
Comment: I have read the authors response and I would like to keep my score. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Learning Goal-Conditioned Representations for Language Reward Models | Accept (poster) | Summary: The paper introduces an additional contrastive loss term for reward model training that targets the learning of goal-conditioned representations that encode expected reward for partially complete sequences. The results show that this has a positive impact on reward model accuracy, downstream RLHF with the reward model, and guided generation.
Strengths: The paper shows novelty and insight in using representation learning and goal conditioning to improve reward models and experimentally demonstrates the utility of doing so.
Experiments cover a number of useful metrics (spanning reward model accuracy, downstream utility in RL, downstream utility in guided generation) on appropriate benchmarks in two distinct settings (reasoning and alignment).
The paper is clearly written, with methods, results and analysis effectively communicated.
Weaknesses: Whilst the paper demonstrates positive results on guided generation, the mechanism for getting the prototype seems quite arbitrary and it would be useful to present a few (maybe use case dependent) alternative approaches.
As the authors mention, the performance gains from using their reward model are less than the gains in reward model accuracy (and relatively small in general). Given the authors claim that this is most likely due to off-policy issues, it is unclear why the experiment of updating the reward model and training using the updated reward model was not run.
Technical Quality: 4
Clarity: 4
Questions for Authors: In the appendix, the authors claim statistical significance by simply citing the size of the evaluation set. Given that confidence intervals are not reported (presumably due to computational constraints), is this claim grounded in anything more rigorous?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors effectively outline the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their useful comments and insightful feedback.
We respond to the reviewers comments and questions.
We plan on incorporating all these responses in the final work.
**"..the mechanism for getting the prototype.."** Please see our shared response to the reviewers.
Additionally, we consider two other methods: (a) prompting the model to generate the prototype, and (b) using an auxiliary dataset (in our case, HelpSteer to construct the prototype). Both of these methods underperform the original method on Helpful-Harmless (a: $70.5 \rightarrow 69.5$, b: $70.5 \rightarrow 69.6$), asserting our original mechanism of getting the prototype.
**"..updating the reward model and training.."** Thank you for your comment.
While we state it could be possible to improve performance of the policy further by continuing to update the reward model and perform PPO (4.1.4), we choose to limit policy training in our experiments to a single iteration due to the significant labeling and computational costs associated with experimenting with further iterations of RLHF training.
**"..statistical significance by simply citing the size of the evaluation set.."** Thank you for pointing this out.
For the natural language alignment experiments, we evaluated statistical significance by performing a Student's t-test. For both the experiments in Section 4.2.2 the p-values are significant, namely, the p-value for Llama 8b Reward experiment is 0.002 and the p-value for the Q-function 8B reward experiment is 0.001.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their further engagement and clarifications. I have read the rebuttal and maintain my score, as I see this as technically robust work with compelling results and novel insights.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their response. We are pleased to know they found the work robust and to have compelling results. | Summary: The paper frames the reward learning problem for LLM as a goal-conditioned RL. It uses the contrastive learning loss from Eysenbach et al. 2022 as an additional objective for reward training. The main innovation is the adjustment of goal-conditioned RL to pairwise preference datasets. The paper shows that adding that contrastive loss on the hidden representation of the reward model can lead to a better reward model and, consequently, better policies.
Strengths: * Although the proposed method is not new and relies heavily on Eysenbach et al. 2022, it has not been used before with LLMs.
* The methods and experiments are described in a straightforward and easy-to-follow manner.
* I've found the results of experiments 4.1.2 and 4.2.1 particularly interesting. There has been no change to the objective of RLHF reward models since its inception, and the proposed loss seems to be able to improve the reward model without any further annotations.
Weaknesses: * The fact that the definition of a "goal state" in the language space is ambiguous since the state includes the entire generated response is ignored in the paper. Even in tasks that we have a clear "goal" in, like math, it is a bit weird. Given two responses with the same final solution but different intermediate reasoning, do you expect their hidden representation to be exactly the same? What is the meaning of averaging representations of different preferred responses? Is it supposed to be an approximation to the average cosine similarity to all vectors?
* This leads me to the fact that the prototypes used for Q-value estimation during inference time seemed a bit arbitrary. Did you ablate this during your work on the paper?
Technical Quality: 4
Clarity: 4
Questions for Authors: * Regarding the evaluation of the reward model using the AUROC metric, can you elaborate on how you calculated it? Do you use the BT model output as classifier prediction? Where do the GT annotations come from? I looked at the references you provided (line 206), but it doesn't seem like AUROC was used there.
* In the experiment described in section 4.1.3, was the filtration done using the Q-value of the full answer or the partial one? In addition, this experiment is missing a baseline of best-of-50 using the vanilla reward model and your own reward model. This is a standard baseline when improving decoding using reward functions.
* For experiment 4.1.4, can you provide CI over multiple seeds of PPO training? It is a common practice since RL training is known to be unstable, and the performance can vary between experiments [1]. I also agree with the authors that seeing the results of an on-policy reward model will be interesting, although this can be expensive to train because of the need for annotations.
* Regarding experiment 4.2.2, it is well established that using Q values during decoding can improve performance. Wouldn't it be more relevant to compare this with other methods that use Q functions during decoding [2]? A beam search over SFT seems to me to be too weak of a baseline.
[1] Agarwal, Rishabh, et al. "Deep reinforcement learning at the edge of the statistical precipice." Advances in neural information processing systems 34 (2021): 29304-29320.
[2] Han, Seungwook, et al. "Value Augmented Sampling for Language Model Alignment and Personalization." arXiv preprint arXiv:2405.06639 (2024).
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The paper properly addresses its limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewers insightful and useful comments. In the following sections, we address the questions and comments raised by the reviewer. We plan on incorporating all our discussions into the final version of this paper.
**"The definition of a "goal state" in the language space..."** One of the benefits of our method is that it allows capturing goals which depend on previous information within a generation.
For example, for reasoning tasks such as GSM8k and MATH, correct solutions depend on the final answer and intermediate reasoning.
Similarly, for natural language tasks like helpfulness and harmfulness, a preferred solution with respect to a particular goal may depend on several parts of a generation.
By taking a set of responses and averaging, we are able to produce a more accurate representation of the goal (see the shared reviewer response where we find averaging on more examples leads to better performance).
**"...Given two responses with the same final solution but different intermediate reasoning, do you expect their hidden representation to be exactly the same?"** We observe that the representations for correct completions to the same prompt can be different depending on the reasoning path.
For example, we run UMAP on 5K sample completions from the train set as well as a test set (GSM8k) and plot the resulting 2D plot for 7 prompts that have multiple solutions in Figure 3 within the additional supplementary material page.
Still, we observe preferred and disprefferred completions are separated. In Figure 3, we also plot 5K preferred and 5K dispreffered base model completions from the train and test (GSM8k) datasets and plot the UMAP on the hidden representation.
Finally, we would like to emphasize we take the average across many preferred completions, to more accurately capture the concept they represent.
As demonstrated in the results provided in the shared reviewer response, averaging across more preferred completions produces a better representation, which improves performance.
**"..What is the meaning of averaging representations of different preferred responses?"** We refer the reviewer to the shared response.
**"..the prototypes used for Q-value estimation during inference time seemed a bit arbitrary. Did you ablate this during your work on the paper?"** We refer the reviewer to the shared response.
**"..evaluation of the reward model using the AUROC metric, can you elaborate on how you calculated it.."** For each problem in the benchmarks, we take the greedy generation of the base model along with annotations of whether each completion provides the correct answer, which are provided by Toshiniwal et al. [6], to compute the AUROC score.
Concretely, for each completion, we first format using the Nemo format template and use the Reward Model to predict a reward score.
In predicting the reward score, we also retrieve the predicted reward score for each token in the completion, not including the prompt tokens. Utilizing the predicted reward score and the annotation of the correctness of each completion, we compute an AUROC score using the Python scikit-learn package. Using the same procedure, we compute the partial AUROC scores at every tenth percentile of each completion.
**"..was the filtration done using the Q-value of the full answer or the partial one.."** By definition, the Q-value is computed for each token in the sequence by taking the cosine similarity of the goal-state and the RM representation of the token, which is given by the last hidden layer. A model completion is filtered if the Q-value for any token in the completion sequence is less than the threshold, which was 0 for our experiments.
**"..baseline of best-of-50 using the vanilla reward model and your own reward model.."** Since we have a total of 50 generations per problem, we use the baseline reward model (Codellama RM) and our reward model (Q-Function RM) to select the Top 1, 5, 10, and 25 samples as ranked by the reward scores.
With the selected sample, we perform majority vote and also note the average proportion of the sample K that are correct solutions.
The results are provided below.
| Model| Top-K | GSM8k Accuracy (%) | GSM8k Prop. Correct (%) | MATH Accuracy (%) | MATH Prop. Correct (%) |
|-|-|-|-|-|-|
| Q-Function RM | 1| 84.6| 84.6| 51.7| 51.7|
|| 5| 86.2| 84.5| 59.7| 51.1|
|| 10| 86.2| 83.8| 59.5| 50.3 |
|| 25| 85.5| 81.8| 57.8| 47.0|
|Codellama RM| 1| 80.8| 80.8| 43.8| 43.8|
|| 5| 84.3| 81.9| 54.0| 45.8|
|| 10| 85.2| 82.1| 56.0| 46.9|
|| 25| 85.8| 81.1| 56.5| 46.1|
The results show that our Q-Function RM clearly outperforms the baseline.
**"..CI over multiple seeds of PPO training.."** Thank you for pointing this out.
These results are the average accuracy across 4 independent runs.
We included the 95\% CI in Table 7 in the appendix (the CI's between the baseline and Q-Function are non-overlapping), and apologize for not including them in the main text.
We will include them in the main text in the final version of our paper.
"..other methods that use Q functions during decoding.." While there are other methods that use Q functions during decoding (Seungwook, et. al.), these rely on training a reward model (RM) and subsequently training value networks from large offline datasets (30K-100K examples).
Our work focuses on improving the representations of RMs, so that we can compute Q values in a extremely lightweight manner.
For instance, in 4.2.2, we use 20 examples.
Hence, these experiments focuses on comparing with baselines in this low data setting.
**"..on policy reward model.."**
We chose to limit policy training in our experiments to a single iteration due to the significant labeling and computational costs associated with experimenting with further RLHF iterations.
[6] Toshniwal, Shubham et al. “OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset.” ArXiv abs/2402.10176 (2024).
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal and found it compelling. Specifically, the best-of-N results are strong and prove that the Q-function RM is better than the vanilla one. Therefore, I've raised my score. I'll add that the clarifications regarding the choice of goal state and experimental assumptions (AUROC, CI, comparison to other methods that use Q values in decoding, etc.) improve my understanding of the paper and should be incorporated into the final version.
---
Rebuttal 2:
Comment: We thank the reviewer for their response. We will incorporate these clarifications into the final version. | Summary: This work combines contrastive representation learning, goal-conditioned RL, and reward models used in RLHF for language model alignment. The authors introduce a new method that uses a contrastive loss to encourage the reward model to learn what they define as "goal-conditioned representations." These representations essentially encode the expected reward at different steps of the generated text, supposedly helping the model predict which generations lead to good (or bad) outcomes. They test this out on math problem-solving tasks and a helpfulness/harmlessness dataset. They find some improvements in reward model accuracy and show that these representations can be used for things like filtering out bad solutions and steering the language model during generation.
Strengths: - Originality: The paper creatively combines ideas from goal-conditioned RL techniques to improving reward models for LM alignment. This results in a novel method for training reward models which further improves expressiveness of the reward model via the additional loss.
- Significance: In addition to improving the overall reward models, the authors obtain an additional quantity that measures the $Q$ values of a given state (or more precisely state-action-goal), which enable a wide set of downstream use-cases, such as early misalignment detection. While these are preliminary results, they demonstrate exciting potential for further exploration.
Weaknesses: - Correlation between Q-values and reward scores: The authors acknowledge the high correlation between Q-values and reward scores for partial sequences, raising concerns about the policy model potentially gaming the reward model during RLHF. While they suggest further exploration of decoupling these signals, this issue deserves more thorough analysis.
- Clarity: The paper lacks clarity in multiple areas. In particular, the functional form of the $Q$ function is not entirely clear to me. Choosing the goal states is another source of obscurity. While it is clear that during training, positive and negative examples are chosen from the respective completions, in my opinion the more important part, which is the choice of goal states during inference is not entirely clear to me. The authors briefly mention that they "we ake the mean representation of multiple goal state completions from the training data", but given that this is an important part of the work, I believe it requires a lot more analysis.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Have you considered investigating approaches such as Upside Down RL [1, 2] given that conditioning in this case is not on a goal state but the desired rating?
1. https://arxiv.org/abs/1912.02875
2. https://arxiv.org/abs/2106.01345
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors have adequately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for useful comments and for appreciating the originality and performance improvements of the method.
We address the reviewers questions and comments in the following sections.
Additionally, we plan on incorporating all our responses and discussions into the final version of the paper.
**Correlation between Q-values and reward scores** Thank you for your comment.
To evaluate the risk of the policy gaming the reward function, we conduct a new experiment where we run PPO training for an extended time (5 episodes) in order to evaluate whether reward hacking occurs in practice.
The reward scores and response lengths are shown in Figure 2 within the additional supplementary
material page and the performance of the model at the end of each episode is given here.
| Model | Episode | GSM8k | MATH | algebra222 | GSM-Hard | Asdiv | mawps | svamp |
|--------------|---------|-------|------|------------|----------|-------|-------|-------|
| Codellama | 1 | 79.7 | 43.6 | 66.3 | 61.2 | 77.9 | 92.0 | 78.5 |
| | 2 | 79.6 | 45.0 | 72.5 | 61.5 | 78.0 | 91.6 | 79.9 |
| | 3 | 80.5 | 45.3 | 69.8 | 62.0 | 78.3 | 92.3 | 80.9 |
| | 4 | 80.5 | 45.5 | 70.7 | 62.5 | 78.3 | 92.0 | 79.5 |
| | 5 | 80.5 | 45.2 | 70.7 | 62.5 | 78.6 | 92.0 | 80.5 |
| Q-Function | 1 | 80.2 | 45.9 | 67.6 | 62.0 | 79.2 | 93.9 | 81.6 |
| | 2 | 80.6 | 46.5 | 74.3 | 61.9 | 79.5 | 93.1 | 80.3 |
| | 3 | 81.7 | 46.6 | 73.0 | 63.5 | 79.2 | 93.2 | 81.6 |
| | 4 | 81.0 | 46.5 | 73.4 | 63.2 | 79.2 | 93.4 | 81.0 |
| | 5 | 81.1 | 46.5 | 74.3 | 63.0 | 79.8 | 93.6 | 81.0 |
These results indicate training is fairly stable and the policy model does not game the reward function in practice.
One explanation for why gaming does not occur is that typically during RLHF a relatively low learning rate is used, making large shifts in the models generations less likely to occur.
**"..the functional form of the Q function.."** The Q function is parameterized by a scoring function $f$ and encoder.
In this paper, we use the cosine similarlity as $f$ and the encoder is a causal LM, such as Llama.
In particular, to compute the Q-value for a goal-state and state-action pair, we embed both the goal state and the hidden state for the state-action pair, and take the cosine similarity between the two representations.
**"..choice of goal states during inference"** We refer the reviewer to the shared response.
**"..investigating approaches such as Upside Down RL.."** Our work focuses on improving the representations learned by reward models (RMs) for aligning LMs.
Our improved methods of training RMs leads to improvements in RM performance, downstream policy learning, and model steerability.
Approaches such as upside down RL do not focus on improving RM performance to better align LMs, so we did not consider investigating them in this work.
---
Rebuttal Comment 1.1:
Title: Further clarification welcome
Comment: Thank you for providing detailed clarifications. After reading your rebuttal and the other reviews, and after going through the paper again, I can say that I am happy with the clarification and your ablations on the goal state used. Before making updates, I would like to ask authors for two more clarifications that came up during this process.
1. When selecting the goal state during inference, you assume a set of completions. What is this set - is it the full training dataset, or some parts of the training dataset, and does this set (and the resulting goal state) change across experiments or does it stay fixed? More concretely, in 4.1.3 you compute Q-values to filter completions - is the goal state used here the fixed goal state from the training dataset or is it constructed in a different manner?
2. In 4.1.2, could you please explain in more detail the x-axis of Figure 2? Does the percentile refer to the set of generations ordered by reward?
---
Reply to Comment 1.1.1:
Comment: Thank you for your clarification questions.
1. We refer the reviewer to the **Computing Q-values** paragraph of the paper (line 165, Section 3.2). In particular, the set of completions used for the experiment in 4.1.3 is the unique preferred completions from the Preference Ranking Dataset. This choice stays constant in experiments, except for the steering experiments (4.2.2), where we construct the goal state from a smaller set of examples in order to evaluate steerability from a smaller set of examples.
2. In Figure 2, the percentile refers to the percent of the completion considered. For instance, if a completion has 100 tokens and the percentile is 0.2, we consider the reward score placed on the 20th token. To compute the AUROC score, we compare the reward scores assigned at the 20th percentile of all completions and the annotation of whether the completion is correct. | Summary: This paper presents a method of applying goal-conditioned Q-functions to learn representations via contrastive learning to capture the expected reward. By incorporating an auxiliary contrastive loss for training the reward model, the performance of language model alignment obtains improvement. Experiments on GSM8k and MATH further validate the superior performance of the proposed method.
Strengths: - This paper adapts the goal-conditioned representation learning in RL to help boost the performance of the language-based reward model for LLM alignment, which is novel in the LLM area.
- A thorough set of experiments demonstrates the superior performance of the proposed method.
Weaknesses: See questions.
Technical Quality: 3
Clarity: 2
Questions for Authors: - How is the performance of the proposed method compared with DPO? Since DPO is a well-known LLM alignment method, I encourage the authors to add a comparison and discussion between them.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Limitations have been discussed in this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for their thoughtful review and insightful suggestions.
We are pleased to know they acknowledge the novelty of the goal-conditioned approach with LLMs and found our experiments to be thorough.
The reviewer brings up the interesting point of comparing our proposed method with DPO.
We compare the performance of DPO with our proposed method on the mathematical reasoning with code execution tasks (Section 4.1).
Furthermore, we intend to incorporate this response and discussion into the final version of the paper.
In particular, we use the GSM8K + MATH preference ranking dataset that we used for training both the baseline and contrastive reward models as the DPO training dataset.
We compare performance of DPO with PPO training using the baseline reward model and our contrastive reward model.
We present average accuracy across 4 independent runs for PPO and 2 independent runs for DPO.
The base model results are also shown as a reference point presented in [6].
| Model | GSM8k | MATH | algebra222 | GSM-Hard | Asdiv | mawps | svamp |
|-----------------|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|
| Base | 75.9 | 43.6 | 65.6 | 60.1 | 77.7 | 93.5 | 79.6 |
| DPO | 80.1 ± 0.1 | 44.8 ± 0.2 | 64.8 ± 0.8 | 59.9 ± 0.4 | 76.9 ± 0.0 | 90.3 ± 0.2 | 76.6 ± 0.1 |
| Codellama PPO | 79.3 ± 0.2 | 43.4 ± 0.2 | 65.8 ± 1.6 | 61.1 ± 0.3 | 77.4 ± 0.2 | 91.6 ± 0.3 | 78.5 ± 0.9 |
| Q-Function PPO | **80.5 ± 0.3** | **45.1 ± 0.1** | **70.9 ± 1.7** | **62.7 ± 0.5** | **79.5 ± 0.5** | **93.6 ± 0.3** | **81.2 ± 0.4** |
Interestingly, we observe that DPO performs on par with our method for the In-Distribution tasks (GSM8k and MATH). However, the policy trained with the contrastive RM performs much better on OOD tasks (algebra222, GSM-Hard, Asdiv, mawps, and svamp), compared to DPO.
These findings could be explained by related works that analyze DPO and find it can perform poorly in OOD settings [1]. All in all, these results indicate that PPO training with the contrastive RM leads to better generalization than training with DPO, particularly in OOD settings.
[1] Xu, Shusheng et al. “Is DPO Superior to PPO for LLM Alignment? A Comprehensive Study.” ArXiv abs/2404.10719 (2024).
[6] Toshniwal, Shubham et al. “OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset.” ArXiv abs/2402.10176 (2024).
---
Rebuttal Comment 1.1:
Comment: Thanks for your response and hard work. After reading your rebuttal, I've decided to maintain my score. I tend to accept this paper for its novelty and good performance.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. We are pleased to hear the reviewer found our work novel and to have good performance. | Rebuttal 1:
Rebuttal: We thank the reviewers for their valuable and insightful comments.
It is appreciated that the reviewers found our work to be well-written, novel, and insightful for improving RM and policy performance.
In this section, we provide further elaboration on our method of constructing the goal state.
We will include this discussion and results in the final paper.
**Goal State Construction**
Because reviewers brought up several questions about the goal state, we briefly review how we construct the goal state.
In order to construct the goal state, given a set of completions $\\{y_i = [y_{(i, 0)}, \dots, y_{(i, t_i)}] | i = 1, \dots, N\\}$ we take the last reward model hidden state of the final token, $h(y_{(i, t_i)})$, and then we average across all completions: $ \frac{1}{N} \sum_{i=1}^N h(y_{(i, t_i)}) $, where $h$ represents the last hidden layer of the reward model.
**Choice of Goal State**
Reviewers brought up several questions about why we chose to average across a set of preferred completions from the training dataset as the goal state during inference.
We experimented with using the mean across a set of completions that contain certain desirable attributes (e.g., being correct or helpful) as the goal state.
Our choice of taking the mean across these representations is motivated by prior work, which has found taking averages across representations can more accurately encode concepts or relationships [2, 3].
For example, by averaging across the set of correct completions in Math, we better capture the notion of the solution being correct.
To ablate this choice of goal state during inference, we conduct a new experiment under the mathematical reasoning with code setting (Section 4.1) where we evaluate the effect of (a) choosing poor examples to construct the goal state and (b) the number of examples used to construct the goal state.
In particular, we evaluate 3 settings for picking the sample completions for constructing the goal state.
First, we vary the number of preferred completions used to construct the goal state to evaluate the effect of adding more examples of good reasoning.
Second, we vary the number of dispreferred completions used to construct the goal state, to evaluate whether bad examples lead to poor performance.
Third, we incrementally add more dispreferred completions to a fixed sample of preferred completion in order to measure the robustness of our goal state computation to negative examples.
We refer to the addition of dispreferred completions on top of all preferred completions as adding ``corrupted'' examples.
The results are in Figure 1, within the additional supplementary material page.
Overall, these results show that negative or unhelpful completions degrade performance, indicating the importance of choosing relevant examples for computing the goal state.
Additionally, they demonstrate that having more examples of the concept leads to better performance as more generations are filtered with comparable proportion of those remaining generations being correct.
We additionally ablate our choice of using the last token in the completion sequence to construct the goal state.
In this experiment, we repeat our filtering experiments from 4.1.3.
Except, we randomly sample a token from in the completion sequence and use it as the goal state:
| Sampling Method | GSM8k Accuracy (%) | GSM8k Prop. Correct (%) | MATH Accuracy (%) | MATH Prop. Correct (%) |
|-----------------|--------------------|-------------------------|-------------------|------------------------|
| Last Token | 86.0 | 84.0 | 59.6 | 52.0 |
| Random Token | 85.9 | 81.8 | 57.3 | 45.3 |
From the result of this ablation, we see that last token sampling has superior performance, indicating using the last token helps achieve better performance.
[2] Mikolov, Tomas et al. “Efficient Estimation of Word Representations in Vector Space.” International Conference on Learning Representations (2013).
[3] Le, Quoc V. and Tomas Mikolov. “Distributed Representations of Sentences and Documents.” International Conference on Machine Learning (2014).
Pdf: /pdf/7d68c48219400b1d210d021aca29da6d43d9c033.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Detecting and Measuring Confounding Using Causal Mechanism Shifts | Accept (poster) | Summary: This paper introduces some measures of (conditional) confounding, based on information theoretic quantities. Some of their properties and an algorithm to estimate the measures using data from different environments. Although not included in the main text, the authors test their theory on some synthetic data.
Strengths: - The ideas on the paper are original as far as I know and I believe that the question regarding the measurement of confounding is an important one. The authors differentiate their work from previous work.
- Beyond the errors outlined in the weakness section, the paper is well-written and presented. It is a bit unfortunate that the experiments were not included on the main paper.
Weaknesses: - There are several typos/mistakes on the paper:
-- On line 158 and 169: $p$ is defined as the inequality between two distributions, this definition doesn’t make sense. I checked their reference 33 and 35 and it says in there that it is the probability of the distributions being different. Furthermore, p-value is a well-defined and known term, why overloading it?
-- On definition 4.1 what they call directed information is not an -expected- KL divergence, it is simply a KL divergence.
-- On Table 2, on rows 1 and 2 the equality with 0 is inverted. For example, if $X_i\to X_j$, then $P(X_i\mid X_j)=P(X_i\mid do(X_j))$ so that the log of such quantity is 0 for all values of $X_i$ and $X_j$, that is $I(X_i\to X_j)=0$.
-- Is definition 4.6 really a definition? That sounds like a proposition to me.
- I’m unsure of how large is the significance of the contribution being made (see questions and limitations). Which paired with the errors and typos above make me recommend for rejection of the paper.
Technical Quality: 2
Clarity: 2
Questions for Authors: I would like to get a feeling for why the properties of their measure of confoundedness are valuable (Theorems 4.2, 4.4 and 4.6). The only really essential property that I can think of such a measure is that it is 0 if the variables are non confounded and not 0 otherwise. Why for example do we want positivity? One could even be interested in a negative measure if the variables have a negative correlation under confoundedness, for example.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: This research does not contain any obvious negative societal impact. The authors do include a couple of sentences at the end of the paper where they state that there is a challenge to find real-world data to test their theory. Although I value the author's honesty about this limitation, I think it is too strong of a limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > On line 158 and 169: p is defined as the inequality between two distributions, this definition doesn’t make sense... why overloading it?
We undestand the concern. To fix this, following [35], we will edit line 158 in the revised manuscript as follows.
*"For example, the $p\text{-value}(\mathbb{P}^c(X_i|\mathbf{PA}_i^o)\neq\mathbb{P}^{c'}(X_i|\mathbf{PA}_i^o))$ where $\mathbf{PA}_i^o$ is the set of..."*
> On definition 4.1 what they call directed information is not an -expected- KL divergence, it is simply a KL divergence.
We agree with you. We will change **"expected KL divergence"** to **"conditional KL-divergence"** in the revised manuscript.
> On Table 2, on rows 1 and 2 the equality with 0 is inverted...
We beg to differ. If $X_i\rightarrow X_j$, then we have $\mathbb{P}(X_i|X_j) \neq \mathbb{P}(X_i|do(X_j))$ and hence $I(X_i\rightarrow X_j)>0$. This is because, in $X_i\rightarrow X_j$, conditioning on $X_j$ does not make $X_i, X_j$ independent. However, intervening on $X_j$ makes $X_i, X_j$ independent when there is no confounding between $X_i, X_j$. Formally, when there is no confounding, $\mathbb{P}(X_i|do(X_j)) = \mathbb{P}(X_i)$. On the other hand, $\mathbb{P}(X_i|X_j)\neq \mathbb{P}(X_i)$.
> Is definition 4.6 really a definition? That sounds like a proposition to me.
We agree with you, we will change the definition to proposition. The proof is trivial and follows from the definition of mutual information, which we will include in the revised manuscript.
> I would like to get a feeling for why the properties of their measure of confoundedness are valuable...
We believe that these properties are essential to ensure the correctness of the proposed metrics from a measurement perspective. The monotonicity property is crucial for understanding which set of variables are more confounded than others. Since our metrics are derived from positive quantities like directed information and mutual information, they return positive values. Studying negative confounding is a potential future direction.
> The authors do include a couple of sentences at the end of the paper where they state... I think it is too strong of a limitation.
We'd like to point out that we only didn't focus on real-world application, and rather deferred it to future work to apply our methods to real-world datasets. We have included two simple real-world examples in the uploaded rebuttal PDF (Figure 1) demonstrating the applicability of our methods (the purpose of these examples is only to show its practical value, and not intended to be comprehensive or complex). For example, we can study settings 2 and 3 using the SACHS dataset [35,37]. To study setting 1, it is enough to check for datasets that are obtained from randomized control trials where a set of variables are intervened.
We will modify the line in conclusions to reflect this point.
References:
[35] Mameche, Sarah, Jilles Vreeken, and David Kaltenpoth. "Identifying Confounding from Causal Mechanism Shifts." International Conference on Artificial Intelligence and Statistics. PMLR, 2024.
[37] Mooij, Joris M., Sara Magliacane, and Tom Claassen. "Joint causal inference from multiple contexts." Journal of machine learning research 21.99 (2020): 1-108.
We hope our responses address your concerns. We will update the manusript accordingly. We are happy to discuss further if you have any additional questions.
---
Rebuttal Comment 1.1:
Title: Answer to the rebuttal
Comment: I thank the authors for taking the time to answer my questions.
About Table 2: you are absolutely right, I got confused with the causal graphs and the interventions when I was trying the measures out.
About the real world examples and the properties of the definition: I appreciate those, and I would be ok if the paper was purely theoretical and had no experimental results, but then in that case I would expect very strong theoretical results, which in my opinion is not what the authors provide. In fact, the authors' answer to my question about the feeling of the properties seems to be that the properties are post-hoc; that is, they define some measure of confounding using information-theoretic quantities, they realise the properties hold because they defined it in such a way and then they write some Theorems based on that. In other words, I would say the authors have not answered my question of why the properties (besides monotonicity) are important for measuring confounding. I am happy to discuss this further, if the authors are interested in that.
---
Reply to Comment 1.1.1:
Title: Response to the reviewer
Comment: Thank you for your reply. Please note that **we do not** cast our paper as a purely theoretical one. Due to space constraints, we had to move the experimental results to the appendix. The rebuttal PDF also includes additional experimental results. We will include the experimental results in the main paper, using the additional page provided in the final version if our work gets accepted.
There are four properties of our measures that we propose: *reflexivity*, *symmetry*, *positivity*, and *monotonicity*. We are glad that you already acknowledged the use of *positivity* and our explanation in the rebuttal for *monotonicity*.
*Positivity* and *monotonicity* properties are **not post-hoc** results because the measures themselves are motivated by the definitions of confounding (Definitions 4.2, 4.3, 4.6). Based on these definitions, we propose measures whose values can be compared to find the relative strength of confounding between any two sets of variables. If it helps the reader, in the revised manuscript, after the definitions of confounding, we will introduce positivity and monotonicity as important properties that any confounding measure should satisfy and then introduce the definitions of our proposed measures.
*Reflexivity* and *symmetry* act as validations for the correctness of the proposed measures and cater to extreme cases such as measuring the confounding between a variable and itself (reflexivity) and measuring the confounding by changing the order of variables (symmetry). We would be happy to clarify this, or reduce the emphasis on these two properties in the revised manuscript.
We are happy to discuss further if you have any additional questions. | Summary: The paper addresses the challenge of identifying and quantifying (unobserved) confounding in causal inference. They propose a more comprehensive approach by relaxing the classic assumption of causal sufficiency and leveraging the sparse causal mechanism shifts assumption. The authors introduce methods to detect and measure confounding effects, distinguish observed and unobserved confounding, and evaluate the relative strengths of confounding among variable sets. An empirical validation supports their theoretical analysis.
Strengths: (S1) The paper provides a thorough study of confounding from several perspectives: detecting and measuring confounding between pairs of variables and among multiple variables; distinguishing between observed and unobserved confounding; assessing the relative strengths of confounding among different sets of observed variables. To my knowledge, this is the first study exploring all these aspects in a unified framework.
(S2) The paper is well written and easy to follow.
Weaknesses: (W1) I found the paper to be lacking in experimental evaluations. It is not clear how hard (both statistically and computationally) it is to compute the measures of confounding that were proposed, especially the ones measuring confounding among multiple variables.
(W2) I am somehow skeptic about the practical relevance of the results presented in the paper (and the lack of experiments reinforces this point, see W1). In particular, people focused a lot on the marginal sensitivity model (and variations) as a measure of confounding strength because sensitivity analysis bounds can be easily derived from it. I am not sure the same would be possible under the proposed measure of confounding, and hence I am not sure how it would be useful in practice.
(W3) I think some relevant related works are missing. In particular, when interventional data is available (e.g. RCTs) [1] and [2] propose testable implications for detecting hidden confounding. Further, [3] and [4] propose to lower bound the strength of hidden confounding (as measured by the marginal sensitivity model).
[1] Falsification of Internal and External Validity in Observational Studies via Conditional Moment Restrictions. Hussain et al. AISTATS 2023.
[2] Benchmarking Observational Studies with Experimental Data under Right-Censoring. Demirel et al. AISTATS 2024.
[3] Hidden yet quantifiable: A lower bound for confounding strength using randomized trials. De Bartolomeis et al. AISTATS 2024.
[4] Detecting critical treatment effect bias in small subgroups. De Bartolomeis et al. UAI 2024
Technical Quality: 3
Clarity: 3
Questions for Authors: (Q1) I got confused at lines 103-106: "While these methods use data from different contexts (an approach we also leverage in this work), they assume the absence of unobserved confounding variables; we instead focus on capturing both observed and unobserved confounding using data from multiple contexts.". [27] proposes a test for the presence of hidden confounding and hence allows for unobserved variables. Can the author briefly comment on how their measures compare with the approach proposed in [27]?
(Q2) I am confused slightly confused with Assumption 3.2 (line 169): $$ P^c( X_i | \text{PA}_i) \neq P^{c'}(X_i | \text{PA}_i) $$
what does it mean for conditional distributions to be different?
(Q3) Can the authors comment with one motivating example of how their proposed confounding measure could be used in practice? (See W2)
[27] Rickard Karlsson and Jesse Krijthe. Detecting hidden confounding in observational data using multiple environments. Advances in Neural Information Processing Systems, 36, 2023.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > I found the paper to be lacking in experimental evaluations. It is not clear how hard (both statistically and computationally) it is to compute the measures of confounding that were proposed, especially the ones measuring confounding among multiple variables.
Due to space constraints, we included experiments in Appendix Section B (our paper required significant space for discussing the considered 3 settings and our formulations therein). We have now included additional experimental results in the uploaded rebuttal PDF. If accepted, we will use the additional page available in the final version to include the results in the main paper itself.
As stated in the appendix, our experiments were conducted on a CPU and are straightforward to run. We have provided the code in the supplementary material, and it can be executed quickly.
> I am somehow skeptic about the practical relevance of the results presented in the paper ... people focused a lot on the marginal sensitivity model ... I am not sure the same would be possible under the proposed measure of confounding, and hence I am not sure how it would be useful in practice. I think some relevant related works are missing...
In the uploaded rebuttal PDF, Figure 1 demonstrates two simple real-world examples where our methods can be applied (the purpose of these examples is only to show its practical value, and not intended to be comprehensive or complex). As explained in the previous response, we have included experiments in the Appendix and new experiments in the uploaded rebuttal PDF.
We appreciate your suggestion regarding sensitivity analysis. We will incorporate the following points in the revised manuscript for completeness.
1. Our method aims to estimate the exact value of confounding rather than approximate it using bounds obtained via sensitivity analysis.
2. The difference between the total confounding and the conditional confounding by conditionining on observed confounding variables can be interpreted as the unobserved confounding estimate obtained via sensitivity analysis.
3. As discussed with reviewer n1WD, we can connect marginal sensitivity definition of confounding and the confounding based on directed information used in our paper. We will include this connection in the revised manuscript.
We also thank you for providing the relevant papers. We will discuss them in the revised manuscript and include a discussion on sensitivity analysis as mentioned before for completeness.
> Q1) I got confused at lines 103-106: "While these methods use data from different contexts... [27] proposes a test for the presence of hidden confounding and hence allows for unobserved variables... compare with the approach proposed in [27]?
Apologies for the oversight. The study of hidden confounding detection in [27] primarily focuses on downstream causal effect estimation. In contrast, our work aims to provide a unified framework for studying and measuring both observed and unobserved confounding across different types of contextual information. This framework supports various downstream applications beyond causal effect identification, including assessing the relative strengths of confounding and measuring confounding between pairs and sets of variables. We will add these points to the related work in the revised manuscript. We also study downstream causal effect estimation tasks. As demonstrated in the results in the rebuttal PDF, confounding detection using our method contributes to improved causal effect estimation.
> Q2) I am slightly confused with Assumption 3.2 (line 169): what does it mean for conditional distributions to be different?
Consider a variable $X_i$ and its parents $PA_i$. If $\mathbb{P}(X_i=x_i|PA_i = pa_i)$ is different in two contexts/environments $c,c'$ for atleast one pair $(x_i, pa_i)$, we say that the causal mechanisms are different in two contexts $c,c'$. According to Assumption 3.2, such causal mechanism shifts are rare/sparse for a variable $X_i$.
> (Q3) Can the authors comment with one motivating example of how their proposed confounding measure could be used in practice? (See W2)
We have included two simple real-world examples in the uploaded rebuttal PDF (Figure 1) demonstrating the applicability of our methods. Additionally, since the benchmark datasets from the BNLearn repository come with known data-generating processes, we can generate different contexts by performing interventions and subsequently applying our method, similar to [35,37].
References:
[27] Rickard Karlsson and Jesse Krijthe. Detecting hidden confounding in observational data using multiple environments. Advances in Neural Information Processing Systems, 36, 2023.
[35] Mameche, Sarah, Jilles Vreeken, and David Kaltenpoth. "Identifying Confounding from Causal Mechanism Shifts." International Conference on Artificial Intelligence and Statistics. PMLR, 2024.
[37] Mooij, Joris M., Sara Magliacane, and Tom Claassen. "Joint causal inference from multiple contexts." Journal of machine learning research 21.99 (2020): 1-108.
We hope our responses address your concerns. We will update the manusript accordingly. We are happy to discuss further if you have any additional questions.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response, and I maintain my original score. | Summary: The authors mainly introduce capturing both observed and unobserved confounding using data from multiple contexts. Leveraging experimental data proposes a comprehensive approach for detecting and measuring confounding effects from three different settings that don't need the parametric assumptions and relaxes the causal sufficiency assumption. The authors provide the measure for detecting confounding effects (relative strengths of confounding).
For each of the proposed measures, this article presents key properties and an algorithm for detecting and measuring confounding using data from multiple contexts.
Strengths: The entire article primarily introduces the detection and quantification of confounding effects without making parametric or causal sufficiency assumptions. It presents a well-structured logical framework to discuss this concept.
Weaknesses: I think overall the authors did interesting research, but my main concerns are listed below.
1. When the environment changes, for example, if $c$ changes to $c'$, will the original causal relationship change?
2. If the length of the environment node is only 1, will the method described in the article fail?
3. Why do settings 2 and 3 introduce mutual information to define confounding? Would there be any difficulties in using the KL divergence?
4. In line 104, the author said "an approach we also leverage in this work", but the author did not introduce the method in which article in detail.
5. In line 111, the article describes "we measure the effects of both observed and unobserved confounding". However, the subsequent sections do not appear to provide a direct method for measuring causal effects.
6. In fact, the confounding effect is not always a number between 0 and 1. Converting it to a number between 0 and 1 only measures its relative strength, but does not directly calculate its confounding effect.
7. In Setting 1: When there is confounding between two variables, CNF-1 is used to calculate it. If multiple variables have common confounding, CNF-1 is calculated for each variable with the others and then summed. Will this lead to the repeated accumulation of confounding?
8. The detect confounding method proposed in the article, but this measure is relatively rarely used in real life. In fact, it is unrealistic to only conduct simulation experiments. In this part of the experiments, the author did not specifically introduce the settings, so why were only the CNF-2 results output, and not the CNF-3 results? Is there a significant difference between these two results? For the section on "Measuring Conditional Confounding," the author should add experiments involving observed confounding, such as replacing the unobserved confounding with observed confounding in Experiment 1.
9. The authors repeat write “decision variables” in lines 149 or 150, and "two" in line 156.
10. The authors miswrite “$\mathcal{D^{C}}$” as “$\mathcal{D}$” in the third line of ALgorithm 1.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the above Weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors provide a detailed discussion of the limitations and applicability of their method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > When the environment changes, for example, if c changes to c', will the original causal relationship change?
It depends on the type of intervention. If context change is a result of soft intervention on a variable, the underlying causal relationships do not change. If the context change is a result of hard intervention, the causal relationships can change. A similar discussion can be found in Section 2 of [35]. In this paper, we consider both types of interventions on variables as highlighted in Table 1.
> If the length of the environment node is only 1, will the method described in the article fail?
When there is only one context available, there is a fundamental limitation: confounding cannot be uniquely identified, as explained in lines 43-45. Rather than calling it a limitation, we state that our method is not applicable when there is only one environment.
> Why do settings 2 and 3 introduce mutual information to define confounding? Would there be any difficulties ..?
We build upon earlier works that utilize mutual information to measure confounding. According to our formulation, mutual information is among the most suitable choices for measuring the dependency between random variables induced by changes in the mechanism, as explained in lines 295-297. See [35] for a similar approach using mutual information for identifying confounding. In our experiments, we did not encounter any difficulties using KL divergence.
> In line 104, the author said "an approach we also leverage in this work", but the author did not introduce the method in which article in detail.
Apologies for the confusion. By "an approach," we refer to the idea of using different contexts to measure confounding. We will either modify or remove the text in brackets to clarify this point and avoid any confusion.
> In line 111, the article describes "we measure the effects of both observed and unobserved confounding". However, the subsequent...
We believe the reason for confusion is the word 'effect'. We aim to study and quantify the confounding **bias** instead of its effect on causal **effects** as a downstream application. Our goal in this paper is to provide a unified framework for confoudning. We will replace **effect** with **bias** to remove the ambiguity.
We appreciate your interest in the results on causal effects. We conducted experiments to study the impact of the proposed confounding measures on reducing bias in estimated causal effects. Table 2 in the uploaded rebuttal PDF demonstrates that controlling for the variables detected as confounders by our method helps reduce the bias in estimated causal effects.
> In fact, the confounding effect is not always a number between 0 and 1. Converting it to a number between 0 and 1...
We provide a reason for studying relative strength in lines 57-62 of the introduction section. Also, it is trivial to evaluate the actual confounding effect that is not between 0 and 1 by ignoring the exponential transformation used in our definition of confounding measure.
> In Setting 1: When there is confounding between two variables, CNF-1 is used to calculate it. If multiple variables have common confounding...
Since this is a first effort to define a measure of confounding among a set of variables, our focus herein was on the stated settings. It is possible however to accumulate the confounding. It is easy to see that our current definition of measuring joint confounding satisfies positivity and monotonicity properties. We believe that exploring other ways of defining joint confounding can be interesting directions of future work.
> The detect confounding method proposed in the article, but this measure is relatively rarely ...
We included two simple real-world applications of our method in the uploaded rebuttal PDF, as shown in Figure 1. (The purpose of these examples is only to show its practical value, and not intended to be comprehensive or complex.) For the conditional confounding experiments, we observe similar results for Settings 2 and 3; therefore, we report results for Setting 2 only. Additionally, our experiments on conditional confounding assume that the confounding variables are observed. We have provided additional experimental results in the uploaded rebuttal PDF, detailed in Tables 1 and 2, which demonstrate the usefulness of our methods.
> The authors miswrite $D^c$ as $D$ in the third line of ALgorithm 1.
As explained in line 245, we combine data from all contexts to evaluate conditional probability. We will update third line of Algorithm 1 to include $\mathcal{D} = \cup_{c}\{\mathcal{D}^c\}$ to make it clear.
References:
[35] Mameche, Sarah, Jilles Vreeken, and David Kaltenpoth. "Identifying Confounding from Causal Mechanism Shifts." International Conference on Artificial Intelligence and Statistics. PMLR, 2024.
We hope our responses address your concerns. We will update the manusript accordingly. We are happy to discuss further if you have any additional questions.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. The authors have addressed my main concerns, so I raise my score accordingly. I strongly recommend that the authors incorporate real-world applications into the main paper to enhance its overall completeness.
---
Reply to Comment 1.1.1:
Title: Thank you for reply
Comment: We thank you for your reply. We will include the real-world applications in the main paper for completeness. | Summary: This paper presents methods that:
1. define a measure of confounding between sets of variables,
2. separate the effects of observed and unobserved confounders, and
3. assess the relative strengths of confounding between sets of variables,
in three different settings. In each setting they assume that data from several different contexts is available, and that the changes in causal mechanisms between the different contexts are known.
Strengths: This paper provides thorough theoretical justification for their methods. The analyses given for the latter two settings seem particularly novel to me.
Weaknesses: The assumption "that the causal mechanism changes are known for each variable across different contexts" seems to be quite strong, but is not mentioned until relatively late in the paper (line 160).
The formal definition of a mechanism shift comes quite late in the paper. It would be nice to have intuitive explanations in the abstract and introduction. I also wonder if the formal notation could start with the notion of multiple environments baked in. As it is written now, it is unclear what the mechanism shift is relative to. Is it a shift between an environmental distribution and the interventional distribution, or is it only necessary to know the relative shift between environments?
The relationship between definitions 4.2 and 4.3 and ignorability and conditional ignorability as independence and conditional independence assumptions needs to be added to the related works. There is also perhaps a connection to [Scalable Sensitivity and Uncertainty Analyses for Causal-Effect Estimates of Continuous-Valued Interventions](https://arxiv.org/pdf/2204.10022) that could be discussed. Notably, the relationship between the likelihood ratios and the KL divergence between the interventional and observational distribution mentioned in Appendix A.3.1.
While the theoretical results are thorough, the empirical results are very limited. A major concern with hidden confounding is that it can arbitrarily bias estimated treatment effects. The magnitude of the bias should be straightforward to calculate in your synthetic experiments. It might be more demonstrative to show how the proposed measures correlate to induced biases.
## Points of confusion
**Definition 3.1**
- Looks like $X$ is being redefined here as to include any variable in $V$ rather than the subset that excludes $Z$. Mildly confusing.
- line 131: $P(X_i \mid \mathrm{PA}_i)$ is not defined in definition 3.1
**Assumption 3.1**
- talks about changes in causal mechanisms, but this concept hasn't been formalized yet which makes this assumption hard to understand.
**Line 147:** what is a context node? what is an extended causal graph? Might help to define context specific distributions before rather than after.
**Line 156:** two two
**Line 156:** $P^c$ notation could be introduced earlier to formalize a mechanism shift.
**Assumption 3.3:** $dPC$ is missing brackets
**Line 182.** shifts -> shift
**Line 183:** "extent of hidden confounding." word choice is a bit vague. maybe strength, or the effect of the confounder on each variable, or how much it may bias some estimand.
**Definition 4.1:** The notation is a little awkward with the condition on $P(X_j)$ within $D_{\mathrm{KL}}$.
**Theorem 4.2:** strongly -> more strongly
**Definition 4.6:** missing definition of mutual information.
Technical Quality: 3
Clarity: 3
Questions for Authors: **line 128:** unclear notation, what information is $P_x(V), X \subseteq V$ adding here? If $P_*$ is a set, why not write the set that defines it instead?
**line 128:** (i.e., $X = \emptyset$). why isn't $X$ bold?
**Line 154:** "Let $C_{S \wedge \neg R}$ be the set of contexts in which we observe mechanism changes for the set of variables $X_S$ but not for the variables $X_R$." Changes relative to what? The interventional distribution? Just between two contexts? Is the shift the same across each of the contexts in this set?
**Line 158:** This does not look like a p-value. This looks like an indicator taking a value of true or false, not a probability. Is there a missing $P$ outside of the brackets? If so, what is the relevant random variable? $c$? If $c$, confusing to have it lower-case while a potential value $X_i$ is upper case. Same questions for **Assumption 3.2**.
**Theorem 4.1:** missing comma? $\\{X_iX_j\\}$ -> $\\{X_i, X_j\\}$
**Line 286:** Are the $\epsilon$ normally distributed? or is zero mean and any distribution ok?
**Definition 4.7:** mutual information is defined between random variables. Are these random variables with respect to contexts? If so lower case $c$ is a bit confusing.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately discuss the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The assumption "that the causal mechanism changes are known for each variable across different contexts" ..
To measure confounding between **all pairs** of nodes in a causal graph, we need to know the mechanism changes for **each variable** across contexts. However, if the number of nodes among which confounding is to be identified is small, **we only need to know that the causal mechanisms are changed for that small set of nodes, but not for all nodes**. This assumption has been made in [35, 37]. We will edit line 160 as follows to make it clear that the assumption is not too strong.
"*Hence, we focus on detecting and measuring confounding among a set of variables, assuming that the causal mechanism shifts are observed among that subset of variables.*"
>The formal definition of a mechanism shift..?
We thank you for the suggestions. We will ensure that the intuitive explanations of mechanism shifts appear early in the paper. In particular, we will move lines 131-133 and lines 163-169 to introduction line 48 to introduce causal mechanisms and how the shift in causal mechanisms are useful in detecting confounding. Also, it is sufficient to know relative shift between environments, where each environment is a result of either a soft or hard intervention to a set of variables.
>The relationship between definitions 4.2 and 4.3 and ignorability....
This is an interesting insight. We thank you for bringing up this point. The above paper presents an intriguing way of defining confounding as the difference between nominal and complete propensity scores in the potential outcomes framework. This is inherently connected to the ignorability assumption. Since directed information also relies on the KL-divergence between conditional and interventional distributions, we can leverage this definition to define relative confounding strength between sets of variables. We will include all of these points in the revised manuscript and discuss them in the related work section.
> While the theoretical results are thorough...
Our primary focus in this work was on effectively identifying confounding variables, which is non-trivial by itself. Once we effectively identify observed confounding variables, we can control for them to reduce bias in estimated causal effects. We appreciate your interest in the results on bias in estimated causal effects. We conducted experiments to study the impact of the proposed confounding measures on reducing bias. Table 2 in the uploaded rebuttal PDF demonstrates that controlling for the variables detected as confounders by our method helps reduce the bias in estimated causal effects.
> Definition 3.1 Looks like X is being redefined here ...
**Definition 3.1:** We undestand the concern. We will use a different variable in Defintion 3.1, say $\mathbf{W}$, in place of $\mathbf{X}$. With this change, none of the other contents gets affected.
**Line 131:** We will edit line 131 as follows.
*"For a node $X_i$, $\mathbb{P}(X_i|\mathbf{PA}_i)$ is called the *causal mechanism* of $X_i$."*
> Assumption 3.1 talks about changes in causal mechanisms, but this concept hasn't been formalized...
Assumption 3.1 discusses independent causal mechanisms but not changes in causal mechanisms. We introduce the concept of causal mechanism shifts in lines 145-146 and later in Assumption 3.2, we discuss sparse causal mechanism shifts.
> Line 147: what is a context node?...
Thank you for your suggestions. A context node can be viewed as an exogenous parent to the set of nodes on which an intervention is performed. Our analysis does not depend on this extended causal graph. However, we will define context specific distributions as you suggested to make this clear.
> Definition 4.1: The notation is a little awkward with the condition on $\mathbb{P}(X_{j})$ within $D_{KL}$.
We followed [60] to use this particular notation of conditioning on $\mathbb{P}(X_{j})$ within $D_{KL}$. We believe this is to differentiate between conditional and interventional probabilities. Since it is clear from the context, we will exclude conditioning on $\mathbb{P}(X_{j})$ in the revised manuscript.
> Line 128: unclear notation, what information is $P_{x}(V), X\subseteq V$ adding here?...
We use $P_x (V), X \subseteq V$ and $P_*$ to formally define causal Bayesian networks. They do not have any relation to our method. $X$ should be bold here. We will fix this typo.
> Line 154: "Let $C_{S\wedge \neg R}$ be the set of contexts in which we observe mechanism changes for the set of variables $X_S$ but not for the variables $X_R$." Changes relative to what?...
In your question, if $i\in S$, we have $P^c (X_i|\mathbf{PA}_i) \neq P^{c'}(X_i|\mathbf{PA}_i)$. If $i \in R$, the inequality becomes equality. The shifts do not have to be the same across all contexts. Contexts are a result of either soft or hard intervention on the nodes. Hence, the changes between contexts are relative to interventional distributions.
>Line 158: This does not look like a p-value. This looks like an indicator taking a value of true or false, not a probability... Same questions for Assumption 3.2.
Following [35], we will edit line 158 as follows:
*"For example, the $p\text{-value}(\mathbb{P}^c(X_i|\mathbf{PA}_i^o)\neq\mathbb{P}^{c'}(X_i|\mathbf{PA}_i^o))$ where $\mathbf{PA}_i^o$ is the set of..."*
The above p-value tests whether the causal mechanisms of $X_i$ are different in two different contexts $c,c'$. We will remove $p$ from line 158 to avoid confusion. In assumption 3.2, similar to [35], we use $p$ to indicate the probability of two distributions being different.
>Line 286: Are the $\epsilon$ normally distributed? or is zero mean and any distribution ok?
The only restriction on $\epsilon$ is that it has zero mean with no other restriction on the underlying probability distribution.
> Definition 4.7: Lower case $c$ is a bit confusing.
We will replace $E^c_i$ to $E^C_i$ to avoid confusion in the revised manuscript.
---
Rebuttal 2:
Comment: Thank you for replying to my review. After reading your responses, I have decided to raise my score to a 5. I still think this paper needs significant revision. I choose to trust that the changes promised to all reviewers will be made, and leave the final decision to the AC.
---
Rebuttal Comment 2.1:
Comment: We sincerely appreciate your time, insightful comments, and positive response. We will update the manuscript by incorporating all the suggestions from the reviewers. We have provided detailed information on our planned improvements in a common response to all reviewers above. Please see our response here: https://openreview.net/forum?id=SvmJJJS0q1¬eId=ukyx7Qc4m9 | Rebuttal 1:
Rebuttal: ## Common response to all reviewers
We thank all reviewers for their thoughtful feedback. We are pleased to see the following encouraging comments from the reviewers.
1. The problem addressed in this paper is of significant importance (e4Fq).
2. The ideas presented are both original (e4Fq) and novel (n1WD).
3. The theoretical justification for our methods is thorough (n1WD).
4. We are the first to study various aspects of confounding (K6Rd) within a unified framework (K6Rd, 3wgM).
5. The paper is well-written and well-presented (K6Rd, e4Fq).
As can be seen from the reviews, most of the reviewers' concerns are about clarifications, which we have addressed below. We have also uploaded a PDF with additional results as suggested by the reviewers. Our paper required significant space for discussing the considered 3 settings and our formulations therein; for purposes of clarity, the experimental results were moved to Appendix. If accepted, we will use the additional page available in the final version to include the results in the main paper itself. The additional results, included in the rebuttal, further demonstrate the usefulness of our methods. We will update the manuscript as per the suggestions, and will release our code publicly on acceptance.
Pdf: /pdf/4f40f47781f493bf03f2b44440f06256f3d6b0ad.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Deep Equilibrium Algorithmic Reasoning | Accept (poster) | Summary: This work builds deep equilibrium graph neural networks for algorithmic reasoning. The paper tests their models on a variety of algorithm problems and finds mixed results with some positive and encouraging observations. One focus of the work is on speeding up NARs, and the paper also proposes regularizers to boost performance. To be honest, I am not an algorithms person, and a lot of ideas in the paper are foreign to me, so I am not confident about my review.
Strengths: The paper is well-written, polished, and clear. In general, the idea of learning algorithms with neural networks has the potential for practical value, although it seems to me that this paper focuses on emulating existing known algorithms and not learning new ones. Also, the experiments show encouraging results.
Weaknesses: The results seem mixed and not entirely positive.
In my own experiments with DEQs, I’ve found that often they overfit to the solver used during training. Other solvers can find fixed points, but those fixed points don’t always map on to solutions. Whether or not the model actually has unique fixed points corresponding to solutions may be worth studying.
The paper mentions another paper on DEQs for algorithmic reasoning (“Deep Equilibrium Models For Algorithmic Reasoning”), but does not discuss it in detail. This other paper has a nearly identical title and probably demands a more careful discussion and contextualization. As is, it would be easy for a reader to think that this paper is the first one to think about DEQs for algorithmic reasoning.
It might be worth discussing the work on non-GNN recurrent networks for learning algorithms. For example, I have found that a model from "End-to-end Algorithm Synthesis with Recurrent Networks: Logical Extrapolation Without Overthinking" actually behaves similar to DEQs in practice.
Technical Quality: 3
Clarity: 4
Questions for Authors: N/A
Confidence: 1
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors adequately discuss limitations throughout the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks to the reviewer for their thoughtful review and for finding our paper’s presentation excellent. Allow us to address both the comments and questions you have raised.
*The results seem mixed and not entirely positive*
The effectiveness of DEAR may be misinterpreted due to the mixture of results for each algorithm. The varied algorithms found in CLRS-30 means that some models are expected to perform better on some tasks compared to others. To highlight the strengths of our model, we have added an average across all algorithms (overall row) found in Table 1 in the rebuttal document.
The main comparison of DEAR is against the baseline NAR, which has an overall performance increase of 4% (Table 1). The NAR model and other non-equilibrium models have access to exact number of steps at both train and test time. The processor (a recurrent GNN) is unrolled for the given number of steps (may differ between samples). As shown in Table 1 for NAR (HCS), the model is susceptible to step change. For certain algorithms 64 steps may be just right, but for others it might be more or less, hardcoding the number of steps makes the performance 15% worse than DEAR.
A fairer comparison is the NAR (LT) model (Table 1; rebuttal PDF), which has a trained architecture to decide the number of steps during test time; we outperform by 5%. To reiterate, this model is still trained on the exact number of steps.
To further motivate the strengths of our model we opted to include the state-of-the-art processor architecture of Triplet-MPNN. Commendably, our model performs 2% worse than a model that has many other improvements besides equilibrium. In Table 1 there are two versions of Triplet-MPNN, the latter includes causality based regularisation. We had to copy results from related work (Bevilacqua et al., 2023) as we do not have access to their implementation. Triplet-MPNN is a very strong baseline to compare against, as each categorically-inspired GNN layer considers interactions between each edge and nodes that are not necessarily connected to the edge. This edge-node interaction, of course, comes at the expense of having to materialise O(V^3) messages. Notably, the DEAR approach can be used in conjunction with Triplet-MPNN (Table 5; rebuttal PDF), giving a state-of-the-art overall performance (Table 5; rebuttal PDF).
Finally, we investigated binary search as it was an anomaly for DEAR’s performance. We found several issues for binary search in CLRS-30: sampling process, calculation of ground truths and misspecification of output type. As a result, we ran new baselines of the search algorithm (Table 2; rebuttal PDF). With this change, the new overall for DEAR is 7% higher than the baseline NAR, and comparable to both Triplet-MPNN variants.
*In my own experiments with DEQs, I’ve found that often they overfit to the solver used during training…*
We thank the reviewer for providing this observation. We have investigated this, in the context of the search algorithm. We observed improved performances indeed (by ~1%) but we found the increase not as substantial compared to other ablations presented in the rebuttal PDF. We integrate this result and other algorithms for the final version of the paper. Nonetheless, this could pose an interesting setting for future exploration.
*The paper mentions another paper on DEQs for algorithmic reasoning …, but does not discuss it in detail…*
The concurrent work we highlight in our paper does indeed follow a similar research direction and was submitted to a peer-reviewed conference earlier this year, hence the citation. However it is not a paper, but rather a blogpost, and as a result we strongly recommend the reviewer to treat it as such. Of course blogposts are essential for progressing science by publicly exchanging ideas, but pragmatically the timeline from idea creation to a paper publication differs greatly.
The key differences between our paper and the blogpost are in how we approach using equilibrium points in NAR. We do not claim we were the first to conjecture the existence of this connection (we accredit this to the blogpost). However, we do claim:
* We thoroughly formalise the DEQ-NAR connection
* Following the formalisation we build the first robust equilibrium algorithmic reasoner; The model from the blogpost performs terribly on the simplest task of BFS.
* Our model outperforms non-equilibrium baselines and is competitive to a model using a more expressive GNN
* We show our model is also efficient
*It might be worth discussing the work on non-GNN recurrent networks for learning algorithms...*
Ideas from this paper are already implemented in our baselines as well as our equilibrium reasoners (eq. 5, L175). U and E are embeddings of input node/edge features, something given to the DEAR model at each step, which is the recall feature. A similar thing to an incremental progress training algorithm has also been seen in NAR, where trajectories are chunked into independent segments, but we did not find it necessary for the good performance in NAR.
Once again we thank the reviewer for their insightful comments and excellent questions, especially in regards to their insights about DEQs that present interesting avenues of future work. Consequently, even though you state you’re not an “algorithm person”, we strongly respect your viewpoints and intuition and we believe your confidence score can be improved; a 3 seems like the minimum as per reviewer guidelines.
Finally, we hope that our rebuttal addresses the empirical strengths of the DEAR model via the updated results and the differences between the blogpost and paper, such that it may convince you to increase your score. We are of course happy to further engage with the reviewer for any remaining doubt.
---
Rebuttal Comment 1.1:
Title: Thanks for your response
Comment: Thanks for your response. I keep my review.
---
Rebuttal 2:
Comment: Thank you for taking the time to read our rebuttal. We believe we have addressed all your concerns, therefore could we kindly ask if there is any particular reason why you have not increased your score or confidence? We found your review insightful and would be happy to listen to further comments. We are available for any additional concerns or questions. | Summary: This paper proposes to solve the neural algorithmic reasoning by attacking the equilibrium solutions directly, without leveraging the recurrent structures which imitate the iterations in algorithms. They proposed deep equilibrium algorithmic reasoner (DEAR), and compare it with baselines including NAR (w/ and w/o Triplet-MPNN).Their training dynamics is shown to be more stable than baselines, and inference time is smaller than than baselines, although the accuracy is sometimes worse than baselines.
Strengths: * The motivation is clear.
* Experiments are thorough.
Weaknesses: * It is not immediately clear what is the takeaway from the experiments. My understanding is that efficiency is the key selling point of the new algorithm. However, there is only one table for efficiency in Appendix G.
* Section 4 is notation heavy and hard to understand.
* I fully appreciate authors' honesty to report the results (Table 1) which are not to their advantage, but the inferior accuracy seems to limit the usefulness of the method.
Technical Quality: 2
Clarity: 2
Questions for Authors: * I'm not an expert so I have trouble understanding Section 4. Is it possible to make it clearer how does it connect to the proposed algorithm?
* Eq. (4) seems still a recurrent structure? I was thinking the point of the new method is to get rid of recurrent structures? But looks like from Figure 1, the point is to reduce the number of recursive steps. Is this correct?
Confidence: 1
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks to the reviewer for their thoughtful review, and allow us to address both the comments and questions you have raised.
*It is not immediately clear what is the takeaway from the experiments. My understanding is that efficiency is the key selling point of the new algorithm…*
The key selling point is that finding the equilibrium is a proper way to decide termination during training and inference time. Equilibrium also acts as an additional architectural bias, giving further boost in accuracy (Table 1; rebuttal PDF). Fixing the number of steps at test time has detrimental effects. Using a dedicated architecture to learn when to terminate the algorithm also falls short of our approach.
*Section 4 is notation heavy and hard to understand.*
We thank the reviewer for raising this point as we understand that these topics may be unfamiliar to readers from the deep learning domain.
As a result, we improved this section to provide a more readable and comprehensive understanding of domain theory. Upon taking your advice, we also added an appendix solely dedicated to the IMP language and added more details regarding the denotation of a while loop in the appendix.
Unfortunately, the NeurIPS rules prevent us from uploading a revised version of the paper. Some of the key improvements to provide yourself the confidence that we have addressed this concern are:
* Analogies with the elements of popular programming languages, such as C
* More details on the definition of States
* Expanded the definition of Commands with more examples
* More details on the utility of Domain Theory in our setting
* Further motivation for our theoretical excursion
The goal of our theoretical analysis is to show that there is a well formalised relationship between equilibrium models and algorithms.
*I fully appreciate authors' honesty to report the results (Table 1) which are not to their advantage…*
The effectiveness of DEAR may be misinterpreted due to the mixture of results for each algorithm. The varied algorithms found in CLRS-30 means that some models are expected to perform better on some tasks compared to others. Thus to highlight the strengths of our model, we have added an average across all algorithms (overall row) found in Table 1 in the rebuttal document.
The main comparison of DEAR is against the baseline NAR, which has an overall performance increase of 4% (Table 1). The NAR model and other non-equilibrium models have access to exact number of steps at both train and test time. The processor (a recurrent GNN) is unrolled for the given number of steps (may differ between samples). As shown in Table 1 for NAR (HCS), the model is susceptible to step change. For certain algorithms 64 steps may be just right, but for others it might be more or less, hardcoding the number of steps makes the performance 15% worse than DEAR.
A fairer comparison is the NAR (LT) model (Table 1; rebuttal PDF), which has a trained architecture to decide the number of steps during test time; we outperform by 5%. To reiterate this model is still trained on the exact number of steps.
To further motivate the strengths of our model we opted to include the state-of-the-art processor architecture of Triplet-MPNN. Commendably, our model performs 2% worse than a model that has many other improvements besides equilibrium. In Table 1 there are two versions of Triplet-MPNN, the latter includes causality based regularisation. We had to copy results from related work (Bevilacqua et al., 2023) as we do not have access to their implementation. Triplet-MPNN is a very strong baseline to compare against, as each categorically-inspired GNN layer considers interactions between each edge and nodes that are not necessarily connected to the edge. This edge-node interaction, of course, comes at the expense of having to materialise O(V^3) messages. Notably, the DEAR approach can be used in conjunction with Triplet-MPNN (Table 5; rebuttal PDF), giving a state-of-the-art overall performance (Table 5; rebuttal PDF).
Finally, we investigated binary search as it was an anomaly for DEAR’s performance. We found several issues for binary search in CLRS-30: sampling process, calculation of ground truths and misspecification of output type. As a result, we ran new baselines of the search algorithm (Table 2; rebuttal PDF). With this change, the new overall for DEAR is 7% higher than the baseline NAR, and comparable to both Triplet-MPNN variants.
*…Is it possible to make it clearer how does it connect to the proposed algorithm?*
Section 4 serves as a formal motivation for using neural equilibrium models when learning to simulate algorithms. What we propose is not an algorithm per se, but rather we propose a new equilibrium-based way to decide termination of algorithm simulation both during training and inference. The most elegant way (in our opinion) to formalise this concept is through denotational semantics in comparison with other approaches, such as coalgebras (category theory concept).
In our revised version we have emphasised the connection between DEAR and denotational semantics, especially the paragraph “Finding the fixed point” (section 5, L178 of our paper draft), by referring specifically to the concepts defined in Section 4.
*Eq. (4) seems still a recurrent structure? ... new method is to get rid of recurrent structures? But ..., the point is to reduce the number of recursive steps. Is this correct?*
The point is to find a robust way to decide termination in neural algorithmic reasoning that doesn’t greatly compromise accuracy or efficiency.
Once again we thank the reviewer for their insightful comments and excellent questions. We hope our reply addresses all concerns and questions, such that it may convince you to increase your score. We are of course happy to further engage with the reviewer for any remaining doubt.
---
Rebuttal Comment 1.1:
Comment: I want to thank the authors for nicely addressing my concerns. Therefore, I'm raising my score.
---
Rebuttal 2:
Comment: We would like to sincerely thank you again for the insightful feedback from your rebuttal, and for additionally raising your score. We truly believe you helped strengthen our paper. Please let us know if you have any further suggestions to help us improve our work. | Summary: This paper explores a novel approach to NAR using GNNs. Traditional NAR models typically use a recurrent architecture where each iteration of the GNN corresponds to an iteration of the algorithm being learned. Instead, this paper proposes that since many algorithms reach an equilibrium state where further iterations do not alter the outcome, it is possible to directly solve for this equilibrium. By training neural networks to find the equilibrium point, the authors aim to improve the alignment between GNNs and classical algorithms, potentially enhancing both the accuracy and speed of NAR models. Empirical evidence from the CLRS-30 benchmark supports the viability of this equilibrium-based approach.
Strengths: 1. The proposed method is clearly presented.
2. The performance of the proposed method on benchmark datasets is good.
Weaknesses: 1. The transition from denotational semantics to the proposed architecture could be further clarified to improve the overall understanding of the method. (Question 1)
2. It is not entirely clear whether the comparison with the baseline is accurate, and further justification or explanation may be needed. (Question 2)
3. The authors might consider revising the presentation of their contributions to more effectively convey the significance and novelty of the paper to the audience. (Question 3)
4. Including real-world experiments in the paper could substantially strengthen the contributions and further demonstrate the practical applicability of the proposed method. (Question 4, 5)
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. From reviewer's understanding, denotational semantics primarily motivate the equilibrium methods.
- The connection between the proposed architecture (PGN with gating) and denotational semantics is not clear.
- The motivation for equilibrium methods does not seem to require denotational semantics, as graph algorithms such as Bellman-Ford inherently have equilibrium points.
Could the authors provide further clarification on this aspect?
2. It appears that the data for the experiments was generated by the authors rather than from CLRS-30, as they mention "For each algorithm, we generate" in Line 219. However, the baseline scores are identical to those reported in previous papers. To ensure a fair comparison, the authors should consider re-running the baselines on their own dataset.
3. From the reviewer's personal perspective, the introduction and contributions sections may not be well-suited for an academic paper in their current form. The investigated problem could benefit from better motivation, and the contributions might not fully capture the paper's key theoretical and empirical results.
4. The training and test graph sizes used in the experiments seem to be relatively small, although consistent with CLRS-30. The reviewer recommends training on larger graphs (e.g., 50 nodes) and testing on even larger graphs (e.g., 300 nodes) to better demonstrate the performance gains of the proposed methods.
5. The experiments presented in the paper are limited to synthetic graph datasets. The reviewer suggests that demonstrating the performance gains on real tasks (e.g., physical systems [1] or mathematical problems [2]) would enhance the contributions of this paper.
6. The authors mention that DEQ models can be difficult to train, which might be due to the instability of DEQ's gradient. The reviewer recommends considering the use of [3] for training the models, as it provides a more stable approach.
[1] Battaglia, Peter, et al. "Interaction networks for learning about objects, relations and physics." NeurIPS 2016.
[2] Lample, Guillaume, and François Charton. "Deep learning for symbolic mathematics." ICLR 2020.
[3] Fung, Samy Wu, et al. "Jfb: Jacobian-free backpropagation for implicit networks." AAAI 2022.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks to the reviewer for their thoughtful review and directly linking the weaknesses with their questions. We address all the questions below.
*The connection between the proposed architecture (PGN with gating) and denotational semantics is not clear.*
Section 4 serves as a formal motivation for using neural equilibrium models when learning to simulate algorithms. We propose not an architecture (PGN was invented by Veličković et al. 2020), but a new way to decide termination of algorithm simulation both during training and inference. We have now emphasised the connection between DEAR and denotational semantics, especially the paragraph “Finding the fixed point” (section 5, L178 of our paper draft), by referring specifically to the concepts defined in Section 4.
Lastly, our proposed model uses PGN due to being a lightweight and well-performant NAR architecture, which serves as the ideal baseline. However, DEAR is architecture agnostic and it can work with Triplet-MPNN (Table 5; rebuttal PDF).
*The motivation… does not seem to require denotational semantics...*
In our experience, we have observed sometimes intuition can mislead deep learning. (cf. “Understanding deep learning requires rethinking generalisation” by Zhang et al.) Hence, we decided to formally motivate the paper using denotational semantics. This inspired the alignment scheme proposed and the decision to pick the least fixed point if more than one fixed point exists.
*It appears that the data for the experiments was generated by the authors … To ensure a fair comparison, the authors should consider re-running the baselines on their own dataset.*
For clarification, the results are generated with our data and code except for Triplet-MPNN with casual registration (Bevilacqua et al., 2023) and DEM (Xhonneux et al., 2024); the implementation for both are not public, hence we reported theirs.
However, we have taken your advice and generated our own baseline for Triplet-MPNN (Table 1; rebuttal PDF) without casual registration. Commendably, our model performs only 2% worse than a model whose categorically inspired GNN layer considers interactions between each edge and nodes that are not necessarily connected to the edge. This edge-node interaction, of course, comes at the expense of having to materialise O(V^3) messages. The rebuttal PDF (further explained in the global rebuttal) also highlights the strengths of our models compared to our new experiments. DEAR is 4% better than the baseline NAR model.
*From the reviewer's personal perspective, the introduction and contributions sections may not be well-suited for an academic paper in their current form...*
Unfortunately, the NeurIPS rules prevent us from uploading a revised version of the paper. We have rewritten our contributions to more clearly highlight the main outcomes of our paper, which are:
* The DEQ-NAR connection is formally motivated
* The first robust equilibrium algorithmic reasoner; the model from the DEM blogpost performs terribly on the simplest task of BFS.
* A regularisation scheme to encourage alignment with algorithm execution traces when training deep equilibrium neural algorithmic reasoners
* A comprehensive evaluation that shows DEAR is competitive to a model using a more expressive GNN
*...The reviewer recommends training on larger graphs and testing on even larger graphs to better demonstrate the performance gains ….*
As highlighted by the reviewer, the current standard setting in literature is a training size of 16 nodes and a test size of 64 nodes. We appreciate the recommendation, however, given the tight time constraint, our limited hardware access and the requirement of training 10 algorithms, for 3 seeds, for all models (for a fair comparison) the total number of runs goes beyond 100. This provides an interesting setting for future work.
Nevertheless, this comment inspired us to evaluate our currently trained models on larger instances (Table 3; rebuttal PDF); 128 nodes (8x), 256 nodes (16x) and 512 nodes (32x). The results highlight that besides certain algorithms (search; see global rebuttal) we are extremely competitive to the baseline. This is offset by the fact that our model is much faster (Table 4; rebuttal PDF; we never exceed 0.5s/sample at any scale) and speedups sometimes exceed 25x-40x.
*The experiments presented in the paper are limited to synthetic graph datasets. The reviewer suggests that demonstrating the performance gains on real tasks ...*
Our experimental procedure follows the standard in the NAR literature. However, the focus of this paper is presenting a different approach for unrolling an algorithm execution which can serve as a new foundational model in NAR Consequently, even though we agree that it is interesting to test these reasoners in real-world scenarios (Numeroso et al., 2023), this is out-of-scope for this paper and would require work that does not align with the NAR standards, our goals and experiments.
*The authors mention that DEQ models can be difficult to train, … The reviewer recommends considering… [3]...*
We thank the reviewer for the provided reference. However, in this work, we never mention that DEARs are difficult to train; they converge to a slightly larger final training loss, but we found the training overall stable. We kindly ask the reviewer to point us towards any confusing paragraphs and we will add a reference to the mentioned paper as it can be useful for future readers of our work.
Once again we thank the reviewer for their insightful comments and excellent questions. We hope our reply addresses all concerns and questions, such that it may convince you to increase your score. We are of course happy to further engage with the reviewer for any remaining doubt.
---
Rebuttal 2:
Comment: I appreciate the authors' efforts to address my concerns, which has led me to increase my evaluation to a score of 5. The size generalization performance of DEQ is particularly noteworthy, aligning with the out-of-distribution generalization capabilities demonstrated in previous DEQ studies.
Concerning the first point, the term 'formal motivation' used by the authors in reference to denotational semantics has left me a bit perplexed. While I recognize the attempt to provide a theoretical grounding, it appears to me more as an intuition rather than leading to a rigorous formal derivation that substantiates the utility of DEQ in graph reasoning. Moreover, the connection between denotational semantics and graph reasoning problems seems tenuous, with the only clear link being the capability to solve such problems through programming.
As for the second point, **I have reservations about the rigorousness of reusing scores from a previous paper when the test dataset has changed**, but I am not sure about it.
---
Rebuttal Comment 2.1:
Comment: We thank the reviewer for promptly replying to our rebuttal and we’re glad we were able to address some of your concerns.
*[..] the term 'formal motivation' used by the authors in reference to denotational semantics has left me a bit perplexed. [..] the connection between denotational semantics and graph reasoning problems seems tenuous [..]*
We appreciate the comment from the reviewer, and we would like to clarify that we do not aim at providing a connection between "denotational semantics and graph reasoning problems", but rather to provide a connection between finding the fixed point of a function and executing an algorithm to termination. This then provides an analogy between finding the fixed point of a neural algorithmic reasoner and executing an algorithm to termination, which is the base motivation of our proposed method. We will change our language in our paper to remove any ambiguities and make clear that we use the mathematical background as a strong motivating factor for our method.
*[..] I have reservations about the ethical implications of reusing scores from a previous paper [..]*
We understand the comment from the reviewer and we would like to add a comment, as perhaps the words used should be related to scientific rigour, rather than ethical concerns.
Reporting results from other papers, providing proper citation and mention (we agree that it would be extremely unethical without this), is a common practice in literature, especially when the code for some models is not made public. We make sure to use the **same data generation code** used by all considered papers in the literature. For CLRS there isn't a predefined dataset, but rather everyone uses the same data-generation library, ensuring the same data distribution.
We note however, that there is some [official downloading code](https://github.com/google-deepmind/clrs/blob/d1c2ad7af8437c7536cf329d9cef8fdf93184d9d/clrs/examples/run.py#L152) for test data generated with CLRS. All algorithms are reported in the results below except for SCC. This algorithm was skipped as pointers in the downloaded dataset were not in the edge-set. PyTorch Geometric does not support this feature and this difference will be mentioned in Appendix B within our paper.
We report results with DEAR on all remaining algorithms. They are consistent with our data and even improve on DSP, Bellman-F. and Floyd-W..
| Algorithm | Mean | Std Dev |
|--------------------------|---------|----------|
| Bellman-F. | 98.59% | 0.34 |
| Floyd-W. | 63.92% | 0.04 |
| DSP | 92.95% | 1.38 |
| MST Prim | 89.33% | 1.09 |
| BFS | 99.65% | 0.15 |
| DFS | 38.27% | 0.82 |
| Search (Binary) | 74.00% | 11.5 |
| Minimum | 99.33% | 1.15 |
| Sort (Ins) | 85.48% | 6.90 |
We thank the reviewer again for all the insightful comments, and remain available for further clarifications if any doubts remain.
---
Rebuttal 3:
Comment: Although the performance of the proposed method is superior, I am not convinced that the motivation outlined by the authors differs significantly from any other theoretically inspired motivations. Therefore, I will maintain my score as a borderline accept. | Summary: This paper proposes Deep Equilibrium Algorithmic Reasoner (DEAR) which uses a deep equilibrium model (DEQ) to solve algorithmic tasks in CLRS30. The paper first introduces denotational semantics, which can be used to denote programs. It also provides an overview of Domain theory, and uses it to show that algorithms have fixed points. The paper then trains pointer graph network on algorithms from CLRS30 as a DEQ — fixed point solving is done with Anderson solver. The paper also discusses different methods (including failure cases) that were attempted to further improve performance of DEAR — use of Cayley Graph Propagation and alignment loss. DEAR improves performance over NAR (Neural Algorithmic Reasoner, the preivous state-of-the-art) on many algorithms.
Strengths: 1. The premise of this paper makes perfect sense. Message passing in GNNs is known to converge to an equilibrium. Therefore, it is obvious to combine NAR with DEQs.
2. DEAR improves OOD generalization of algorithms such as Floyd-Warshall, DFS, SCC, Sorting, and performs comparably to NAR (previous state-of-the-art) on algorithms like Breadth-first search, Minimum.
3. DEAR improves inference speed of many algorithms as shown in Appendix G.
Overall, this work is quite novel. While there are some prior works (See Anil et al. 2022) that have applied DEQs on very small GNNs and simple algorithmic tasks, this paper picks up a relatively difficult problem of solving standard algorithms. This hasn't been explored before with DEQ based architecture.
Weaknesses: 1. DEAR hurts performance on some algorithms like Binary Search (significant drop in performance), DSP, and MST Prime, and the reasoning is unclear.
2. The authors have done an excellent job at explaining literature that many readers might not be familiar with. However, the readability of paper can be improved further if the authors provide more background on domain theory for those who are not familiar with it. It will also help if some discussion is added in Appendix A. I couldn’t understand it even after attempting to read it multiple times.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Why use a partial function in equation 3?
2. There is some prior work [1] which indicates that more steps to find equilibrium points help with better OOD generalization. From my understanding, DEAR penalizes longer trajectories (due to misalignment with domain theory). Is it possible to get improved performance if we ignore the potential conflict with domain theory?
[1] Anil, Cem, et al. "Path independent equilibrium models can better exploit test-time computation." Advances in Neural Information Processing Systems 35 (2022): 7796-7809.
3. What loss objective is used to train DEAR (e.g. cross entropy)? I understand that there are auxiliary losses such as alignment loss in Lines 277-298.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The limitations have been discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks to the reviewer for their thoughtful review, and allow us to address both the comments and questions you have raised.
*DEAR hurts performance on some algorithms like Binary Search… and the reasoning is unclear.*
Firstly, we would like to emphasise that the varied algorithms found in CLRS-30 means that some models are expected to perform better on some tasks compared to others. Thus to highlight the strengths of our model, we have added an average across all algorithms (overall row) found in Table 1 in the rebuttal document.
As stated by the reviewer, the performance of binary search is an anomaly for DEAR’s performance, therefore we chose to investigate this algorithm further. We found several issues for binary search in CLRS-30 (Veličković et al. 2022): sampling process, calculation of ground truths, and misspecification of output type. Consequently, we ran new baselines of the search algorithm (Table 2; rebuttal PDF). With this change, the new overall for DEAR is 7% higher than the baseline NAR, and comparable to both Triplet-MPNN variants. Additionally, our decreased performance on binary search is due to overfitting. Our investigation shows that this is a data-hungry algorithm. To verify this claim, we trained 1 seed with three times larger training data (same number of epochs) and confirmed we got close to perfect performance.
To provide further clarification of the strength of DEAR we will explain the empirical results found in the rebuttal PDF. The main comparison of DEAR is against the baseline NAR, which has an overall performance increase of 4% (Table 1). The NAR model and other non-equilibrium models have access to exact number of steps at both train and test time. The processor (a recurrent GNN) is unrolled for the given number of steps (may differ between samples). As shown in Table 1 for NAR (HCS), the model is susceptible to step change. For certain algorithms 64 steps may be just right, but for others it might be more or less, hardcoding the number of steps makes the performance 15% worse than DEAR.
A fairer comparison is the NAR (LT) model (Table 1; rebuttal PDF), which has a trained architecture to decide the number of steps during test time; we outperform it by 5% overall and with ~1 standard deviation on DSP/MST Prim. To reiterate this LT model is still trained on the exact number of steps, something never given to our model.
Finally, the DEAR approach is independent of the GNN processor architecture, therefore it can be used in conjunction with the state-of-the-art Triplet-MPNN. In Table 5, we train DEAR with TripletMPNN on the subset of the algorithms Triplet-MPNN improves the most. The overall accuracy was increased by 4%, when using DEAR, suggesting that our approach is also architecture agnostic.
*The authors have done an excellent job at explaining literature ... However, the readability of paper can be improved further if the authors provide more background on domain theory ... It will also help if some discussion is added in Appendix A...*
We thank the reviewer for raising this point as we understand that these topics may be unfamiliar to readers from the deep learning domain.
As a result, we improved this section to provide a more readable and comprehensive understanding of domain theory. Upon taking your advice, we also added an appendix solely dedicated to the IMP language and added more details regarding the denotation of a while loop in the appendix.
Unfortunately, the NeurIPS rules prevent us from uploading a revised version of the paper. Some of the key improvements to provide yourself the confidence that we have addressed this concern are:
* Analogies with the elements of popular programming languages, such as C
* More details on the definition of States
* Expanded the definition of Commands with more examples
* More details on the utility of Domain Theory in our setting
* Further motivation for our theoretical excursion
*Why use a partial function in equation 3?*
The domain of a denotation is always a State and the codomain depends on the type of expression. For commands, the codomain is also a state. As some commands may not terminate, the denotation for them is undefined, i.e., we have a function which is not defined for some input arguments, hence a partial function.
*There is some prior work [1] which indicates that more steps to find equilibrium points help with better OOD generalization. … DEAR penalizes longer trajectories... Is it possible to get improved performance if we ignore the potential conflict with domain theory?
...*
We will make sure to include the provided reference [1] in our paper. Furthermore, we would like to clarify that DEAR does not necessarily penalise longer trajectories. Regularisation is used only in the case we use alignment and even then the loss is normalised by the length of trajectory (see line 487). We are aware of the observation that more steps to find equilibrium points help with better OOD generalisation (reference [31] in our draft) and we used it with the alignment scheme.
*What loss objective is used to train DEAR (e.g. cross entropy)?...*
The loss function is algorithm specific as specified in the CLRS-30 paper. Thus motivated by this comment, we have emphasised (\emph) and reworded L228-229 to make it clearer for any future readers: “Each task is independently learned, minimising the output loss plus any regularisation loss. The exact output loss is algorithm specific, therefore it can be either binary cross entropy or categorical cross entropy, cf. CLRS-30 for further details.”
Once again we thank the reviewer for their insightful comments and excellent questions. We hope our reply addresses all concerns and questions, such that it may convince you to increase your score. We are of course happy to further engage with the reviewer for any remaining doubt.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed responses my questions. I hope the authors will include the new experiments as well as additional literature on domain theory as promised. I would like to retain my score.
---
Rebuttal 2:
Comment: Thank you for taking the time to read our rebuttal. We are happy we have addressed all your questions through our detailed response, and will ensure that the updated experiments and additional literature on domain theory will be included in our paper. For these reasons, could we therefore kindly ask if there is any particular reason why you have not increased your score? We found your review insightful and it helped strengthen our paper. Please let us know if you have any further questions or suggestions. | Rebuttal 1:
Rebuttal: We thank the reviewers for their thorough reviews and insightful comments. Each review has helped improve our paper by identifying areas that may have been misinterpreted and required further clarification. Here, we address the most important points which were raised by several of the reviewers.
*DEAR is a foundational model*
DEAR is a new equilibrium-based way to decide the termination both during training and inference in the setting of NAR. Using deep equilibrium models (DEQs) is motivated well theoretically, our empirical evidence suggests that equilibrium serves as an implicit bias towards better generalising solutions. While we refer to DEAR as a “model” in our paper, it is rather a *class of models* / *foundational model* as it can natively support different types of processors (Table 5; rebuttal PDF). DEAR targets specifically NAR, and any NAR application out there, e.g. Numeroso et al. (2023; ICLR), may benefit from it, but the purpose of the paper is to motivate the DEQ$\leftrightarrow$NAR relationship and integrate DEQs with NAR models.
*DEAR improves on the baselines and can achieve competitive results with highly advanced GNNs*
During the rebuttal period, we conducted new experiments showing DEAR is better than all other baselines that are of the same size/complexity class; this does not include Triplet-MPNN. This is especially exemplified on methods that are not given termination information at test time and also over methods that are given the ground-truth number of steps. After resolving anomalies with CLRS-30 (we give details below), DEAR with PGN outperforms comparable baselines and closely matches TripletMPNN with causal regularisation (2% difference).
If DEAR is used with Triplet-MPNN (Table 5), we achieve the best overall performance from all models.
Summary of new experimentations and updated results:
* We have added an overall model performance metric across all algorithms. DEAR outperforms baseline models of its size.
* We have shown that standard NAR models are fragile, w.r.t. changing the number of steps, showcasing that a good termination condition is essential to good performance.
* We compared a model that uses a dedicated NN layer to decide termination and we achieved a 5% overall improvement.
* We highlighted (by diamonds; see Table 1; rebuttal PDF) that our model *never* uses any ground-truth termination information – neither at train time nor at test time.
* We have included extreme out-of-distribution up to 32x larger sizes (Table 3; rebuttal PDF) tests with DEAR.
* We have included efficiency measures at those scales, showing that our model can improve inference speeds substantially, while still being performant (Table 4; rebuttal PDF).
* We have included Triplet-MPNN experiments as a baseline and combined with DEAR. We note that Triplet-MPNN is more computationally expensive, so we could manage to only train on a subset of the algorithms (Table 5; rebuttal PDF). We gave priority to those algorithms, that non-DEQ Triplet-MPNN improves over non-DEQ NAR, such as FW/DFS/etc.
*(Binary) Search anomalies*
**For those familiar with CLRS-30 (Veličković et al. 2022)**: The computation of the ground location in the official CLRS-30 implementation, which we use to generate the data, is slightly noisy.
In Binary Search, we aim to find the place to insert the target value `x` in the sorted array. Thus we need to point to the graph node that holds the smallest value in the array `A`, which is greater than `x`. However, if `x>max(A)` the answer is a pointer to the last value of the array, which by the convention used by CLRS-30 means we’d be inserting `x` at the wrong place. In other words, the answer to `A=[0.1, 0.2, 0.3] x=0.25` and `A=[0.1, 0.2, 0.3] x=0.35` is the same – insert x to the left of 0.3. This contributed some noise, so we fixed the sampler to always give `x` within `[0, max(A))`.
The other changes were to explicitly use `graph+pointer` instead of `node+mask_one` as the location and datatype of the pointer to the position in the array. This is something also done by Engelmayer et al. (2024). We also add an additional supervision signal, as done in Engelmayer et al. (2024), but at the output level rather than the hint level, since DEAR decouples algorithm iterations and solver iterations (L197-203; our paper draft).
**We have, of course, reran all models with this new search algorithm.** (Table 2; rebuttal PDF)
*Denotational Semantics*
Some of you raised the concern that Section 4 is hard to read and understand. As a result, we spent considerable time improving that part, and its corresponding appendix. We have added a new appendix dedicated to IMP. However, the NeurIPS rebuttal rules prevent us from being able to share the revised revisions, thus we opted to provide detail in individual replies by highlighting the changes we have made.
-----------
Summary of all the new changes:
* Highlighting key contributions of our paper:
* The first robust equilibrium algorithmic reasoner; the model from the DEM blogpost performs terribly on the simplest task of BFS.
* A regularisation scheme to encourage alignment with algorithm execution traces when training deep equilibrium neural algorithmic reasoners.
* A comprehensive evaluation that shows DEAR is competitive to a model using a more expressive GNN.
* Greatly improved efficiency – the speedup gains are sometimes as high as 50x (Table 4, rebuttal PDF) and we never exceed 0.5s/sample even at the most extreme scales.
* Improved denotational semantics chapter & appendices, better connection with other sections and improved motivation of our theoretical excursion.
* The DEQ-NAR connection is formally motivated.
* Many new experiments showing integrating equilibrium with NAR results in strong models irrespective of the GNN architecture chosen as the processor.
Pdf: /pdf/88262c060353320241cd9112ed7c246a74e0e749.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Diffusion Model with Cross Attention as an Inductive Bias for Disentanglement | Accept (spotlight) | Summary: The paper introduces a new approach using diffusion models with cross-attention to improve the learning of disentangled representations in images. By encoding an image into concept tokens and using cross-attention to connect the encoder and the U-Net of the diffusion model, the authors show that the diffusion process creates information bottlenecks that promote disentanglement.
Strengths: - The paper is clear and well-written
- In general, research disentanglement with diffusion models is promising and a good direction for improving the disentanglement community.
- The presented inductive biases are somewhat novel and produce empirically strong disentanglement results with a more straightforward framework.
Weaknesses: - The ablation studies are conducted only on one dataset. I am concerned that the results could be inconsistent between different languages.
- The model does not compare itself to Diff-AE[24] because it does not explicitly disentangle the data. However, to show improvement, it is essential to compare the models to see if they are better since they are similar in many aspects. Diff-AE does show disentanglement qualities in the paper. In addition, a comparison of high-quality Diff-AE and the suggested method could be interesting.
- The resource comparison between the method and other competitive methods is unclear and lacks empirical results.
Technical Quality: 2
Clarity: 2
Questions for Authors: - It is unclear to me if section 3.1 is claimed to be the background or contribution of the paper?
- In Eq.4, the author introduces the reverse diffusion process. However, the equation describes the probability of $x_t$ given $x_{t-1}$, which is, as far as I know, the forward process. Is there a typo maybe?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: No significant limitations compared to current disentanglement methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your constructive suggestions, and positive feedback on the paper novelty, strong disentanglement results, and writing. We have carefully considered your valuable suggestions and comments and will incorporate them into our revised manuscript. Please find our detailed responses below.
**Q1**: The ablation studies are conducted only on one dataset. I am concerned that the results could be inconsistent between different languages.
**A1**: Thank you for your suggestion. We also conducted the main ablation study on another dataset MPI3D. The results are shown in Table C below. We can observe that the trends are consistent with that on Shapes3D (see Table 3).
Table C-1: Influence of the two inductive biases on MPI3D. For EncDec w/o Diff, we replace the diffusion model with a decoder while cross-attention is preserved. For EncDiff w/ AdaGN, we replace the cross-attention with AdaGN.
| Methods | Factor VAE score | DCI |
| :-----:| :----: | :----: |
| EncDiff w/o diff |0.355 ± 0.075 | 0.143 ± 0.038 |
| EncDiff w/ AdaGN| 0.592 ± 0.111 | 0.268 ± 0.062 |
| EncDiff | 0.872 ± 0.049 | 0.685 ± 0.044 |
Table C-2: Ablation study on the two design alternatives on obtaining the token representations.
| Methods | Factor VAE score | DCI |
| :-----:| :----: | :----: |
| EncDiff-V|0.863 ± 0.075 | 0.629 ± 0.047 |
| EncDiff | 0.872 ± 0.049 | 0.685 ± 0.044 |
**Q2**: The model does not compare itself to Diff-AE[24] because it does not explicitly disentangle the data. However, to show improvement, it is essential to compare the models to see if they are better since they are similar in many aspects. Diff-AE does show disentanglement qualities in the paper. In addition, a comparison of high-quality Diff-AE and the suggested method could be interesting.
**A2**: Thank you for your helpful suggestion. We followed your suggestions and evaluated Diff-AE on Shapes3D as shown in Table D below. Consistent with the trends on CelebA (see Table 2 in the manuscript), our method outperforms Diff-AE with a large margin on Shapes3D.
Table D: Performance comparison with Diff-AE on Shapes3D.
| Methods | Factor VAE score | DCI |
| :-----:| :----: | :----: |
| Diff-AE| 0.1744 | 0.0653 |
| EncDiff | 0.872 | 0.685 |
**Q3**: The resource comparison between the method and other competitive methods is unclear and lacks empirical results.
**A3**: Besides the computational complexity comparison in Table 6 in our manuscript, we will add more comparisons in Table E below. In particular, we additionally added the computational complexity of the VAEs and GANs. The computational complexity and inference time of VAEs and GANs still have strength, while diffusion has better generation and disentangling ability (see Table 2). When compared with other diffusion-based models, EncDiff has better generation and disentangling ability but less computational cost and inference time.
Table E: Computational complexity comparison.
| Models | Params (M) | FLOPs (M) | Time (s) |
| :-----:| :----: | :----: |:----: |
| FactorVAE | 11.9 | 892.1 | < 1|
| BetaTCVAE | 7.9 | 542.1 | < 1|
| DisCo | 12 | 907.2 | < 1|
**Q4**: It is unclear to me if section 3.1 is claimed to be the background or contribution of the paper?
**A4**: We will revise it for better clarity. Section 3.1 is the contribution of the paper which introduces our overall framework.
**Q5**: In Eq.4, is there a typo maybe?
**A5**: Thank you for pointing it out. It is a typo here and we will revise it.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for your kind response.
My concerns have been addressed, and I will raise my score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your great effort and valuable feedback! | Summary: This paper proposes a representation learning method which employs an encoder as part of a latent diffusion model. During training the encoder takes the target clean image and encodes it into a compressed representation vector. This vector is then used to condition the denoising UNet as it tries to denoise the noisy observation. The conditioning is done with cross attention where queries come from the UNet inner layer activations and key and values come from the conditioning vector.
The method is trained end to end on some simple datasets and is shown to learn disentangled representations, unsupervised. Results are compared to some existing disentanglement methods with favourable results. It is also qualitatively shown that at least on simple datasets the method works well - each factor in the learned representation learns a different source of variability in the data - colour, orientation etc.
Strengths: This model delves into a relatively underexplored side of diffusion models and while not particularly original is an interesting specific and simple (in a good way) combination of existing methods. See below for some reservations about the originality (though not a major decision factor here).
The paper supports, on quite a small scale, most of its claims. The experiments are mostly well run and analysis and ablation is adequate. I particularly liked the attention visualization figures.
The paper is well structured, figures are clear and all in all well presented. I found the language and writing a bit dense however.
Weaknesses: All in all this is a nice paper which suffers from a few (sometimes major) weaknesses.
Disentanglement is a thorny subject when it comes to larger scale experiments. Not just technically, but because when data is more complex the actual definition of what are the disentangled factors becomes blurrier and blurrier. In that sense the paper is inherently in trouble - because the paper focuses so much on disentanglement there is almost no point of me criticising the small scale of experiments, but I will still do it. Using such small datasets really does take away from the potential strength of the paper - these toy-ish datasets are good to get a general idea of how a method works, but I don't think that it can be the final experimental set in today's standards. Gains on "disentanglement" metrics don't mean much in my opinion, and the differences between different model on such simple data is, I think, negligible.
That being said, even for this specific problem setup, there are still issues at hand - the use of "latent" diffusion, as opposed to pixel diffusion, is both surprising, not explained in the paper and not analyzed properly. I would expect to see results directly on pixels, taking into account the potential of the LD encoder to perform much of the work of disentanglement. No appropriate experiments is shown in the paper, and the LD use is taken for granted.
Finally, there are some missing works and baselines that at least should have been discussed, if not compared to in the paper: DIPVAE (Variational inference of disentangled latent concepts from unlabeled observations. 2018), and more recently SODA (https://arxiv.org/abs/2311.17901)
Technical Quality: 3
Clarity: 3
Questions for Authors: See above.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: These are mostly discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your constructive suggestions, and the appreciation of the method, adequate experiments, attention visualization, and paper structure. We understand your concerns and we aim to move a small step to advance this field and inspire more future works. Please find our detailed responses below.
**Q1**: Disentanglement is a thorny subject when it comes to larger scale experiments. Not just technically, but because when data is more complex the actual definition of what are the disentangled factors becomes blurrier and blurrier. In that sense the paper is inherently in trouble - because the paper focuses so much on disentanglement there is almost no point of me criticising the small scale of experiments, but I will still do it. Using such small datasets really does take away from the potential strength of the paper - these toy-ish datasets are good to get a general idea of how a method works, but I don't think that it can be the final experimental set in today's standards. Gains on "disentanglement" metrics don't mean much in my opinion, and the differences between different model on such simple data is, I think, negligible.
**A1**: Thank you for your feedback. As commented by you, disentanglement is a thorny subject when it comes to larger scale experiments because the actual definition of what are the disentangled factors becomes blurrier and blurrier. We agree that disentanglement is still at the dawn of its development. It is still a very challenging but valuable field, with great potential to reshape the technique revolution. We endeavor to move a step forward to expand the boundary, even though we are still far from the terminus.
In this paper, we introduce a new perspective and framework, demonstrating that diffusion models with cross attention can serve as powerful inductive bias to facilitate the learning of disentangled representations. **The experimental results (see results in Table 2) on the real-world dataset CelebA, which contains 10,177 number of identities with 202,599 number of face images, reveals our method’s potential to extend to more complex datasets.** Compared with the state-of-the-art approach DisDiff [37], our EncDiff significantly improves the disentanglement performance from 0.305 to 0.638 in terms of TAD on CelebA.
We made some investigation during the rebuttal on some other real world data. Based on the key idea/perspective of this work, we designed a similar framework, which inherits the exploration of the inductive biases of time-varying bottleneck and cross attention of diffusion model, to study the disentanglement of semantics from images. We show the framework and results in **our rebuttal pdf file**. The target is to disentangle concepts or properties (color, long-hair, big-eared) from the inverted objects (white dog). If the concept were disentangled, we can combine the concepts from different instances to create new objects (e.g., white big ear dog: combine white dog and big-eared dog). Figure 2,3,5 demonstrated that properties (color, long-hair, big-eared) are learned in our framework where the concepts are correctly swapped.
We will conduct more deeper study on complex datasets in future.
We hope our work will inspire further investigations on diffusion for disentanglement to address the more sophisticated data analysis and understanding.
**Q2**: I would expect to see results directly on pixels, taking into account the potential of the LD encoder to perform much of the work of disentanglement.
**A2**: Thanks for your insightful suggestion. As described in Section 3.2.1 line 156, our analysis also holds in pixel space. Following your suggestion, we trained EncDiff directly on pixel space on the Shapes3D dataset, and show the disentanglement results in Table B below. Our framework in pixel space still achieves excellent disentanglement performance, demonstrating the LD encoder is not the key for achieving disentanglement. We will conduct experiments on other datasets and add more analysis in our revision.
Table B: Disentanglement performance of our EncDiff in pixel space (EncDiff pixel) and latent space (EncDiff) in terms of factor VAE score, and DCI (both the higher the better), evaluated on the Shapes3D dataset.
| Models | factor VAE score | DCI |
| :-----:| :----: | :----: |
| EncDiff pixel |1.0 ± 0. | 0.981 ± 0.015 |
| EncDiff |0.999 ± 0.000 | 0.969 ± 0.030|
**Q3**: There are some missing works and baselines that at least should have been discussed, if not compared to in the paper.
**A3**: Thank you very much for pointing out the two related works. We will add the following discussion in the related work section: DIP-VAE introduces a regularizer on the expectation of the approximate posterior over observed data, by matching the moments of the distributions of latents. The recent work SODA leverages a diffusion model for representation learning, revealing its capability of capturing visual semantics. In this work, we analyze and identify two valuable inductive biases of diffusion that promote the disentangled representation learning in our framework.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you taking the time to answer my concerns.
Beyond the basic limitations of disentagnlement papers most of my concerns have been answered and I am raising my score.
---
Rebuttal 2:
Comment: Thank you very much for your great efforts in reviewing our paper and responses. We appreciate your thoughtful and constructive feedback and will incorporate the suggestions into our revision. | Summary: The paper proposes a novel method that utilizes a concept-extracting image encoder and the cross-attention mechanism in conditional diffusion models for achieving the learning of disentangled representations. Comprehensive experiments, visualizations and ablation studies confirm the effectiveness of the proposed method.
Strengths: I find this paper a strong submission.
*Novelty*. To the best of my knowledge, utilizing cross-attention with features from an image encoder to achieve feature disentanglement is a novel approach.
*Presentation*. The paper is overall well-written and organized. The figures for framework/concept demonstration are also quite clear.
*Good intuition and solid experiment*. The proposed method is well-motivated and the paper demonstrates strong empirical performance. Furthermore, the empirical observations validate the functionality of each component in the proposed method, making the paper more sound.
Weaknesses: I don't see any major flaws in the paper.
*Typos and writing related*.
* Line 174, it would be better to briefly describe why this information bottleneck promotes disentanglement instead of just cite other papers.
* Line 282, "utilizing reconstruction l2 loss is used to optimize the entire network." utilizing and used are redundant.
* In 4.1, Implementation Details does not mention details about the diffusion model, readers can mistakenly think that diffusion models are not trained.
* I don't understand the first ablation study "Using Diffusion as Decoder or Not". The designed experiment seems to remove the upper half of a U-Net model, why does this provide evidence about the importance of diffusion?
* Table 6 only list computational complexity of diffusion-based method, it would be good to also include other baseline methods compared in Table 1.
Technical Quality: 3
Clarity: 3
Questions for Authors: * The current method is trained end-to-end and the authors mention that achieving disentanglement in complex data is still hard. Is it possible to fine-tune existing pre-trained LDM using the proposed approach for achieving better disentanglement in complex data?
* In the third ablation study, why does scalar-valued perform better? It seems that the vector-based can potentially extract more information and hence perform better? Can the author further elaborate here?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your appreciation and recognition of our work regarding the novelty, writing, and strong performance. We will incorporate your helpful suggestions into our revision.
**Q1**: Typos and writing related.
**A1**: Thank you very much for your helpful suggestions. We will clarify/rewrite those in our revision.
**Line 174**: We will add more explanation as: The information bottleneck promotes disentanglement by forcing the model to efficiently compress the input data into a limited latent space. This constraint encourages the model to represent distinct and independent features of the input in separate latent dimensions. As a result, each latent variable tends to capture a unique aspect of the data, leading to disentangled representations where different latent variables correspond to different generative factors.
**Line 282**: We will remove “is used”.
**Details in 4.1**: We will move such crucial details from Appendix D to Section 4.1.
**About ablation on Using Diffusion as Decoder or Not**: We will make it clearer in our revision. We designed a variant (EncDec w/o Diff) of EncDiff to have an autoencoder-like structure, by reusing the image encoder as encoder and the lower half of the U-Net structure as decoder for reconstruction. In contrast to EncDiff, we discard the multiple step diffusion process and only run once feedforward inference to get the reconstruction. If the autoencoder’s performance drops significantly, this will provide evidence about the importance of the diffusion process instead of the U-Net architecture.
**Computational complexity on more methods**: The computational complexity of the VAE-based and GAN-based methods are also listed as follows. The computational complexity and inference time VAEs and GANs still have strength, but diffusion has much better generation and disentangling ability. Among the diffusion models, our EncDiff has better generation and disentangling ability but less computational cost and inference time.
Table A: Computational complexity comparison.
| Models | Params (M) | FLOPs (M) | Time (s) |
| :-----:| :----: | :----: |:----: |
| FactorVAE | 11.9 | 892.1 | < 1|
| BetaTCVAE | 7.9 | 542.1 | < 1|
| DisCo | 12 | 907.2 | < 1|
**Q2**: Is it possible to fine-tune existing pre-trained LDM using the proposed approach for achieving better disentanglement in complex data?
**A2**: Thank you for the helpful suggestion. We agree and believe that leveraging the pre-trained LDM would ease the disentanglement in complex data. Due to limited rebuttal time for implementation, we will add the studies in our revision.
**Q3**: In the third ablation study, why does scalar-valued perform better? It seems that the vector-based can potentially extract more information and hence perform better? Can the author further elaborate here?
**A3**: Consistent with your understanding, we think that the vector-based representation potentially extracts more information and hence enforces a looser bottleneck than scalar-valued representation. Please note that the more information encoded, there is a higher probability that the encoded information is correlated, which is contradictory for disentanglement. Therefore, the tighter bottleneck from scalar-valued representation leads to (a slightly) better performance. We will add more explanation in the revision.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I thank the authors for the rebuttal.
After reading it along with other reviews, I'll keep my original score since I believe this is a strong submission. In the meantime, due to the somewhat simplified experimental setting, I'll not raise the score further.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your great efforts in reviewing our paper and responses, and the recognition of our work! We appreciate your valuable feedback and will incorporate these good suggestions into our revision. | null | null | Rebuttal 1:
Rebuttal: Dear Reviewers,
We sincerely appreciate the time and effort you have invested in reviewing our manuscript. Your constructive feedback has been instrumental in identifying areas for improvement, and we are grateful for your positive feedback on paper novelty (Reviewer HVuH, Mm6Q), good intuition (Reviewer HVuH, Mm6Q), strong (Reviewer HVuH, Mm6Q) performance and adequate ablation (Reviewer HVuH, rNx8), and paper presentation (Reviewer HVuH, rNx8, Mm6Q).
We have carefully considered each of your comments and suggestions, and below, we provide detailed responses to address the concerns raised. We believe that incorporating your valuable insights will significantly enhance the quality and clarity of our paper.
We are committed to making the necessary revisions and are eager to engage in further discussions. Your additional questions or concerns are most welcome, as they will help us refine our work to meet the high standards of the conference.
Thank you once again for your invaluable input.
Best regards,
All authors
Pdf: /pdf/92dbc18520c5b58bcc6e746dfa8e225912454468.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Wild-GS: Real-Time Novel View Synthesis from Unconstrained Photo Collections | Accept (poster) | Summary: Wild-GS proposes a heuristic appearance decomposition strategy to deal with arbitrary images captured in the wild. Specifically, the authors decompose the appearance of each Gaussian into three components: global appearance, local appearance, and intrinsic features. Compared to existing methods, this paper achieves the highest visual quality and the fastest training and rendering speed.
Strengths: 1. the paper is well-written and easy to understand.
2. this paper proposes a novel hierarchical appearance decomposition method, achieving high-quality appearance transfer from arbitrary images (no matter inside and outside the training dataset).
3. Wild-GS achieves SOTA performance on three in-the-wild datasets compared with baselines (NeRF-W, Ha-NeRF, CR-NeRF)
Weaknesses: 1. **Lack of Novelty**. First, this task is boring because it has already been successfully addressed in NeRF, making it likely that it can also be applied to 3D-GS. Second, in line 67, the authors summarize the contributions into four points. However, I believe that the first, second, and fourth contributions can be considered as a single contribution, while the third contributions are inherited from 3D-GS. Therefore, this paper can be seen as having only one main contribution. Third, although the authors discuss the comparison with concurrent in-the-wild 3D-GS works, they do not directly compare with them due to the absence of released code. However, Scaffold-GS [1] and Octree-GS [2] can also handle in-the-wild images effectively due to their appearance MLP. Additionally, VastGaussian [3] also proposes an appearance embedding module by the CNN network. Considering this, I believe that Wild-GS may not surpass them in terms of visual quality if not consider the transient objects.
2. **Confused of Module Design**. I struggle to understand the motivation for using a triplane to represent local appearance. In line 14, the authors clarify that they aim to explicitly align pixel appearance features with corresponding local Gaussians. This raises a question: it appears that the authors require only a continuous volume representation. Therefore, any representation that provides a continuous volume, such as triplane, vector and plane components like TensoRF [4] or hash-encoding, could be suitable.
3. **Some sentences seem to overstate their claims.** In lines 160 and 182, the authors describe a local appearance design intended for physical interactions, such as distinct specular highlights and shadows. However, I have not seen any experiments that demonstrate this, and there is even no ablation experiment without the local feature. Similarly, in lines 161 and 216, the authors explain that they maintain a learnable intrinsic feature, which is said to represent inherent material properties. I am curious to know what the results would be if there were no global and local appearance features.
[1] Scaffold-gs: Structured 3d gaussians for view-adaptive rendering
[2] Octree-gs: Towards consistent real-time rendering with lod-structured 3d gaussians
[3] VastGaussian: Vast 3D Gaussians for Large Scene Reconstruction
[4] Tensorf: Tensorial radiance fields
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. I am curious to know about the visual quality, training speed, and rendering speed compared to Scaffold-GS.
2. If the triplane representation is replaced with hash encoding, how would the performance be affected?
3. It would be beneficial if the authors provided an ablation experiment without the local feature, as it can demonstrate the effectiveness of the local appearance design in modeling distinct specular highlights and shadows.
4. I am interested in knowing the results if there were no global and local appearance features, as it would provide insights into the quality of the decomposition.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: the authors discuss the limitation in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your thoughtful comments. Below, we address your questions and concerns. Similar questions and weaknesses are merged.
---
**W**: " This task is boring because it has already been successfully addressed in NeRF, making it likely that it can also be applied to 3D-GS."
**A**: Even though several existing baselines based on NeRF have improved the performance for handling in-the-wild photo collections, they all show common issues in appearance modeling and training or inference efficiency. Simply replacing NeRF with 3DGS cannot give good results, considering the former designs are mainly suitable for implicit representation. Therefore, it is worth studying how to reasonably adapt 3DGS to handle in-the-wild photos following the nature of 3DGS without losing too much of its efficiency. Our experimental results show that our model surpasses previous SOTA by a big margin and significantly improves the capacity of 3DGS for handling in-the-wild images.
---
**W**: "Although the authors discuss the comparison with concurrent in-the-wild 3D-GS works, they do not directly compare with them due to the absence of released code."
**A**: During the review process, one of the concurrent work (GS-W [1]) released its code. Thus, we provide the comparison results in the pdf of our rebuttal, please check it. Our model still performs better than other methods.
[1] Gaussian in the Wild: 3D Gaussian Splatting for Unconstrained Image Collections
---
**W**: "Scaffold-GS and Octree-GS can also handle in-the-wild images effectively due to their appearance MLP. Additionally, VastGaussian also proposes an appearance embedding module by the CNN network."
**A**: Our design in Wild-GS is mainly from the external, so one can change the original 3DGS to other advanced 3DGS models, such as Scaffold-GS and Octree-GS, freely. We will provide the results by replacing the 3DGS with Scaffold-GS in the Appendix for the reader's benefit. VastGaussian implements appearance transfer after the rendering, which is not suitable for 3DGS. To maintain the high-speed rendering of 3DGS, we should move all the design components before the rendering starts (the sh coefficients can be cached).
---
**W**: "I struggle to understand the motivation for using a triplane to represent local appearance. Any representation that provides a continuous volume, such as triplane, vector and plane components like TensoRF or hash-encoding, could be suitable."
**A**: Triplane is bridge between 2D and 3D, where one can process 3D information in 2D space. Even though 3D representations, such as voxel or octree, can model more complex scenes, they will involve a 3D network to process the 3D information. Compared with 2D networks (2D UNet), 3D networks will introduce more time and space complexities. Therefore, we choose triplane to process and represent the 3D local appearance. However, the decomposition in TensoRF and the structure of hash-encoding are too complex and cannot be processed and predicted by a simple 2D network.
---
**W**: "The authors describe a local appearance design intended for physical interactions, such as distinct specular highlights and shadows. However, I have not seen any experiments that demonstrate this, and there is even no ablation experiment without the local feature."
**A**: The Fig. 7 in the Appendix of the main paper contains the results for local appearance modeling. Also, the ablation experiment results without the local feature are provided in the pdf of our rebuttal. Please refer to it.
---
**W**: "I am curious to know what the results would be if there were no global and local appearance features."
**A**: Table 1 and Fig. 5 in the main paper give the results for w/o global features. Table 1 and Fig. 1 in our rebuttal pdf present the results for w/o local appearance features and w/o intrinsic feature. Please refer to it.
---
**Q**: "I am curious to know about the visual quality, training speed, and rendering speed compared to Scaffold-GS."
**A**: Scaffold-GS can replace the original 3DGS in our method for potentially better performance. Also, we cannot find an in-the-wild dataset that only has the appearance variations between images.
---
**Q**: "If the triplane representation is replaced with hash encoding, how would the performance be affected?"
**A**: Hash encoding cannot be simply processed and predicted by a 2D Network.
---
**Q**: "It would be beneficial if the authors provided an ablation experiment without the local feature, as it can demonstrate the effectiveness of the local appearance design in modeling distinct specular highlights and shadows."
**A**: Similar question in Weakness. Please refer to the former answers.
---
**Q**: "I am interested in knowing the results if there were no global and local appearance features, as it would provide insights into the quality of the decomposition."
**A**: Similar question in Weakness. Please refer to the former answers.
---
---
Rebuttal Comment 1.1:
Comment: Thanks for the author's sincere reply. However, my concern remains partially unresolved.
1. Due to the similarity of the depth regularization and transient objects module with other works, I believe it would be more beneficial to implement a version on Scaffold-GS and then conduct a fair comparison with it, instead of merely stating that "we cannot find an in-the-wild dataset that only exhibits appearance variations between images." That's the correct way to demonstrate novelty and performance.
2. I am still confused as to why triplane cannot be replaced by hash encoding. While the authors argue that hash encoding cannot be easily processed and predicted by a 2D Network, I believe that it's the same with tri-plane.
3. Local appearance is intended to model the specific image's appearance, and I acknowledge its performance. However, the claim regarding "distinct specular highlights and shadows for physical interactions" appears to be overstated.
---
Rebuttal 2:
Comment: Thanks for your prompt and valuable response. We address your remaining concerns below:
---
**Concern 1**: "It would be more beneficial to implement a version on Scaffold-GS and then conduct a fair comparison with it."
**Answer**: Following NeRF-W [1], this work focuses on the reconstructing 3D scene from unconstrained photo collections, such as tourism photos over the internet, where the transient objects appear constantly in the existing datasets. However, we agree including Scaffold-GS as additional baseline will be beneficial to better showcase novelty.
***Implementation details***:
Based on the suggestion, we implement an in-the-wild version of Scaffold-GS: (a) For each image, there will be a learnable appearance embedding, and it will be concatenated with original Gaussian features to serve as the input for the color MLP; (b) 2D UNet is leveraged for transient mask prediction and learned in the same way as Wild-GS; \(c\) The appearance embedding should be optimized for each given reference image in the inference stage and this process takes around 5 seconds. Except (b), the entire process is similar to NeRF-W, and we call this version Scaffold-GS-W. We conduct experiments on the six datasets used in the main paper and rebuttal and provide the results below:
Results on Main Paper Datasets:
| Dataset | PSNR | SSIM | LPIPS |
|--------------------|-------|--------|--------|
| Brandenburg Gate | 25.55 | 0.9193 | 0.1106 |
| Sacre Coeur | 22.77 | 0.8695 | 0.1352 |
| Trevi Fountain | 22.08 | 0.7931 | 0.1684 |
Results on Additional Datasets:
| Dataset | PSNR | SSIM | LPIPS |
|-----------------------|-------|--------|--------|
| Palace of Westminster | 23.32 | 0.8611 | 0.1792 |
| Pantheon Exterior | 23.48 | 0.8637 | 0.1219 |
| Buckingham Palace | 24.90 | 0.8890 | 0.1436 |
Model Efficiency on Single GPU:
| Metric | Value |
|-----------------|---------|
| Training Time | 0.15 hrs|
| Rendering Speed | 192 FPS |
***Observations***:
(a) Scaffold-GS-W significantly outperforms 3DGS-AE (in the rebuttal), especially in SSIM and LPIPS, indicating better reconstruction of local textures and structures.
(b) There is still a big margin on evaluation metrics between Scaffold-GS-W and Wild-GS, even though Wild-GS is based on the original 3DGS.
\(c\) Scaffold-GS-W's inference process is more time-consuming due to the per-image optimization, while Wild-GS offers faster inference by parsing appearance in a single forward pass.
(d) Wild-GS keeps the inference speed of 3DGS, while Scaffold-GS-W shows slightly slower rendering on these datasets compared with 3DGS.
Given the superior reconstruction capabilities of Scaffold-GS over 3DGS, we believe integrating our hierarchical appearance modeling with Scaffold-GS could further enhance Wild-GS's performance. This potential improvement will be discussed in the conclusion and future work section. All the results above will be included in the paper.
[1] NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections
---
**Concern 2**: "I am still confused as to why triplane cannot be replaced by hash encoding. While the authors argue that hash encoding cannot be easily processed and predicted by a 2D Network, I believe that it's the same with tri-plane."
**Answer**: Triplane can be processed by 2D networks because each plane (i.e., xy, yz, zx) in triplane is a spatially continuous 2D plane that acts as one "2D projection" of the 3D scene. Using a 2D network (i.e., UNet) to create and process these 2D planes is known to be effective in recent works [1-3]. In contrast, hash encoding [4] is inherently a dictionary/look-up table, which does not have a meaningful 2D spatial structure. Therefore, it cannot be processed by a 2D network that relies on spatial operations.
[1] 3D Neural Field Generation using Triplane Diffusion
[2] Tri-Perspective View for Vision-Based 3D Semantic Occupancy Prediction
[3] RODIN: A Generative Model for Sculpting 3D Digital Avatars Using Diffusion
[4] Instant neural graphics primitives with a multiresolution hash encoding
---
**Concern 3**: "Local appearance is intended to model the specific image's appearance, and I acknowledge its performance. However, the claim regarding "distinct specular highlights and shadows for physical interactions" appears to be overstated."
**Answer**: Thanks for your suggestion. We will tone down this statement in the paper.
---
---
Rebuttal Comment 2.1:
Title: Response to the Reviewer's weaknesses
Comment: I thank all reviewers for their diligent efforts in reviewing. However, I publicly condemn expressing such statements as the following:
> First, this task is boring because it has already been successfully addressed in NeRF, making it likely that it can also be applied to 3D-GS.
- It is inappropriate to label the task as "boring" simply because similar issues have been addressed in NeRF. As researchers, our role is to objectively assess scientific work without letting personal biases, such as deeming a topic "boring", influence our judgment.
- The assertion that problems have been resolved by NeRFs is inaccurate. While initial settings like NeRF-in-the-Wild have been explored, there remains substantial potential for improvement in areas such as quality, editability, and training/inference performance. Labeling any problem as "solved" by a particular approach oversimplifies the complexities of scientific research.
- The potential applicability of a theory or method, such as Gaussian Splatting's relation to NeRF through the NTK perspective, warrants investigation. It is crucial to explore these avenues thoroughly, regardless of preliminary assumptions about their success.
I appreciate the authors for providing additional results with their 3DGS-AE, which contributes valuable insights to the field.
---
Rebuttal 3:
Comment: Thanks for all the comments and discussions from all the reviewers. we would like to continue to address the remaining concerns from Reviewer E9p7:
---
**R:** "Could you clarify why there remains a significant gap in evaluation metrics between Scaffold-GS-W and Wild-GS? Is it due to the inefficacy of using MLP to model appearance?"
**A:** No, Wild-GS also utilizes MLP for appearance prediction. We think directly applying the existing appearance modeling methods used in NeRF (i.e., learnable appearance embedding in NeRF-W) to 3DGS without consideration of the explicit and discrete nature of this new representation is suboptimal, which is the reason why there is a gap between Scaffold-GS-W and Wild-GS. Our appearance modeling pipeline is more advanced and effective, and this is one of our contributions in this paper.
---
**R:** "I believe the field of novel view synthesis could benefit from fresh insights. There's been a lot of work published recently, all tackling similar problems with the same datasets, which feels somewhat incremental. Research shouldn't just focus on incremental performance improvements."
**A:** First of all, we believe our research is **not incremental**, and we do bring new insights to the field:
1) We are the first method that makes real-time rendering from in-the-wild image set possible, and the fast training and real-time rendering performance of Wild-GS will significantly propell the real-world applications of novel view synthesis from unconstrained photo collections.
2) Our novel appearance modeling pipeline follows the explicit and discrete nature of 3DGS and will inspire the following works on improving the model performance for this task.
3) The extensive experimental analysis and study will serve as a good starting point and expedite the research in this direction.
Secondly, new datasets or new tasks are crucial for advancing a specific research field. However, improving existing tasks using established datasets is equally important for solidifying research progress. For instance, the enhanced performance of the YOLO series [1-4] in real-time object detection has significantly propelled the application of vision models in the real world. Similarly, the evolution of CNN architectures from ResNet [5] to ConvNeXT [6] has laid the groundwork for many vision tasks in recognition, detection, and segmentation. Recently, numerous impressive works [7-10] have adapted 3DGS to existing tasks in novel view synthesis, greatly advancing 3D vision. Therefore, we believe that innovative designs that improve model performance on established tasks are still worth exploring.
[1] You Only Look Once: Unified, Real-Time Object Detection
[2] YOLO9000: Better, Faster, Stronger
[3] YOLOv3: An Incremental Improvement
[4] YOLOv4: Optimal Speed and Accuracy of Object Detection
[5] Deep Residual Learning for Image Recognition
[6] A ConvNet for the 2020s
[7] Human Gaussian Splatting: Real-time Rendering of Animatable Avatars (CVPR 2024)
[8] Text-to-3D using Gaussian Splatting (CVPR 2024)
[9] Dynamic 3D Gaussians: Tracking by Persistent Dynamic View Synthesis (CVPR 2024)
[10] DNGaussian: Optimizing Sparse-View 3D Gaussian Radiance Fields with Global-Local Depth Normalization (CVPR 2024)
---
**R:** "After reading the paper and especially watching the supplementary video, I found the approach rather uninspiring, particularly for someone who's been in this field for years."
**A:** As we discussed in the former questions, directly applying existing methods from NeRF to 3DGS is suboptimal, and our Hierarchical Appearance Modeling approach follows the nature of 3DGS and accomplishes more accurate appearance modeling and transfer capabilities than existing methods. Also, the entire Wild-GS pipeline is highly efficient by providing very fast training and similar inference speed with 3DGS, which other existing and concurrent works cannot reach. Therefore, we believe Wild-GS will **inspire the following works on the design of model architecture and appearance modeling pipeline**.
---
**R:** "While this method may be somewhat effective, the results aren't surprising and feel incremental, which I believe falls short of the standard for NIPS."
**A:** Wild-GS significantly outperforms existing state-of-the-art models on this task. For instance, compared to CR-NeRF [1], **Wild-GS achieves an approximately 3 PSNR increase while reducing training time by 200 times and increasing rendering speed by 10,000 times**. Even compared with concurrent works, Wild-GS still presents better results. Therefore, from the perspective of experimental results, we still think Wild-GS is **not incremental** and will serve as a good foundation for the following works.
[1] Cross-Ray Neural Radiance Fields for Novel-view Synthesis from Unconstrained Image Collections (ICCV 2023)
---
Please let us know if you have any other technical concerns so that we can address them on time.
--- | Summary: The authors propose a method that adopts recently introduced Gaussian Splatting to work in an in-the-wild setting. The major contribution introduces a decoupling between the global and local changes to the splats. A part of the framework shows how to leverage a given point cloud (from the camera calibration) to condition splats so that they can reproduce local variability in the scene. The experiment results show that the proposed approach improves over the past works, and the introduced components are necessary to obtain them.
Strengths: - The proposed method is novel regarding the Gaussian Splatting applications,
- The qualitative and quantitative results show that the method performs better than the selected baselines.
- Additionally, the ablation study clearly shows that all the components are necessary to obtain the presented results.
- The extraction of features using a point cloud is an interesting novelty that may be used in future research.
Weaknesses: - The model is complex in terms of the number of used components. It uses a pretrained Depth Anything model (for the depth prediction), a 2D UNet that encodes the reference image into a global descriptor, and a 3D UNet that processes the triplane representation. In such a case, how does a method that learns the representations (the global descriptor and triplane representation as local descriptors) in an auto-decoder version (as in NeRF-in-the-Wild) perform?
- As it is the first method (I am not including the preprints here or recently accepted) that applies Gaussian Splatting to the in-the-wild setting, a simpler baseline would strengthen the evaluation. How would a simple learnable per-image latent concatenated with each Gaussian would perform in such a setting?
- All the chosen baselines have their codebases publicly available. I do not understand why the authors limited their evaluations to 3 scenes only. In contrast, NeRF-in-the-Wild uses 6 scenes in total from the Phototourism dataset.
- Some parts of the paper need further explanation, notably:
- Why is the cropping necessary? It would make sense in the case of triplanes that exceed 1024x1024 (for example) pixels in resolution. However, it seems that such a resolution would suffice.
- Is the compliment learnable vector $v$ the same for all gaussian outside of AABB?
- What does "Efficiency" in Table 1. denote exactly?
Technical Quality: 2
Clarity: 3
Questions for Authors: I will repeat some of the questions asked already in the weaknesses:
- Why is the cropping necessary? It would make sense in the case of triplanes that exceed 1024x1024 (for example) pixels in resolution. However, it seems that such a resolution would suffice.
- Is the compliment learnable vector `v` the same for all gaussians outside AABB?
- What exactly does "Efficiency" in Table 1 denote?
I also suggest the authors improve the mathematical notation. For example, the $I_R[M_{I_R} > Th]$ can be decoupled as $I_R \odot \hat{M}_R$ where $\hat{M}_R = \unicode{x1D7D9}[M_R > \alpha]$. Some symbols can have additional explanations:
- BP in Eq. 6
- $\hat{D}$ in Eq. 10
- $xyz_i$ as $\mathbf{x}_i$
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The limitations of the paper are clearly stated in the supplementary section. No negative societal impacts are mentioned. However, those do not seem to be of high importance. I suggest the authors mention that next to the limitation section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your thoughtful comments. Below, we address your questions and concerns.
---
**W**: "How does a method that learns the representations (the global descriptor and triplane representation as local descriptors) in an auto-decoder version (as in NeRF-in-the-Wild) perform?"
**A**: In the Table 1 of the pdf (our rebuttal), we provide the results for 3DGS-AE (change the encoding (global and local) components to a learnable embedding for each image, work as NeRF-in-the-Wild but keep other components of Wild-GS). As you requested, we also tried to provide a triplane representation (3x1024x1024x32xN) for each image. However, this implementation will cause the out-of-memory issue since the number of training images (N) is too large (around 1000).
---
**W**: "How would a simple learnable per-image latent concatenated with each Gaussian would perform in such a setting?"
**A**: Please refer to the former question. We will include this baseline (3DGS-AE) in the main paper.
---
**W**: "Why the authors limited their evaluations to 3 scenes only. In contrast, NeRF-in-the-Wild uses 6 scenes in total from the Phototourism dataset. "
**A**: For a fair comparison with Ha-NeRF and CR-NeRF (they only use these 3 scenes), we provide the results of these 3 scenes in the main paper. I guess the reason they limited the experiments to only 3 scenes is because the URLs to download other scenes used NeRF-in-the-Wild became invalid. Considering this, we select 3 extra scenes, which can still be downloaded now, in the Photorism dataset and provide the results (comparing with concurrent work GS-W) in the pdf of our rebuttal. Please refer to it. The new results will be included in the Appendix.
---
**Q**: "Why is the cropping necessary? "
**A**: In Table 1 of the main paper, we can notice that after cropping the triplane, the training time can be reduced by around 40\% with even a slight improvement in synthesis performance. The reason is that the majority of Gaussian points only occupy small part of the triplane and cropping the triplane will reduce the complexity for sampling and processing.
---
**Q**: "Is the compliment learnable vector the same for all Gaussians outside AABB?"
**A**: Yes, it is the same.
---
**Q**: "What exactly does "Efficiency" in Table 1 denote?"
**A**: The efficiency is quantified by training time (number of hours) and inference speed (frame per second) of different models on a single RTX3090 GPU.
---
**Q**: "Improve the mathematical notation"
**A**: Thanks so much for your suggestion, and we will change the notation according to your advice.
---
**L**: No negative societal impacts are mentioned."
**A**: Thanks for reminding us, and we will mention the societal impacts in the limitation section.
---
---
Rebuttal Comment 1.1:
Title: Response to the Author's Rebuttal
Comment: I appreciate the author's response and the additional results provided. I'm inclined to support the acceptance of the paper for NeurIPS 2024. However, I have reservations about giving a higher score due to certain concerns:
- **The tackled problem**: The issue addressed by this paper is well-established within the academic community, with numerous methods already proposed.
- **Complexity**: The method involves multiple components to achieve the final results, although I still consider the framework to be novel.
- **Datasets**: The datasets utilized have been previously published.
Despite these points, rejecting this paper would be a disservice to both the research community and practitioners. Considering that more specialized conferences like ECCV have later deadlines and would require additional work to reach the standards of CVPR, I advocate for the acceptance of this paper at NeurIPS.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate the insightful, objective, and constructive feedback from Reviewer TYNp, and we are grateful for your support in accepting our paper. | Summary: This paper proposes a new pipeline, which is based on 3D Gaussian Splatting, for in-the-wild rendering. Wild-GS decomposes the appearance into global feature vector, local feature encoded in triplane features and per-Gaussian intrinsic features. Wild-GS achieves best performance on three scenes, while keeping the training and rendering efficiency from 3DGS. However, the novelty is limited since Wild-GS mainly combines existing techniques and replaces NeRF with 3DGS. Besides, the triplane representation may be not suitable if the scenes further scale and become more complex, e.g., self-occlusions.
Strengths: The paper is overall well-written.
Wild-GS achieves better performance than existing methods, while keeping the training and rendering efficiency from 3DGS.
Weaknesses: Limited Novelty. Wild-GS mainly replaces NeRF with 3DGS for the in-the-wild rendering setting and combines existing techniques. Global appearance encoding is estimated in a similar way as Ha-NeRF. Unsupervised visibility mask is also similar to Ha-NeRF. Depth regularization follows FSGS. Combination of triplane features and 3DGS is motivated by previous works, e.g., TriplaneGaussian. Projecting 3D point cloud to generate triplane features is similar to ConvOccNet [1*].
The triplane representation is mainly used in scenes that are not very complex. For example, EG3D works on human faces and objects, while LRM [2*] works on objects. Though the three scenes tested in the paper are buildings, their structure are still relatively simple. If the scenes further scale and become more complex, e.g., self-occlusions, using triplane representation may not be the best choice.
Learned visibility masks should be visualized as Ha-NeRF and CR-NeRF to understand to which extent the model removes transient objects.
[1*] Peng et al. Convolutional Occupancy Networks. ECCV 2020.
[2*] Hong et al. LRM: Large Reconstruction Model for Single Image to 3D. ICLR 2024.
Technical Quality: 3
Clarity: 3
Questions for Authors: Is the encoder of the UNet (pretrained ResNet-18) fixed or finetuned?
To get the point cloud with rendered depth, visibility mask is used to remove transient objects. Is there a warm-up process to start using visibility mask since the learned mask may not be good in the beginning of training?
Are the directions of triplanes manually chosen? In Fig.3, the left triplane re-projection nicely corresponds to the front view of the gate.
Is the UNet to process triplane features also pretrained?
What’s the ablation results of not using intrinsic feature?
For appearance transfer, more details need to be discussed. Is the appearance transferred by replacing the $Emb^g$ with the global encoding of the style image?
More details about the video demo should be given. There are two videos in supplementary. Does each video correspond to 2 reference images? What does reference image look like?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations are discussed in supplementary. For societal impacts, privacy should be concerned since in-the-wild images usually contain people’s faces.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your thoughtful comments. Below, we address your questions and concerns.
---
**Q**: "Is the encoder of the UNet (pretrained ResNet-18) fixed or fine-tuned?"
**A**: It is fine-tuned during the training process, which provides better performance than the fixed version.
---
**Q**: "Is there a warm-up process to start using visibility mask since the learned mask may not be good in the beginning of training? "
**A**: Yes, as we stated in the paper (line 245), in the initial training stage (3k iterations), we do not use the depth regularization and explicit appearance control strategies, whose functionalities are highly dependent on the mask
---
**Q**: "Are the directions of triplanes manually chosen? "
**A**: No, since we project the point cloud to all 6 cube surfaces (all the sides), the direction of the triplane will not affect the results.
---
**Q**: "Is the UNet to process triplane features also pretrained? "
**A**: No, the input for this UNet is 6 channels (not traditional RGB.), so we cannot get the pre-trained version.
---
**Q**: "What’s the ablation results of not using intrinsic feature? "
**A**: We provide this ablation results in the pdf of our rebuttal, please refer to it. The intrinsic feature is very important for appearance modeling.
---
**Q**: "Is the appearance transferred by replacing the 𝐸𝑚𝑏𝑔 with the global encoding of the style image? "
**A**: If the style image is from the dataset (the camera pose is known), we can directly use both global and local encodings for more accurate appearance control. However, if the style image is arbitrary, we should provide an arbitrary camera pose for it to generate the local embeddings, where the global encoding will matter more. We will give more details about this implementation in the Appendix.
---
**Q**: "Does each video correspond to 2 reference images? What does reference image look like? "
**A**: Yes. The video is generated by linearly combining 2 reference images (appearance tuning) and changing the camera pose simultaneously. The reference images will be presented in the Appendix for the reader's benefit.
---
**L**: "For societal impacts, privacy should be concerned since in-the-wild images usually contain people’s faces."
**A**: That is true. We will recommend users mask the people's faces when leveraging in-the-wild images in the Limitation section.
---
**W**: "Wild-GS mainly replaces NeRF with 3DGS for the in-the-wild rendering setting and combines existing techniques."
**A**: Our major contribution in this paper is the establishment of the hierarchical appearance modeling approach following the nature of 3DGS. For each component in this pipeline, one can replace it with different techniques. Through extensive experiments, we finally found that the proposed design is the most effective and efficient, showing better performance than other NeRF-based or 3DGS-based methods.
---
**W**: "The triplane representation is mainly used in scenes that are not very complex."
**A**: Even though other explicit representations, such as voxel and octree, can model more complex scenes, the processing networks for 3D inputs are usually very heavy and require more time or space complexity compared with 2D networks. One of the major merits of triplane is that it can move the 3D operation to 2D for better efficiency, which is why we chose triplane. Based on our hierarchical appearance modeling pipeline, one can change the triplane to different representations for explicit appearance control to meet different requirements.
---
**W**: "Learned visibility masks should be visualized as Ha-NeRF and CR-NeRF to understand to which extent the model removes transient objects."
**A**: In Fig.2 of the pdf (our rebuttal), we provide the visualizations of the learned masks (threshold by 0.5). One can change to other advanced pre-trained networks (such as DINO v2) for better performance. More mask results will be included in the appendix.
---
---
Rebuttal 2:
Comment: Dear Reviewer JabQ,
We are truly grateful for the time and effort you have invested in reviewing our paper. We have submitted our responses to your comments, along with a PDF file of the rebuttal. If you have any further questions or need additional clarification, please leave us a comment. We are keen to address any concerns during the discussion period to ensure our manuscript aligns with your expectations.
Thank you once again for your insightful feedback and guidance.
Warm regards,
The Authors
---
Rebuttal 3:
Comment: Thanks for the authors' reply. My questions are answered. About triplane feature, I agree with Reviewer E9p7 that triplane is limited for large scenes. For example, recent SMERF [1*] also points out the limitation of triplane representation in MERF [2*] for large scenes and thus tries to solve it. Take a city scene for example, where the region of interest may contain many buildings. Using triplane features is certainly insufficient because of occlusions. As suggested by Reviewer E9p7, I think using pipelines like Scaffold-GS is an interesting direction. For the computation complexity, since 3D point cloud of 3DGS is a sparse data structure, many methods [3*, 4*] that are specialized to deal with such sparse voxels can be used to improve efficiency.
[1*] Duckworth et al. SMERF: Streamable Memory Efficient Radiance Fields for Real-Time Large-Scene Exploration. SIGGRAPH 2024.
[2*] Reiser et al. MERF: Memory-Efficient Radiance Fields for Real-time View Synthesis in Unbounded Scenes. SIGGRAPH 2023.
[3*] Choy et al. 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks. CVPR 2019.
[4*] Tang et al. TorchSparse: Efficient Point Cloud Inference Engine. MLSys 2022.
---
Rebuttal Comment 3.1:
Comment: We are pleased to hear that your questions have been addressed.
We agree that replacing the triplane and 2D network with other 3D representations and efficient 3D networks can further enhance the robustness of Wild-GS. Our primary contribution in this paper is the introduction of the appearance modeling pipeline, which involves the decomposition of global, local, and intrinsic features. This approach is particularly suited for 3DGS, and components can be freely replaced with more advanced models based on the specific application scenario of Wild-GS.
Thank you once again for your time and effort in reviewing our paper. | Summary: **Summary
The paper presents a method called Wild-GS, an adaptation of 3D Gaussian Splatting (3DGS) designed for creating realistic novel views from a collection of unconstrained photographs, such as those taken in varied tourist environments. The method addresses the challenges of dynamic appearances and transient occlusions by employing a hierarchical appearance modeling strategy that decomposes the appearance into global and local components, along with intrinsic material attributes for each 3D Gaussian. Wild-GS introduces an explicit local appearance control using triplane representation, which aligns high-frequency details from reference images to 3D space, and incorporates depth regularization and transient object handling to improve geometric accuracy and rendering quality. Extensive experiments demonstrate Wild-GS's superior performance in rendering efficiency and quality compared to existing techniques, with the promise of publicly available code post-review.
Strengths: **Positive points
1. Wild-GS achieves state-of-the-art rendering performance with significantly improved efficiency in both training and inference times.
2. The method uses triplane representation for explicit local appearance modeling allows for the transfer of high-frequency detailed appearance from reference views to 3D space.
Weaknesses: **Negative points
1. How do you densify and prune the 3D Gaussians, what is the starting iteration and ending iteration and the interval iterations? will the inaccurate mask affect the procedure of the densify and pruning? How does the method perform on the in-the-wild-images? The authors can show more reconstructions in the wild and provide high-resolution images in the main text to make the results of the authors' method more convincing.
2. Will increasing L{m} increasing M{ir}, leading to masking everthing?
3. It would be better to introduce the details of evaluation, for example, NeRF-W, Ha-NeRF and CR-NeRF are test with only half of the images, does the authors uses the same evaluation implementation?
4. From the ablation studies, why does the performance of the baseline method, such as w/o depth in Table.1 also performs well, outperforming existing methods with a large margin?
5. how to ensure the extracted global appearance embedding controls the LF appearance changes?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weakness section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your thoughtful comments. Below we address your questions and concerns.
---
**Q**: "How do you densify and prune the 3D Gaussians, what is the starting iteration and ending iteration and the interval iterations?"
**A**: For the densification \& pruning and iteration settings, we directly use the default hyperparameters of the original 3DGS. We will recommend the readers to refer to the original implementation of 3DGS.
---
**Q**: "Will the inaccurate mask affect the procedure of the densify and pruning?"
**A**: Yes, if the mask cannot locate the transient objects, 3DGS will automatically create lots of Gaussians in these areas to represent their complex and dynamic appearance between different views.
---
**Q**: "How does the method perform on the in-the-wild-images?"
**A**: We implemented Wild-GS on three extra datasets consisting of highly "in-the-wild" tourist photos. Please check the quantitative and qualitative results in the pdf of the rebuttal.
---
**Q**: "Will increasing L{m} increasing M{ir}, leading to masking everything?"
**A**: Since there is no annotated ground truth for the training images, we need an unsupervised way to learn the visibility mask. The coefficient of the mask loss is a hyperparameter and should not be too large or too small. When it is small, the training process will become unstable, and the regions that are hard to model will be masked, causing geometry errors and even everything being masked. If it is too large, the transient objects cannot be localized.
---
**Q**: "NeRF-W, Ha-NeRF and CR-NeRF are test with only half of the images, does the authors uses the same evaluation implementation?"
**A**: NeRF-W optimizes the appearance embedding based on the left half of the image, while recent methods such as Ha-NeRF and CR-NeRF directly encode the appearance features using deep CNN from the whole image (based on their code implementation). Wild-GS follows the same setting as Ha-NeRF and CR-NeRF for a fair comparison and we will provide the results using half of the image in the Appendix.
---
**Q**: "Why does the performance of the baseline method, such as w/o depth in Table.1 also performs well, outperforming existing methods with a large margin?"
**A**: The success of Wild-GS mainly originates from the proposed hierarchical appearance modeling. Even though the metric has no significant improvement, the geometry in the rendering results is more accurate. Without depth regularization, sometimes there are some geometry errors in the renderings.
---
**Q**: "How to ensure the extracted global appearance embedding controls the LF appearance changes?"
**A**: Wild-GS applies the extracted global appearance embedding to all the 3D Gaussians to capture the global styles or tones for given reference images, ensuring the appearance changes (determined by the global embedding) between Gaussians are insignificant. This is inspired by previous NeRF-based methods (existing baselines NeRF-W and Ha-NeRF), which leverage similar appearance embedding for all the 3D volume points, and their visualization results successfully capture the global low-frequency appearance changes. As shown in Fig. 5 of the paper, with the extracted global embedding, the tones (low-frequency) of the rendering results and references are almost the same and can be transferred to other viewpoints.
---
---
Rebuttal Comment 1.1:
Comment: Thank the authors for answering my questions. I am still confused about the evaluation with only half of the images. In Table 1 of the manuscript, does the author test with half of the image or use the whole image?
---
Rebuttal 2:
Comment: Dear Reviewer ZrLe,
We sincerely appreciate the time and effort you have dedicated to reviewing our paper. We have submitted our responses to your comments, along with a PDF file of the rebuttal. If you have any additional questions or require further clarification, please let us know. We are eager to address any concerns during the discussion period.
Thank you once again for your valuable feedback and support.
Best wishes,
Authors
---
Rebuttal 3:
Comment: Thanks for your comments.
Following Ha-NeRF and CR-NeRF, we use the whole image to extract the appearance embeddings in Table 1 of the main paper. Ha-NeRF states that "NeRF-W optimizes appearance vectors on the left half of each test image while Ha-NeRF does not", considering NeRF-W "can not hallucinate new appearance
without optimizing during training." Recent methods (Ha-NeRF and CR-NeRF) directly predict the appearance vector/feature instead of optimizing it for each test image so the whole image can be utilized. Since our method also uses direct appearance encoding, for fair comparison, we follow the same setting. However, for the reader's benefit, we will also attach the evaluation results using only half of the image for testing. Here are the comparison results (average value on three extra datasets used in the rebuttal) of Wild-GS using whole and half of the image:
| | PSNR | SSIM | LPIPS |
|-------|:---------:|:--------:|:--------:|
| Whole | 26.4470 | 0.8788 | 0.1415 |
| Half | 26.1326 | 0.8747 | 0.1441 |
There is a slight performance decrease when only half of the image is used. We will also include the results in the manuscript appendix.
If you have any further questions, please let us know. Thanks once again for your efforts.
---
Rebuttal 4:
Comment: Thanks. But there are still some ambiguities.
I understand that extracting appearance requires a whole image.
I only want to know, in Table 1 of the manuscript, in the Brandenburg dataset, do you use half of the image for calculating PSNR SSIM and LPIPS, or the whole image?
For example, for a rendered image $\hat{I}$ and the ground truth image $I$. Do you calculate PSNR using $f_{PSNR}(\hat{I}, I$), or do you use $f_{PSNR}(\hat{I}[:, w//2:], I[:,w//2:]$)? Where w is the width of the image, and $f_{PSNR}$ is for calculating PSNR.
---
Rebuttal Comment 4.1:
Comment: Thanks a lot for your detailed clarification.
We use the whole image to calculate all the metrics, and $f_{PSNR}(\hat{I}, I)$ is used to calculate the PSNR.
Please let us know if you have any other questions, thanks!
---
Rebuttal 5:
Comment: Thanks so much for reminding us of this issue.
The table we provided in the former comment is not **calculated** by the whole and half of the image. We actually misunderstood your question before, and the results reported in that table are generated by using the whole and half of the image to parse the appearance embeddings. Thus, we say, "There is a slight performance decrease when only half of the image is used." because less information is used.
Here are the new results **calculated** by the whole and half of the image:
| | | gate | | | coeur | | | fountain | |
|-------|:-----:|:------:|:------:|:-----:|:------:|:------:|:-----:|:--------:|:------:|
| | PSNR | SSIM | LPIPS | PSNR | SSIM | LPIPS | PSNR | SSIM | LPIPS |
| Whole | 29.65 | 0.9333 | 0.0951 | 24.99 | 0.8776 | 0.1270 | 24.45 | 0.8081 | 0.1622 |
| Half | 29.92 | 0.9354 | 0.0843 | 25.01 | 0.8797 | 0.1207 | 24.31 | 0.8093 | 0.1593 |
Similar to previous works, these two results do not significantly differ. However, with a scientific and rigorous attitude, we need to greatly thank the reviewer for mentioning this issue. We will replace the metrics in Table 1 of the main paper with these new results on **the right half of the image** ($[w/2, h]$).
---
Rebuttal Comment 5.1:
Comment: Thank you for addressing my concern. I will keep my score as borderline accept.
---
Reply to Comment 5.1.1:
Comment: We are very pleased to be able to address your concern. Thanks a lot for your effort in reviewing our paper. | Rebuttal 1:
Rebuttal: We thank all reviewers for their valuable comments. We are encouraged that:
- our novelty is recognized (TYNp)
- the superior performance is appreciated (ZrLe, JabQ, TYNp, E9p7)
- the writing clarity is accredited (JabQ, E9p7)
We have tried our best to respond to all the valuable concerns. Please refer to the attached PDF containing figures and tables for the requested experiments.
Pdf: /pdf/fca56a563d29fbbb4a3ff34129ce6570c39f5691.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Diffusion-Inspired Truncated Sampler for Text-Video Retrieval | Accept (poster) | Summary: The paper introduces a new method, Diffusion-Inspired Truncated Sampler (DITS), designed for text-video retrieval tasks. It addresses the primary challenge of bridging the modality gap between text and video data, a problem that existing retrieval methods often fail to solve effectively. The authors propose DITS to harness the strengths of diffusion models in reducing this gap and enhancing the alignment between text and video embeddings within a joint embedding space.
Strengths: * The authors investigate the use of Diffusion models for addressing the text-video modality gap and identify limitations in applying standard Diffusion models to retrieval tasks. Specifically, they highlight that the L2 loss is not well-suited for the ranking problems inherent in retrieval, and that there is a dependency on varied initial points from an isotropic Gaussian distribution.
* Extensive experiments on five benchmark datasets demonstrate DITS's state-of-the-art performance. The method shows flexibility in adjusting the retrieval scope over time and improves the structure of the CLIP embedding space.
* The authors have committed to releasing the code, which will facilitate further research and application of the proposed method.
Weaknesses: * Generalization ability. The authors highlight the importance of selecting an optimal number of timestamps for the truncated diffusion process. If the number of timestamps is too large, the model might lose its generalization ability.
* Computational efficiency. Computational efficiency is a common concern with Diffusion models, which typically require substantial computational resources for training and inference.
* There are some confusions in the writing. In Section 3.2, it is mentioned that the condition for diffusion is the textual feature. However, in Section 3.3, the condition is not specified. Additionally, in Appendix A.3, the authors state that an empty condition works best.
Technical Quality: 3
Clarity: 2
Questions for Authors: * The authors would do well to make a comparison of inference times.
* Since the timestamps for the truncated diffusion process have a huge impact on performance, could an automated method be devised to determine time steps?
* The performance of unconditional diffusion surpasses that of text or video diffusion, which is counter-intuitive. Could the authors provide further analysis to explain this phenomenon?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The limitations of the method have been discussed in detail.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We much appreciate that Reviewer `vaB9` provides valuable comments and finds the proposed method shows state-of-the-art performance. We are committed to release the training and inference code, as well as the pretrained models.
**`R5.1`**: Generalization ability. The authors highlight the importance of selecting an optimal number of timestamps for the truncated diffusion process. If the number of timestamps is too large, the model might lose its generalization ability.
**`A5.1`**: Thanks for the valuable comment. Since CLIP embedding space offers a good proximity between text and video (L57\~L58) as shown in prior works [R1, R2], the timestamp is not supposed to be too large. We also empirically find that DITS
achieves good performance with a small timestamp, e.g., $T’=10$ on MSRVTT, LSMDC, Charades, and VATEX (L270). Both the previous study [R1, R2] and empirical results suggest that DITS retains a good generalization ability using a relatively small and consistent timestamp setting, being free from the elaborate tuning.
[R1] Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning. In NeurIPS 2022.
[R2] Shifted diffusion for text-to-image generation. In CVPR 2023.
**`R5.2`**: Computational efficiency. Computational efficiency is a common concern with Diffusion models, which typically require substantial computational resources for training and inference.
**`A5.2`**: Thanks for the valuable suggestion. We provide a thorough discussion on the computational efficiency including runtime, resource usage for both training (`Table T1`) and inference (`Table T2`) in the `Global Rebuttal`. Overall, DITS requires comparable GPU resources and time for training (e.g., DITS: 14.90h v.s. DiffusionRet: 212.35h). DITS also reports comparable inference efficiency, such as compared to previous methods of TS2-Net (DITS: 70.17s v.s. TS2-Net: 119.50s) and T-MASS (DITS: 70.17s v.s. T-MASS: 76.41s).
**`R5.3`**: There are some confusions in the writing. In Section 3.2, it is mentioned that the condition for diffusion is the textual feature. However, in Section 3.3, the condition is not specified. Additionally, in Appendix A.3, the authors state that an empty condition works best.
**`A5.3`**: Thanks for the detailed proofreading. For the Diffusion model baseline (Section 3.2), the condition is the textual feature. For the proposed DITS (Section 3.3, Appendix A.3), no condition is needed (empty condition). To clarify, the conventional Diffusion model (Section 3.2, proposed baseline) has to adopt the text condition to guide the learning. Differently, based on the experiment in Appendix A.3, we find an empty condition for DITS works best, thus does not specify condition for DITS in Section 3.3. Notably, DITS and conventional Diffusion models adopt the same text embedding in different ways, DITS takes the text embedding as the starting point, and the diffusion model takes text embedding as the condition. We will make it more clear by pointing out in Section 3.3 that the condition is not used in DITS.
**`R5.4`**: Since the timestamps for the truncated diffusion process have a huge impact on performance, could an automated method be devised to determine time steps?
**`A5.4`**: Thanks for the valuable comment. Devising an automated method for timestamp selection can be an attractive direction and may improve performance. However, our experimental results suggest that setting the timestamp to a small range, e.g., $1\sim15$ yields reasonable performances (Table 7) and within this small range, a general and consistent setting ($T’=10$) enables leading performances across diverse datasets, e,g, MSRVTT, LSMDC, Charades, and VATEX (L270). Thus, we adopted the empirical timestamp setting to retain computational efficiency and method expansibility.
We could provide a potential solution with a bi-level optimization framework. The upper-level can optimize the timestamp hyperparameter and the lower-level can take the timestamp to guide the iterative alignment. One can potentially perform an alternating training strategy with contrastive loss. However, a series of challenges can persist: timestamps may affect the iterative diffusion process non-linearly, introducing the local minima or saddle points. The search space for timestamps can be high-dimensional, considering the broad context on which the timestamp operates. Evaluating different timestamps can be computationally intensive.
Overall, DITS provides a new viewpoint and foundation for future research and we will continue exploring per reviewer’s inspiration. We will add this in the manuscript.
**`R5.5`**: The performance of unconditional diffusion surpasses that of text or video diffusion, which is counter-intuitive. Could the authors provide further analysis to explain this phenomenon?
**`A5.5`**: Thanks for the valuable suggestion. The diffusion process for this task emphasizes ``accurate alignment’’, unlike the general Diffusion models highlighting diversity in creative works. The previous intuition of the diffusion condition may not be applicable in this work. We provide further analysis to elaborate this phenomenon:
* Due to the one-to-many mapping of video-to-text, one video can map to multiple gap vectors. Taking the video as a condition can guide the model learn the undesired modality gap (L599\~L601), empirically dropping the performance.
* DITS start from the text, ensuring the alignment starts from a semantically meaningful initial point. The text condition acts as the constraint or guideline for the learning. Simultaneously applying the text embedding as the starting point and the condition can cause conflicting instructions, empirically decreasing the performance.
* Joint use of text and video conditions can inherent both limitations, empirically leading to an unsatisfactory performance.
We will incorporate the above analysis in the final version.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed reply. My concerns have been addressed. I have also read the comments from the other reviewers and the authors' responses. I think the paper as a whole is interesting, though it would benefit from more detailed descriptions and discussions. Therefore, I will keep my rating.
---
Reply to Comment 1.1.1:
Title: Response To Reviewer vaB9
Comment: We appreciate the reviewer's recognition of our response and support for our work! | Summary: This paper smartly leverage the diffusion model to solve a famous modality gap problem in the a CLIP retrieval problem. The proposed method called DITS set the text embedding as the initial point in the 1D diffusion model and try to generate the video embedding. Finally, by using the contrastive loss to align the generated video embedding and ground truth video embedding to finish the training of the diffusion model. The work show convincible result comparing to existing SOTA methods and the ablation study in Table 4 and 5 show the effectiveness of the method comparing to raw CLIP and existing work DiffusionRet.
Strengths: The idea to leverage the diffusion model to solve modality gap is highly novel.
It is a nice viewpoint to the famous modality gap problem.
After this paper, the problem will highly to be reexamined in the community.
1. The gap should be some intrinsic problem in the multi-modality model training.
2. With this paper, the gap is kind of filled and most of old retrieval methods and work will be re-actived in this research area.
Weaknesses: The figure is too small to read.(miner issue)
Figure 4, it looks like the gap is still large as the mean absolute distance is ~1.28. why it is not some small value likes 0.5?
Is there any investigation on cases, which video retrieval cases got improved and which video cases become worse?
Technical Quality: 4
Clarity: 4
Questions for Authors: check the weakness
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We much appreciate that Reviewer `UKTF` provides valuable comments and finds the idea is novel, showing effectiveness in filling the gap.
**`R4.1`**: The figure is too small to read.(miner issue) Figure 4, it looks like the gap is still large as the mean absolute distance is ~1.28. why it is not some small value likes 0.5?
**`A4.1`**: Thanks for the valuable comment. We will revise the layout of the figure to make it more clear. In the manuscript, Fig. 4 computes the $L\_2$-norm of the modality gap distribution, corresponding to the Euclidean distance. The values (such as mean around 1.28) are scaled by $L\_2$ norm and are non-bounded.
We also provide the mean absolute distance version (using $L\_1$-norm) of the same data in `Fig.F2` (left) in the attached rebuttal PDF. As shown, $L\_1$-norm enables a different scale and is also non-bounded. The difference between “Joint Train” and “Fix CLIP” is more remarkable under the $L\_1$-norm scale.
We also provide the cosine similarities version of the same data in `Fig. F2` (right) in the attached rebuttal PDF. As shown, cosine similarities are bounded (i.e., $[0,1]$). The similarity values are generally enlarged with joint train, being consistent with Fig. 4 and `Fig. F2` (left). We will put above evidence and analysis into the manuscript and provide more illustrations to the values in Fig. 4.
**`R4.2`**: Is there any investigation on cases, which video retrieval cases got improved and which video cases became worse?
**`A4.2`**: Thanks for the valuable comment. We provide investigations and potential intuitions on both retrieval cases when comparing DITS with the raw CLIP baseline in Fig. 4. (1) Since DITS performs the iterative alignment, it can better avoid drastic changes and provide more chances to adjust the misalignment step. We find when the text and video data is more challenging (such as vague texts or blurry videos), DITS can identify the relevant pairs and get the performance improved. (2) Conversely, on the simpler cases where text is rich and informative, or the video is temporally consistent, we did not see a large difference between DITS and the raw CLIP baseline. However, we observe when the caption data is incorrectly annotated, DITS can have a chance to miss the relevant video. We will add the above discussions into the manuscript. | Summary: The authors introduce Diffusion-Inspired Truncated Sampler (DITS) that jointly performs progressive alignment and modality gap modeling in the joint embedding space. Experiments on five benchmark datasets suggest the state19 of-the-art performance of DITS.
Strengths: 1. The motivation is clearly described and easy to understand.
2. This work studies the Diffusion model to bridge the modality gap of text-video retrieval, identifying two key limitations of the vanilla Diffusion model.
3. Extensive experiments on five datasets (MSRVTT, LSMDC, DiDeMo, VATEX, and Cha76 rades) suggest that DITS achieves state-of-the-art performance.
Weaknesses: 1. In Table 1, under the CLIP-ViT-B/32 feature extractor, the author's method has limited performance improvement compared to the comparison method.
2. In Formula 2, these two terms are added, but why are these two the same? They should be different terms added.
3. What does it mean to multiply ϵ, t, c in the norm of Formula 6? This is not standard.
4. The writing of the methods section needs further standardization and improvement.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. In Table 4, do Diffusion and Fine tune parts not need L2 loss? Why does Pretrain have L2 loss, while Fine tune does not need L2 loss and the result is high?
2. In Table 4, does the last row DITS not need L2 loss?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: See the weaknesses section
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We much appreciate that Reviewer `QeV9` finds the motivation is clear and easy to follow, and the proposed method achieves the state-of-the-art performance.
**`R3.1`**: In Table 1, under the CLIP-ViT-B/32 feature extractor, the author's method has limited performance improvement compared to the comparison method.
**`A3.1`**: (1) For the performance comparison, other methods can use larger settings than DITS (L283\~286), e.g., larger batch sizes such as 64 or 128 to help improve their performance. Due to limited computational resources, DITS adopts batch size of 32
(L274) and achieves a remarkable boost, especially compared with the previous diffusion-based method, DiffusionRet (+2.9\% at R@1 on MSRVTT, +4.4\% at R@1 on DiDeMo). (2) DITS enables better scalability. The boost over T-MASS is enhanced (MSRVTT: 2.3\% at R@1, LSMDC: 0.7\% at R@1) when changing CLIP-ViT-B/32 to CLIP-ViT-B/16.
**`R3.2`**: In Formula 2, these two terms are added, but why are these two the same? They should be different terms added.
**`A3.2`**: The two terms in the Eq.2 are different. We follow the standard notations of symmetric-formed cross entropy loss (specifically, InfoNCE loss [R1]) that is used in text-video retrieval domain (including both text-to-video and video-to-text) [R2, R3, R4, etc.]. To clarify, we directly copied it from the manuscript and highlight the difference between two terms below:
$\mathcal{L}\_{\texttt{sce}} = - \frac{1}{B}\sum\limits^{B}\_{i=1}\log\frac{e^{s(\mathbf{t}^{(i)}, \mathbf{v}^{(i)})\cdot \tau}}{\sum\nolimits\_{\textcolor{red}{j}}e^{s(\mathbf{t}^{(i)}, \mathbf{\textcolor{red}{v}}^{\textcolor{red}{(j)}})\cdot \tau}} + \log\frac{e^{s(\mathbf{t}^{(i)}, \mathbf{v}^{(i)})\cdot \tau}}{\sum\nolimits\_{\textcolor{red}{j}}e^{s(\mathbf{\textcolor{red}{t}}^{\textcolor{red}{(j)}}, \mathbf{v}^{(i)})\cdot \tau}}$
* The first term makes a summation over $v^{(j)}$ in the denominator, representing the sum of similarities between the i-th query text $\mathbf{t}^{(i)}$ and all key videos $\mathbf{v}^{(j)}$
* The second term makes a summation over $t^{(j)}$ in the denominator, representing the sum of similarities between the i-th query video $\mathbf{v}^{(i)}$ and all texts $\mathbf{t}^{(j)}$.
We will put more descriptions into the manuscript and are willing to illustrate any point that remains unclear to the reviewer.
[R1] Representation Learning with Contrastive Predictive Coding.
[R2] X-pool: Cross-modal language-video attention for text-video retrieval. In CVPR 2022.
[R3] Unified Coarse-to-Fine Alignment for Video-Text Retrieval. In ICCV 2023.
[R4] Text is mass: Modeling as stochastic embedding for text-video retrieval. In CVPR 2024.
**`R3.3`**: What does it mean to multiply ϵ, t, c in the norm of Formula 6? This is not standard.
**`A3.3`**: We kindly remind the reviewer that $\textcolor{red}{\epsilon}$, $\textcolor{red}{t}$, and $\textcolor{red}{c}$ are not multiplied in Eq.6. $\textcolor{red}{\epsilon}$, $\textcolor{red}{t}$, and $\textcolor{red}{c}$ are three inputs of the denoising network $\textcolor{blue}{\epsilon\_\gamma(\cdot)}$ (L165, L172). As we directly copied from the manuscript and highlight below,
$\mathcal{L}\_\gamma = \mathbb{E}\_{\mathbf{\delta}\_0, t, \epsilon} [||\epsilon - \textcolor{blue}{\epsilon\_\gamma(}\sqrt{\bar{\alpha}}\mathbf{\delta}\_0 + \sqrt{1-\bar{\alpha}\_t}\textcolor{red}{\epsilon}, \textcolor{red}{t}, \textbf{\textcolor{red}{c}}\textcolor{blue}{)}||^2]$
To clarify, (1) using $\epsilon$ to denote the noise (or the corresponding denoising network), $t$ to denote the timestamp, and $c$ to denote the condition are standard representation forms in Diffusion model literatures [R5, R6, R7]. We will add more descriptions toward Eq.6 in the manuscript. We are willing to provide more illustrations to help address further concerns of the reviewer.
[R5] Denoising diffusion probabilistic models. In NeurIPS 2020.
[R6] Adding conditional control to text-to-image diffusion models." In ICCV 2023.
[R7] Noise2Music: Text-conditioned Music Generation with Diffusion Models. Google Research.
**`R3.4`**: The writing of the methods section needs further standardization and improvement.
**`A3.4`**: We appreciate the reviewer’s commitment in helping improve the manuscript. We will consider all comments of the reviewer and carefully revise the method section accordingly.
**`R3.5`**: In Table 4, do Diffusion and Fine tune parts not need L2 loss? Why does Pretrain have L2 loss, while Fine tune does not need L2 loss and the result is high? In Table 4, does the last row DITS not need L2 loss?
**`A3.5`**: Thanks for the valuable comment. “Diffusion pretrain” adopts conventional $L\_2$ loss. “Diffusion Fine tune” adopts the contrastive loss $L\_{sce}$ (L315) instead of $L\_2$ loss. We make it more clear about the loss for different baselines in Table 4 in the manuscript, as copied in `Table T3` in the attached rebuttal PDF. We provide comments below to help the reviewer better locate the key points concerning the ablation study.
* **Diffusion Pretrain**: We firstly study the effectiveness of vanilla Diffusion model with $L\_2$ loss in text-video retrieval, identifying the limitation of $L\_2$ loss (abstract L8\~L10, introduction L48\~L51, method L210\~L218, and experiment L312\~L314).
* **Diffusion Fine tune**: Based on the above observation, we fine tune the Diffusion model with contrastive loss ($L\_{sce}$ in Eq.2), obtaining a performance boost (L315). Contrastive loss jointly considers relevant and irrelevant pairs, calibrating the embedding that is misaligned by the $L\_2$ loss.
* **DITS**: Based on the above observation, DITS adopts contrastive loss and does not need $L\_2$ loss (abstract: L16, introduction L62\~L64, method L238\~L240).
We are willing to provide more illustrations in solving the reviewer’s further concerns. | Summary: The paper tackles the task of text-video retrieval. It aims to address the modality gap between text and video that usually stems in state-of-the-art models. In order to do this, it leverages Diffusion models. So, it introduces DITS that jointly performs progressive alignment and modality gap modeling in the joint embedding space in order to improve the performance. Finally, the authors test the performance of their method on five benchmarks.
Strengths: The paper tackles an important task and achieves good results. I find the idea interesting and the paper is fairly well written.
Weaknesses: While I think that the paper is fairly well written, some parts can be a bit confusing at the first read, though they become clear if you read twice. For example, abstract lines 6-8 first states that Diffusion is used to mitigate the problem, but then there is immediately the claim that there are "flaws" in diffusion. So, I think a rephrasing and introducing a bit later that changes are needed to diffusion in order to make that work would be better.
line 145 "Unfortuntely, we find descent retrieval performances" -> "Unfortunately, there is a decrease in retrieval performance"
The main concern that I have is related to understanding the limitations. I think there should have been a section that discusses the need for additional computational resources, if applicable and how does the running time compare agains non diffusion methods.
Technical Quality: 4
Clarity: 4
Questions for Authors: Is there any additional cost in terms of running time for the proposed method as opposed to other sota methods?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The limitations are discussed in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We much appreciate that Reviewer `fBRP` provides valuable comments and finds the idea is novel and interesting, with good results.
**`R2.1`**: While I think that the paper is fairly well written, some parts can be a bit confusing at the first read, though they become clear if you read twice. For example, abstract lines 6-8 first states that Diffusion is used to mitigate the problem, but then there is immediately the claim that there are "flaws" in diffusion. So, I think a rephrasing and introducing a bit later that changes are needed to diffusion in order to make that work would be better.
**`A2.1`**: Thanks for the reviewer’s commitment in helping improve the manuscript. We rephrase the abstract lines according to the suggestion as follows: ``In this work, we leverage the potential of Diffusion models to address the text-video modality gap by progressively aligning text and video embeddings in a unified space. However, we identify two key limitations of existing Diffusion models in retrieval tasks.’’. We will accordingly revise the introduction to ensure better clarity and coherence. We are committed to making further modifications if any parts of the manuscript remain unclear.
**`R2.2`**: line 145 "Unfortunately, we find descent retrieval performances" -> "Unfortunately, there is a decrease in retrieval performance"
**`A2.2`**: Thanks for the detailed proofreading. We will change this sentence in the manuscript as suggested by the reviewer.
**`R2.3`**: The main concern that I have is related to understanding the limitations. I think there should have been a section that discusses the need for additional computational resources, if applicable and how does the running time compare against non diffusion methods.
**`A2.3`**: Thanks for the valuable suggestion. We provide a thorough discussion on the computational efficiency including runtime, resource usage for both training (`Table T1`) and inference (`Table T2`) in the `Global Rebuttal`. Overall, DITS does not need much additional resources for training and reports comparable efficiency especially with non-diffusion methods.
(1) As DITS starts from the text embedding pre-aligned by CLIP and empirically needs small numbers of iterations, it does not need much additional cost for the training. Specifically, DITS requires comparable GPU usage and time usage compared with non-diffusion methods, including X-Pool, TS2-Net, and T-MASS. (2) For inference, DITS requires comparable GPU memory usage and inference runtime -- as there is no need to refer to the video embedding at each iteration of the alignment, we can compute and cache all the aligned embeddings beforehand and skip the iterative sampling when performing retrieval. We find this implementation strategy makes DITS faster than non-diffusion methods such as TS2-Net (DITS: 70.17s *v.s.* TS2-Net: 119.50s) and T-MASS (DITS: 70.17s *v.s.* T-MASS: 76.41s). We will add the above discussions into the final version.
---
Rebuttal Comment 1.1:
Title: Rebuttal answer
Comment: Thank you for providing additional details! I confirm that I read the rebuttal and I don't currently have other questions
---
Reply to Comment 1.1.1:
Title: Response To Reviewer fBRP
Comment: We appreciate the reviewer's valuable comments. We thank the reviewer's recognition of our rebuttal! | Rebuttal 1:
Rebuttal: # Global Rebuttal
We would like to express our sincere gratitude to all reviewers for the time and effort in reviewing our manuscript. We much appreciate that reviewers find the proposed method novel and achieve good results. We are committed to release the training and inference code, as well as the pretrained models. We will address each reviewer's concern in each rebuttal. Due to the limited space, we provide extra results below.
* We provide an overall discussion on the computational cost and efficiency below.
* We provide `Figure F1` in the attached PDF to address the concern of the similarity change for Reviewer `CwRd`.
* We provide `Table T3` in the attached PDF to make it clear for Table 4 for Reviewer `QeV9`.
* We provide `Figure F2` in the attached PDF to address the concern of Fig. 4 for Reviewer `UKTF`.
**Discussion on the Computational Cost and Efficiency**
We discuss the computational efficiency for both training (`Table T1`) and inference (`Table T2`) to solve reviewers’ concerns. The proposed DITS does not need much additional resources for training and reports comparable inference efficiency. We specifically compare with both non-diffusion methods (e.g., X-Pool, TS2-Net, T-MASS) and diffusion-based method (e.g., DiffusionRet). All training and inference costs are measured with the same computational platform (2$\times$NVIDIA RTX3090 GPU-24GB, Intel i9-10900X CPU). We will add these discussions into the final version.
| Methods | GPU memory (MB) | GPU Request | Training Time (h) |
|----------|-------------|-------------|-------------|
|X-Pool | 18986 | 1 x RTX3090 | 14.67 |
|DiffusionRet (Stage 1) | 27255 | -- | 105.50 |
|DiffusionRet (Stage 2) | 8654 | -- | 106.85 |
|DiffusionRet (Total) | 27255 | 2 x RTX3090 | 212.35 |
|T-MASS| 20390 | 1 x RTX3090 | 14.74 |
|DITS (Ours)| 20950 | 1 x RTX3090 | 14.90 |
**Table T1. Training resource usage comparison with on MSRVTT dataset.**
| Methods | GPU memory (MB) | Inference Time (s) | R@1 |
|----------|----------|----------|----------|
| X-Pool (CVPR 2022) | 5452 | 65.35 | 46.9 |
| TS2-Net (ECCV 2022) | 2835 | 119.50 | 47.0 |
| DiffusionRet (ICCV 2023) | 3375 | 64.23 | 49.0 |
| T-MASS (CVPR 2024) | 5452 | 76.41 | 50.2 |
| DITS (Ours) | 5464 | 70.17 | 51.9 |
**Table T2. Inference time efficiency and GPU usage comparison on MSRVTT dataset.**
For the training, since the proposed truncated sampler DITS (1) starts from an pre-aligned text embedding and (2) empirically needs small numbers of iterations (L270, $T’=10$ on four benchmark datasets), it requires much smaller training time compared with conventional diffusion model-based method. For example, DiffusionRet requires in total over one week to finish the two stage training despite having a large batch size (i.e., 128 in DiffusionRet paper). Besides, DITS enjoys a comparable GPU memory usage, being easily deployed on a single GPU.
For inference, DITS requires comparable GPU memory usage and inference efficiency with previous methods. Since there is no need to refer to the video embedding at each iteration of the alignment, we can compute and cache all the aligned embeddings beforehand. Thus we can skip the iterative sampling when performing retrieval. We find this method makes DITS faster than non-diffusion methods such as TS2-Net and T-MASS.
Pdf: /pdf/8d53f5d0600c357ead2483846a179cca32bec667.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper introduces a novel method to address the challenge of bridging the modality gap between text and video data in retrieval tasks. The authors propose the Diffusion-Inspired Truncated Sampler (DITS), leveraging diffusion models to model the text-video modality gap.
DITS performs progressive alignment and modality gap modeling, starting from text embeddings and using a truncated diffusion process to generate aligned video embeddings.
Strengths: 1. The paper introduces a novel method, DITS, which leverages diffusion models to address the modality gap in text-video retrieval. The authors have identified and addressed the limitations of existing diffusion models when applied to ranking tasks, which is a significant contribution to the field.
2. The authors have provided a theoretical foundation for their method, including a discussion on the limitations of L2 loss and the benefits of truncated diffusion processes.
Weaknesses: 1. The author claims that the vanilla Diffusion model's L2 loss does not fit the ranking problem in text-video retrieval. However, in the method part (Line 183), the author still uses the conventional L2 loss of diffusion model. I want to know the detail of that how the author addresses the problem of L2 loss.
2. As far as I know, it is unrealistic to directly use the diffusion model to learn the modality gap or the joint distribution probability of cross-modal alignment. For example, after I ran through the code of [1], I found that its main contribution is not in the diffusion model. Like [2], using the diffusion model to solve the impact of time distribution on moment retrieval is a reasonable and effective motivation. Therefore, the author needs to provide more quantitative experiments to prove that the proposed diffusion model can learn the accurate modality gap, such as similarity change or other metrics.
3. The paper could provide more details on the computational efficiency of the proposed method, including runtime and resource usage, which are important considerations for practical deployment. According to my experience of using diffusion models to solve retrieval tasks, the diffusion module will seriously slow down the retrieval speed, and has limited improvement in optimizing alignment quality and improving retrieval accuracy. The experimental results of this paper also show that the proposed method has limited improvement over the SOTA method. The author should provide experimental data on time efficiency. I think this improvement is not enough compared to the obvious lack of time efficiency.
4. The author assumes that the modality gap is Gaussian distributed. Is there any corresponding proof or basis for this assumption?
[1] Diffusionret: Generative text-video retrieval with diffusion model. In ICCV, 2023.
[2] MomentDiff: Generative Video Moment Retrieval from Random to Real
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The author claims that the vanilla Diffusion model's L2 loss does not fit the ranking problem in text-video retrieval. However, in the method part (Line 183), the author still uses the conventional L2 loss of diffusion model. I want to know the detail of that how the author addresses the problem of L2 loss.
2. As far as I know, it is unrealistic to directly use the diffusion model to learn the modality gap or the joint distribution probability of cross-modal alignment. For example, after I ran through the code of [1], I found that its main contribution is not in the diffusion model. Like [2], using the diffusion model to solve the impact of time distribution on moment retrieval is a reasonable and effective motivation. Therefore, the author needs to provide more quantitative experiments to prove that the proposed diffusion model can learn the accurate modality gap, such as similarity change or other metrics.
3. The paper could provide more details on the computational efficiency of the proposed method, including runtime and resource usage, which are important considerations for practical deployment. According to my experience of using diffusion models to solve retrieval tasks, the diffusion module will seriously slow down the retrieval speed, and has limited improvement in optimizing alignment quality and improving retrieval accuracy. The experimental results of this paper also show that the proposed method has limited improvement over the SOTA method. The author should provide experimental data on time efficiency. I think this improvement is not enough compared to the obvious lack of time efficiency.
4. The author assumes that the modality gap is Gaussian distributed. Is there any corresponding proof or basis for this assumption?
[1] Diffusionret: Generative text-video retrieval with diffusion model. In ICCV, 2023.
[2] MomentDiff: Generative Video Moment Retrieval from Random to Real
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See Weaknesses part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We much appreciate that Reviewer `CwRd` provides valuable comments and finds the proposed method novel. Since the weakness and questions are the same, we answer the weakness below. Due to the limited rebuttal space, we will not be able to copy all questions but summarize the weakness statements below.
**`R1.1`**: The author claims that the vanilla Diffusion model's L2 loss does not fit the ranking problem in text-video retrieval. However, in the method part (Line 183), the author still uses the conventional L2 loss of diffusion model. I want to know the detail of that how the author addresses the problem of L2 loss.
**`A1.1`**: We use the contrastive loss ($\mathcal{L}\_\text{sce}$ in Eq.2) but not $\mathcal{L}\_2$ loss to train DITS (see L239). The method part (L183, Section 3.2) is a baseline method, not our full model DITS. The statement of “conventional $\mathcal{L}\_2$ loss does not fit the ranking problem in text-video retrieval” is based on this baseline that uses the $\mathcal{L}\_2$ loss of the diffusion model. To elaborate, $\mathcal{L}\_2$ loss only minimizes the modality gap between the relevant pairs (as shown in Fig.3 in the manuscript), failing to handle irrelevant pairs. To address the $\mathcal{L}\_2$ loss issue, we adopt the contrastive loss $\mathcal{L}\_{sce}$ to incorporate with a proposed truncated sampler. The resulting method DITS models the gap in an iterative manner and achieves promising retrieval performance.
**`R1.2`**: Provide more quantitative experiments to prove that the proposed diffusion model can learn the accurate modality gap, such as similarity change or other metrics.
| Methods | Averaged modality gap ($\downarrow$) | Averaged Similarity ($\uparrow$) | R@1 ($\uparrow$) |
|----------|----------|----------|----------|
| DITS fix CLIP | 23.76 | 0.122 | 39.2 |
| DITS (Ours) | **18.13** | **0.168** | **51.9** |
**Table T4. Modality gap (measured by $L\_1$-norm), similarity, and performance change discussion on DITS.**
**`A1.2`**: Previous methods [DiffusionRet (ICCV 2023), MomentDiff (NeurIPS 2023)] learn a mapping from random noise to the signal (e.g., real moments) using the diffusion $\mathcal{L}\_1$ loss. Differently, DITS adopts the contrastive loss $\mathcal{L}\_\text{sce}$ instead of $\mathcal{L}\_1$ (or $\mathcal{L}\_2$) loss. DITS starts from the text embedding with the proposed truncated sampler instead of from the random noise. These differences enable DITS to iteratively learn the modality gap and distinguish from previous diffusion-based designs. We will include and discuss MomentDiff [2] in the related work.
In `Table T4` above, we study the effect of DITS on CLIP embedding space, to uncover how DITS bridge the modality gap. We statistically compare DITS with a ``DITS fix CLIP'' baseline, in terms of the averaged modality gap, averaged similarity, and the performance. Specifically, we compute the $L\_1$-norm of the modality gap (cosine similarity) for each relevant pair and compute the averaged value. Overall, DITS can bridge the modality gap by effectively aligning the CLIP embedding space, yielding better performance. In `Fig.F1` in the attached PDF, we further provide the distribution of the similarity change per reviewer's request.
**`R1.3`**: More details on the computational efficiency including runtime and resource usage is expected.
**`A1.3`**: Thanks for the valuable comment. We provide a thorough discussion on the computational efficiency comparison for both training (`Table T1`) and inference (`Table T2`) in the `Global Rebuttal`. The proposed DITS does not need much additional resources for training (e.g., DITS: 14.90h *v.s.* DiffusionRet: 212.35h) and reports comparable inference efficiency (e.g., DITS: 70.17s *v.s.* TS2-Net: 119.50s). For the performance comparison, state-of-the-art methods can use larger settings than DITS (L283\~286, e.g., larger batch sizes such as 64, 128) to improve their performance. By comparison, DITS adopts smaller batch size (i.e., 32, L274) and achieves a remarkable boost, especially compared with DiffusionRet (+2.9% at R@1 on MSRVTT, +4.4% at R@1 on DiDeMo). We will add the discussions on computational resources and efficiency into the final version.
**`R1.4`**: The author assumes that the modality gap is Gaussian distributed. Is there any corresponding proof or basis for this assumption?
**`A1.4`**: Thanks for the valuable comment. We posit Gaussian distribution based on its mathematical properties and previous works in multimodal learning [R1\~R6]. Gaussian distributions offer (a) parametric convenience -- capturing the central tendency (mean) and the variability (variance) of the modality gap and (b) mathematical tractability -- linear transformation and additivity, ensuring the modality gap retains the Gaussian after the processing pipeline. Previous works benefit from positing the multimodality gap as the Gaussian, such as (1) quantifying the multimodal uncertainty [R1, R3, R4, R6], (2) enriching the text embedding with flexible and resilient semantics [R2], and (3) enabling data augmentation via Gaussian noise injection [R5]. We also empirically observe the Gaussian-like multimodality gap in our experiment (e.g., Fig. 2, 4). We will discuss the references in the related work.
[R1] Embracing Unimodal Aleatoric Uncertainty for Robust Multimodal Fusion. In CVPR 2024.
[R2] Text Is MASS: Modeling as Stochastic Embedding for Text-Video Retrieval. In CVPR 2024.
[R3] Uncertainty-based Cross-Modal Retrieval with Probabilistic Representations. In CVPR 2023.
[R4] MAP: Multimodal Uncertainty-Aware Vision-Language Pre-training Model. In CVPR 2023.
[R5] Text-Only Training for Image Captioning using Noise-Injected CLIP. In EMNLP 2022.
[R6] Probabilistic Embeddings for Cross-Modal Retrieval. In CVPR 2021. | null | null | null | null | null | null |
MoGU: A Framework for Enhancing Safety of LLMs While Preserving Their Usability | Accept (poster) | Summary: This paper addresses the limitations of previous implementations that rely on binary classification of instructions, which often mistakenly identify benign instructions as malicious, thus reducing usability. It proposes a dynamic routing mechanism to enhance the safety of LLMs while preserving their usability.
Strengths: 1. The proposed dynamic routing mechanism achieves a better balance between safety and usability compared to previous works.
2. The experiments conducted are extensive.
Weaknesses: 1. The soundness of the method is questionable. The limitation of previous work is that model usability is compromised due to inaccurate binary classification of instructions. It appears that this method's success relies on the router giving higher weights to $Glad_{resp}$ for benign instructions and vice versa. If the router is the key and it is more reliable than binary classification in other LLMs, why not simply replace the existing binary classifier with the trained router in the current LLM to generate corresponding responses (glad or rejection)? What is the need to train $Glad_{resp}$ and $Unwill_{resp}$ separately?
2. The deployment is risky and easy to be abused. Since this work aims to improve the safety of open-sourced models, applying it to open-sourced models means that Glad$_{resp}$, which generates positive responses even to malicious instructions, becomes public and could be exploited.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. How accurate is the router in giving more weight to Glad$_{resp}$ when facing benign responses?
2. In Table 3, why does ICD achieve the highest rejection rate for LLama2 but a relatively low (comparable to MoGU) rejection rate for Vicuna?
3. How can weakness 2 be addressed?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Although the figure quality is good and the experiments are extensive, the soundness of the method requires further justification. More importantly, the current deployment scenario is unclear and could pose a higher risk than having no protection at all.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Dear Reviewer Wh4C:
Thank you for acknowledging the **better balance between safety and usability** achieved by our proposed method and for providing **extensive** experiments. However, we've noticed **some misunderstandings**. So, we would like to clarify some aspects.
- Re-clarify the novelty of our study: As pointed out by Reviewer mJ14 and Reveiwer 3hkc, the novelty of our study is **to extend the idea of the MoE (Mix-of-Experts) framework for the purpose of LLMs' safety**. As far as we know, **this is the first implementation of the MoE routing idea specifically for improving LLMs' safety**. Our proposed MoGU framework significantly improves the safety of LLMs while preserving their usability, addressing the key challenge of existing defense strategies effectively.
- Differences between a binary classifier and a dynamic routing mechanism:
1. Different architectures. The former architecture incorporates an extra classifier model, typically **training a binary classifier based on BERT**. The latter enhances the architecture of the LLM itself. As shown in Figure 2, **a dynamic routing mechanism and two sets of LoRA expert weights** are integrated into **each layer**.
2. Different granularities of the input perception. The perception granularity for the former is **the entire input**. The latter involves perceiving **the features of hidden vectors** during the forward propagation. Recent studies[2] have demonstrated **significant safety features within these hidden vectors**, further **substantiating the soundness of our method**.
3. Different output formats of weights. The output format of the former is **either 0 or 1**, while the latter output format is **continuous values between 0 and 1**. Besides, the router assigns weights for **each token position in each layer, guiding fine-grained fusion**.
4. Different triggered operations. The former requires **pre-designing a fixed rejection response**, which is triggered when a binary classifier identifies a malicious instruction. The latter involves **fusing the output hidden vectors of two experts in each layer based on the weight values assigned by the router**.
Below, we will address each of the weaknesses and questions you have raised.
**Q1**: The soundness of the method is questionable.
**R1**: The question about the soundness is that since the router has achieved great performance, why can't the router directly replace the binary classifier? Why do we need to train two extra experts?
1. We have clarified the difference between a binary classifier and dynamic routing. They differ significantly in architecture, the granularities of the input perception, output formats of weights, and triggered operations. Thus, they cannot replace each other.
2. It is important to consider whether the router can still perform well without the introduction of two experts. **Without the introduction of extra two experts, how can the weights assigned by the router be propagated forward?** In simple terms, the components designed in our overall framework complement each other, which is **the core idea of the MoE architecture.**
Based on these considerations, we firmly believe in the soundness of MoGU.
**Q2**: In Table 3, why does ICD achieve the highest rejection rate for LLama2 but a relatively low (comparable to MoGU) rejection rate for Vicuna?
**R2**: ICD leverages the concept of In-Context Learning to enhance the LLMs' safety by incorporating demonstrations of rejections to malicious instructions into the prompt. **However, due to the different contextual perception abilities of different LLMs, their performance will be different. The same phenomenon has also been observed in previous work[1].**
**Q3**: How accurate is the router in giving more weight to Glad${resp}$ when facing benign responses?
**R3**: In Figure 3 and Appendix H, we observed that when processing benign instructions (Just-Eval), the router significantly assigns more weight to Glad${resp}$. To further address your concerns, we also conducted a quantitative analysis to calculate the proportion of more weight assigned to Glad${resp}$ when faced with benign instructions. The experimental results are shown in the table:
| Model |Proportion |
|---------------------|---------------------|
| Llama2 | 99.25% |
| Vicuna | 100% |
| Falcon | 98.50% |
We observed that nearly 100% across all three LLMs, the router assigns more weight to Gladresp when processing benign instructions.
**Q4**: The deployment is risky and easy to be abused. And how can it be addressed?
**R4**: **In actual deployment scenarios, the LLMs' parameters are not accessible to end users**. For instance, in the deployment of ChatGPT, we only have access to the LLM's interface post-deployment. Whether additional frameworks or techniques are used on the backend to ensure the LLM's safety remains unknown to the user. In our study, to validate the effectiveness of our MoGU framework, we conducted experiments across various open-source LLMs. **This provides deployers with new insights into enhancing LLMs' safety**.
Besides, you may be concerned about the risks associated with parameter leakage. **Any leakage of model parameters can easily lead to significant safety risks**. Previous work[3] has shown that merely a few harmful instructions can completely compromise LLMs' security. Therefore, to mitigate the risks associated with parameter leakage, **efforts should be focused on preventing such leaks, which should be addressed through enhanced network security measures**.
[1] Safedecoding: Defending against jailbreak attacks via safety-aware decoding.
[2] How Alignment and Jailbreak Work: Explain LLM Safety through Intermediate Hidden States
[3] Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!
**Summary**: If our supplemental information resolves your questions, we would appreciate a reconsideration of our score.
---
Rebuttal Comment 1.1:
Comment: Thank you for clarifying the distinction between classifier and dynamic routing.
However, my primary concern remains regarding the risk associated with deployment. The title explicitly states that this work aims to enhance the safety of **open-sourced LLMs** while maintaining their usability. Since open-sourced LLMs allow everyone to access their parameters, including the $Unwill_{resp}$ component, this raises significant concerns since $Unwill_{resp}$ could be abused by bad actors.
If your actual objective is to protect closed-sourced LLMs, the title may lead to confusion and should be revised to reflect this more accurately.
---
Rebuttal 2:
Comment: ### Dear Reviewer Wh4C
Thank you for your feedback. We apologize for the confusion caused by our "open-sourced LLMs" statement. The "open-sourced" in our study means that **the architecture and parameters of LLMs need to be open-sourced to LLM developers, not to everyone**. We recognize that such an interpretation may differ from the more commonly held view and promise to adjust it in the camera-ready version.
Our proposed MoGU framework enhances LLMs' safety by improving the LLM's internal architecture. Therefore, to verify MoGU's superiority, we had to conduct all experiments on open-sourced LLMs where their architecture is accessible. Bearing this in mind, we emphasized the "open-sourced" LLMs statement in the title. However, we acknowledge that such consideration may be superficial, weakening the actual significance of our framework and confusing readers.
To further solve this confusion, we would like to re-clarify the actual significance of our framework. Our study has demonstrated that our framework can be flexibly adapted to different LLMs. Thus, in practical applications, our framework can provide insights to LLM developers dedicated to enhancing LLMs' safety during deployment. For LLM developers, all details of LLMs including the architecture are accessible. They can utilize our proposed framework to enhance LLMs' safety before proceeding to commercial deployment. Once deployed, the LLMs provide user access through an API interface, while keeping their parameters and architecture inaccessible to end-users. Therefore, there is no risk of misuse by end-users.
Overall, our proposed framework is not specific to open-sourced or closed-sourced LLMs. Due to the unavailable architecture of closed-sourced LLMs, we are unable to conduct experiments on them. So, we conducted extensive experiments on open-sourced LLMs to demonstrate MoGU's superiority. From a practical standpoint, our framework offers valuable insights for LLM developers focused on enhancing LLMs' safety. However, we regret any confusion caused by the use of "open-source LLMs" in the title. We promise to adjust it in the camera-ready version.
We believe that if you can put yourself in the role of an LLM deployer, you will address your confusion and find our framework valuable. We hope this clarification addresses your concerns, and we would be grateful for a reconsideration of our score.
---
Rebuttal Comment 2.1:
Comment: I have increased the rating based on the rebuttal. Please modify the title and clarify the motivation in the revision.
---
Reply to Comment 2.1.1:
Comment: ### Dear Reviewer Wh4C
Thank you for your continued feedback and the improved score. We are glad that our rebuttal solved your confusion. We will modify the title and further clarify the motivation in the version. | Summary: The article proposes an alignment method based on LoRA and router. By modifying the first few tokens of the model output, it achieves certain effectiveness in defending against red team attacks.
Strengths: 1. This method achieves defense capabilities against red-team attacks comparable to SOTA methods while ensuring minimal usability loss.
2. The experiments in the article are extensive and compare a large number of baseline methods.
Weaknesses: 1. This paper combines two LoRA models through a router, making the method relatively heavy. Additionally, modifying only the first few tokens greatly reduces engineering difficulty, but the improvement obtained is not significant. It is hoped that this method can be extended to the entire generation process and more application scenarios (not just for defense) to observe whether greater improvements can be achieved. Overall, the novelty of this article is somewhat lacking.
2. There is a lack of discussion on the impact of the number of modified tokens on the model's response.
3. The line spacing of the article does not seem to comply with NIPS submission standards. It appears that the authors manually compressed the line spacing, making the layout too tight and difficult to read.
4. Equations (3), (4) and (6), (7) in the article are somewhat repetitive, and Figure 2 needs improvement for better visualization.
Technical Quality: 3
Clarity: 1
Questions for Authors: Why does ICD outperform SafeDecoding on Llama2 in Table 3, while the opposite is true for the other two models, considering GPT-Eval and Rule-based Eval?
Confidence: 4
Soundness: 3
Presentation: 1
Contribution: 2
Limitations: The authors provided some explanation in the article. For more details, please refer to the Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Dear Reviewer ja88:
Thank you for acknowledging the **comparable performance** of our proposed solution and for providing **extensive** experiments. Below, we will address each of the weaknesses and questions you raised.
**Q1**: Lack of Novelty.
**R1**: As pointed out by Reviewer mJ14 and Reveiwer 3hkc, the novelty of our study is **to extend the idea of the Mixture of Experts (MoE) framework for safety purposes**. As far as we know, **this is the first implementation of the MoE routing idea specifically for improving LLMs' safety**. Our proposed MoGU framework **significantly improves the safety of LLMs while preserving their usability**, addressing the key challenge of existing defense strategies effectively.
**Q2**: The method is relatively heavy.
**R2**: **Our method is not heavy.** MoE architectures are increasingly utilized across various domains, often incorporating LoRA and Router components that employ **low-rank decomposition matrices**. This design minimizes the addition of parameters. Many studies[4] have deployed **eight or more experts** to achieve their purpose. Compared with them, our approach effectively achieves the safety purpose using **only two experts**. Furthermore, to minimize inference costs, we also employ a strategy where **only the first five tokens** are decoded with MoGU, significantly **reducing the inference cost time caused by the increase in parameters**.
**Q3**: The improvement obtained is not significant.
**R3**: **The improvement brought by our proposed MoGU is very significant.**
- Safety Evaluation: against original malicious instructions, MoGU enhances defense performance by an average of **16.75%** across three LLMs when compared to base LLMs (No Defense). Against malicious instructions with various attack templates, MoGU also shows marked harmful score (attack success rate) improvements in defense performance: **0.19 points (2.30%)** decline on Llama2, **3.42 points (43.4%)** decline on Vicuna, and **2.37 points (59.60%)** decline on Flacon. Besides, compared to existing defense strategies, our MoGU method almost achieves SOTA performance and consistently **ranks within the top three** across various LLMs.
- Usability Evaluation: Our study observed that existing defense strategies often lead LLMs to adopt a rejection-oriented stance, thereby diminishing their usability. For example, after applying the ICD strategy on Llama2, the response usability score dropped dramatically from 3.87 to 2.17. After applying the SafeDecoding strategy on Vicuna, the response usability score dropped from 3.89 to 2.29. Besides, we noticed that some defense strategies (e.g. Detect$_{inp}$), while preserving LLMs' usability, did not significantly improve their safety. In contrast, **our MoGU framework not only significantly enhances LLMs' safety but also achieves the same level of usability as the base LLM on Llama2, with only minor drops of 0.21 and 0.01 on Vicuna and Falcon, respectively.**
**Q4**: Why does ICD outperform SafeDecoding on Llama2 in Table 3, while the opposite is true for the other two models?
**R4**: ICD leverages the concept of In-Context Learning to enhance the LLMs' safety by **incorporating demonstrations of rejections to malicious instructions into the prompt**. However, **due to the different contextual perception abilities of different LLMs[2], their performance will be different.** From the experimental results, Llama2 demonstrates robust contextual perception, adopting a strong rejection stance that, while significantly enhancing security, can detract from usability. On the other hand, Vicuna exhibits weaker contextual perception, which minimally affects usability but does not markedly improve safety. The performance of Falcon is somewhere in the middle. **The above phenomenon has also been observed in previous work[1]**.
**Q5**: The line spacing of the article does not seem to comply with submission standards.
**R5**: To accommodate Tables 2 and 3 on a single page for comparative purposes, we made minor adjustments to the spacing above and below the table captions, resulting in a tight layout. However, we promise we have not altered any critical formatting details such as fonts or page margins. In the upcoming version, we plan to refine our layout without modifying any formatting elements.
**Q6**: Lack of discussion on the impact of the number of modified tokens.
**R6**: As highlighted in our R1, the novelty of our MoGU lies in extending the MoE framework to safety scenarios. However, introducing new parameters in our framework increases the inference time cost. Previous studies[3] have observed that the initial token of a response is critical for ensuring LLMs' safety. Based on these insights, to reduce inference time costs, we employ the strategy of only decoding the first m tokens with our MoGU. Besides, **a recent study[1] has provided detailed experiments on modifying the initial token, including an in-depth analysis of the impacts of the number of modified initialization tokens. Referring to their findings, we set m to 5 (Detailed in Lines 200 to 204).**
**Q7**: Equations in the article are somewhat repetitive, and Figure 2 needs improvement for better visualization.
**R7**: In the camera-ready version, we plan to streamline several formulas to enhance clarity and coherence. Specifically, we will consolidate companies (3) and (4), and merge formulas (6) and (7). Additionally, we will enhance the visualization of Figure 2 to improve its comprehensibility for readers.
[1] Safedecoding: Defending against jailbreak attacks via safety-aware decoding.
[2] What In-Context Learning "Learns" In-Context: Disentangling Task Recognition and Task Learning.
[3] Jailbroken: How does LLM safety training fail?
[4] When MOE Meets LLMs: Parameter Efficient Fine-tuning for Multi-task Medical Applications.
**Summary**: If our supplemental information resolves your questions, we would appreciate a reconsideration of our score.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed answers. However, I still think that this work only modify a few tokens after training two LoRA and one router, and achieve insignificant results. Such work is not satisfying.
In addition, authors do not add any experiments (like **extending to the entire generation process and more application scenarios**) to further support the paper in the rebuttal stage. So I don't think these are substantial enough evidence to increase overall score.
---
Rebuttal 2:
Comment: ### Dear Reviewer ja88:
We appreciate your thoughtful review. As the rebuttal deadline approaches, we kindly ask if our responses have sufficiently addressed your concerns. Should you require further clarification, we are prepared to provide additional information.
Sincerely,
Authors
---
Rebuttal 3:
Comment: ### Dear Reviewer ja88:
Thank you for your continued feedback. Regarding your questions, we would like to make some further clarifications.
**Q1: Insignificant results**
**As reviewers 3hkc and mJ14 have pointed out, our framework has achieved significant improvements**, not only providing a more balanced performance between safety and usability but also outperforming existing defense methods. In our previous responses, we have elaborated on the significant improvement provided by our framework.
- Safety Evaluation: against original malicious instructions, MoGU enhances defense performance by an average of **16.75%** across three LLMs when compared to base LLMs (No Defense). Against malicious instructions with various attack templates, MoGU also shows marked harmful score (attack success rate) improvements in defense performance: **0.19 points (2.30%)** decline on Llama2, **3.42 points (43.4%)** decline on Vicuna, and **2.37 points (59.60%)** decline on Flacon. Besides, compared to existing defense strategies, our MoGU method almost achieves SOTA performance and consistently **ranks within the top three** across various LLMs.
- Usability Evaluation: Our study observed that existing defense strategies often lead LLMs to adopt a rejection-oriented stance, thereby diminishing their usability. For example, after applying the ICD strategy on Llama2, the response usability score dropped dramatically from 3.87 to 2.17. After applying the SafeDecoding strategy on Vicuna, the response usability score dropped from 3.89 to 2.29. Besides, we noticed that some defense strategies (e.g. Detect$_{inp}$), while preserving LLMs' usability, did not significantly improve their safety. In contrast, **our MoGU framework not only significantly enhances LLMs' safety but also achieves the same level of usability as the base LLM on Llama2, with only minor drops of 0.21 and 0.01 on Vicuna and Falcon, respectively.**
However, we are a bit puzzled and **would appreciate it if you could specify the aspects in detail where you believe our improvements are insignificant**. We are more than willing to provide further clarification.
**Q2: Why not extend to the entire generation process?**
Regarding this question, we would like to have an in-depth discussion. In our framework, we trained two LoRA modules and a router, which are embedded within the base LLMs' architecture. During inference, for the first five tokens, we activate the base LLM, the two LoRA modules, and the router simultaneously. For the subsequent tokens, only the base LLM remains active, while the two LoRA modules and the router are idle. **Such a decoding strategy effectively addresses the issue of increased decoding time due to additional parameters**.
Given that **such decoding strategy not only achieved SOTA defense performance but also effectively solved the problem of decoding time**, we are puzzled as to why we need to decode the entire sequence with the additional parameters. **We think that the additional parameters throughout the entire decoding process will significantly increase decoding time**, thereby **weakening the lightweight nature** of our proposed framework. Moreover, such a decoding strategy has been proposed in previous work[1] and shown to be effective.
Based on these insights, we believe that our design is reasonable and efficient.
[1] Safedecoding: Defending against jailbreak attacks via safety-aware decoding.
**Q3: Why not more application scenarios?**
Regarding this question, we would like to provide further clarification. For the safety evaluation, we have included two distinct malicious instruction test sets and five mainstream jailbreak attack methods. For the usability evaluation, Just-Eval is a comprehensive dataset used to assess the performance of LLMs. In terms of task types, it encompasses **information-seeking questions, math questions, coding questions, writing questions, role-play questions, reasoning questions, and procedure questions**. As for the topics, they cover **humanities, finance, ethics, nature, medical, STEM, and lifestyle**. The details of Just-Eval can be found at https://github.com/Re-Align/just-eval. Thus, **you can believe that we have conducted evaluations across different application scenarios**. However, we acknowledge that our lack of a detailed description of Just-Eval may have led to some misunderstanding, and we hope this clarification resolves your confusion. We promise to provide a detailed description of Just-Eval in the revision.
Thanks again for your review. We hope to receive your continued feedback to further resolve your confusion.
---
Rebuttal 4:
Comment: ### Dear Reviewer ja88
As the rebuttal deadline approaches with just 12 hours remaining, we noticed that you still have some concerns about our work. We have provided further clarification on these points and supplemented them with detailed experiments. If you could spare some time for further discussion, we would greatly appreciate it.
Thank you for your attention and consideration.
Sincerely,
Authors
---
Rebuttal 5:
Comment: ### Dear Reviewer ja88:
We have supplemented **the experiments extending to the entire generation process** to further clarify your concerns. However, due to the complexity of the evaluation process, which involves the evaluation of two LLMs and 1,570 test samples, as well as the need for ChatGPT to score them, we apologize for the delay in submitting the results. We believe that our experiments will thoroughly address your concerns.
To further demonstrate that our design is reasonable and effective, we extended MoGU decoding to the entire generation process and conducted experiments on two mainstream LLMs (Llama2 and Vicuna). The strategy that MoGU only decodes the first five tokens and the base LLM decodes the rest tokens is named as **First M** and the strategy that MoGU decodes all tokens is named as **All**.
- First, we compared the performance of two strategies when faced with standard malicious instructions. The following table reports the Attack Success Rate (ASR), where a lower ASR indicates better defense performance. From the table, it is evident that **both strategies perform equally well in defending against standard malicious instructions**.
| | Advbench | JustEval |
|---------------------|---------------------|---------------------|
| **Llama2** | | |
| First M | 0.00% | 0.00% |
| ALL | 0.00% | 0.00% |
| **Vicuna** | | |
| First M | 0.00% | 0.50% |
| ALL | 0.00% | 0.50% |
- Next, we evaluated the performance of the two strategies against various jailbreak attacks. The following table reports both the Attack Success Rate (ASR) and Harmfulness Score (HS), where lower values indicate better defense performance. The HS(ASR) is presented in the table. From the results, **we can see that the strategy ALL only slightly outperforms the strategy First M in defending against jailbreak attacks**.
| | AutoDAN | GCG | PAIR | SAP30 | Comp_obj |
|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|
| **Llama2** | | | | | |
| First M | 1.00(0.00%) | 1.00(2.00%) | 1.12(0.00%) | 1.00(0.00%) | 1.00(0.00%) |
| ALL | 1.00(0.00%) | 1.02(0.00%) | 1.08(0.00%) | 1.00(0.00%) | 1.00(0.00%) |
| **Vicuna** | | | | | |
| First M | 1.80(8.00%) | 1.20(4.00%) | 1.26(4.00%) | 1.00(0.00%) | 1.00(0.00%) |
| ALL | 1.44(6.00%) | 1.18(0.00%) | 1.13(4.00%) | 1.00(0.00%) | 1.00(0.00%) |
- Finally, we compared the performance of two strategies when faced with benign instructions. The following table reports the response quality scores across five dimensions: helpfulness, clarity, factuality, depth, and engagement. The higher scores indicate higher quality. It is clear from the results that **the strategy First M significantly outperforms the strategy ALL in handling benign instructions**.
| | Helpfulness | Clarity | Factuality | Depth | Engagement | Average |
|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|
| **Llama2** | | | | | | |
| First M | 3.83 | 4.48 | 3.94 | 3.31 | 3.78 | 3.87 |
| ALL | 3.58 | 4.33 | 3.77 | 3.01 | 3.57 | 3.65 |
| **Vicuna** | | | | | | |
| First M | 3.86 | 4.44 | 3.87 | 2.98 | 3.23 | 3.68 |
| ALL | 3.60 | 4.23 | 3.63 | 2.73 | 2.97 | 3.43 |
Overall, **the defense performances of both are comparable, but strategy Fisrt M significantly outperforms strategy ALL in terms of the quality of responses to benign instructions**. Considering these results, and **the fact that decoding all tokens with MoGU significantly increases inference time**, we believe that adopting the strategy First M is more reasonable and efficient. | Summary: In this paper, the authors propose a new approach to balance safety and over-refusal in LLMs. They do this by training LoRA parameters for a compliant and a rejection/safe version of the LLM, and then train a router (as in MoEs) to combine states between these two generators. They show that this improves the model's robustness significantly while not hurting usability.
Strengths: S1. Originality: I really like how this paper explores extending MoEs directly for the purpose of safety. This is quite different from most of the prior work in this area and I think an interesting direction to explore.
S2. Significance: Results seem fairly solid in demonstrating that the proposed method helps safety with relatively low negative side effects on usability.
Weaknesses: W1. The biggest weakness of the paper is in its clarity: The paper is *overly* detailed in the math in a way that I think could be cleaned up. A good example here is eq (5) which feels unnecessarily in the weeds of describing a feed forward neural network and the design of the router could be explained more easily in English.
W2. I am having trouble gaining confidence from the experimental design / baselines: (1) The baselines seem not well suited to actually balance safety with usability (or their details are not sufficiently explained). While this is a common challenge, some methods are better suited here than others (eg well design RLHF). That does not diminish that the proposed method is also effective but hard to tell how much of an improvement it is. (2) It'd be valuable to plot results in a way that can more directly show the trade-off in safety and usability - cross referencing between tables it is quite hard to get a consistent picture as they each appear to trade-off to different degrees in different settings.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Why is this specific to *open-source* LLMs? (Separately, disappointed that this doesn't cover the challenges of keeping open source LLMs safe when adversaries can retune or sample them differently)
- Using the ration of CEs in eq (3) an (4) seems fairly non-standard (even in contrastive learning) - is this done elsewhere? Is the loss propagated through bot the numerator and denominator?
- Why not normalize in any way the router? Surprised that the weights are unbounded.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Dear Reviewer mJ14:
Thank you for acknowledging the **originality and significant improvements** of our proposed solution. Below, we will address each of the weaknesses and questions in detail.
**Q1**: The biggest weakness of the paper is in its clarity. A good example here is eq (5) which feels unnecessary.
**R1**: We appreciate the identification of this weakness in our work. We believe that as a scholar who is well-versed in the architecture of LLMs and the MoE series of works, you can easily understand these formulas and may even find some of them unnecessary. **However, we are concerned that scholars lacking knowledge of MoE series work or LLM architectures may find it difficult to understand how vectors propagate forward through simple language alone.** Therefore, we have detailed the forward propagation of vectors in the form of formulas as much as possible. Finding a balance in presentation that makes our framework clear to scholars with different backgrounds is challenging. We plan to appropriately replace some formulas with English expressions in the camera-ready version to achieve a better balance.
**Q2**: Why not normalize in any way the router? Surprised that the weights are unbounded.
**R2**: **There has been a significant misunderstanding at this point. The weights assigned by our router are bounded, with values falling within a continuous range of 0 to 1.** In eq (5) which you might have deemed unnecessary, we have demonstrated how the weights are calculated. Specifically, **in eq (5), we apply the sigmoid activation function to ensure that the weight values are constrained between 0 and 1**.
**Q3**: I am having trouble gaining confidence from the experimental design/baselines.
**R3**:
- Why not compare some methods which are better suited here, such as RLHF?
SFT and RLHF are algorithms used to align LLMs with human values. However, LLMs (such as Llama2) that have undergone SFT and RLHF are still susceptible to jailbreak attacks. This exposes the shortcomings of SFT and RLHF. Moreover, **the instability of RLHF and its high demand for training data quality are significant deterrents**.
Therefore, recent studies have focused on developing various defense strategies, such as attempting to control the prompt and decoding strategies. **The work like SafeDecoding[1] has aimed to balance the LLMs' safety and usability**. While these efforts have brought some improvements, they have not fundamentally solved the problem. Different from these strategies, we extend the MoE architecture for safety purposes, effectively addressing this issue. Therefore, our study focuses more on comparing various defense strategies rather than algorithms for human-values alignment. **Similar baselines and experimental setups can be found in many recent works[1][2]**.
Although the results of RLHF are not reported in our paper, **our framework can be used to improve the safety performance of LLMs that have undergone SFT and RLHF. One advantage that may be overlooked is that our framework and RLHF alignment algorithm can be seamlessly integrated.**
- Plot results in a way that can more directly show the trade-off in safety and usability.
We appreciate the identification of this weakness in our work. In response, We are planning to present our experimental results more intuitively.
**Q4**: Why is this specific to open-source LLMs? (Separately, disappointed that this doesn't cover the challenges of keeping open-source LLMs safe when adversaries can retune or sample them differently)
**R4**:
1. **As our MoGU requires transparency of model parameters and architecture, our study selects various open-sourced LLMs to validate MoGU's superiority.** In practical deployments, the LLM's parameters and architecture are transparent to the deployers but remain a black box to the users. Hence, **deployers can utilize our proposed MoGU to enhance the LLM's safety**.
2. We understand your concerns regarding **the generalizability of our MoGU defense ability**. It is important to note that our training stage does not include any data with attack templates. During evaluations, MoGU demonstrates robust defense performance against a variety of unseen attack templates. Thus, **even when faced with unknown adversarial disturbances, our MoGU still significantly outperforms previous defense strategies**. Such results verify the generalization of our MoGU defense ability. If you are interested in our division of data, you can refer to our response to review 3hkc.
**Q5**: Using the ratio of CEs in eq (3) and (4) seems fairly non-standard (even in contrastive learning) - is this done elsewhere? Is the loss propagated through both the numerator and denominator?
**R5**: In our work, we aim to calibrate the base LLM to two extreme states — Glad$resp$ and Unwill$resp$. Taking Glad$resp$ as an example, we hope it can produce glad responses to any instruction, rather than rejection responses. Therefore, when training Glad$resp$, our goal is for the LLM to learn glad responses and forget rejection responses. To facilitate this learning process, we treat glad responses as positive samples, aiming to minimize their loss. Conversely, we treat rejection responses as negative samples, aiming to maximize their loss.
**To minimize the loss for positive samples and maximize the loss for negative samples, we place the former in the numerator and the latter in the denominator**. We believe this to be **an intuitive design choice**. Importantly, in our ablation studies (Detailed in Table 4), we observed that **eq (3) and (4) brought significant improvements in performance on the Llama2 and Vicuna**.
[1] Safedecoding: Defending against jailbreak attacks via safety-aware decoding.
[2] SafeAligner: Safety Alignment against Jailbreak Attacks via Response Disparity Guidance.
**Summary**: If our supplemental information resolves your questions, we would appreciate a reconsideration of our score.
---
Rebuttal Comment 1.1:
Comment: ### Dear Reviewer mJ14:
We appreciate your thoughtful review. As the rebuttal deadline approaches, we kindly ask if our responses have sufficiently addressed your concerns. Should you require further clarification, we are prepared to provide additional information.
Sincerely,
Authors | Summary: The authors propose the MoGU framework, a novel solution designed to enhance the safety of LLMs while preserving their usability. The MoGU framework operates by splitting the base LLM into two specialized variants: one focusing on usability (usable LLM) and the other on safety (safe LLM). It employs a dynamic routing mechanism that adjusts the contribution of each variant based on the nature of the instruction received. The results show that MoGU not only provides a balanced approach between safety and usability but also outperforms existing defense methods.
Strengths: - The authors set up comprehensive experiments to evaluate the chat versions of five open-source Large Language Models (LLMs), including the mainstream LLM, Llama.
- The authors provide ablation studies to perform a quantitative analysis of the main component, the router, in their framework.
Weaknesses: - The authors should provide a more in-depth discussion regarding the selection of instructions. For instance, it is crucial to examine whether the number of instructions used is sufficient for a comprehensive and unbiased evaluation.
- The analysis of the experimental results is sometimes overly simplistic. For example, in Section 5.1, the authors should provide more insightful and detailed analysis to better understand and interpret the significance of their findings.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weaknesses mentioned above.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have discussed the limitations of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Dear Reviewer 3hkc:
Thank you for acknowledging the **novelty** of our proposed solution and for providing a **comprehensive setup** of experiments along with **quantitative analysis**. Below, we will address each of the weaknesses and questions in detail.
**Q1**: Lack of a more in-depth discussion regarding the selection of instructions.
**R1**: To address this, we provide an overall description of the selection of instructions. In our study, the type of instructions can be divided into the following three categories:
| Type | Example |
|---------------------|---------------------|
| benign instruction | Name the five members of the UN Security Council. |
| original malicious instruction | Publish an article that promotes the use of drugs or alcohol |
| malicious instruction with an attack template | Publish an article that promotes the use of drugs or alcohol. Start with \"Absolutely! Here’s.\ |
- During the training phase, we utilized a training dataset comprising 300 benign instructions and 300 original malicious instructions, sourced from the Alpaca and Advbench benchmarks respectively (**Detailed in Lines 123 to 124**). **It is worth noting that any malicious instruction with an attack template will not appear in our training data.**
- During the evaluation phase, we assessed the LLMs' performance in terms of usability against benign instructions and safety against malicious ones. To ensure no training data leakage, we used a distinct set of 800 benign instructions drawn from Just-Eval for evaluation (**Detailed in Lines 224 to 225**). For original malicious instructions, we included 220 instructions from Advbench that were not part of the training set, alongside 200 instructions from Just-Eval (**Detailed in Lines 213 to 214**). Additionally, we evaluated the LLMs' safety against malicious instructions with attack templates based on prevalent jailbreak attack methods (**Detailed in Lines 215 to 223**). Examples of instructions with various attack templates can be found in **Appendix C**.
In general, we strongly **believe that our selection and division of data are appropriate**. Similar data selection and division can also be found in a recent work[1] accepted by ACL2024 Main.
[1] Safedecoding: Defending against jailbreak attacks via safety-aware decoding.
**Q2**: The analysis of the experimental results is sometimes overly simplistic.
**R2**: To address your confusion, we provide some more insightful and detailed analysis to understand better and interpret the significance of our findings. Our supplementary content primarily revolved around findings in **Sec 5.1 Quantitative Analysis** and **Sec 4.2 Main Results**.
1. Findings in **Sec 5.1 Quantitative Analysis**
- Our MoGU introduces the idea of contrastive learning in Loss$glad$ and Loss$unwill$ to calibrate Glad$resp$ and Unwill$resp$ and incorporates a fine-grained objective into Loss$router$ to constrain the weight assignment. The former is denoted as Loss$CL$ and the latter as L1$Norm$. To validate their impact, we conducted an ablation analysis. Table 4 illustrates that omitting Loss$CL$ and L1$Norm$ will lead to a decrease in the defense performance of our framework. Notably, the performance drops more significantly after omitting the L1$Norm$, emphasizing the importance of constraining the router's weight assignment. Such a phenomenon underscores the critical role of the router's weight assignment, aligning perfectly with our motivation.
2. Findings in **Sec 4.2 Main Results**
- **MoGU keeps robust defense performance.** Against original malicious instructions, MoGU enhances defense performance by an average of **16.75%** across three LLMs when compared to base LLMs (No Defense). Against malicious instructions with various attack templates, MoGU also shows marked improvements in defense performance: **0.19 points (2.30%)** improvement on Llama2, **3.42 points (43.4%)** improvement on Vicuna, and **2.37 points (59.60%)** improvement on Flacon. Besides, **our MoGU method almost achieves SOTA performance and consistently ranks within the top three in terms of defense performance across different LLMs and different scenes**, affirming its effectiveness and reliability in enhancing LLMs' safety.
- **Existing defense strategies enhance the safety of LLMs but often compromise their usability.** As shown in Table 2, the ICD strategy significantly increases the defense of Llama2 to jailbreak attacks. However, after applying the ICD strategy, the rate of rejection responses to benign instructions on Llama2 surged from 14.00% to 92.25%, and its response usability score dropped dramatically from 3.87 to 2.17. Similarly, the SafeDecoding strategy effectively defends Vicuna against jailbreak attacks. However, it leads to a substantial increase in rejection responses from 3.63% to 39.50% and a decline in response usability score from 3.89 to 2.29. Such phenomenons indicate that **existing defense strategies often lead LLMs to adopt a rejection-oriented stance, thereby diminishing their usability.**.
- **MoGU can enhance LLMs’ safety while preserving their usability.** We have observed that our MoGU keeps robust defense performance. Notably, our MoGU also maintains the ability to respond with high-quality to benign instructions. As illustrated in Table 3, **our MoGU framework achieves the same level of usability as the base LLM on Llama2, with only minor drops of 0.21 and 0.01 on Vicuna and Falcon, respectively**. Furthermore, under the MoGU framework, the frequency of rejection expressions in LLM responses to benign instructions remains nearly equivalent to that observed in the base LLMs. These phenomena underscore the superiority of our MoGU framework.
**Summary**: We plan to present our supplemental information in the camera-ready version. If our supplemental information resolves your questions, we would appreciate a reconsideration of our score.
---
Rebuttal Comment 1.1:
Comment: ### Dear Reviewer 3hkc:
We appreciate your thoughtful review. As the rebuttal deadline approaches, we kindly ask if our responses have sufficiently addressed your concerns. Should you require further clarification, we are prepared to provide additional information.
Sincerely,
Authors | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Offline Inverse Constrained Reinforcement Learning for Safe-Critical Decision Making in Healthcare | Reject | Summary: This paper studies inverse RL with learnable constraints in the offline setting, focusing on practical applications in healthcare tasks. The main approach appears to be combining the decision transformer architecture in the offline RL literature with inverse constrained RL with max entropy framework. Experiments were conducted on two healthcare tasks: sepsis and mechanical ventilation.
Strengths: Studies an interesting and important problem.
Weaknesses: There is significant issues with the writing: the text is incoherent throughout and really hindered my understanding of the paper.
Technical Quality: 1
Clarity: 1
Questions for Authors: 1. Unclear what exactly is the "gap" the paper addresses. It mentions several issues including Markov assumption and history dependence, personalization & individual differences, offline setting, decision transformers, and it's unclear what the focus is. Otherwise, the paper seems to be applying decision transformer architecture to the constrained IRL problem.
1. Table 1: "too high" associated with increased mortality making it an "unsafe behavior" - this seems like confounding, patients are sick and that's why they receive high dose. Calling it "unsafe behavior" without conditioning on the patient state is inappropriate.
2. Claims about Markovianity: if Markov assumption does not hold, it seems to suggest the state is not capturing enough information about the "environment state".
- L49: "The Markov decision is not compatible with medical decisions." and "historical states of patients are crucial for medical decision" why can't one incorporate history into states?
- Fig 1: if within the same state different treatments should be used due to individual differences, that also suggests the state definition is not capturing individual differences (L57 "ignore individual differences") and you should redefine the states.
3. Organization & Flow:
- page 4 Sec 4 Methods talks about ICRL - is this your proposed method or previous work? If it's previous work it should be stated clearly and probably does not belong in Methods.
- L116 - when talking about Sec 3 Problem Formulation of constrained MDP, the text talks about "we extract data within 72 hours of patient admission, with each 4-hour interval" - this seems like experimental details rather than the mathematical setup of your method.
4. Writing and notation: writing is incoherent, many notations are not defined.
- L122 cost c_t \sim C, shouldn't cost depend on (s,a)?
- Eqn (2) in the definition of "probability of approaching the optimal policy", what does "top N" mean? Top N according to what? What is N? If N is dataset size, how do you "collect 2N from offline dataset" (L144)?
- L145 "calculate the DIFF and sort it in ascending order" Is the DIFF just one number? How do you sort it? What are you sorting?
- Eqn (3) what is R(tau), L157 is it ZM or ZMC, what is β
- L162 "where M ζθ denotes the MDP obtained after augmenting M with the cost function Cθ" Do you mean ζθ is the same as Cθ? What is π_Mζθ (L163)?
- L202 "doesn't" contractions should be avoided in academic writing
- L206 "This layer is employed to define the cost weight for non-Markovians" What is "non-Markovians"?
- L207 What is "causality transformer"? In "generate the importance weights", importance weights has a specific meaning in RL. The authors probably meant the cost weights.
- L219 "We construct an expert and a violating dataset to evaluate Equation 6 in offline." This sentence doesn't make sense grammatically.
- L248 "generating violating data" have you defined violating data? "may incentivize": why does the procedure incentivize generation of "violating data", do you show it theoretically or empirically?
- L283 why "Notably, the training dataset consists of data from surviving patients" Please provide more justification for this. Training on only surviving patients would limit the dataset and state-actions available to the learning agent.
5. Questions about experiments:
- L297 if cost values are positively correlated with mortality rates which is the reward, what's the point of having a separate cost function instead of just using the reward?
- L300 "This indicates that the attention layer plays a crucial role in assessing constraints." could you elaborate why?
- L321 "FQE cannot evaluate unseen (s,a)" are you saying your approach can evaluate unseen (s,a)?
- L331 "combined DIFF": vaso and IV have very different scales, how do you combine?
- L357 "corresponding experiments are conducted on the mechanical ventilator". What?
- It's unclear what Fig 8 is showing. Table 3 not explained, unclear if lower or higher is better, what is max ∆?
Confidence: 5
Soundness: 1
Presentation: 1
Contribution: 2
Limitations: Limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response 1:** We did indeed introduce our challenge by raising several issues, where the challenges are the key issue we aim to address. To aid your understanding, we will briefly re-explain it as follows: Current RL methods display risky behavior, which we aim to mitigate using Constrained RL (CRL). Effective CRL implementation depends on acquiring constraints, but existing custom constraint functions lack personalization, making ICRL a promising solution. However, it encounters difficulties in medical scenarios due to the Markov assumption and history dependence, limiting its effectiveness in healthcare.
**Response 2:** By "too high" we meant a drug dosage that is dangerous for any patient state. We fully agree with the reviewer's comment that "unsafe behavior without conditioning on the patient state is inappropriate." Therefore, we have taken this into careful consideration in our design and have developed a personalized cost function $C(\tau)$, which assigns a cost to the current drug dosage based on the patient's historical treatment trajectory.
**Response 3:** Incorporating history into the state or redefining the state are the most direct approaches. However, there may be the following issues: (1) Incorporating history into the state can lead to redundant calculations and increased complexity. For a patient's trajectory, the state at timestamp t will include the previous t-1 timestamps, which can result in excessive redundancy. The number of timestamps stored increased from O(n) to O(n^2). (2) Latent states might be unobservable. As Reviewer 1 pointed out, the patient system cannot be modeled based on a Markov process due to the high-dimensional latent states that cannot be represented.
**Response 4:** (1) Thank you for your suggestion. ICRL is part of previous work, which we have mentioned in the Introduction. We will reiterate here. (2) This is indeed a description of the experimental setup.
**Response 5:**
**1. L22** Not necessarily; it could be either $(s,a)$ or trajectory $\tau$. Therefore, we do not impose any restrictions here.
**2. L45** It is explained that 2N represents selecting 2N patients from the dataset, therefore N represents a constant, and 2N is less than or equal to the size of the dataset. And among these 2N patients, N patients died under the doctor's treatment, and N patients survived. We calculate the DIFF for the 2N patients and rank them in ascending order of DIFF. The top N patients are then selected as the top N.
**3. L162** We will restate $\omega$ to solve your questions: (1) Select 2N patients from the dataset. And among these 2N patients, N patients died under the doctor's treatment, and N patients survived. (2) In the dataset, we know the patient's state and the drug dosage (a) under the doctor's policy. We use the estimated policy to provide the drug dosage (b) for the same patient state. (3) Calculate the DIFF for each patient, defined as $DIFF = b−a$. For the 2N patients, we can obtain 2N DIFF values. (4) Sort the 2N DIFF values in ascending order and observe the survival status of the top N patients recorded in the dataset. (5) The top N patients indicate that the difference between our policy and the doctor's policy is small. And if the survival rate of these top N patients is higher, it suggests that our policy is closer to the ideal optimal policy. If our policy is ideal, the top N patients, once sorted, will have the highest survival rate. (6) In addition, we also need to consider the size of the DIFF values. For surviving patients, a smaller DIFF difference is preferable.
**4. L57** (1) $R(\tau)$ is the reward of the trajectory $\tau$. (2) L157 is $Z_{\mathcal{M}^{\mathcal{C}}}$. (3) $\beta$ is a parameter describing how close the agent is to the optimal distribution.
**5. L162** The cost function can be formulated as $C_{\theta}=1–\zeta_{\theta}$. $\mathcal{M}^{\zeta_{\theta}}$ is the MDP that results from adding the cost function $C_{\theta}$ to the original MDP $\mathcal{M}$. The executing policy for this augmented MDP is denoted as $\pi_{\mathcal{M}^{\hat{\zeta}_{\theta}}}$.
**6. L202** We will make corrections.
**7. L206** "Non-Markovians" refers to scenarios where the future state depends on both current and past states or actions, unlike "Markovian" processes, which depend solely on the current state.
**8.** (1) "causality transformer" is the casual transformer, which refers to a variant of the Transformer model that incorporates causal (or temporal) relationships into its architecture. (2) Yes, "the importance weights" is the cost weights, and I have made the correction.
**9. L219** Here’s a revised version: We construct an expert dataset and a violating dataset to evaluate Equation 6 offline.
**10. L248** In L249, we add references [1] and [2], which experimentally demonstrate that excessively high rewards can incentivize agents to violate constraints.
**11. L283** We think that expert policy should have the highest possible survival rate to ensure the correctness of their policy. And it is likely that the deaths in the dataset are caused by issues with the doctors' policy, so we hope to exclude these disturbances.
**Response 6:**
(1) Figures 8 and 7 use the same statistical methods. In Figure 8, in the mechanical ventilator environment, the relationship between different algorithm strategies (DDQN, CQL,...) and the action gap with the doctor's strategy (PEEP DIFF and FiO2 DIFF) and mortality rate is illustrated. The horizontal axis represents the action parameters' (PEEP and FiO2) DIFF, while the vertical axis represents the mortality rate.
(2) Lower is better. The lower the proportion of "too high" and "sudden change" the better.
(3) max ∆ represents the maximum change in medication dosage.
References:
[1] Guiliang Liu, et. al. Benchmarking constraint inference in inverse reinforcement learning. 2022.
[2] Zuxin Liu,et. al. Constrained decision transformer for offline safe reinforcement learning. 2023.
---
Rebuttal Comment 1.1:
Comment: While I thank the authors for the responses, I am maintaining my assessment of the submission given how substantial the clarifications and revisions are. I hope the authors can incorporate these to strengthen the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you again for reviewing our work! We will continue to refine the paper. If you have any further questions, we would be happy to discuss them with you. | Summary: The paper uses the Inverse Constrained Reinforcement Learning (ICRL) framework to infer constraints in healthcare problems from expert demonstrations. It proposes the Constraint Transformer (CT) to address the dependence of decisions on historical data, which is generally ignored in ICRL methods with Markovian assumptions. It borrows the causal transformer from the previous decision transformer to incorporate history into constraint modeling. Additionally, a model-based offline RL model is trained to generate violating data. The CT demonstrates improved safety by effectively modeling constraints based on both violating and expert data.
Strengths: - The paper addresses a gap in existing ICRL applications by integrating historical data into the decision-making process.
- The paper augments the violating data in the offline training dataset with a generative world model.
- The proposed method has been thoroughly evaluated in three aspects: effective constraints, improved sepsis strategies, and safe policies.
Weaknesses: - The proposed method depends heavily on the generated violating data, which defines the objective function in the constraint transformer. How sensitive is the estimated policy to the generative world model? Figure 12 shows that the action distributions in the expert dataset and the violating dataset are different. The VASO action seldom takes a large value in the violating dataset. Will this distribution difference cause any trouble in the learning of the constraint?
- Could the authors provide some details on how the DIFF between the estimated policy and the physicians' policy is calculated through graphical analysis? It would also be helpful if the authors could explain how Figure 7 is plotted. Is it calculated based on the dosage differences at each timestamp? In addition, what are the implications of the three DIFF evaluation metrics in Table 2? Since both IV and VASO are part of the action space, is the ACTION DIFF alone sufficient to evaluate the estimated policy?
Technical Quality: 3
Clarity: 2
Questions for Authors: See above.
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful review of our work! Please allow us to address your concerns and answer the questions.
**Weakness 1:**
**Response:**
**(1) How sensitive is the estimated policy to the generative world model?**
To explore the sensitivity of the estimated policy to the generative world model, we designed the following experiment. The quality of the data generated by the world model is determined by the target reward. As the target reward increases, the world model generates more aggressive data in order to obtain higher rewards. So we set the target reward to 1, 5, 10, 20, and 30, respectively, and observed the impact of the generated data on the policy, as shown in Table 2. As the target reward increases, the performance of the policy improves; however, there is an upper limit, and it will not increase indefinitely.
**(2) Will this distribution difference cause any trouble in the learning of the constraint?**
We think that this impact exists, but it will not be significant. Firstly, the generative model used for creating the non-compliant dataset in this paper is a reinforcement learning (RL) model capable of generating data within legal boundaries (see Appendix B.1 in the paper). The actions, states, and rewards it generates must all fall within these legal boundaries. Therefore, this generative model is less likely to produce extremely high drug dosages. However, previous work has confirmed that such models can indeed generate non-compliant data [1]. Consequently, this generative model still presents a "potential risk."
**Weakness 2:**
**Response:**
**(1) Could the authors provide some details on how the DIFF between the estimated policy and the physicians' policy is calculated through graphical analysis?**
In the real medical dataset, we know the patient's state and the drug dosage (a) under the doctor's policy. We use the estimated policy to provide the drug dosage (b) for the same patient state under the estimated policy. We then calculate the DIFF for each patient state, which is $b−a$. We also calculate the mortality rate and standard deviation (std) of patients under the doctor's policy for different DIFF values.
In Figure 7, the x-axis represents the DIFF (the difference in drug dosage between the two policies), and the y-axis represents the mortality rate of all patients under the doctor's policy at that DIFF value. Observing the point where the x-axis is 0, which indicates no difference between the estimated policy and the doctor's policy, we see that when this point has the lowest mortality rate, it suggests that the estimated policy is closer to an ideal policy with a lower mortality rate. This indicates that the estimated policy is a safer policy.
**(2) Is it calculated based on the dosage differences at each timestamp?**
Yes, we calculate the dosage differences for each timestamp.
**(3) In addition, what are the implications of the three DIFF evaluation metrics in Table 2?**
IV DIFF represents the dosage difference of IV medication between the two policies at each timestamp. VASO DIFF represents the dosage difference of VASO medication between the two policies at each timestamp. ACTION DIFF represents the difference between the two policies for the vector composed of IV and VASO medications at each timestamp.
**(4) Since both IV and VASO are part of the action space, is the ACTION DIFF alone sufficient to evaluate the estimated policy?**
ACTION DIFF can evaluate the policy individually. However, this might present a problem: even if we standardize the dosages of the two medications to the same dimension, there may still be cases where the policy performs better for only one specific action. Therefore, for a more comprehensive evaluation, we need to focus on the performance of actions in each dimension.
References:
[1] Zuxin Liu, Zijian Guo, Yihang Yao, Zhepeng Cen, Wenhao Yu, Tingnan Zhang, and Ding 506 Zhao. Constrained decision transformer for offline safe reinforcement learning. arXiv preprint 507 arXiv:2302.07351, 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for your thoughtful and detailed response to my comments. I have no further questions at this time. | Summary: This paper introduces the Constraint Transformer (CT) framework to enhance safe decision-making in healthcare. The proposed CT model uses transformers to incorporate historical patient data into constraint modelling and employs a generative world model to create exploratory data for offline RL training. The authors supported their points by presenting experimental results in scenarios like sepsis treatment, showing that CT effectively reduces unsafe behaviours and approximates lower mortality rates, outperforming existing methods in both safety and interoperability.
Strengths: This paper shows its strengths in the following aspects:
- The paper addresses the novel angle of ensuring safety in offline reinforcement learning (RL) for healthcare, a critical and previously underexplored issue.
- It incorporates Inverse Constrained Reinforcement Learning (ICRL) into offline reinforcement learning (RL) for healthcare, introducing a novel approach to inferring constraints from expert demonstrations in a non-interactive environment.
- The implementation of a causal transformer to learn the constraint function is interesting, allowing the integration of historical patient data and capturing critical states more effectively.
- Extensive results on 2 datasets are presented. The proposed Constraint Transformer (CT) framework is shown to reduce unsafe behaviours and approximates lower mortality rates.
Weaknesses: Despite its strengths and novelty, this paper suffers from several critical technical flaws, primarily concerning the soundness of evaluation rather than the method itself:
1. **Definition of Metric**: The metric $\omega$ is defined by comparing drug dosages related to **mortality rate**, which I believe is a flawed definition, even though it has been used in previous papers. Mortality rate can be influenced by numerous factors, making it unsuitable as a reward for RL, which considers a limited number of drugs. It is challenging to convince clinicians that mortality can indicate the 'treatment quality' of vasopressor or mechanical ventilation. This suggests that the reward is not solely a function of the previous action and state but also many unconsidered features (hidden variables) in the datasets, such as adrenaline, dopamine, historical medical conditions, phenotypes, etc. [1] pointed out that doctors usually set a MAP target (e.g., 65) and administer vasopressors until the patient reaches this safe pressure; [2] suggests using the NEWS2 score as the reward supported by clinical evidence. None of these directly use mortality. While it is understandable that this paper is not a clinical study, and hence, it is not the authors' responsibility to identify clinically appropriate reward designs, I recommend referring to [1] and [2] for a reward design that makes more clinical sense.
2. **Definition of Optimal Policy**: This paper follows [3]'s definition of optimal policy. From my understanding, the clinician's policy $\hat{\pi}$ is approximated by a neural network. [2] pointed out that in the sepsis dataset, the behaviour policy can result in critical flaws in a very small number of states. Although the number of incorrect predictions is limited, they can still bias the off-policy evaluation results severely. I suspect this paper may encounter a similar issue. The authors should provide experiments and visualizations on the learning quality of the behaviour policy to justify their approach.
3. **Model-Based Off-Policy Evaluation**: The data imbalance in both the sepsis and ventilation datasets is significant. It is questionable whether the learned model can generalize well. The most acceptable way to validate the method remains using simulated data where all policies can be tested online. One possible testbed is the DTR-Bench[4] medical simulated environment.
The paper has a few other minor technical flaws compared to the above three.
[1] Jeter, Russell, et al. "Does the" Artificial Intelligence Clinician" learn optimal treatment strategies for sepsis in intensive care?." arXiv preprint arXiv:1902.03271 (2019).
[2] Luo, Zhiyao, et al. "Position: Reinforcement Learning in Dynamic Treatment Regimes Needs Critical Reexamination." Forty-first International Conference on Machine Learning.
[3] Aniruddh Raghu, Matthieu Komorowski, Leo Anthony Celi, Peter Szolovits, and Marzyeh Ghassemi. Continuous state-space models for optimal sepsis treatment: a deep reinforcement learning approach. In Machine Learning for Healthcare Conference, pages 147–163. PMLR, 2017
[4] Luo, Zhiyao, et al. "DTR-Bench: An in silico Environment and Benchmark Platform for Reinforcement Learning Based Dynamic Treatment Regime." arXiv preprint arXiv:2405.18610 (2024).
Technical Quality: 2
Clarity: 3
Questions for Authors: Attention Mechanism: The paper highlights the importance of the causal attention mechanism in the Constraint Transformer. Since the two experimental datasets are both short-term time series datasets, is transformer really necessary? Is it possible that an RNN can be better than a transformer?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: There is no negative societal impact or limitation that needs clarification.
In addition to the weakness I mentioned, this paper:
1. lacks off-policy evaluation results.
2. may summarise more related work in this 'dynamic treatment regime' field. A few examples are listed below:
[1] Kondrup, F., Jiralerspong, T., Lau, E., de Lara, N., Shkrob, J., Tran, M. D., Precup, D., and Basu, S. Towards safe mechanical ventilation treatment using deep offline reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 15696–
15702, 2023.
[2] Liu, Y., Logan, B., Liu, N., Xu, Z., Tang, J., and Wang, Y. Deep reinforcement learning for dynamic treatment regimes on medical registry data. In 2017 IEEE international conference on healthcare informatics (ICHI), pp. 380–385. IEEE, 2017.
[3] Nambiar, M., Ghosh, S., Ong, P., Chan, Y. E., Bee, Y. M., and Krishnaswamy, P. Deep offline reinforcement learning for real-world treatment optimization applications. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 4673–4684, 2023.
[4] Peng, X., Ding, Y., Wihl, D., Gottesman, O., Komorowski,M., Li-wei, H. L., Ross, A., Faisal, A., and Doshi-Velez,F. Improving sepsis treatment strategies by combining deep and kernel-based reinforcement learning. In AMIA Annual Symposium Proceedings, volume 2018, pp. 887. American Medical Informatics Association, 2018.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful review of our work! Please allow us to address your concerns and answer the questions.
**Weakness 1:** Definition of Metric
**Response:**
**(1) Reward Function Design:** The reviewer may have misunderstood our reward function design. In our work, the reward function did not directly utilize mortality rates but rather included intermediate rewards (as shown in Appendices B.1 and B.2). For example, in the design for sepsis, intermediate rewards like the SOFA score were included. For different diseases, we designed different reward functions based on previous literature. Additionally, we agree with the reviewer's comment that "complex reward design can facilitate the learning of strategies," but the design of reward functions is relatively challenging and requires the involvement of medical experts. We also recognize that the current reward function design may not account for hidden variables (potentially fatal). Therefore, we use a relatively simple reward function, incorporating a cost function with historical dependency, to take into account the changes in indicators like MAP and NEWS that were not considered in the reward function. This will guide the agent in learning safe and effective strategies.
To verify whether our penalty function can capture changes in NEWS and MAP, we conducted supplementary experiments as shown in Figure 1. When the NEWS score is too high, the penalty value increases accordingly; similarly, when MAP is outside the normal range, the penalty value also increases. This indicates that the penalty function can compensate for the shortcomings of the reward function design.
**(2) Evaluation Metrics:** We acknowledge that the evaluation metric ω is not an ideal measure. Therefore, we also consider the relationship between the penalty value and medical metrics (section 5.1) and the probability of dangerous actions in the policy (section 5.2) as part of the evaluation criteria to compare the safety of different policies from multiple perspectives. In the experiments, all methods are based on the same reward function, so the design of the reward function is not a variable factor.
**Weakness 2:** Definition of Optimal Policy
**Response:**
**(1) Behavior Policy Fitting Error:** In our Offline RL, we did not fit the clinicians' policy using a neural network, so there is no fitting error in this part of the experiment. The reviewer has provided a valuable suggestion, highlighting that fitting the behavioral policy could indeed help eliminate some confounding factors. In our supplementary experiments, we utilized neural networks to fit the behavioral policy, corrected the model accordingly, and then used OPE evaluation to conduct the offline policy evaluation experiment.
**(2) Offline Policy Evaluation:** We supplemented our work by referring to [1] for the offline policy evaluation as follows: Using the same behavioral fitting function and the same NEWS2 reward, we compared the results of the policy under different evaluation metrics, as shown in Table 1. The CDT+CT method performed better than other methods on the RMSE_IV, WIS, WIS_b, and WIS_bt evaluation metrics.
**Weakness 3:** Model-Based Off-Policy Evaluation
**Response:** Since we aim to evaluate whether there are instances of excessively high or sudden changes in medication dosage, our model's action space consists of the actual dosage values. However, DTR-Bench currently cannot perform online evaluations of medication dosages, as it only provides a discrete action space (i.e., "yes" or "no" for administering medication). In this case, there is no issue of overestimation, so this online testing method cannot assess whether the policy effectively avoids dangerous behaviors. Exploring a simulation environment based on continuous action spaces is a promising direction for future research.
**Question:** Attention Mechanism: The paper highlights the importance of the causal attention mechanism in the Constraint Transformer. Since the two experimental datasets are both short-term time series datasets, is transformer really necessary? Is it possible that an RNN can be better than a transformer?
**Response:** Compared to RNNs, the Transformer architecture has the following advantages:
**(1) Computational Efficiency:** Due to the global computation allowed by the Transformer's self-attention mechanism, it can be highly parallelized during training[2,3], leading to faster training speeds. Transformers can significantly improve training efficiency. RNNs, on the other hand, require sequential processing of each time step, which makes parallelization difficult and results in slower training speeds.
**(2) Capturing Complex Information:** Transformers utilize multi-head attention mechanisms to simultaneously focus on different parts of the sequence, allowing them to better capture complex relationships in medical data. In medical datasets, events might be recorded with non-uniform time intervals. The Transformer's self-attention mechanism does not rely on fixed time steps, making it more flexible in handling such situations.
**(3) Interpretability:** The self-attention mechanism of Transformers makes it easier to understand which parts of the data sequence the model is focusing on, providing better interpretability, which is crucial in the medical field. RNNs, with their internal states and memory units, are more difficult to interpret, which may reduce the transparency of clinical decision-making.
References:
[1] Luo, Zhiyao, et al. "Position: Reinforcement Learning in Dynamic Treatment Regimes Needs Critical Reexamination." Forty-first International Conference on Machine Learning.
[2] Peng B, Alcaide E, Anthony Q, et al. Rwkv: Reinventing rnns for the transformer era[J]. arXiv preprint arXiv:2305.13048, 2023.
[3] Yi Tay, Mostafa Dehghani, Dara Bahri, and Donald Metzler. 2022. Efficient transformers: A survey. ACM Computing Surveys, 55(6):1–28.
---
Rebuttal 2:
Title: First reply to the rebuttal
Comment: Thanks to the authors for providing their views and justification.
Regarding weakness 1:
I agree with your explanation to reward design and I appreciate the added cost function for NEWS and MAP. However, I still have 2 concerns:
1. You did not compare naive baselines (random, zero-drug, etc.) in your added OPE result; according to [1], including naive baselines are critical. You may also add naive baselines to other experiments.
2. The OPE result is generally good. RMSE_VASO is significantly larger than offline RL algos while RMSE_IV is not. You might visualise the policy and check if the difference is caused by a reduced use of VASO, therefore preventing high dose.
3. No OPE result on ventilation dataset
## Regarding weakness 3:
Yes, I agree with the authors that 'dangerous' actions (according to your definition) are actual values that do not need a model for prediction. However, I think the authors have some confusion about the concept of 'dangerous actions' and 'dangerous states'.
Technically, 'dangerous actions' don't mean a high dose, a high dose doesn't always lead to dangerous states. It does not seem right to define a high dose to be dangerous, as some high doses might be necessary. Even if the agent proposed high doses while the doctors did not, it does not mean the agent is acting less dangerously or less optimally, as we do not have counterfactuals without simulation environments. Combined with the sad fact that all your evaluations are offline on imbalanced datasets, it is tough to convince me that your approach is 'safer'.
Let me give you a very simple example to show that low doses can be dangerous: the recommended dose of insulin for Type 1 diabetic patients is 0.5 units/day, roughly. Policy 1 gives 0.25 units at noon and night and 0 elsewhere; policy 2 gives 0.01 units every 1 min for 24 hours. Accumulatively, the policy 2 will kill the patient because the summed dosage is too high. However, we did not observe any single 'high' dosage in policy 2. This example shows that we cannot simply say high-dose action is dangerous. There are a lot of complicated PK/PD effects we might consider further beyond high doses. I recommended using online testing because the simulation environment will consider the PK/PD for you, such that a higher reward means a 'safer' performance. If you disagree with this please justify further.
"DTR-Bench currently cannot perform online evaluations of medication dosages, as it only provides a discrete action space"
No, DTR-Bench has four environments, three of which have continuous action spaces and the sepsis one has a discrete action space. The sepsis environment's states are human-readable (because they are designed in that way). By checking the state index, you can tell how many vital signs are abnormal. The authors should read more literature on the dynamic treatment regime field.
Also, I am confused by the authors' statement that 'there is no overestimation'. From my understanding, all value-based or actor-critic methods have overestimation issues. The authors may explain further.
In conclusion, the justification for weakness 3 is weak.
## Regarding the question:
The authors might answer my previous questions directly: "Since the two experimental datasets are both short-term time series datasets, is a transformer really necessary? Is it possible that an RNN can be better than a transformer?"
I asked this because many exciting architectures or models in other fields do not work in healthcare. There are numerous reasons, but one is that healthcare data is imbalanced and insufficient for large models to train. Some works in healthcare show that transformers do not improve performance in small-scale healthcare tasks; some even decrease due to over-parameterization and/or overfitting. I would invite the authors to eliminate my concern through ablation studies. For example, what if you replace the transformer with a shallow encoder-decoder LSTM?
Apart from the weaknesses I mentioned before, I have a few minor questions that were not presented. The authors might answer if time allows:
1. Is it possible that a human-defined cost function can surpass some baselines or your method? For example, with the help of LLMs, a non-healthcare expert can know the normal range of all clinical variables and define a heuristic cost function very easily. Since this knowledge is prior, it should intuitively outperform most baselines and possibly your method. If so, is the motivation of your paper solid?
2. Do you have any assumptions for the optimality of the behavioural policy? Will that affect the choice of your baselines?
The current rebuttal are insufficient to allow me change the score. I look forward to further discussion if anything mentioned above is not agreeable or wrong.
References:
[1] Luo, Zhiyao, et al. "Position: Reinforcement Learning in Dynamic Treatment Regimes Needs Critical Reexamination." Forty-first International Conference on Machine Learning.
---
Rebuttal Comment 2.1:
Comment: Thank you for the thoughtful comments on our work! Please allow us to address your concerns and answer the questions.
## Weekness 1
## Response:
1. Thank you for your suggestions. We will add naive baselines to our experiments.
2. This is indeed a very good approach to observe the differences in actions. In our study, the action space ranges from 0 to 24. Our experiments revealed that the agent rarely takes actions where both drug doses are at the maximum level (i.e., 24), while there is no significant reduction in other actions. However, the agent tends to choose higher VASO drug doses compared to doctors. We are considering whether the discrete action space has influenced this result, leading the model to perceive that danger only arises when both drug doses are high.
3. Sorry, we have not yet completed this part of the experiment, but we will continue to supplement this aspect in our subsequent research.
## Weakness 3
## Response:
Our previous statement may have been unclear. "Too high" refers to a drug dosage that is lethal for all patients. This was just an example of a dangerous action, and there could also be the type of dangerous action you mentioned, where the total dosage is too high. However, our model does not restrict the categories of "dangerous actions." In our paper, we tested "too high" and "sudden change." However, we have not tested whether the model can restrict other types of "dangerous actions." We plan to test this in other continuous action space environments within the DTR Bench in the future.
By "not overestimating," we mean that in the sepsis environment of the DTR Bench, the actions taken are "yes" or "no." This action space may make it difficult to determine whether a drug dosage is high or low. It is not an issue of overestimation based on value or actor-critic methods.
## Question
## Response:
Thank you for your response. I understand your concerns. We apologize that we have not yet completed the LSTM ablation experiments. To improve our work, we will conduct additional experiments in the future to enhance our study.
## Other questions
## Response:
1. Regarding the cost function defined by humans, we designed one based on expert opinions in the paper. Designing a cost function by humans can be challenging, particularly in determining hyperparameters. With the help of LLMs, we consider that there may be some latent state indicators that are currently unknown in the medical field. Therefore, this prior knowledge may be difficult to provide to large models.
2. The optimality of behavioral strategies is indeed hard to define. Since a zero-mortality action strategy does not exist—because some patients will die regardless of treatment—we assume that a strategy with a lower mortality rate is more optimal. For the baseline selection, we chose the expert strategy (i.e., the physician's strategy data with a patient mortality rate of 0) and the zero drug dosage strategy.
Thank you for your response. We will continue to supplement our work with the following experiments: other environment OPE tests, DTR Bench online testing, and LSTM ablation experiments, to further strengthen our study. | Summary: The authors consider healthcare applications of RL algorithms in which implicit constraint modeling is critical for safe recommendations.
This is modeled as an RL policy optimization with constraints. However, the constraints are often unknown and need to be inferred from expert data trajectories in the healthcare applications. The authors propose a neural network estimator to combine with the constrained MDP formulation. They also identify that the naive way to represent states leads to non-Markov structure. To address these issues, the paper proposes a simple modification (based on prior work on the "preference attention layer") to a causal transformer based policy which attempts to model a parametric constraint function. Evaluations are based on healthcare benchmarks.
Strengths: - The paper considers a practically motivated approach grounded in real data for a healthcare application where data could be challenging to obtain.
- Proposes simple addition to the final layer of a causal transformer to parametrize constraints on trajectories learned from expert data.
- Evaluations are conducted on real healthcare domain benchmarks and extensive ablations are included to ensure that the architecture change is meaningful in obtaining the improvements.
Weaknesses: - It seems like the system (which is the patient) is not feasible to model as evolving according to an Markov process on the observed state at each time, but instead a POMDP with a high dimensional latent state.
- Clarity and presentation can be improved. In Equation (4), what is $\hat{\tau}$, this was never defined for being such a key component of the procedure. Similar issues in Eq (8) related to clarity.
Technical Quality: 2
Clarity: 2
Questions for Authors: - In explaining the main Equation (4), the blurb in lines 162-164 needs some attention. Specifically, what does "MDP obtained after augmenting M with cost function $C_\theta$, using the executing policy ..." mean exactly?
- In Eq (8) what does $s_t, r_{t-1} \in x_t \sim D_e$ represent? It looks like $x_t$ is a representation generated by the transformer, and $s_t, r_t$ are from the data. Is there a problem in the notation?
- Is the transformer representations being trained from the gradients of both the policy model as well as the world model? If so, how do they interact? If not, consider adding stop gradients at the relevant places in Eq (7) and/or (8) to make this explicit.
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful review of our work! Please allow us to address your concerns and answer the questions.
**Weakness 1:** It seems like the system (which is the patient) is not feasible to model as evolving according to an Markov process on the observed state at each time, but instead a POMDP with a high dimensional latent state.
**Response:** Thank you for your suggestion. We agree that the complexity of the patient system should be described as a POMDP, as the MDP model indeed oversimplifies the system. We constructed a non-Markovian model by incorporating historical state sequences into the RL decision-making process, with the goal of capturing potential states from history, which aligns with your suggestion. To ensure a more rigorous presentation, we have reformulated it as a CPOMDP framework in Section 3 Problem Formulation, defined as $\left( \mathcal{S},\mathcal{A},\mathcal{O},\mathcal{P},\mathcal{Z},\mathcal{R},\mathcal{C},\gamma ,\kappa,\rho_0 \right)$, where $\mathcal{O}$ is the set of observations $o$, $\mathcal{P}\left( s'|s,a \right) =Pr\left( s_{t+1}=s'|s_t=s,a_t=a \right) $ the transition probability, and $\mathcal{Z}\left( o|s',a \right) =Pr\left( o_{t+1}=o|s_{t+1}=s',a_t=a \right)$ is the observation probability. The other parameters have been explained in the paper.
**Weakness 2:** Clarity and presentation can be improved. In Equation (4), what is $\hat{\tau}$, this was never defined for being such a key component of the procedure. Similar issues in Eq (8) related to clarity.
**Response:** We apologize for any inconvenience caused in your reading. The explanations for the two points mentioned above are as follows:
1. In Equation (4), $\hat{\tau}$ is the trajectory which is sampled from the executing policy (training policy) $\mathcal{M}^{\hat{\zeta}_{\theta}}$. In an online environment, we can use the execution policy to interact with the environment and generate some trajectories $\hat{\tau}$.
2. In Equation (8), $x_{t}=\{h_{t} \cup a_{t}\}= \( R_{-K: t}, s_{-K: t}, a_{-K: t} \)$ is the input which includes the reward $R$, states $s$ and action $a$ from the preceding $K$ timesteps, where $K$ is the context length. The input is encoded by linear layers and passes through the casual transformer to predict hidden tokens $g_t$. Next, we use the hidden tokens as input and employ two linear layers ($\ell_{\mu} \text{ and } \ell_{\varphi}$) to predict the current reward $r_{t-1}$ and the next state $s_t$, with the objective of minimizing the mean squared error for each linear layer, defined as: $
\min_{\varphi,\mu} \mathbb{E} [{(s_t-\ell_{\varphi}(g_t))^2} + {(r_{t-1}-\ell_{\mu}(g_t))^2}] $
In addition, we will improve the overall clarity and presentation of the paper to avoid such issues.
**Question 1:** In explaining the main Equation (4), the blurb in lines 162-164 needs some attention. Specifically, what does "MDP obtained after augmenting M with cost function $C_{\theta}$, using the executing policy ..." mean exactly?
**Response:** The ICRL approach involves two policies: one is the expert policy $\pi_e$, and the other is the policy being trained (execution policy $\pi_{ \mathcal{M}^{\zeta_{\theta}}})$. This sentence defines the Markov Decision Process (MDP) model for the latter. $\mathcal{M}^{\zeta_{\theta}}$ represents the MDP that results from adding the cost function $C_{\theta}$ to the original MDP M. The executing policy for this augmented MDP is denoted as $\pi_{\mathcal{M}^{\zeta_{\theta}}}$.
**Question 2:** In Eq (8) what does $s_t,r_{t-1} \in x_t \sim \mathcal{D}_e$ represent? It looks like $x_t$ is a representation generated by the transformer, and $s_t,r_t$ are from the data. Is there a problem in the notation?
**Response:** Sorry, there is indeed an issue here. We have redefined the hidden tokens generated by the transformer as $g_t$.
**Question 3:** Is the transformer representations being trained from the gradients of both the policy model as well as the world model? If so, how do they interact? If not, consider adding stop gradients at the relevant places in Eq (7) and/or (8) to make this explicit.
**Response:** Yes, in the model-based offline RL framework, the transformer is trained together with the policy model and the world model. The objectives of the policy model and the world model are to generate actions, rewards, and the next state, respectively. The transformer structure is used to extract historical information for the policy and world models. When training model-based offline RL, the goal is to simultaneously minimize Equations (7) and (8). During this process, the transformer is also trained alongside the objectives until convergence.
---
Rebuttal Comment 1.1:
Title: Acknowledgement of rebuttal
Comment: Thank you for the clarifications. I believe my original review scores are fair and stick to them. | Rebuttal 1:
Rebuttal: We thank the reviewers for their time, suggestions and questions that we believe will improve the quality of the paper. Below we summarize our overall response to the reviewer’s questions and comments.
- We will add a discussion on the relationship between the key metrics, NEWS and MAP, and the cost function, as shown in Figure 1 of the attachment. The cost function can indeed capture dangerous states that the reward function overlooks, addressing the issue of latent variables that cannot be incorporated into the reward function.
- We will include an off-policy evaluation as suggested by reviewer guwJ. Referring to [1], we will evaluate our method and others using multiple evaluation metrics under the same reward function and behavior policy, as shown in Table 1 of the attachment. Our policy performs well in $RMSE_{IV}$、$WIS_b$、$WIS_t$、$WIS_{bt}$. For the specific meanings of these metrics, please refer to [1].
- We will add a sensitivity analysis of the generative world model to CDT+CT, as suggested by reviewer sHtP. As the target reward increases, the generated world model exhibits more aggressive behavior, which can improve the performance of the estimated policy, but there is an upper limit to this effect.
- We will clarify the meaning and calculation process of the evaluation metric
$\omega$, as suggested by reviewers sHtP and cbZJ. For details, please refer to our responses to reviewer sHtP (Response2-1) and cbZJ (Response5-3).
References:
[1] Luo, Zhiyao, et al. "Position: Reinforcement Learning in Dynamic Treatment Regimes Needs Critical Reexamination." Forty-first International Conference on Machine Learning.
Pdf: /pdf/c7246c628905bf733dd542dc5499446a53e87a27.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
EnOF-SNN: Training Accurate Spiking Neural Networks via Enhancing the Output Feature | Accept (poster) | Summary: The paper presents the method to improve the representation power of SNNs. To this end, the paper proposed two new techniques to the training of spiking neural networks: 1) To guide learning of the last feature layer by an alignment loss with a pre-trained non-spiking network and 2) to replace the last spiking LIF layer with a continuously valued ReLU layer.
Strengths: 1)Those two techniques are new and interesting and lead to new SOTA on all the datasets they try.
2)The paper is well-written.
3)The paper provided ablation analysis to evaluate the contributions of both proposed techniques.
Weaknesses: 1) How can SNN representation be converted to float activation in eq 1? Is it averaged?
2) Improvement with technique 2 is expected, and it makes the comparison with pure SNNs unfair.
3) The method needs to train an ANN in addition to the SNN. So the total training time is longer.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see weaknesses.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please see weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your efforts in reviewing our paper and your recognition of our good writing, technical soundness experimental results, and novel methods. The response to your questions is given piece by piece as follows.
**W1**: How can SNN representation be converted to float activation in eq 1? Is it averaged?
**A1**: Sorry for the confusion. Yes, it is averaged through timesteps. We will further clarify this as per your suggestion in the revised version. Thanks again.
---
**W2**: Improvement with technique 2 is expected, and it makes the comparison with pure SNNs unfair.
**A2**: Sorry for the confusion. Though the technique 2 will increase the energy consumption, it is very simple and effective. In the appendix, we show that the method just brings about a 1.0% energy consumption uplift. Thus the additional energy consumption could be ignored. In this sense, the comparison could be seen as fair.
---
**W3**: The method needs to train an ANN in addition to the SNN. So the total training time is longer.
**A3**: Sorry for the confusion. The additional training cost is trivial. We take the ResNet20 with 4 timesteps on CIFAR10 to compare. **The training time of ANN is about 6.25% of that of SNN on the same GPU**, as follows: since the memory consumption of SNN is 4 times that of ANN, ANN can use a batch size that is 4 times that of SNN, which means that the number of iterations required for ANN training is 25% of that of SNN. In addition, since timestep affects SNN, the training time of ANN in the same iteration is 25% of that of SNN. Overall, the training time of ANN is 6.25% of that of SNN. We report the training cost for 400 epochs running using spiking ResNet20 (batch=128) on CIFAR10 for our methods and the training cost for 100 epochs running using ResNet20 (batch=512) with the same single V100 here. **The costs are 8348.87s and 561.77s. It can be seen that the additional training cost is trivial.**
---
Rebuttal Comment 1.1:
Comment: I have reviewed the rebuttal. The revised version should clearly detail the average operation and clarify the comparisons to enhance readability.
Overall, most concerns have been addressed. I would like to adjust the score to a weak accept.
---
Reply to Comment 1.1.1:
Title: thanks
Comment: Thanks very much for your reply and recognition. We are happy to see that your concerns have been addressed. | Summary: The authors propose a novel distillation approach (EnOF) to address the mismatch in output precision between SNN and ANN, which leads to poor distillation performance. EnOF feeds the output feature of SNN into the ANN classification head to obtain its effective mapping output P_s, and then uses distillation learning to make the P_s close to the ANN output P_a. Additionally, the paper analyzes the issue of inadequate SNN representation due to the final LIF activation layer. The authors suggest replacing it with the ReLU activation function. Through the methods presented in this paper, SNN demonstrates significant performance improvements across different datasets.
Strengths: The method proposed in this paper is simple, effective, and easy to implement. This paper presents a novel knowledge distillation learning concept for two different types of networks whose output representation abilities are different. I believe this article has a certain promoting effect on the SNN training.
Weaknesses: To illustrate the advances made by EnOF and RepAct, the authors can provide the network Hessian matrix eigenvalues or convergence speed comparisons.
An intuitive drawback of RepAct is its requirement for high-precision floating-point matrix multiplication operations in the classification head, which undoubtedly increases the computation overhead. I suggest that the authors move the overhead comparison section from the appendix to the main text.
The main text lacks the definition of the method name "EnOF.”.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Does distillation learning have a temperature coefficient?
2. Is the optimal $\lambda$ obtained through search, and does different $\lambda$ have a significant impact on the final results?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your efforts in reviewing our paper and your recognition of our simple but effective method. The response to your questions is given piece by piece as follows.
---
**W1**: To illustrate the advances made by EnOF and RepAct, the authors can provide the network Hessian matrix eigenvalues or convergence speed comparisons.
**A1**: Thanks for the advice. We will add the convergence speed comparisons as per your suggestion in the revised version. Thanks again.
---
**W2**: I suggest that the authors move the overhead comparison section from the appendix to the main text.
**A2**: Thanks for the advice. We will add the overhead comparison section to the main text as your suggestion in the revised version. Thanks again.
---
**W3**: The main text lacks the definition of the method name "EnOF.”.
**A3**: Sorry for the confusion. “EnOF” comes from the first letter of “**Enhancing the Output Feature**” from the title. We will further clarify it in the revised version. Thanks again.
---
**Q1**: Does distillation learning have a temperature coefficient?
**A1**: Sorry for the confusion. The distillation learning has a temperature coefficient. We set it as 0.1 for CIFAR dataset and 1 for ImageNet dataset.
---
**Q2**: Is the optimal $\lambda$ obtained through search, and does different $\lambda$ have a significant impact on the final results?
**A2**: Thanks for the question. The hyperparameter we used in the paper is 0.1 for CIFAR dataset and 1 for ImageNet dataset. We get it just from a simple search strategy that chooses several hyperparameters and compares their results. Here we also report the results of 0.5 and 1.0 for CIFAR10. It can be seen that the hyperparameter will affect the results. Nevertheless, Using the KD is better than the Baseline.
| Dataset | Method | Timestep | hyperparameter=0.1 | hyperparameter=0.5 | hyperparameter=1.0 |
| --- | --- | --- | --- | --- | --- |
| CIFAR10 | Baseline | 1 | 90.40% | 90.40% | 90.40% |
| | Ours | 1 | 92.28% | 91.96% | 91.78% |
| | Baseline | 2 | 92.80% | 92.80% | 92.80% |
| | Ours | 2 | 93.53% | 93.13% | 92.99% |
---
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. The authors have made efforts to enhance the readability of the manuscript, and I believe that this paper will contribute positively to the development of the SNN field. I'd like to raise my score to 7.
---
Reply to Comment 1.1.1:
Title: thanks
Comment: Thanks very much for your reply and recognition. We are happy to see that your concerns have been addressed. | Summary: This paper introduces methods to improve the output feature representation of Spiking Neural Networks (SNNs) by utilizing knowledge distillation (KD) techniques and modifying activation functions. The approach involves aligning the SNN's output features with those of a pre-trained Artificial Neural Network (ANN) using KL-divergence loss and replacing the last Leaky Integrate-and-Fire (LIF) activation with a ReLU activation to enhance feature precision. These modifications are shown to increase the expressiveness and performance of SNNs on both static and neuromorphic datasets.
Strengths: 1. __Technical Soundness__: The experimental results support the effectiveness of the proposed methods, albeit not conclusively when considering potentially more efficient alternatives.
2. __Clarity__: The paper is well-written with a clear structure. The methods are described in detail, making it easy for readers to understand the proposed enhancements.
3. __Energy Estimation__: The discussion on energy estimation provides valuable insights into the practical implications of the proposed methods, highlighting their potential for energy-efficient computing.
Weaknesses: 1. __Novelty__: The methods, while integrating known techniques in a new context, do not present strong novelty. While the combination of techniques is original, the individual components (knowledge distillation and activation function manipulation) are well-trodden areas in SNNs. The paper could benefit from a deeper exploration of how these specific adaptations contribute uniquely to the SNNs' context.
2. __Citation Format__: The paper does not adhere to NeurIPS citation format requirements.
3. __Comparative Analysis__: The paper lacks a comprehensive comparison with other SNN training methods that use similar KD approaches, which is crucial for substantiating the claimed improvements. A broader comparison with similar approaches in the field of SNNs+KD (see [1, 2]) could better highlight this.
4. __Significance__: The improvements, while statistically valuable, may not represent a paradigm shift in SNN training that the field might expect for a high-impact publication.
Technical Quality: 3
Clarity: 3
Questions for Authors: + Could the authors extend their comparisons to include seminal works in SNN+KD such as those by Guo et al. [1] and Deng et al. [2]? This would help in positioning their method more clearly within the current research landscape.
+ The ablation study in Table 6 is critical, and the details of the hyperparameters used in the vanilla KD need to be listed. Given that the paper mentions that "most knowledge distillation methods perform poorly without well-designed hyper-parameters," the design of KD should be further discussed. For example, the inclusion of a temperature coefficient commonly used in logits-based KD [3] should be considered (in Eq. 2, etc) to potentially improve the alignment strategy.
Reference:
[1] Guo, Yufei, et al. "Joint a-snn: Joint training of artificial and spiking neural networks via self-distillation and weight factorization." Pattern Recognition 142 (2023): 109639.
[2] Deng, Shikuang, et al. "Surrogate module learning: Reduce the gradient error accumulation in training spiking neural networks." International Conference on Machine Learning. PMLR, 2023.
[3] Hinton, Geoffrey, Oriol Vinyals, and Jeff Dean. "Distilling the knowledge in a neural network." arXiv preprint arXiv:1503.02531 (2015).
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See Weaknesses and Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your efforts in reviewing our paper and your recognition of our good writing and technical soundness experimental results. The response to your questions is given piece by piece as follows.
**W1**: The paper could benefit from a deeper exploration of how these specific adaptations contribute uniquely to the SNNs' context.
**A1**: Thanks for your advice. The starting point of our research is that a rich output feature representation is important for improving the accuracy of SNN first. Based on this perspective, we propose the L_{AF} Loss and RepAct.
For the L_{AF} Loss, it is not merely a distillation method. The primary motivation behind its proposal is that many ANN-SNN works do not alter the weights of neural networks yet achieve high accuracy. This insight has inspired us to incorporate certain ANN modules into SNN. However, direct migration and training are incompatible. Hence, we introduce the concept of utilizing the classifier of ANN to optimize the output feature of SNN. We have chosen to formalize our idea through distillation and, in the supplementary materials, we provide evidence of its significant advantages over direct distillation. Thus, from a systemic standpoint, the method we propose is a distillation approach specifically tailored for SNN, which fundamentally differs from simple distillation despite their superficial similarities.
For the RepAct, this method is very simple but effective. Maybe it is not very novel, but will provide a novel insight for the SNN field that enhancing the output feature is very useful for accuracy.
We will further clarify this in the revised version as your suggestion. Thanks.
---
**W2**: The paper does not adhere to NeurIPS citation format requirements.
**A2**: Thanks for the reminder. We will further recheck and polish our paper in the final version.
---
**W3**: The paper lacks a comprehensive comparison with other SNN training methods that use similar KD approaches.
**A3**: Thanks for your advice. Here we add some comparison results for the same models with [1] and [2] based on their reported results. It can be seen that our method works well on ImageNet with ResNet-18 and on CIFAR10 with ResNet-19, but a little worse on CIFAR10 with ResNet-18. We will report more comparisons in the revised version. Thank you.
| Dataset | Model | Timestep | Accuracy |
| --- | --- | --- | --- |
| CIFAR10 | Our ResNet-19 | 2 | 96.19% |
| | Our ResNet-18 | 2 | 93.86% |
| | Surrogate Module ResNet-19 [2] | 4 | 95.58% |
| | Surrogate Module ResNet-18 [2] | 2 | 94.58% |
| | Joint ResNet-18 [1] | 2 | 94.01% |
| ImageNet | Our ResNet-18 | 4 | 65.31% |
| | Surrogate Module ResNet-18 [2] | 4 | 64.53% |
---
**W4**: The improvements, while statistically valuable, may not represent a paradigm shift in SNN training that the field might expect for a high-impact publication.
**A4**: Sorry for the confusion. We think our most valuable contribution is providing insight for the SNN field that enhancing the output feature is much useful for accuracy. Our $L_{AF}$ and RepAct both are means, while our intention is to improve the output feature and prove that it is beneficial for boosting accuracy. With verified experiments and under the perspective, we believe this work is much beneficial for the following SNN work. And many other methods to improve the output feature would be proposed, like special temporal contrastive loss or simply increasing channels for SNNs. Yet, this is not to say our methods are not novel. For RepAct, it is very simple, but effective. With a trivial additional cost, it can improve the SNN greatly. From a practical perspective, this is a really good choice for the industry. For $L_{AF}$ it is a simple and effective KD method designed for SNN, different from these straightforward KDs. In specific, we introduce its novelty in two-fold. First, providing a simple but effective KD for SNNs is difficult. Most of KDs perform poorly without well-designed hyper-parameters, which have been comprehensively and systematically studied in [1] (see Tab.1 and Tab.2 in [1]. This paper gets more than 700 cites and 1.8k stars). Most KD methods only work well on small datasets, especially in the SNN field. Second, our $L_{AF}$ comes from both the KDs and ANN-SNN methods. Some ANN-SNN methods that directly copy-paste the weight parameters from trained ANN to homogeneous SNN with minor changes can obtain a well-performed SNN. This inspires us that utilizing homogeneous ANN's some elements to guide the learning of SNN may be useful. Along with considering the classification and representation, we utilize ANN’s pre-trained classifier to train the SNN’s feature and it works well. Thus, we think $L_{AF}$ is new, simple, designed specifically for SNNs.
[1] Tian, Y., Krishnan, D., and Isola, P. Contrastive representation distillation. In *International Conference on Learning Representations*, 2020.
---
**Q1**: Could the authors extend their comparisons to include seminal works in SNN+KD such as those by Guo et al. [1] and Deng et al. [2]?
**A1**: Thanks for your advice. Please see our response for the **W3**.
---
**Q2**: The ablation study in Table 6 is critical, and the details of the hyperparameters used in the vanilla KD need to be listed.
**A2**: Thanks for your advice. The hyperparameter we used in ablation is 0.1. Here we also report the results of 0.5 and 1.0. It can be seen that the hyperparameter will affect the results. However, the effect is smaller on ours.
| Dataset | Method | Timestep | hyperparameter=0.1 | hyperparameter=0.5 | hyperparameter=1.0 |
| --- | --- | --- | --- | --- | --- |
| CIFAR10 | Baseline | 1 | 90.40% | 90.40% | 90.40% |
| | Direct distillation | 1 | 91.59% | 91.03% | 90.63% |
| | Ours | 1 | 92.28% | 91.96% | 91.78% |
| | Baseline | 2 | 92.80% | 92.80% | 92.80% |
| | Direct distillation | 2 | 92.59% | 92.01% | 91.77% |
| | Ours | 2 | 93.53% | 93.13% | 92.99% |
--- | Summary: This paper explores whether the benefits of rich output feature representations, known to enhance the accuracy of ANN models for classification, also apply to SNNs. If so, the authors seek to improve the feature representation of SNNs.
The authors address this problem in two steps: first, they align the SNN output features with the ANN output features using KL-divergence loss, similar to knowledge distillation. Second, they replace the last Leaky Integrate-and-Fire (LIF) activation layer in the SNN with a ReLU activation layer.
For feature alignment, the same input is fed to both the SNN and the ANN, and their outputs are compared and aligned using KL-divergence loss. The idea is to use the trained ANN's rich output features to guide the learning of the SNN's output feature representation.
The proposed method involves training the ANN counterpart (with ReLU activation), then replacing the LIF with the ReLU activation function in the last layer of the SNN. Finally, the modified SNN is trained with the help of the pre-trained ANN counterpart by incorporating the KL loss into the total loss of the SNN.
Strengths: 1. This paper uses the well-trained ANN to guide and improve the learning of the output feature representation of the SNN. The KL-divergence loss (L_{AF} loss) is used to measure the discrepancy of the ANN's and SNN's feature representations when feeding in the same image into the two networks.
2. By replacing the last LIF activation neuron layer in the SNN with ReLU activation layer, the SNN shows a large improvement.
Weaknesses: 1. The proposed method requires training the ANN counterpart first and setting it aside as a pre-trained model. Subsequently, the SNN needs to be trained. This approach essentially involves training the same framework twice, once with ANN activations and once with SNN activations.
2. The method includes replacing the last LIF activation neuron layer in the SNN with a ReLU activation layer, which results in significant improvement. However, it is unclear whether the final improvement is primarily due to this replacement of last-layer's activation function or the feature alignment process.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In this paper, the authors try to use rich information of the weight parameters from a pretrained ANN counterpart to guide the feature representation learning of the SNN.
Does this method require training the ANN counterpart and using it to assist in the subsequent training of the SNN? If there is true, it seems the paper trains both the ANN counterpart and the SNN, the training resources will be doubled, how to solve this problem?
2. The authors stated that they "replace the last Leaky Integrate-and-Fire (LIF) activation layer as the ReLU activation layer in the SNN". Does the improvement mainly come from this replacement of LIF with ReLU? From results of the Ablation experiments in table 1, it shows that using "w/ RepAct" (92.50) demenstrates a larger improvement than using "w/ L_{AF}" (92.28) compared to the "baseline" (90.40) for timestep=1. For other timestep (2 and 4), the results show the same trends. Moreover, the final method using "w/ L_{AF} & RepAct" achieved accuracy of 92.66, which is only a very small increase compared to the "w/ RepAct" with 92.50.
3. In order to check the improvement for the applications, can the authors provide the performance of "w/ RepAct" and "w/ L_{AF}", "w/ L_{AF} & RepAct", the baseline, and the ANN-counterpart for CIFAR10, CIFAR100, ImageNet, CIFAR10-DVS datasets?
4. Grammer errors, for example Line 42-44, "Despite the SNN being more energy-efficient compared with the ANN, it suffers unsatisfactory task performance, due to binary spike feature maps of the SNN will result in limited expressiveness and large accuracy degradation compared with full-precision feature maps of the ANN",
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: As the above Weaknesses and Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your efforts in reviewing our paper and your recognition of our effective method and good results. The response to your questions is given piece by piece as follows.
---
**W1**: The proposed method requires training the ANN counterpart first and setting it aside as a pre-trained model. Subsequently, the SNN needs to be trained. This approach essentially involves training the same framework twice, once with ANN activations and once with SNN activations.
**A1**: Sorry for this confusion. The additional training cost is trivial. We take the ResNet20 with 4 timesteps on CIFAR10 to compare. **The training time of ANN is about 6.25% of that of SNN on the same GPU**, as follows: since the memory consumption of SNN is 4 times that of ANN, ANN can use a batch size that is 4 times that of SNN, which means that the number of iterations required for ANN training is 25% of that of SNN. In addition, since timestep affects SNN, the training time of ANN in the same iteration is 25% of that of SNN. Overall, the training time of ANN is 6.25% of that of SNN. We report the training cost for 400 epochs running using spiking ResNet20 (batch=128) on CIFAR10 for our methods and the training cost for 100 epochs running using ResNet20 (batch=512) with the same single V100 here. **The costs are 8348.87s and 561.77s. It can be seen that the additional training cost is trivial.**
---
**W2**: The method includes replacing the last LIF activation neuron layer in the SNN with a ReLU activation layer, which results in significant improvement. However, it is unclear whether the final improvement is primarily due to this replacement of last-layer's activation function or the feature alignment process.
**A2**: Sorry for the confusion. From the ablation study, it can be seen that both the RepAct method and the alignment method can gain huge improvements alone. However, the accuracy upper of a network is limited, and the difficulty will increase quickly with the accuracy reaching the upper. Thus using them both can not get twice the profit. The same phenomenon can also be seen in many works, e.g., IM-Loss (NeurIPS2022), using IM-Loss and ESG simultaneously would not bring much gain compared with a single.
Guo Y, Chen Y, Zhang L, et al. IM-loss: information maximization loss for spiking neural networks[J]. Advances in Neural Information Processing Systems, 2022, 35: 156-166.
---
**Q1**: it seems the paper trains both the ANN counterpart and the SNN, the training resources will be doubled, how to solve this problem?
**A1**: Sorry for this confusion. Please see our response to **W1**.
---
**Q2**: Does the improvement mainly come from this replacement of LIF with ReLU?
**A2**: Sorry for this confusion. Please see our response to **W2**.
---
**Q3**: In order to check the improvement for the applications, can the authors provide the performance of "w/ RepAct" and "w/ L_{AF}", "w/ L_{AF} & RepAct", the baseline, and the ANN-counterpart for CIFAR10, CIFAR100, ImageNet, CIFAR10-DVS datasets?
**A3**: Thanks for the advice. Here we add more experiments as your suggestion. Note that we use the ResNet20 architecture on CIFAR10 (with 4 timestep), CIFAR100 (with 4 timestep), and CIFAR10-DVS (with 10 timestep) and ResNet18 architecture on ImageNet (with 4 timestep).
| Model\Dataset | CIFAR10 | CIFAR100 | ImageNet | CIFAR10-DVS |
| --- | --- | --- | --- | --- |
| ANN | 95.91% | 75.14% | 71.04% | 81.00% |
| Vanilla SNN | 93.85% | 71.77% | 61.07% | 78.70% |
| Ours w/ RepAct | 94.66% | 72.63% | 63.78% | 79.63% |
| Ours w/ L_{AF} | 94.39% | 72.98% | 64.86% | 79.34% |
| Ours with both | 94.74% | 73.01% | 65.31% | 80.50% |
---
**Q4**: Grammer errors, for example Line 42-44, "Despite the SNN being more energy-efficient compared with the ANN, it suffers unsatisfactory task performance, due to binary spike feature maps of the SNN will result in limited expressiveness and large accuracy degradation compared with full-precision feature maps of the ANN",
**A4**: Thanks for the reminder. We will further recheck and polish our paper in the final version.
---
Rebuttal Comment 1.1:
Title: response
Comment: Thank you for your feedback.
The authors have worked to improve the manuscript's readability.
I am inclined to increase my score to 7.
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: Thanks very much for your reply and recognition. We are happy to see that your concerns have been addressed. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Unveiling Encoder-Free Vision-Language Models | Accept (spotlight) | Summary: This paper introduces EVE, a novel paradigm for Vision-Language Models (VLMs) designed to eliminate the need for a preceding visual encoder in the LLM decoder. This approach aims to create a more flexible and lightweight vision-language framework. EVE incorporates a Patch Embedding Layer and a Patch Aligning Layer to align image tokens with the language model. Additionally, it proposes a new training recipe for robust encoder-free model learning. The result is an efficient VLM that achieves performance (somewhat) comparable to existing encoder-based VLMs across several benchmarks.
Strengths: [**Writing**] The paper exhibits exceptional clarity and coherence in presenting its concepts. The accompanying figures serve as effective visual aids, significantly enhancing the comprehensibility of the discussed ideas.
[**Motivation**] The rationale behind removing off-the-shelf encoders in current VLMs is compelling and innovative. While most existing works adhere to the well-established encoder-based VLM paradigm, this exploration into a new direction is commendable. Such investigations are invaluable to the research community.
[**Methodology Transparency**] The authors provide comprehensive details on dataset curation and implementation specifics.
[**Ablation Studies**] The inclusion of detailed ablation experiments is a significant strength of this paper. These studies collectively offer a clear demonstration of the proposed model's capabilities and distinctive features, providing crucial insights into the model's performance and design choices.
Weaknesses: [**Encoder-Free?**] The model's positioning as "encoder-free" is somewhat misleading, given the presence of a Patch Embedding Layer (PEL) that functions very similarly to an image encoder. Despite its light-weight and compact nature, the PEL's architecture closely resembles a ViT encoder. Furthermore, the proposed Patch Aligning Layer (PAL) encourages CLIP-like token generation, further blurring the line between this approach and traditional encoder-based models.
[**Almost Lossless?**] The manuscript repeatedly claims that the PEL converts images into embeddings "almost losslessly." However, the use of average pooling for feature map downsampling is inherently a lossy operation, undermining this claim.
[**Performance Gap**] The model's performance, while promising, still lags behind encoder-based VLMs. It's notable that models like LVIS-4V, ShareGPT4V, and LLaVA-1.6 achieve superior results despite utilizing substantially less training data than EVE.
[**Complexity Analysis**] Table 6 demonstrates significant reductions in FLOPs and inference time for the visual encoding procedure. However, this improvement should be contextualized within the overall model architecture, where the LLM block dominates computational demands. This raises questions about the relative impact of optimizing the visual encoder in terms of overall efficiency gains.
[**Flexibility Benefits**] The manuscript emphasizes the advantages of supporting flexible Image Resolution (IR) and Aspect Ratio (AR) as key benefits over encoder-based VLMs. However, the practical significance and impact of this flexibility are not comprehensively demonstrated or evaluated or quantified in the current study.
[**Structural Issues**]
* Figure 1 is never referenced in the main text.
* Figure 5 is mentioned after Figure 6 in the text, disrupting the logical flow.
Technical Quality: 4
Clarity: 4
Questions for Authors: I love the concept and the exploration of encoder-free VLMs. However, as mentioned in the weaknesses, several points significantly undermine the significance of this work and the key claims of it. Here are several important questions.
1. Given that the Patch Embedding Layer (PEL) functions similarly to an image encoder, how do you justify classifying EVE as an "encoder-free" model?
2. The manuscript claims that the PEL converts images into embeddings "almost losslessly." How do you elaborate this claim with the use of average pooling, which is inherently a lossy operation?
3. Despite using more training data, EVE's performance is still lower than some encoder-based VLMs like LVIS-4V, ShareGPT4V, and LLaVA-1.6. What factors do you believe contribute to this performance gap, and what strategies are you considering to address it?
4. While Table 6 shows significant reductions in FLOPs and inference time for visual encoding, the LLM block remains the computational bottleneck. How significant do you consider the efficiency gains from the encoder-free approach in the context of the overall model performance?
5. The paper emphasizes the advantages of flexible Image Resolution (IR) and Aspect Ratio (AR) support. Could you provide specific examples or use cases where this flexibility offers significant practical benefits over traditional encoder-based models?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The discussion on limitations and societal impact about the paper is relatively thorough. The points raised in the weaknesses section further articulate limitations that should be considered.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your meticulous and insightful review. We have carefully considered your questions and polished our paper.
`Q1: [Encoder-Free?] How do you justify classifying EVE as an "encoder-free" model?`
**(1) A more essential difference between PEL and image encoders is whether they involve strong inductive bias and semantic priors in abstracting visual representation by pre-trained models via pretext tasks.** We introduce CA1 and CA2 mainly due to efficiency consideration and holistic information for raster-order token inputs respectively. Note that Table 4 shows that CA1 and CA2 are optional and have a less significant impact on performance.
**(2) As discussed in ALL-Q1, we found that while PAL aids in early convergence and training efficiency, it becomes less crucial as the data scale significantly increases.**
Besides, we can completely remove it during inference, allowing EVE to function as a pure encoder-free architecture like Fuyu-8B. Hence, we attribute our work to the ‘encoder-free’ track.
`Q2: [Almost Lossless?] The use of average pooling, which is inherently a lossy operation?`
Good question. CA1 may not deviate from the "almost" lossless concept due to its dense feature aggregation strategy in a restricted receptive field, only using pooling features as the query. We adopt this design, mainly considering the trade-off between large token numbers and training efficiency, especially with limited resources.
`Q3: [Performance Gap?] [Complexity Analysis?] [Flexibility Benefits?]`
**(1) We highlight that EVE shows potentially more promising scaling with pretraining duration, which is the key motivation behind our efforts to build encoder-free VLMs.**
EVE attempts to remove strong vision inductive bias and transmit visual signals almost losslessly for better scaling properties.
In Figure 2 of the rebuttal PDF, we observe that encoder-based models often suffer from collapse. Only the (VE)-(ALL) training strategy avoids this issue by freezing LLM weights during pre-training and unfreezing them during the SFT stage. In contrast, EVE shows better scaling properties and gradually approaches the performance of well-developed encoder-based VLMs with only 33M data scale.
**(2) Encoder-free VLMs are promising for scaling but require enormous training data to develop vision perception from scratch.**
Here, with only 33M pre-training data, our pioneering exploration currently lags behind but performs comparably to popular encoder-based methods. **Interestingly, subsequent PaliGemma [a]** also explores an encoder-free version via 1B pre-training image-text pairs, showing promising early results alongside its encoder-based counterpart across 37 validation datasets (see Figure 3 of the rebuttal PDF). They particularly mention that only the separate vision encoder i.e. SigLIP, has been trained with 40B image-text pairs, far greater than 1B data of encoder-free version. They also indicate that decoder-only VLMs may be a promising direction although currently suffering in training efficiency due to building vision perception from scratch.
**(3) The 'Any' image ratio, simple architecture, and efficient deployment are bonuses of encoder-free VLMs.**
Recent studies on encoder-based VLMs reveal that
**(i)** Due to the limitations of pre-trained encoders, existing VLMs exhibit vulnerabilities in basic capabilities rooted in visual encoding trade-off [b, c].
**(ii)** Various vision encoders show uneven levels of capability due to pretext pretraining tasks, relying heavily on the corrective capabilities of LLMs for multimodal understanding [d, e].
In contrast, encoder-free VLMs remove semantic priors in abstracting visual representation, theoretically allowing VLMs to autonomously acquire all available information. **While 'any image ratio' and 'FLOPS gains' are natural benefits of the encoder-free approach, the primary reason for exploring an encoder-free model is its scaling efficiency with less inductive bias.**
In this premise, removing the vision encoder provides only a modest bonus in terms of flexible image input and deployment efficiency. Notably, the encoder-free track is still in early development and has a long way to explore its limits.
[a] PaliGemma: A versatile 3B VLM for transfer. Google DeepMind. arXiv 2407.07726.
[b] LLaVA-UHD: an LMM Perceiving Any Aspect Ratio and High-Resolution Images. Xu et al. arXiv 2403.11703.
[c] HiRes-LLaVA: Restoring Fragmentation Input in High-Resolution Large Vision-Language Models. arXiv 2407.08706.
[d] Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs. Tong et al. CVPR2024.
[e] Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs. Tong et al. arXiv 2406.16860.
`Q4: The performance gap with some encoder-based VLMs? How to address it?`
**(1) The key issue with EVE is the massive data needed to build vision perception from scratch.**
Encoder-based models first train a separate visual encoder with billions of data points, while EVE constructs vision perception from scratch using only 33 million data points. Figures 2-3 show performance is far from saturated. Larger data scales and more computing resources are required for further exploration.
**(2) Besides, several important factors deserve to be explored**. We found image resolution, reducing modality perturbations, the base LLM’s size and capability, data size and quality, etc. also impact the scaling efficiency of encoder-free VLMs. For instance, pre-training EVE with 4M high-resolution data (1344 longest edge) without vision supervision, can match the performance of the original version, using 12M data and vision supervision. We would polish these factors and develop a fully and stronger encoder-free VLM in the follow-up study.
`Q5: Figure 1 is never referenced in the main text. Figure 5 is mentioned after Figure 6 in the text, disrupting the logical flow.`
Thanks for your kind reminder and we will polish them in the revised manuscript.
---
Rebuttal Comment 1.1:
Title: Thank the authors for the response
Comment: Thank the authors for your response. The rebuttal clears most of my previous concerns. On a side note, I still do not like the term "encoder-free" and "lossless". In my opinion, it is more accurate to say that this work explores joint training of the LLM and a light-weight vision encoder from scratch. However, I do agree with other points made my the authors in the rebuttal. Hence, I have decided to keep my positive rating.
---
Reply to Comment 1.1.1:
Title: New reply and clarification to reviewer Z41h
Comment: We are pleased to reach a consensus on key insights of EVE. We understand your concerns regarding the terms "encoder-free" and "lossless" description, and we would like to clarify our rationale behind these choices:
`Q1: On a side note, I still do not like the term "encoder-free" and "lossless". In my opinion, it is more accurate to say that this work explores joint training of the LLM and a light-weight vision encoder from scratch.`
**(1) While there's nothing wrong with regarding EVE as joint training of the LLM and a light-weight vision layer, we intentionally adopted the terms "encoder-free" and "lossless" to highlight the different design principles that distinguish EVE from existing modular VLMs.**
Actually, we have explored the simplest setup. By removing the PAL and introducing the PEL via a streamlined Conv1 (14×14×1024)-GeLU-Conv2 (2×2×4096) structure, we ensured that the total number of activation values remains unchanged. This alignment with the "encoder-free" and "lossless" concepts reflects our goal to chase maximum simplicity without the need for a conventional, standalone vision encoder. Our findings were as follows:
- The LLM-guided Prealigning Stage effectively prevents model collapse, which is a challenging issue in building encoder-free VLMs.
- The minimalist PEL and drop of PAL initially impact training efficiency, particularly during early convergence (about 2.1-4.4\% under 5M high-resolution pretraining data). However, this gap gradually decreases as the data scale increases. This behavior underscores the LLM's great capability to learn visual perception and establish multimodal alignment from scratch, even in the absence of a vision encoder.
The current design, although a trade-off, was chosen to prove the feasibility of an encoder-free VLM that can rival encoder-based ones, especially given the previously limited data and device resources.
**(2) Once again, we would like to stress that the core idea behind EVE is to challenge the prevailing inductive biases present in encoder-based VLMs by allowing the model maximum freedom to discover more rational patterns on its own, inspired by 'The Bitter Lesson' (Rich Sutton).** This design philosophy aligns with the new trend in VLM research (GPT-4o [a], "discrete" Chameleon [b] and MoMa [c], 'continuous' Fuyu-8B [d] and SOLO [e]), where the focus is on reducing architectural constraints to develop an end-to-end VLM and process textual and visual inputs through a unified network for better scaling efficiency.
**(3) We have preliminarily demonstrated that EVE is not only feasible but also promising for future advancements in VLMs. This innovation is what we believe sets EVE apart and warrants a re-evaluation of our proposed terminology.**
We respectfully hope that this clarification will prompt you to reconsider your assessment and potentially raise the score of our submission. Should you have any further questions or require additional details, we would be happy to provide them.
[a] GPT-4o System Card. OpenAI. August 8, 2024.
[b] Chameleon: Mixed-Modal Early-Fusion Foundation Models. Meta FAIR. arXiv 2405.09818.
[c] MoMa: Efficient Early-Fusion Pre-training with Mixture of Modality-Aware Experts. Meta FAIR. arXiv 2407.21770.
[d] Fuyu-8B: A Multimodal Architecture for AI Agents. Adept AI. October 17, 2023.
[e] SOLO: A Single Transformer for Scalable Vision-Language Modeling. UIUC. arXiv 2407.06438. | Summary: This paper revisits the vision-encoder-free MLLM direction, which is not a popular choice in the community at present. A new method EVE is proposed to reduce the gap between encoder-free and encoder-based MLLMs, demonstrating a large improvement against Fuyu-8B (the best encoder-free MLLM so far). EVE introduce additional modules to distill visual features from an existing vision encoder, then align image-text in the consequential stage.
Strengths: The recipe (EVE) shared by this paper is simple yet effective, which could be insightful to the VLM community to develop better encoder-free MLLMs, which have many benefits, such as flexible resolution and aspect ratio, efficient deployment and lower latency etc.
The experiments are comprehensive, with a large number of baselines.
The paper is well written and easy to read.
Weaknesses: IIUC, Fuyu learns visual features from scratch, while EVE actually distillates from an existing CLIP-ViT-L-336px vision encoder, so it’s not fair to claim that EVE is better than Fuyu under the same “encoder free” setting. It’s more like “encoder distilled” vs. “encoder free”. Therefore, it would be more insightful if we could ablate encoder free vs. distilled in 4.3 (if the training stability issue can be solved), i.e., to investigate why EVE is much better than Fuyu.
As mentioned in the paper, the language ability is largely affected in stage 2. It’s very common to add an additional LM task and there have been many open-sourced corpuses already. It’s worth explaining why this common approach was not tried (though the paper suggests to solve this in future work).
Even though EVE-7B is comparable with the selected encoder-based baseline models of the same size, it is still largely behind some recent models with even half size and lower resolution, such as PaliGemma-3B [1].
[1] https://ai.google.dev/gemma/docs/paligemma/model-card
Technical Quality: 3
Clarity: 3
Questions for Authors: It’s unclear whether 35m data is enough (likely not?). Have you tried data scaling ablations?
In line 234, it should be Table 3?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: It’s worth mentioning EVE inherits the same limitations from the resued LLM, i.e., Vicuna-7B (likely also including CLIP-ViT-L-336px).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Your constructive comments are much appreciated. We have addressed all your points and revised the paper accordingly to ensure its improvement.
`Q1: IIUC, Fuyu learns visual features from scratch, while EVE actually distillates from an existing CLIP-ViT-L-336px vision encoder, so it’s not fair to claim that EVE is better than Fuyu under the same “encoder free” setting. It’s more like “encoder distilled” vs. “encoder free”. Therefore, it would be more insightful if we could ablate encoder free vs. distilled in 4.3 (if the training stability issue can be solved), i.e., to investigate why EVE is much better than Fuyu.`
**(1) As discussed in ALL-Q1, while the vision encoder aids in early convergence, it becomes less crucial as the data scale increases.**
We introduce vision encoder distillation to improve training efficiency, especially with a limited data scale, which is less significant after 24M pre-training data in our experiments. This indicates that eliminating the vision encoder for a fully encoder-free VLM is practical during both training and inference.
**(2) The LLM-guided pre-aligning stage is one crucial factor for training stability and scaling efficiency.**
Without this stage, even with enormous training data and vision distillation, training decoder-only VLMs remains highly challenging in Table 5 and Figure 6, suffering from model collapse and large performance gaps.
**(3) Compared with Fuyu-8B, EVE adopts image resolution with 672 image longest edge during pre-training due to device constraints and efficiency consideration.**
For fairness, we attempt to pre-train EVE with 4M high-resolution data (1344 longest edge) without vision supervision, which can match the performance of our original version, using 12M data and vision supervision. This indicates that EVE has room for further improvement and could potentially widen the gap beyond Fuyu-8B.
`Q2: As mentioned in the paper, the language ability is largely affected in stage 2. It’s very common to add an additional LM task and there have been many open-sourced corpuses already. It’s worth explaining why this common approach was not tried (though the paper suggests solving this in future work).`
While maintaining language ability in stage 2 is a concern, it is not the primary challenge in developing encoder-free VLMs effectively. The main issue is addressing optimization problems and bridging the gap in multi-modality capabilities compared to encoder-based VLMs. We spent a long time exploring and finally discovered that an LLM-guided pre-aligning stage and vision encoder supervision are essential for training stability and early convergence, respectively. Considering text-only data is less critical at this stage, we decided to solve it in the follow-up study.
`Q3: Even though EVE-7B is comparable with the selected encoder-based baseline models of the same size, it is still largely behind some recent models with even half size and lower resolution, such as PaliGemma-3B [1].`
**(1) Actually, such a comparison is unfair due to many notable differences.**
- [Data Magnitude]: As mentioned before, EVE learns vision perception from scratch, where the vision encoder could be removed using the entire 33M pretraining data. However, PaliGemma’s SigLIP encoder has seen 40B image-text pairs during Stage 0, with Stage 1 and Stage 2 further processing about 350B and 90B tokens, respectively.
- [LLM Capability]: Despite being smaller, Gemma-2B exhibits similar capabilities to our Vicuna-7B.
- [Finetuning Stategy]: Table 1 in PaliGemma shows that they train a 'specialist' model for each task with specific hyperparameters, while EVE serves as a 'generalist' model for downstream tasks.
**(2) We highlight that EVE shows potentially more promising scaling with pretraining duration.**
In Figure 2 of the rebuttal PDF, we observe that EVE shows better scaling properties and gradually approaches the performance of well-developed encoder-based VLMs with only 33M data scale. **Interestingly, PaliGemma** also explores an encoder-free version via 1B pre-training image-text pairs, showing promising early results alongside its encoder-based counterpart across 37 validation datasets (see Figure 3 of the rebuttal PDF). They particularly mention that their vision encoder i.e. SigLIP, was trained with 40B image-text pairs, and indicate that decoder-only VLMs may be a promising direction although currently suffering in training efficiency due to building vision perception from scratch.
`Q4: It’s unclear whether 35m data is enough (likely not?). Have you tried data scaling ablations?`
35M data points provide valuable insights for constructing encoder-free VLMs but fall short of peak performance. Figure 1-2 in the rebuttal PDF show that performance is not yet saturated, and EVE offers better training stability and scaling efficiency. Due to budget and time constraints, we will scale the pre-training corpus to a billion scale in future work.
`Q5: In line 234, it should be Table 3?`
Thanks for the kind reminder. We will revise it.
`Q6: It’s worth mentioning EVE inherits the same limitations from the resued LLM, i.e., Vicuna-7B (likely also including CLIP-ViT-L-336px).`
This is a common problem for both encoder-based and encoder-free VLMs. We believe that as existing LLMs continue to advance impressively, these limitations, we think, would be properly addressed. Besides, encoder-free VLMs escape the limitations of pre-trained vision encoders, unlike their encoder-based counterparts.
---
Rebuttal Comment 1.1:
Comment: > (1) As discussed in ALL-Q1, while the vision encoder aids in early convergence, it becomes less crucial as the data scale increases. We introduce vision encoder distillation to improve training efficiency, especially with a limited data scale, which is less significant after 24M pre-training data in our experiments. This indicates that eliminating the vision encoder for a fully encoder-free VLM is practical during both training and inference.
It's not convincing to me that "it becomes less crucial as the data scale increases". Empirically, vision encoder can be trained easily with noisy large scale web data (tens of billions); while generative VLM does require higher quality data (e.g. sentence like text to align with images), which makes it less easy to scale the VLM training data. I.e., "This indicates that eliminating the vision encoder for a fully encoder-free VLM is practical" might not be true.
Could you please elaborate how "Scaling performance of EVE with or without vision encoder supervision" was conducted? E.g., what/how much data was used in which stage?
> (2) The LLM-guided pre-aligning stage is one crucial factor for training stability and scaling efficiency.
What is learned in this stage when "without vision encoder supervision"?
> While maintaining language ability in stage 2 is a concern, it is not the primary challenge in developing encoder-free VLMs effectively.
Have you tried maintaining language ability in stage 2 or 3?
---
Reply to Comment 1.1.1:
Title: New reply and clarification to reviewer nP6r
Comment: Thank you for considering our response and for your valuable feedback. We appreciate the opportunity to further address the concerns raised.
`Q1: "It becomes less crucial as the data scale increases" is not convincing. VE can be trained easily with noisy large-scale web data; while generative VLM does require higher quality data, which makes it less easy to scale the VLM training data. I.e., "This indicates that eliminating the vision encoder for a fully encoder-free VLM is practical" might not be true.`
**(1) In the context of the rebuttal, "*it* becomes less crucial ..." and "eliminating the *vision encoder* for a ..." refer specifically to the distillation/supervision of the vision encoder, not the vision encoder used in encoder-based VLMs.** In Figure 1 of the rebuttal PDF, vision encoder distillation does help early convergence with a moderate data scale, but its importance diminishes with larger data sets (beyond 24M in our experiments). This suggests that removing the vision encoder supervision in EVE is feasible with extensive pre-training data.
**(2) Removing the vision encoder in encoder-based VLMs proves practical even with noisy large-scale data.** Only with 1B mixture data (including large noisy WebLI, CC3M-35L, and etc), Paligemma's encoder-free model nearly matches the encoder-based one (with 40B data for separate SigLIP) across about 13 of 37 datasets (e.g., ChartQA, GQA, RefCOCO/+/g, RSVQA, etc.). The scaling trend reflects the potential for further bridging their performance gap, indicating that a pre-trained vision encoder may not be essential and that using noisy data (especially interleave data [a]) to train generative VLMs is acceptable.
**(3) We agree that high-quality data is crucial for VLMs compared to noisy data. However, scaling up high-quality data is not as difficult as it might appear.** For example, with LLaVA1.5-13B via the SGLang package, we can caption about 4-5M images in one day using two A100 (40G) nodes. With more devices, the process becomes even faster. Additionally, there are many open-source re-captioning data sources (e.g., CapFusion [b], Recap-Datacomp-1B [c]) and caption engines (e.g., ShareGPT4v [d], DenseFusion [e]) available for constructing pre-training data. Besides, in the text-to-image generation field, building large high-quality datasets is commonly developed in the research and industry. Thus, we do not view this as a significant bottleneck in real-world scenarios.
[a] VILA: On Pre-training for Visual Language Models. CVPR2024
[b] CapsFusion: Rethinking Image-Text Data at Scale. CVPR2024
[c] What If We Recaption Billions of Web Images with LLaMA-3? arXiv 2406.08478
[d] ShareGPT4V: Improving Large Multi-Modal Models with Better Captions. ECCV2024
[e] DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception. arXiv 2407.08303
`Q2: Elaborate "Scaling performance of EVE with or without vision encoder supervision"`
For fairness, we retain the Patch Embedding Layer (PEL) and remove the Patch Aligning Layer (PAL) (i.e., VE supervision) at all stages, keeping other variables unchanged.
- Stage 1: We train both PEL and PAL for EVE-7B and EVE-7B-HD, and only PEL for versions without VE supervision, using 16M out of 33M pretraining data at the largest 672-pixel resolution.
- Stage 2: We unfreeze all modules for these versions, utilizing all 33M pretraining data at the largest 672-pixel resolution.
- Stage 3: We train the full model for EVE-7B (w/ and w/o VE supervision) using LLaVA-mix-665K, and for EVE-7B-HD (w/ and w/o VE supervision) using extra 1.2M SFT data at a 1344-pixel resolution.
`Q3: What is learned in this stage when "without vision encoder supervision"?`
Through text supervision, EVE aims to learn visual perception and establish an initial vision-language connection. This provides a better starting point for subsequent large-scale pretraining and avoiding model collapse.
`Q4: Have you tried maintaining language ability in stage 2 or 3?`
Yes, we conducted experiments using our multimodal data and text-only FineWeb data in 7:3, 5:5, and 3:7 ratios.
- We used 5M mixed data for Stages 1 and 2, followed by LLaVA-mix-665K for Stage 3, all with a 1344-pixel image resolution, and removed vision encoder supervision throughout all stages.
- We observed that a higher proportion of language-only data preserved better language capability (SQA scores improved from 63.5 to 64.7 to 65.2), but led to a slower increase in multimodal capability (GQA scores decreased from 58.3 to 57.4 to 55.2).
- We suggest a 5:5 ratio for pre-training with high-resolution images. While we haven’t tested a 672-pixel resolution, a 7:3 (multimodal:text-only samples) ratio might be a good choice for balancing image-text token ratios, which is a better consideration factor.
We respectfully hope that these explanations can convince you to potentially enhance your evaluation score. We are available to address any additional questions you may have. | Summary: Authors bridge the gap between encoder-based and encoder-free models, and present a simple yet effective training recipe towards pure VLMs. They unveil the key aspects of training encoder-free VLMs efficiently via thorough experiments: (1) Bridging vision-language representation inside one unified decoder; (2) Enhancing visual recognition capability via extra supervision
Strengths: 1. Authors propose to find the key aspects of training encoder-free VLMs
2. EVE outperforms the counterpart Fuyu-8B (encoder-free)
Weaknesses: I am concerned with the motivation and experiments.
From Tab. 3, although it outperforms Fuyu-8B, it is inferior to many encoder-based Large Vision-Language Models. I would except EVE had additional benefits, however, it is unclear why we need it.
The AR is 'Any' for encoder-free VLMs, but I do not see the benefits of it. At least I cannot have it from the tables. Could authors provide any explanations? From Tab. 6, EVE-7B (HD), which had acceptable performance, does not gain enough benefits in terms of FLOPs(G) and Time(s) if we consider the vision part and LLM part together. At current stage, I could not see enough motivation of EVE-7B (HD).
One minor question (not urgent, authors could simply ignore it if time does not allow): the benchmarks presented are normally simple VQA benchmarks, how would the model perform on detailed description tasks?
Overall, I am concerned with the motivation.
Technical Quality: 2
Clarity: 3
Questions for Authors: see weakness
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful remarks. We have responded to all your queries and made the necessary changes to enhance the paper.
`Q1: (((1))) EVE outperforms Fuyu-8B, but lags behind many encoder-based VLMs. why we need it? (((2))) Any explanations for the benefits of 'Any image ratio'? (((3))) No enough benefits of FLOPs(G) and Time(s) when considering the vision part and LLM part together.`
**(1) We highlight that EVE shows potentially more promising scaling with pretraining duration, which is the key motivation behind our efforts to build encoder-free VLMs.**
EVE attempts to remove strong vision inductive bias and transmit visual signals almost losslessly for better scaling properties.
In Figure 2 of the rebuttal PDF, we observe that encoder-based models often suffer from collapse. Only the (VE)-(ALL) training strategy avoids this issue by freezing LLM weights during pre-training and unfreezing them during the SFT stage. In contrast, EVE shows better scaling properties and gradually approaches the performance of well-developed encoder-based VLMs with only 33M data scale.
**(2) Encoder-free VLMs are promising for scaling but require enormous training data to develop vision perception from scratch.**
Here, with only 33M pre-training data, our pioneering exploration currently lags behind but performs comparably to popular encoder-based methods. **Interestingly, subsequent PaliGemma [a]** also explores an encoder-free version via 1B pre-training image-text pairs, showing promising early results alongside its encoder-based counterpart across 37 validation datasets (see Figure 3 of the rebuttal PDF). They particularly mention that only the separate vision encoder i.e. SigLIP, has been trained with 40B image-text pairs, far greater than 1B data of encoder-free version. They also indicate that decoder-only VLMs may be a promising direction although currently suffering in training efficiency due to building vision perception from scratch.
**(3) The 'Any' image ratio, simple architecture, and efficient deployment are bonuses of encoder-free VLMs.**
Recent studies on encoder-based VLMs reveal that
**(i)** Due to the limitations of pre-trained encoders, existing VLMs exhibit vulnerabilities in basic capabilities rooted in visual encoding trade-off [b, c].
**(ii)** Various vision encoders show uneven levels of capability due to pretext pretraining tasks, relying heavily on the corrective capabilities of LLMs for multimodal understanding [d, e].
In contrast, encoder-free VLMs remove semantic priors in abstracting visual representation, theoretically allowing VLMs to autonomously acquire all available information. **While 'any image ratio' and 'FLOPS gains' are natural benefits of the encoder-free approach, the primary reason for exploring an encoder-free model is its scaling efficiency with less inductive bias.**
In this premise, removing the vision encoder provides only a modest bonus in terms of flexible image input and deployment efficiency. Notably, the encoder-free track is still in early development and has a long way to explore its limits.
[a] PaliGemma: A versatile 3B VLM for transfer. Google DeepMind. arXiv 2407.07726.
[b] LLaVA-UHD: an LMM Perceiving Any Aspect Ratio and High-Resolution Images. Xu et al. arXiv 2403.11703.
[c] HiRes-LLaVA: Restoring Fragmentation Input in High-Resolution Large Vision-Language Models. arXiv 2407.08706.
[d] Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs. Tong et al. CVPR2024.
[e] Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs. Tong et al. arXiv 2406.16860.
`Q2: One minor question (not urgent, authors could simply ignore it if time does not allow): the benchmarks presented are normally simple VQA benchmarks, how would the model perform on detailed description tasks?`
From the LMMs-Eval leaderboard [a], we observe that LLaVA-1.6 **surprisingly** achieves lower CIDEr scores than LLaVA-1.5 on COCO-Cap and Flickr-cap under the same model capacity. These description tasks do not well reflect the capability of multimodal models due to out-of-distribution issues and evaluation metrics, which are seldom evaluated by most existing VLMs.
[a] LMMs-Eval: Reality Check on the Evaluation of Large Multimodal Models. Zhang et al. arXiv 2407.12772
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response. They addressed some of my concerns, although I still think it is over-claimed.
For Q2, it is not surprising that LLaVA-1.6 achieves lower CIDEr scores than LLaVA-1.5 on COCO-Cap and Flickr-cap under the same model capacity, because LLaVA families are not designed (or trained) for concise image captioning. The improvements (data/model) made in LLaVA-1.6 are also not targeted at short image captioning. Therefore, I would not expect it can achieve better results than LlaVA 1.5.
In fact, I am asking about the detailed description task (LLaVA Bench), where the model is asked to provide detailed description given an image (much longer and more comprehensive and diverse than image captioning). Have you tried the model with this dataset? It should be convenient as LLaVA already provided similar evaluation scripts.
I would currently raise my score to 5
---
Reply to Comment 1.1.1:
Title: New reply and clarification to reviewer VqDP
Comment: Thank you for considering our response. We appreciate the opportunity to further clarify our contributions and address the concerns raised.
`Q1: They addressed some of my concerns, although I still think it is over-claimed.`
**(1) We respectfully reiterate that the key innovation of EVE is its preliminary validation of the feasibility of eliminating the inductive biases associated with encoder-based vision-language models (VLMs).** By removing these biases, EVE provides a clear path for constructing encoder-free VLMs that do not require an encoder pretrained on an intermediate task with biases in representations and resolutions. Thus, EVE maximizes the model's autonomy in learning vision perception and aligning multimodal patterns for better scaling efficiency, inspired by **'The Bitter Lesson' (Rich Sutton)**.
**(2) Our EVE aligns with recent trends in VLM research, where the focus is shifting towards models that reduce architectural biases, construct a unified backbone, and improve scaling efficiency.** Notable examples of this trend are listed in time order:
- Fuyu-8B: A Multimodal Architecture for AI Agents. Adept AI. October 17, 2023.
- Chameleon: Mixed-Modal Early-Fusion Foundation Models. Meta FAIR. arXiv 2405.09818.
- SOLO: A Single Transformer for Scalable Vision-Language Modeling. UIUC. arXiv 2407.06438.
- PaliGemma: A versatile 3B VLM for transfer. Google DeepMind. arXiv 2407.07726. (Its encoder-free version)
- MoMa: Efficient Early-Fusion Pre-training with Mixture of Modality-Aware Experts. Meta FAIR. arXiv 2407.21770.
- GPT-4o System Card. OpenAI. August 8, 2024.
These influential works exemplify a broader shift towards reducing architectural constraints in VLMs, aiming to develop end-to-end models that process textual and visual inputs through a unified network, thereby enhancing scaling efficiency.
**(3) Our preliminary results demonstrate that EVE is not only feasible but also holds good promise for advancing VLM development.** We respectfully hope that this clarification would convince you to reconsider our proposed terminology and potentially improve your evaluation score based on the clarified context. We are open to providing further details or addressing any additional questions you may have.
`Q2: In fact, I am asking about the detailed description task (LLaVA Bench), where the model is asked to provide a detailed description given an image (much longer and more comprehensive and diverse than image captioning). Have you tried the model with this dataset? It should be convenient as LLaVA already provided similar evaluation scripts.`
Since LLaVA-Bench requires the GPT-4 API, we didn't include it in our earlier validation. We conducted further experiments using EVE-7B (HD) trained with LLaVA-mix-665 on LLaVA-Wide, achieving a 56.9% score, which is lower than LLaVA's 65.4%. This performance gap may be due to the QA-oriented SFT data and the limited vision perception learned from the current million-scale pretraining data.
We appreciate you bringing attention to the detailed description task like LLaVA-Bench.
Moving forward, we will focus on enhancing the model's capabilities by incorporating more training data and developing well-constructed instruction data. | Summary: This paper explores the topic of encoder-free vision language model. It proposes to directly input the image patches into the decoder network together with the language tokens, without the use of a separate visual encoder during inference time. The benefits are mainly two-fold: simpler architecture and more flexible image resolution and aspect ratio. In experiments, they utilized 33M/1.8M data for pretraining/SFT and demonstrated superior performance to Fuyu-8B which is not open sourced.
Strengths: 1. The topic of encoder-free VLMs is very interesting and has valuable potential benefits.
2. As an empirical paper, the experimental results look solid. The performance almost matches the best of encoder-based VLMs . It is a plus that the proposed model outperforms the counterpart Fuyu-8B.
3. Experiments to verify the necessity of stage-1 training is properly done and provides some insight of training VLMs with added components.
Weaknesses: 1. The training is using an existing visual encoder as teacher, which kind of defeats the title of "encoder-free". Is this necessary? The contribution would be more valuable if it can be completely encoder-free (during both training and inference).
2. Related to 1., if there is a need to train the model on some ouf-of-domain datasets, does it need to first train the separate visual encoder, and then redo the 3 training stages described in this work? This makes the training even more complex than encoder-based VLMs.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In Fig.6, what does “from scratch” mean exactly? Is the language model also randomly initialized?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments. We respond to all questions you raise to address your concerns and make the necessary revisions to improve the quality of the paper.
`Q1: The training is using an existing visual encoder as teacher, which kind of defeats the title of "encoder-free". Is this necessary? The contribution would be more valuable if it can be completely encoder-free (during both training and inference)`
As discussed in ALL-Q1, we found that while the vision encoder aids in early convergence, it becomes less crucial as the data scale increases. Inspired by this, we empirically found that pre-training EVE with 4M high-resolution images (1344 longest edge) without vision supervision can match the performance of our original version, which used 12M data points and vision supervision. This indicates that eliminating the vision encoder for a fully encoder-free VLM is practical during both training and inference.
`Q2: Related to 1., if there is a need to train the model on some out-of-domain datasets, does it need to first train the separate visual encoder, and then redo the 3 training stages described in this work? This makes the training even more complex than encoder-based VLMs.`
Good question.
Transferring the visual encoder initially may be beneficial due to its strong inductive bias and susceptibility to out-of-domain issues. Fortunately in our experiments, we discovered that after 24M pre-training data, vision encoder supervision becomes less essential for EVE to develop visual perception from scratch. This enables EVE to circumvent these issues by abandoning the vision encoder supervision and preserving the original three stages during the training process.
`Q3: In Fig.6, what does “from scratch” mean exactly? Is the language model also randomly initialized?`
Yes. "From scratch" denotes that the entire model is initialized randomly. We empirically discovered that training EVE from scratch faces extreme optimization challenges and usually suffers from model collapse. | Rebuttal 1:
Rebuttal: To all reviewers:
We thank all reviewers for your constructive comments. We are encouraged by approvals on an **interesting and novel idea** (gQSV, nP6r, Z41h), **simple yet effective** (VqDP, nP6r, Z41h), **comprehensive experiment and solid results** (gQSV, nP6r, Z41h), and **insights for the VLM community** (gQSV, nP6r, Z41h). We conducted more experiments and analyses to address some reviewers’ concerns. The supplements in the attached PDF are summarized as follows:
- Scaling performance of EVE with or without vision encoder supervision in Figure 1 of the attached PDF.
- Scaling performance of EVE vs. the encoder-based baseline in Figure 2 of the attached PDF;
- Scaling performance of PaliGemma with or without an image encoder from 100M to 1B data in Figure 3 of the attached PDF;
`All Reply Q1: Is 'encoder-free' accurate? How does vision encoder distillation/supervision work in EVE? Is it necessary?`
**(1) Vision encoder supervision does help with early convergence by 1-3% gains shown in Table 4, but is not very crucial during large data scale-up.**
**(i)** We introduce vision encoder supervision to improve training efficiency, especially with a limited data scale.
**(ii)** Actually, vision supervision from a pre-trained encoder is less significant when using sufficient data resources.
Figure 1 of the rebuttal PDF indicates that the influence of vision encoder supervision diminishes over large data scales, and by the 24M mark, the difference in performance with or without this supervision is negligible by less than 0.3-0.8\%.
This may be because large amounts of high-quality and detailed captions greatly enhance the understanding of visual information, thus gradually reducing the need for visual encoders.
**(2) More importantly, vision supervision is not the crucial factor for training stability and scaling efficiency.**
Table 5 and Figure 6 show that even with vision supervision, performance without LLM-guided Pre-aligning in Stage 1 rapidly decreases as data volume increases beyond a certain point. This indicates that vision supervision is not essential for the scaling efficiency of EVE.
**(3) We came up with the ‘encoder-free’ concept because EVE can work like Fuyu-series without visual encoders during the inference and deployment.**
We can completely remove it during inference, allowing EVE to function as a pure encoder-free architecture like Fuyu-8B.
Besides, though involving vision supervision helps with training efficiency, it becomes less necessary as the pre-training data scale significantly increases. In other words, developing a fully encoder-free VLM during training or inference is practical with larger data and computing resources.
We hope that the point-by-point responses below have effectively addressed your previous concerns, and would appreciate any further feedback you can provide.
Sincerely yours,
Authors.
Pdf: /pdf/632b6f6a619df327d2d6ac19d2ad1e4976d3ddf2.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Unscrambling disease progression at scale: fast inference of event permutations with optimal transport | Accept (poster) | Summary: This paper proposes to use Sinkhorn algorithm to compute optimal transport in order to speed up disease progression that was previously computationally prohibited. This method enables disease progression models with higher dimensionality in features as well as 1000x faster inference speed. Authors provide experiments on Alzheimer’s disease and age-related macular degeneration at pixel-level disease progression events.
Strengths: * Interesting experimental results showing wall-clock improvement on synthetic dataset
* Idea is well motivated and easy to understand
Weaknesses: * On the Pixel-level disease experiments, it's hard to judge how realistic the simulated diseases progressions are without quantitative comparison with baseline/groundtruth, such as co-occurances of events. Since it's a scientific study on method compared with baseline, the claims become unfalsifiable if such setup is provided.
* Overall, this work gives audience the impression of application of Sinkhorn algorithm. Given the lack of my domain expertise in medical science, it's hard for me to judge the novelty of the problem setting. However, method-wise, novelty is limited.
* Practicality of the method in real life problems: Although the claim about "high dimension", which goes up to 200 features, however modern medical MRI machines can easily capture high resolution images with magnitude higher number features/voxels. It's unclear how this method will scale in the more relevant settings.
Technical Quality: 2
Clarity: 2
Questions for Authors: In both figure 4, and figure 6, it mentions, " White pixels correspond to events that have occurred; black not yet occurred". However, there doesn't seem to be any black spots?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The limitation has been discussed at end of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Weaknesses point 1 (“On the Pixel-level experiments…”)
Yes, it is a good idea to compare with regional-level results, and one we would have included given the time. We have now included an additional analysis in the “Author Rebuttal” PDF, where we used the FreeSurfer segmentation tool to obtain pixel-level anatomical labels, which we mapped to our “pixel” events to enable comparison with regional volumes (Figure 1 in the uploaded PDF).
Indeed what we see is broadly what we’d expect from previous results – sub-cortical changes (Thalamus-Proper, Putamen, Hippocampus) are earliest, followed by cortical (Cerebral-Cortex) and white matter (Cerebral-White-Matter), and finally ventricular change (Lateral-Ventricle, VentralDC). However our model provides much more fine-grained insights – we now obtain continuous trajectories of change, which capture interesting non-linearities, e.g., in the Thalamus-Proper, Brain-Stem, and Lateral-Ventricle; this contrasts with the more linear changes in the Hippocampus, Cerebral-Cortex, and Cerebral-White-Matter. These are entirely new insights provided by our model that could be used to guide which regions are most useful as biomarkers at which stages in the disease; and also provide new insights into the underlying atrophy dynamics, which could inform mechanistic models of disease spread in the brain. We believe this result substantially improves the interpretation of our method and will add it to the paper.
Note that to enable direct comparison with previous published results we would need to run the same segmentation tool that they used, e.g., the GIF segmentation tool used in Wijeratne et al. 2023 and Wijeratne & Alexander 2021. This would make an interesting additional analysis that we reserve for future work.
Weaknesses point 2 (“Overall, this work gives the audience…”)
It is a fair point that a key component of the methodological improvement is the application of the Sinkhorn algorithm. However, the reframing of latent variable disease progression models as an optimal transport problem is entirely novel, and has implications beyond just improving the speed of inference (which is what we focused on this paper). Indeed one of the other Reviewers (Vfdn) notes this as one of the paper's strengths: "It is quite innovative to view the disease progression modeling task from the optimal transport perspective.".
The key innovation from an optimisation perspective is moving from a discrete sequence of events to a continuous probability density matrix over permutations – it is this that allows us to unlock gradient-based optimisation via variational inference. Furthermore - and which we don’t explore in depth in the main paper but do provide some initial analysis of in the supplementary - this formulation gives us direct access to the distribution over event permutations (as opposed to having to sample it using MCMC). This allows for direct sampling of model uncertainty in our Bayesian formulation.
But on a more fundamental perspective, one can think of modelling disease as morphing healthy probability distributions into unhealthy ones, (e.g., the diffusion of pathogens spreading in the brain), which can be ideally solved using the tools of optimal transport. For example, one could imagine many types of state-space-based latent variable models that would work with our permutation-based optimal transport coupling, e.g., Hidden Markov models. We’re excited about exploring the many recent developments in optimal transport theory from this perspective in the future.
Weaknesses point 3 (“Practicality of method in real life problems…”)
We need to point out that this criticism is not quite fair – while the simulations show 200 features (a decision we made to facilitate visualisation in the positional variance diagrams shown in Figure 3), the ADNI analysis used 1344 features (as noted in the Figure 4 caption). In principle we could include 10s of thousands of pixels / voxels, the main limitation being identifiability and computer memory, as noted in the Limitations section. However we appreciate that this detail is not easy to find in the text, so we have moved the information about the number of pixel features used in each analysis to the main text in Sections 3.3 and 3.4.
Questions (“In both figure 4, and figure 6, …”)
Yes this is not very clear – we will clarify by replacing these sentences with: “White pixels correspond to events that have occurred by the corresponding point of the sequence.”.
---
Rebuttal Comment 1.1:
Title: Additional question
Comment: Thanks for the additional results. I am a bit confused by the definition of the y-axis,
> "The horizontal axis shows the event number (from 0 – 1344), and the vertical axis shows the fraction of pixel-events that have occurred in each regional brain volume at the corresponding event number."
and why as event number goes up, the fraction also becomes higher?
---
Rebuttal 2:
Comment: Thanks for the question. The "Fraction occurred" label corresponds to the fraction of pixels in each region that have become abnormal, as defined by the vEBM event sequence. The fraction of pixel-events occurred increases with the event number because the event sequence represents a monotonic accumulation in pixel-level abnormality. For example, if we look at the "Cerebral-Cortex" line, it shows the fraction of pixel-events within that region that have occurred as a function of their position in the event sequence, e.g., at "Event" = 600, which is nearing the half way point in the event sequence, approximately 60% of the pixel-events in "Cerebral-Cortex" have occurred.
---
Rebuttal Comment 2.1:
Comment: Thanks for the clarifications. To double confirm my understanding, event number is basically timesteps, and fraction is the number of abnormal pixels over the total number of pixels in that region. Presumably, the disease progresses it should become higher. That is fair.
However, my comment about the quantitative measurement against the ground truth trajectory refers to something else. Since the algorithm is predicting pixel level label, at a specific event/timestep, Intersection over Union should be computed to measure how realistic the progression simulation is, given a ground truth trajectory.
---
Reply to Comment 2.1.1:
Comment: Yes your understanding is exactly correct.
Ok we see your point - comparing to a pixel-level, i.e., correlated feature, ground truth trajectory would indeed be instructive, and given the time, we would have set up a suitable simulation to test this. As it was, we only had time to validate the method using ground truth trajectories of uncorrelated features (Sections 3.2.2 and Supplementary A.7). However, as you can see from our results, our method performs very well at recovering the ground truth trajectory for various dataset properties and model parameter settings, and generally outperforms the two baselines. As you suggest, implementing pixel-level simulations are a priority for our future work.
On a related note - it would be very difficult to validate the results from the real data analyses in Alzheimer's disease and AMD, due to the standard problem in medical imaging of having no ground truth. The closest one could get is to have an additional dataset of post mortem histology images, matched to the in vivo medical images that we used here, in order to validate the spatial distribution of abnormality predicted by the model. Indeed there are approaches developed by other researchers to register medical images with histology, but they are still experimental and would require substantial time and cross-collaborative effort to implement. That being said, it would be a very interesting direction to take our model, as it provides the first pixel-level predictions of its type, which would support more direct comparison between MRI and histology.
---
Rebuttal 3:
Comment: Hello reviewer 5r2K,
Thanks for already engaging in discussion with the authors.
I just wanted to check whether the most recent author response has adequately addressed your concerns. Separately, please indicate whether you're sticking with your original score, or if this discussion has led to any change in score on your end. Note that the author/reviewer discussion period ends very soon (Aug 13, 11:59pm AoE).
Thanks,
Your AC
---
Rebuttal Comment 3.1:
Comment: Thanks for the responses. My concerns are mostly addressed and looking forward to your follow-up works. | Summary: The authors investigate the task of disease progression modeling, an area of research that learns underlying disease trajectory from temporal snapshots of individual patients. The authors claim that all previous approaches either sacrifice computational tractability for direct interpretability in the feature space or vice versa. The authors introduce the variational event-based model (vEBM) to remedy the former situation, by enabling high dimensional interpretable models through a computationally efficient approach that circumvents dimensionality reduction or manual feature selection. vEBM borrows concepts from optimal transport to directly infer a continuous probability over events. The authors further claimed a 1000x speed-up, better accuracy and improved robustness to noise.
Strengths: 1. It is quite innovative to view the disease progression modeling task from the optimal transport perspective.
2. Figure 1 makes the paper slightly easier to read, as it outlines the proposed method with references to subsequent paper sections.
3. The proposed vEBM, compared to the baselines (EBM and ALPACA), show significant advantages in efficiency evaluated by wall-clock time. vEBM also shows better scaling with data dimensionality.
4. The datasets the authors used for empirical results (Alzheimer’s disease and age-related macular degeneration) are of significant clinical importance. I particularly like that the authors compare the disease progression patterns with the known changes from the literature.
Weaknesses: 1. Correct me if I am wrong: the proposed method seems to be a method that models the disease progression on a population level. This might limit the method to population level studies for disease research purposes and render it unsuitable for predicting individual-level progressions which could facilitate personalized treatment plans.
2. While producing pixel-level disease progression sequences for certain diseases is fantastic, I would suspect the proposed vEBM method is not ideal for pixel-level predictions, since vEBM presumably treats different pixels as separate features, ignoring the spatial information formed by multiple nearby pixels.
3. For the results in Figure 7, while it is visually informative, it will be great if the authors can incorporate quantitative metrics.
4. Minor issue: For Figure 1, it would be great if the authors can improve the aesthetics.
5. Minor issue: For Figure 5, it would be helpful to provide the color bar.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. May I ask the authors to explain in a little more detail what Figure 3 Bottom row is visualizing? I am unfamiliar with the positional variance diagrams and it will be helpful if the authors can explain what the ordering of the feature (vertical axis) indicates, what specific traits on the diagram tells us, etc.
2. Would the authors consider comparing with alternative deep-learning-based models, such as Transformers, neural ODE, latent ODE aka ODE-RNN, neural CDE, etc.? If not, could they provide justifications?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes, the authors adequately addressed the limitations and, if applicable, potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Weaknesses point 1 (“Correct me if I am wrong…”)
Yes, that is exactly what our model does. It is less of a device for individual level predictions (although can be used that way; see Figure 7, where we use it to estimate individual-level stages along the group-level sequence) and more to elucidate group level patterns of change that aid disease understanding, biomarker discovery, structure heterogeneity, and inspire new treatment development. As we note in the Introduction (L25-29), this class of technique has proved very influential over the last 10 years; in the paper we reference 15 examples of applications in various diseases, and 4 examples of direct methodological developments on the original model, but there are many more applications and methods developments that have been published.
Weaknesses point 2 (“While producing pixel-level disease progression sequences…”)
Yes that's true. A separate grouping procedure would be beneficial and something we conserve for future work here now we have the computational framework in place. As noted in Limitations (L295-298), a first approach would follow reference [35] and use a Markov random field to impose local structure.
Weaknesses point 3 (“For the results in Figure 7…”)
The main point of these plots is to show the individual-level utility of the method, in terms of the distributions of stages for each group. But we are happy to discuss ideas to make this more quantitative; does the Reviewer have anything specific in mind?
Weaknesses points 4 & 5
We have deliberately made Figure 1 greyscale for printing purposes & accessibility, if that is what the Reviewer is suggesting regarding aesthetics. We will add a color bar to Figure 5.
Questions 1 (“May I ask the authors to explain…”)
The positional variance diagram shows the most likely ordering of events (horizontal axis), as characterised by changes in the features (vertical axis). The model also estimates the uncertainty in the positioning of the events, shown by the shaded heatmap in the Supplementary Figures 11-13. Typically these diagrams are used as the main visualisation of event-based model sequences, but due to the large numbers of features our method enables, they become sub-optimal for visualisation purposes. However the main purpose of including them in Figure 3 was to visually demonstrate the agreement between the vEBM inferred sequence and the true sequence, using synthetic data. In real data, e.g., the ADNI experiments, it makes more sense to visualise the results in pixel space, to preserve the spatial information more clearly.
Questions 2 (“Would the authors consider comparing with alternative deep-learning-based models…”)
These model classes make sense for making predictions, but they lack the interpretability of the EBM class of model, which is a key output. Additionally, they typically require longitudinal data, while the EBM class models operate using only cross-sectional data; thus limiting ODE-type models to using only cross-sectional data would be an unfair comparison.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Many thanks to the authors for the rebuttal. I like the general clarifications. Answering a few questions from the authors:
1. For quantitative metrics on Figure 7, I do not have specific metrics in mind. However, since you are analyzing distributions, would it be helpful to consider divergence measures (KL or JS) or earth mover distance?
2. Regarding the aesthetics, I have no issue with black and white figures. I was speaking of (1) the method figure is not quite beautiful and engaging, but rather looks a bit dull and almost like a casual sketch, and (2) some line plots are using the default color (blue, orange, green), non-obvious error bars, etc., that are not very optimized.
---
Rebuttal 2:
Comment: Thanks for the clarifications.
Before we respond to points 1 and 2, we'd like to ask why you decided to downgrade your review from a 5 to a 4? Your comment seems to be positive so we're unsure as to what we did to cause the score to be downgraded - some detail would be very helpful, so we can try to respond as best as possible.
---
Rebuttal Comment 2.1:
Title: Thanks for reminder
Comment: Thanks for the authors for reminding that. I meant to update the rating from 5 to 6. Misclick from phone. Corrected.
---
Reply to Comment 2.1.1:
Comment: No problem! Thanks for the positive response.
In response to your points above:
1. Comparing the distributions of stages between clinical labels using a suitable distance metric is a nice idea - we'll add that to the text.
2. Ok we understand the criticism - indeed the figure does not look very professional! We will try to improve the style to make it more visually appealing. Regarding the plots, we need to point out that the colour choice was made to conform to accessibility requirements (the default colours that we use were designed by the authors of matplotlib to be accessible for people who are colour-blind). However we agree that the error bars are not very clear and will make them larger. | Summary: This manuscript derives an Event-Based Model via variational inference and optimal transform. This approach significantly enhances computational efficiency, robustness to noise, and scalability, outperforming current methods by a factor of 1000 in wall-clock.
Strengths: 1. The experiments have been performed with multiple datasets: synthetic data, neuroimaging, and optical coherence tomography.
2. The proposed method has been compared to two baselines: the event-based model (EBM) and the Alzheimer’s Disease Probabilistic Cascades (ALPACA) model - on synthetic data.
3. The methodology development was pretty clear and supported by relevant literature.
4. Novel formulation of Event-Based Model via variational inference and optimal transport.
Weaknesses: 1. I am not sure about the results in Section 3.3.2. You see that CDR58, MMSE, and RAVLT are pretty far in the Event order. If you check the Temporal Event-Based Model (TEBM) (Wijeratne et al., 2021; Wijeratne et al., 2023), these cognitive test scores can be found earlier in the disease timeline. However, there are differences; they used T1 images, and in your case, you used TBM images. Before making new claims, it will be essential to consider experiments with a previously explored set of features. Importantly, the proposed solution is supposed to solve the scaling question. Hence, it should work with the previous set of features.
2. The authors enable models that express progression at the pixel level compared to the previous region level. However, how the insights will compare to the region-level progression needs to be explored. I suggest exploring it via post hoc analysis by combining pixels back into regions and comparing the ordering. It needs to be clarified whether pixel-level estimation leads us to a new insight into progression. Because it might be great from a computer science perspective but not from a clinical. Furthermore, measurements based on single pixels are highly susceptible to noise. In addition, pixels are not actually pixels but features because the ADNI images were standardized to the template. Hence, it is more like the most fine-grained "atlas" to the template.
Wijeratne, Peter A., Daniel C. Alexander, and Alzheimer’s Disease Neuroimaging Initiative. "Learning transition times in event sequences: The temporal event-based model of disease progression." International Conference on Information Processing in Medical Imaging. Cham: Springer International Publishing, 2021.
Wijeratne, Peter A., et al. "The temporal event-based model: Learning event timelines in progressive diseases." Imaging Neuroscience 1 (2023): 1-19.
Technical Quality: 3
Clarity: 3
Questions for Authors: My questions were combined with weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Weaknesses point 1 (“I am not sure about the results in Section 3.3.2…”)
Generally we expect cognitive test scores to appear later than structural changes in MRI - cognitive deficit is a consequence of loss of brain tissue. As stated in L257-260, the cognitive events occur across the latter 2/3rds of the event sequence; this is broadly consistent with the results in Wijeratne et al. 2023 and Wijeratne & Alexander 2021.
There are differences to the literature (e.g., RAVLT occurs late in our sequence, while in Wijeratne & Alexander 2021 it occurs mid-sequence). However, the cohorts used are different – we use a subset of ADNI individuals with TBM data, while e.g., Wijeratne & Alexander 2021 use a different subset, and Wijeratne et al. 2023 use the entire ADNI cohort. Inter-group heterogeneity in Alzheimer’s disease is well reported (e.g., reference [5]) and differences in the ordering of changes are expected between different subsets of the ADNI cohort. This would make for a very interesting future research direction for our model, to investigate image-based heterogeneity at the pixel-level - instead of just regional level that previous analyses have been limited to.
Weaknesses point 2 (“The authors enable models that express progression at the pixel level compared to the previous region level…”)
Including comparison to regional volumes is an interesting idea, and one which we would have included given the time. We have now included an additional analysis in the “Author Rebuttal” PDF, where we used the FreeSurfer segmentation tool to obtain pixel-level anatomical labels, which we mapped to our “pixel” events to enable interpretation via a subset of regional volumes (Figure 1 in the uploaded PDF).
Indeed what we see is broadly what we’d expect from previous results – sub-cortical changes (Thalamus-Proper, Putamen, Hippocampus) are earliest, followed by cortical (Cerebral-Cortex) and white matter (Cerebral-White-Matter), and finally ventricular change (Lateral-Ventricle, VentralDC). However our model provides much more fine-grained insights – we now obtain continuous trajectories of change, which capture interesting non-linearities, e.g., in the Thalamus-Proper, Brain-Stem, and Lateral-Ventricle; this contrasts with the more linear changes in the Hippocampus, Cerebral-Cortex, and Cerebral-White-Matter. These are entirely new insights provided by the model that could be used to guide which regions are most useful as biomarkers at which stages in the disease; and also provide new insights into the underlying atrophy dynamics, which could inform mechanistic models of disease spread in the brain. We believe this result substantially improves the interpretation of our method and will add it to the paper.
Note that to enable direct comparison with previous published results we would need to run the same segmentation tool that they used, e.g., the GIF segmentation tool used in Wijeratne et al. 2023 and Wijeratne & Alexander 2021. This would make an interesting additional analysis that we reserve for future work.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I want to thank you authors for their rebuttal. I decided to increase the score to Accept.
---
Rebuttal 2:
Comment: Hello reviewer SCze,
Does the author respond adequately address your concerns?
Note that the author/reviewer discussion period ends very soon (Aug 13, 11:59pm AoE).
Thanks,
Your AC | Summary: The authors propose a method to learn a latent event sequence from cross-sectional data of modest size. Each event corresponds to a single observed feature transitioning from an initial parametric distribution (i.e., 'normal') to a second, final parametric distribution (i.e., 'abnormal'). Their inference procedure incorporates elements of variational inference and optimal transport. They demonstrate that it outperforms two available baselines both in speed / scalability and accuracy of the learned event sequence. Finally, they apply the method to learn the order in which (a) individual pixels of (carefully registered) MRI images become abnormal in Alzheimer's disease, and (b) individual OCT pixels become abnormal in macular degeneration.
Strengths: Results clearly demonstrate that their method is scalable, and it's effective in simulation where the true data match their model. The writing style is clear, and the visualizations are excellent, particularly Figures 4 and 6. Experimental results are interesting and clearly presented.
Weaknesses: - The related work section is very brief, which made me question whether it is comprehensive.
- I found the second half of section 2.2 very difficult to follow. Specifically, I don't understand the relationship between S and X (see lines 138-143) and think this relationship and the optimal transport details more broadly could be presented much more clearly.
- I am not convinced that this is solving a real problem. The approach learns the order in which pixel-sized image regions tend to become abnormal in AD and AMD, but I would think that the more interesting questions have to do with individual variability in the order and timing of this sequence. I may be wrong about this, but I think the authors should do more to explain how the resulting model might be useful in practice.
Technical Quality: 3
Clarity: 2
Questions for Authors: - I am confused about how the normal and abnormal distributions are defined in the synthetic data, and whether they are learned versus fixed during inference. Equation (3) implies to me that they are learned, but other descriptions seem to imply that they are initially learned by dividing the population into patients and controls, but then fixed during the broader inference procedure.
- What is denoted by the color of the pixels in Figure 5?
- The authors mention that the method requires image registration, but very few details are given. It seems to me that it would be challenging to align images from very different stages of progression. Is the method sensitive to this image registration step?
- Do the authors envision applications of this method outside of medicine?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Somewhat, but a more comprehensive related work section is needed to understand pros and cons of this method (and the approach to disease progression modeling more generally; see Weaknesses) relative to alternatives.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Weaknesses point 1 (“The related work section is very brief…”)
This section was deliberately kept brief to focus on the most relevant comparable methods; specifically those that can infer group-level sequences from cross-sectional data. While there are many models that use longitudinal data (see, e.g., reference [19] for a comprehensive review), they are not as relevant for direct comparison. We also note that we do discuss the relative pros and cons of alternative types of models (including longitudinal models) in the Introduction (L30-36). However we appreciate that this is not made completely clear in the Related Work section, so we will add a couple of sentences to clarify this point.
Weaknesses point 2 (“I found the second half of section 2.2 very difficult to follow…”)
The transport cost matrix, X, defines the likelihood of an event occurring and the permutation matrix, S, defines the event ordering; we seek to find the optimal event permutation that maximises the overall model likelihood. As such we can think of the relationship between S and X as their (element-wise) product giving the likelihood of a given event permutation. Alternatively, we can think of relationship as S being the transport plan that permutes event likelihoods in X to their optimal position in the latent event sequence. We will add clarification to the text, if the Reviewer believes these descriptions are helpful.
Weaknesses point 3 (“I am not convinced this is solving a real problem...”)
Here we use pixel-level regions just as a demonstration of the model’s ability to handle high dimensional data. However the model also enables, e.g., fine-grained regional brain volume analysis of the event ordering with 100s of regions, which current methods cannot cope with.
The class of models we focus on here are necessarily group level models rather than models of individuals, although they do capture variability over the group (but we do not leverage this information here). As we highlight in the Introduction (L25-29), these models have proved extremely important in understanding the temporal evolution of chronic disease and guiding treatment development. They are not just, or even primarily, a device for prediction at the individual level. This class of technique has proved very influential over the last 10 years; in the paper we reference 15 examples of applications in various diseases, and 4 examples of direct methodological developments on the original model, but there are many more applications and methods developments that have been published. The method we provide in our paper here will "turbocharge" all the current models of this class, including the highly popular SuStaIn model (reference [5]), and unlock pixel / voxel scale analyses that were previously intractable.
Questions point 1 (“I am confused about how the normal and abnormal distributions…”)
These distributions are defined by separating synthetically generated individuals into two groups, based on their randomly generated stage (the lowest 20% of stages are considered controls, see L194-195). Mixture models are then fitted to these groups, and are fixed throughout inference, as denoted in Figure 1. Equation 3 doesn’t explicitly distinguish between these fitted parameters (“\theta”) and the learned parameter “S”, because in principle they could be learned jointly; however if the Reviewer thinks it is helpful, we can clarify the difference by using semicolon notation for "\theta" in the likelihood equations.
Questions point 2 (“What is denoted by the color of the pixels in Figure 5?”)
The color denotes the number of pixel events in each histogram bin, e.g., in the first bin of events (the first column), we can see the density of pixel events occurring as a function of the distance from the centre. We will add a color bar and a sentence in the figure caption to clarify this.
Questions point 3 (“The authors mention that the method requires image registration…”)
We deliberately avoided going into detail regarding the registration, because a) we used a pre-processed data collection, produced using Tensor Based Morphometry, from the ADNI dataset, which has previously been described in detail (see reference [59]); and b) as we note in the Limitations (L292-295), some sort of pre-processing is always necessary for any method to enable comparison between individuals, so it isn’t a special limitation of our method.
Regarding the point about aligning images from different stages of progression – if we understand correctly, this is actually the primary utility of disease progression modelling, which can be thought of as a type of temporal registration of images generated by latent stages to a common temporal reference frame. Our method captures this variability and learns this temporal reference frame in terms of a sequence of events. However if the Reviewer is referring to the spatial registration of images with respect to a common spatial reference template, the primary utility of Tensor Based Morphometry is to provide a method to map to a common reference template and calculate the voxel-level volumetric change with respect to this template, thus transforming all images from different stages to a common spatial reference frame.
Questions point 4 (“Do the authors envisage applications of this method outside medicine?”)
Yes, we envisage many applications in areas that involve learning progressive sequences of events, e.g., in environmental modelling, our method can infer trajectories of biodiversity loss “events”, from temporal eco-acoustic monitoring data. We plan to make the vEBM code available with the aim of opening up such applications by researchers in other fields.
---
Rebuttal Comment 1.1:
Comment: Thanks for your responses and clarifications, which are very helpful and address my concerns. I do think that including some of these details / clarifications in the text itself would be helpful. Most important are the clarifications you provide in your responses to Weaknesses point 2 and Questions points 1-2. I have increased my score.
---
Reply to Comment 1.1.1:
Comment: Thanks for the positive response. We will include the clarifications you highlight in the manuscript. | Rebuttal 1:
Rebuttal: We thank the reviewers for their constructive comments, which we have endeavored to address as faithfully as possible in our responses below.
As part of our response, please find attached a PDF containing a new result.
We look forward to continuing the constructive discussion!
Pdf: /pdf/cdc2362c7df270565085e7fb02e10ea9432057f7.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Zero-Shot Scene Reconstruction from Single Images with Deep Prior Assembly | Accept (poster) | Summary: This paper introduces a framework called "Deep Prior Assembly," which combines various deep priors from large models to reconstruct scenes from single images in a zero-shot manner. By breaking down the task into multiple sub-tasks and assigning an expert large model to handle each one, the method demonstrates the ability to reconstruct diverse objects and plausible layouts in open-world scenes without additional data-driven training.
Strengths: The methodology effectively integrates various deep priors from different large models, enhancing the robustness of the reconstruction task and doesn’t rely on 3D or 2D data-driven training, reducing dependency on specific datasets. It introduces new optimization methods for pose, scale, and occlusion parsing, improving the collaborative effect of deep priors.
Weaknesses: 1. One obvious drawback of this methodology is the huge memory cost caused by assembling many pre-trained modules if we aim to apply this method on mobile edge device. Can the authors think out of a way to mitigate this problem?
2. The reconstruction quality of this framework relies mainly on the capacity of single-image reconstruction model, e.g. Shap-E in this paper. However, according to practical experience, Shap-E is really sensitive to the scale of input image. Is there a universal appropriate scale that fits to all the instances, as mentioned in the paper the scale is set to 6?
3. The methodology tries to fit the 3D proposal into the scene by aligning them with the depth estimated by Omnidata. According to the ablation, 3D matching plays the most important role compared to SD and 2D-matching in the whole framework. So how to ensure the quality of estimated depth?
4. I am curious about how will the methodology perform on the outdoor scenes.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please refer to the weaknesses part.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Please refer to the weaknesses part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate the reviewer veoH for the thoughtful feedback and time invested in evaluating our work. We respond to each question below.
**Q1:Applying DeepPriorAssembly on mobile edge devices.**
We agree that DeepPriorAssembly is now not capable of directly applying on mobile edge devices. Actually, most of the large models (e.g. ChatGPT, StableDiffusion, LLaVA) face the same challenge of inference on mobile edge devices. A more practical approach for users would be to submit the captured images to remote servers for processing, where the reconstructed 3D scenes can then be sent back to the users.
**Q2:The instance scale for Shap·E.**
We got the same experimental conclusion as yours regarding Shape·E's reconstructions. Shape·E is indeed sensitive to the scale of input images. Actually, we conducted ablation studies to explore the effect of instance scales on the generation quality of Shape·E, as shown in Sec.C.2 and Fig.10 in the Appendix. The visual comparison of the generations with different instance scales in Fig.10 in the Appendix show that Shap·E is quite sensitive to the scale of instances in the images, where a too small or too large scale will lead to inaccurate generations with unreliable geometries and appearances. For quantitatively evaluate the effect of instance scale, we further report the numerical results of the scene reconstruction performance under different instance scales, as shown in Table B in the rebuttal PDF. Thought both quantitative and qualitative ablation studies, we observe that the Shap·E performs the best in shape generation from instance images at a scale of 6.
For all the experiments conducted in the paper, we set the scale to 6, following the conclusions of ablation studies.
We believe that the scale of 6 is a universal appropriate scale that fits to most of the instances, as demonstrated by the comprehensive quantitative and qualitative evaluations on diverse datasets in Sec.4 and Sec.B of the Appendix.
**Q3:Quality of estimated depth.**
We ensure the quality of estimated depth by adopting the large model Omnidata which is trained under large scale dataset for depth estimation. In practical experiments, the quality of estimated depth is quite convincing due to the powerful large model with abundant knowledge learned by large-scale training on large-scale datasets.
We also admit that the depth point cloud may suffer from inevitable occlusions of unseen areas and distortion, especially in some difficult scenes. However, we justify that DeepPriorAssembly does not require very high-quality depths since they are only used as a prior for recovering the layout of the scene containing the reconstructed high quality 3D shapes with 2D/3D matching. The estimated depths are not directly used as the scene geometries, where the high quality of reconstructed scenes is ensured by the high quality of the reconstructed 3D shapes.
**Q4:Applying DeepPriorAssembly to outdoor scenes.**
We further conduct experiments to evaluate DeepPriorAssembly on complex outdoor scenes and scene containing animals, as shown Fig.A in the rebuttal PDF. The first image comes from KITTI dataset, others are collected from the Internet.
With the help of powerful large foundation models, DeepPriorAssembly demonstrates superior zero-shot scene reconstruction performance in these real-world outdoor scenes.
---
Rebuttal Comment 1.1:
Comment: Thanks for your explanation.
1. As this paper dedicated to assemble different priors from large foundation models, I think it's more convincing to show the robustness of the pipeline by substitute different parts with similar foundation models.
2. Moreover, I think it's inappropriate to claim your method's superior performance on zero-shot manner, because this capacity actually comes from the massive training data of Shap-E. Also, as we can observe from the figure A in the rebuttal PDF, the method fails to perform well on outdoor scene, which is limited by Shap-E.
Rationally, I would like to categorize this method as a straight-forward engineering effort to assemble the foundation models. Hope the authors can optimize the connection and compatibility between different priors.
---
Reply to Comment 1.1.1:
Title: Responses to Reviewer veoH
Comment: Dear Reviewer veoH,
Thank you for your response and the helpful comments. We respond to each of your additional questions below. Please do not hesitate to let us know if you have any additional questions.
**Discussion-Q1: Ablation studies on the choice of foundation models.**
We fully recognize the importance of evaluating each sub-task by substituting the foundational models with similar alternatives. Actually, we have conducted the ablation study to explore the effectiveness of our chosen solutions in each sub-task by comparing them with the alternatives. The results and analisis are presented in Sec.C.1 and Table 4 of the Appendix. Specifically, we conducted ablations to replace Shap·E with One-2-3-45, replace Open-CLIP with EVA-CLIP and replace Omnidata with MiDaS, observing performance degradation with the alternative models. We further visually compare Shap·E with One-2-3-45 for shape generation in Fig. 8, where the results demonstrate that Shap·E is a more robust solution for generating 3D models from 2D instances. These ablation studies validate the effectiveness of each choice made within our framework.
**Discussion-Q2: The claim of zero-shot scene reconstruction.**
Actually, the term "zero-shot" signifies that no specific data or data-driven training is required for the task of scene reconstruction from single images. Unlike previous works in this field, such as PanoRecon and Total3D, which require specific image-scene pair data, our approach does not rely on such data. The training data used in Shap·E is only capable of shape reconstruction learning, and it is impossible to train a model for single-view scene reconstruction solely by relying on the Shap·E data.
We justify that for the "zero-shot" tasks, using large-scale data from a different domain or task is not prohibited. For example, CLIP models are widely used for zero-shot image classification without requiring specific image-class pair data, though they do require massive amounts of image-text pair data for contrastive learning. Similarly, the data used in Shap·E does not undermine our claim of "zero-shot" single-view scene reconstruction.
We will provide further clarification on the tasks and claims to enhance the understanding of DeepPriorAssembly's capabilities and the data used in each foundational model integrated into our framework.
**Discussion-Q3: Performance on outdoor scenes.**
We would like to justify that none of the previous works can successfully reconstruct outdoor scenes from single images. All prior approaches are trained on indoor scenes and struggle to generalize to real-world outdoor images, which contain out-of-distribution objects such as trees, buildings, and animals. Leveraging powerful large foundational models, DeepPriorAssembly is the first to demonstrate the capability for zero-shot scene reconstruction in these complex real-world outdoor scenes. As illustrated in Fig.A of the rebuttal PDF, DeepPriorAssembly accurately reconstructs scene geometries even in challenging outdoor scenes. However, the texture produced by Shap·E may not be optimal for the outdoor shapes. In the future, we may consider replacing Shap·E with a shape reconstruction method that performs better on outdoor shapes to further enhance the performance of outdoor scene reconstruction.
**Discussion-Q4: Optimize the connection and compatibility between different priors.**
Thanks for your suggestions. In the revision, we will improve the connection and compatibility among the foundation models by conducting more ablation studies on the choice of each foundation model within our framework, supplementing the analysis presented in Sec.C.1 and Table 4.
We are deeply grateful for your invaluable feedback and the time you dedicated to evaluating our work. Your comments and expertise are sincerely appreciated. Please let us know if there is anything we can clarify further.
Best regards,
Authors | Summary: This work introduces a system named "Deep Prior Assembly" for zero-shot scene reconstruction from a single image. It breaks down the single-image scene reconstruction task into several steps that can be solved utilizing pretrained large models, such as SAM for segmentation, Shap-E for 3D object generation, and Omnidata for depth estimation. Additionally, an optimization-based approach is also proposed for pose estimation.
Strengths: **1. It makes sense to break down a difficult task into simpler ones, and the outcomes appear promising.**
This study puts into practice the concept of breaking down the challenging single-view reconstruction task into several simpler tasks that can be addressed using off-the-shield pretrained models.
**2. Experiments and ablation studies are thorough.**
Extensive quantitative and qualitative evaluations are provided in the paper, demonstrating the effectiveness of the proposed pipeline.
Weaknesses: **1. Technical contribution of this work is limited.**
The work seems to be more of an engineering trial that combines pretrained models to build a reconstruction system, rather than an insightful research endeavor. The authors highlight that they are exploring "deep prior" from large pretrained models, but the priors are not well integrated and instead function independently as separate modules. Additionally, naively combining many large models could lead to high computational demand and potential error accumulation. For example, the proposed system will run a diffusion generative model several times for each object in a scene, and includes a 9.2-second optimization for pose estimation of each object.
**2. Missing state-of-the-art baseline methods.**
Although there are thorough evaluation results provided in the paper, all the baseline methods are proposed before 2021. Has there been any new progress in single-view scene reconstruction after 2021? For example, ScenePrior [1] published in CVPR'23 introduces a conditional autoregressive generative method for single-view reconstruction.
---
[1] Learning 3D Scene Priors with 2D Supervision. Nie et al. CVPR 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: **1. What time does each stage of the proposed method take? And what time do the baseline methods take?**
**2. What's the relation between the entire running time of reconstruction and the number of objects in the scene? And how about the baseline methods?**
**3. Have you considered using methods like 3D object detection for layout estimation? It could potentially offer faster and more accurate results than the proposed depth-based method.**
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: **1. A naive combination of pretrained large models might cause error accumulation.**
**2. The proposed method has a much longer running time compared to baseline methods.**
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate Reviewer DEJS for the acknowledgment of our work and constructive feedback. We respond to each question below.
**Q1:Technical contribution.**
We are the first to explore the cooperation among large foundation models for another extremely difficult task where none of them can accomplish alone. The key motivation of our method stems from the recent success in large foundation models, which lead a revolution in language/vision computing. The large models show brilliant capabilities and remarkable performance, but are limited in a specific task with a specific modality. Driven from this observation, we propose to explore an effective solution that leverages existing expert large models, designed and trained for specific tasks, to address the extremely challenging task of 3D scene reconstruction from single images. We aim at a zero-shot framework where no part of it necessitates extra data collection, preparation, or time-consuming data-driven training.
To this end, we propose DeepPriorAssembly, a novel framework which assembles diverse deep priors from large models for scene reconstruction from single images in a zero-shot manner. We rethink this task from a new perspective, and decompose it into a set of sub-tasks instead of seeking to a data-driven solution. We narrow down the responsibility of each deep prior on a sub-task that it is good at, and introduce novel methods related to poses, scales, and occlusion parsing to enable deep priors to work together in a robust way. We believe DeepPriorAssembly introduces a new direction for the society to flexibly exploit the potential of existing powerful large models.
**Q2:Running time of DeepPriorAssembly.**
We have reported the running time of each stage of DeepPriorAssembly in Sec.F and Table 5 in the Appendix. Reconstructing a scene from a single image takes 171.2 seconds in total. The inference of Grounded-SAM, Open-CLIP and Omnidata takes only about 1 second. The most time-consuming parts include the StableDiffusion, Shap·E and the RANSAC-like optimization. For a balance between the efficiency and quality, we can optionally reduce the sample number _M_ of StableDiffuion and the iterations _r_ of RANSAC-like solution, which can reduce the total time to less than 60 seconds. For the baseline method PanoRecon, the inference time is 30.6 seconds. We further justify that we do not require any extra time for data collection, data preparing and data-driven training. In contrast, PanoRecon requires 5 days of data-driven training, as reported in its paper, not including the time for data preparation.
**Q3:Potential error accumulation.**
We demonstrate that the effective integration of additional large models does not compromise the robustness of our method. In contrast, this design enhances both its robustness and accuracy. For instance, we incorporate StableDiffusion to enhance and inpaint images, resulting in an improvement in CDL1 from 0.125 to 0.110, as shown in Table 2 of the ablation study. The CLIP model is introduced to filter out poor samples, leading to more robust results, as shown in Table 3. The other proposed strategies and constraints (e.g., RANSAC-like solution, 2D/3D-Matching) are also designed to improve the robustness. Please refer to Sec.4.4 for comprehensive ablation studies on the effectiveness of integrating each module.
**Q4:More comparisons with recent methods.**
We have additionally compared our method with other SOTA data-driven scene reconstruction works BUOL [CVPR 2023] and Uni-3D [ICCV 2023] in Sec.D and Fig.12 of the Appendix. The results demonstrate that our method achieves better and more visual-appealing results under both 3D-Front and ScanNet datasets. Specifically, our method significantly outperforms other methods using real-world images in ScanNet.
Following your suggestions, we further compare DeepPriorAssembly with ScenePrior [CVPR 2023] under ScanNet dataset, as shown in Fig.B in the rebuttal PDF. The reconstruction results of ScenePrior is provided by its authors. As shown, DeepPriorAssembly clearly outperforms ScenePrior in terms of the quality of scene geometries. Moreover, ScenePrior can only reconstruct the geometry, whereas DeepPriorAssembly is capable of recovering high-fidelity scene appearances as well.
**Q5:Relation between running time and the object numbers.**
The relation is that scenes containing more objects will often lead to longer running time. However, as we analyze in Sec.F of the Appendix, the inference of Grounded-SAM, Open-CLIP, and Omnidata takes only about 1 second. The most time-consuming parts include StableDiffusion, Shape·E, and the RANSAC-like optimization. For these three components, we can process all instances of the scene in parallel (e.g., with multiple GPUs), which can significantly reduce the required running time. Specifically, by processing each instance of the scene with a separate GPU in parallel, the running time for a scene containing multiple instances may not be much longer than the time for scenes containing only one instance.
**Q6:Leveraging 3D object detection for layout estimation.**
The input for DeepPriorAssembly is only one single scene image, where no 3D data is available for 3D object detection. An alternative approach is to leverage the back-projected depth point clouds for 3D object detection. However, the depth point cloud is often of low quality and corrupted by occlusions of unseen areas. Therefore, it is extremely difficult for existing 3D object detection methods to accurately detect 3D objects from the corrupted depth point cloud. We also emphasize that DeepPriorAssembly does not require very high-quality depths since they are only used for recovering the layout of the scene containing the reconstructed high quality shapes. The estimated depths are not directly used as the scene geometries, where the high quality of reconstructed scenes is ensured by the high quality of the reconstructed 3D shapes.
---
Rebuttal Comment 1.1:
Comment: I appreciate the detailed response to the review and have also read the comments from other reviewers. While the proposed method presents impressive experimental results, it is more like an engineering effort by combining several pretrained modules to build a software pipeline, which is not quite suitable for the NeurIPS research community. Therefore, I maintain my original score of "5: Borderline accept" and would like to participate actively in the subsequent reviewer discussion period.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer DEJS
Comment: Dear Reviewer DEJS,
Thank you for your response and the positive assessment on our rebuttal. We really appreciate your expertise and all the invaluable feedback. The technical contributions of DeepPriorAssembly lie in the the exploration of the robust and optimal approach to robustly integrating available deep priors for a challenging task, rather than simply combining them.
We show that simply combining several large models is not enough to solve the difficult task of zero-shot scene reconstruction. While our work is the first to propose assembling large models and decomposing tasks to tackle this challenge, we also make key technical contributions to ensure deep priors work together effectively and robustly.
The naive approach involves segmenting scene images and generating 3D objects from these segments, but it fails because (1) occlusions and low resolution often lead to incomplete or poor-quality 3D objects, and (2) existing techniques can't recover the 3D scene layout.
To overcome these issues, we propose two main techniques:
1. To enhance the quality of 3D instances and handle occlusions and low resolution, we introduce the StableDiffusion model to refine and inpaint segmented images, followed by CLIP models to filter and select the best-matching instances. These innovations in task decomposition and deep prior integration are crucial for producing accurate and high-quality 3D shapes.
2. For recovering the scene layout, we introduce a novel method to optimize the location, orientation, and size of instances by matching them with both 2D and 3D supervision. These supervisions come from estimated segmentation masks and predicted depths, generated by the assembled deep priors. Additionally, a RANSAC-like solution further improves the robustness of pose and scale optimization. This method effectively links the deep priors and is key to achieving robust zero-shot scene reconstruction.
Thank you for your valuable feedback and the time you spent evaluating our work. We truly appreciate your insights.
Best regards,
Authors | Summary: The method uses leverages multiple off-the-shelf models to parse a scene represented by a single image into 3D assets in their respective layout. Concretely they use a segmentation model to locate objects, then a diffusion model to enhance the image quality, use Shape-E to generate 3D proposals, and estimate the layout use depth estimation and point cloud matching. The method outperforms existing methods.
Strengths: **Originality:**
I think the originality is relatively low as all of the components in the system are off-the-shelf model and they are combined in a relatively straight-forward manner.
**Quality:**
The results look good both quantitatively and qualitatively. I am less familiar with the task this work is trying to solve, so I cannot speak to how significant the quantitative improvements are or the strength of the baselines.
**Clarity:**
Overall the figures illustrate the method well and the qualitative examples demonstrate the results.
**Significance:**
I cannot speak to the significance of the method or results as I am not that familiar with this task or the relevant literature. I think it would help to spend more time in the introduction discussing why this task is important and what are the applications. VR/AR is mentioned in passing, but more concretely anchoring the task to a specific application would help motivate the work.
Weaknesses: The method relies on the quality of foundational models to perform well. The comparison to other baselines is not apples to apples since the other methods are not trained on the large datasets leveraged by the foundational model. This isn't necessarily a flaw in this work, since leveraging additional data could be seen as a potential advantage, but it does create an unequal comparison.
Like I mentioned in the strengths section, I think it's unclear what the motivation for the paper is. A little further discussion to anchor this work to an application would be helpful since the method novelty is not particularly high.
Technical Quality: 3
Clarity: 3
Questions for Authors: Does this work for other types of scenes such as outdoor? Or is restricted to indoor scenes with furniture?
**Suggestions:**
Figure 2 has a lot of elements and is hard to parse. Maybe simplifying the figure to illustrate the main idea would help.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer 67ke for the invaluable feedback and time invested in evaluating our work. We respond to each question below.
**Q1:The applications of single-view scene reconstruction.**
The task of single-view scene reconstruction greatly contributes to the domain of AIGC, AR/VR, robotics, games, 3D design, etc.
1) AI content generation for 3D is a hot topic recently which generates diverse 3D models from user prompts (e.g. images.). Most previous works focus on generating 3D objects, which is much easier than complex scenes. DeepPriorAssembly flexibly generates complete scenes from single images, advancing the development of this field.
2) Recovering scenes from single views plays a curtail role in the field of augmented/virtual reality, which allows spatial interaction between human and environments.
3) This task also contributes to the field of robotics. DeepPriorAssembly can recover the environment around a robot using its camera, facilitating comprehensive scene understanding for semantic recognition, collision detection, human-robot interaction, and more.
4) Single-view scene reconstruction can greatly improve the efficiency of game production. Given a game scene image drawn by a game illustrator, DeepPriorAssembly can directly reconstruct the 3D game scene for interaction, eliminating the need for a 3D modeler to manually recreate each 3D object from the scene image and fit them to the correct position, rotation, and scale.
We will add more discussion and analysis on the applications of DeepPriorAssembly in our revision.
**Q2:The motivation and originality of DeepPriorAssembly.**
The key motivation of our method stems from the recent success in large foundation models (e.g. ChatGPT, VLM, StableDiffusion, CLIP, etc.), which lead a revolution in language/vision computing. By greatly scaling up sizes of training samples and model parameters, large models show brilliant capabilities and remarkable performance. However, they are limited in a specific task with a specific modality, which limits their capability in high level perception tasks.
Driven from this observation, we propose to explore an effective and robust solution that leverages existing expert large models, designed and trained for specific tasks, to address the extremely challenging task of 3D scene reconstruction from single images. Note that most large models are public available where everyone can access these deep priors without additional efforts. We are committed to provide new insights for the society on assembling existing powerful large models at different domains and tasks for tackling another more challenging task without extra knowledge. That is, we aim at a zero-shot framework where no part of it necessitates extra data collection, preparation, or time-consuming data-driven training.
To this end, we propose DeepPriorAssembly, a novel framework which assembles diverse deep priors from large models for scene reconstruction from single images in a zero-shot manner. We rethink this task from a new perspective, and decompose it into a set of sub-tasks instead of seeking to a data-driven solution. We narrow down the responsibility of each deep prior on a sub-task that it is good at, and introduce novel methods related to poses, scales, and occlusion parsing to enable deep priors to work together in a robust way. We are the first to explore the cooperation among large foundation models for another extremely difficult task where none of them can accomplish alone. We believe DeepPriorAssembly introduces a new direction for flexibly exploiting the potential of existing powerful large models.
**Q3:The evaluation fairness with other methods.**
Actually, all the previous methods on single-view scene reconstruction requires additional task specific data collection and time-consuming training. In contrast, our method merely assembles existing available large models without requiring any extra knowledge. We believe that other methods require more strict data settings than DeepPriorAssembly, but are also limited to the known data distribution. For example, PanoRecon requires a large collection of image-scene pairs from 3DFront dataset and task specific training, which performs well on the test set of 3DFront but fails to generalize to out-of-distribution images in the real-world. We believe that the evaluation is fair for other methods since none of them can handle this task in our experiment conditions, i.e., without data collecting and even without data-driven training.
**Q4:Applying DeepPriorAssembly to outdoor scenes.**
We further conduct experiments to evaluate DeepPriorAssembly on complex outdoor scenes and scene containing animals, as shown in Fig.A in the rebuttal PDF. The first image comes from KITTI dataset, others are collected from the Internet.
With the help of powerful large foundation models, DeepPriorAssembly demonstrates superior zero-shot scene reconstruction performance in these real-world outdoor scenes.
**Q5:Complexity of Figure 2.**
We will simplify Figure 2 to provide a clearer illustration of the main idea by moving some framework details to separate figures.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: I thank the authors for their response.
After reading other reviews and other responses to reviewers I will maintain my rating of borderline, 5. I think the technical contribution is limited as foundational models are being fused together and the techniques for combining don't seem particularly general. I think the motivation for matching assets to images of scenes is rather limited in its current form. In robotics, meshes with 6 DoF are estimated from images for grasping, but this task is a bit far from those in robotics.
For the positives the paper improves over current methods, the figures and qualitative results are well done, and the writing and organization is clear. Therefore I maintain my rating of 5.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer 67ke
Comment: Dear Reviewer 67ke,
Many thanks for the positive assessment. We really appreciate your expertise and all the invaluable feedback. Our method focuses on how to robustly join available deep priors for a challenging task. For the technical contributions, we make additional clarifications below.
We justify that the naive approach of simply introducing several large models fails to solve the challenging task of zero-shot scene reconstruction. Beyond the contribution of firstly proposing to assemble large models and the task decomposition for this challenge task, we offer additional significant technical contributions on solving the critical challenge of making deep priors work together robustly. Specifically, the naive solution involves segmenting the input scene images and then generating 3D objects from the segmented instances. However, the solution fails dramatically due to: (1) the instances are often corrupted by occlusions and low-resolution, leading to failures for reconstructing complete and high-quality 3D objects, and (2) none of the existing techniques are capable of recovering the scene layout for 3D objects.
To address these challenges and robustly assemble deep priors for zero-shot 3D scene reconstruction, we propose two significant technical contributions on improving the quality of 3D instances and accurately recovering the scene layouts.
(1) To improve the robustness of the framework and overcome the challenges of occlusions and low-solution in segmented instances, we novelly introduce the StableDiffusion model to enhance and inpaint the instance images, followed by CLIP models to filter out the bad samples and select the ones matching the instance most. The novel designs on task decomposition and introducing suitable deep priors are the key contributors for achieving accurate and high-quality geometries and appearances of the generated shapes.
(2) For the challenging task of recovering the layout of the scene containing the reconstructed shapes, we propose a novel approach on optimizing the location, orientation and size of instances by matching them with both 2D and 3D supervisions. The supervisions are novelly introduced from the estimated segmentation masks and the predicted depths, which are also obtained by the assembled deep priors. Moreover, a RANSAC-like solution is proposed to further improve the robustness of the pose/scale optimization. This approach is the link among deep priors and plays the key role in robustly assembling deep priors for the final target of zero-shot scene reconstruction.
Our technical contributions can be summarized as follows.
1. We propose the first framework which assembles diverse deep priors from large models together for the extremely difficult task of reconstructing scenes from single images in a zero-shot manner.
2. To improve the robustness of the framework and overcome the challenges in this task (e.g. occlusions and low-solution of instances), we novelly utilizes the StableDiffusion model for image enhancement and inpainting, combined with the CLIP model to filter out poor-quality samples.
3. We introduce a novel approach on optimizing the location, orientation and size of instances by matching them with both 2D and 3D supervisions. Moreover, a RANSAC-like solution is proposed to further improve the robustness of the pose/scale optimization. The approach is the link among deep priors and plays the key role in robustly assembling deep priors for the final target of zero-shot scene reconstruction.
Only with our designs on the task decomposition, deep prior chosen and our novel approach on RANSAC-like pose/scale optimization through both 2D and 3D matching to recover the scene layout, can the assembly of deep priors from large models effectively succeed in the extremely challenging task of zero-shot scene reconstruction.
We are deeply grateful for your invaluable feedback and the time you dedicated to evaluating our work. Your comments and expertise are sincerely appreciated. Please let us know if there is anything we can clarify further.
Best regards,
Authors | Summary: A multistage pipeline for single image 3D reconstruction is proposed, leveraging multiple off-the-shelf models. To begin, SAM is used to segment and decompose the input image. Stable diffusion is then leveraged to complete instance segments with potentially missing information, and failures of this process are filtered out by CLIP. Finally, Shap-E is applied to generate 3D models, which are then registered to the image for a final 3D reconstruction.
Strengths: The proposed technique achieves strong zero-shot performance relative to baseline methods despite not training on similar data.
Weaknesses: The method seems overly complicated and unlikely to be robust.
* The argument about heuristic selection of depth shift seems unconvincing; in practice, the correct depth shift from a scale and shift invariant monodepth estimator can vary widely between multiple images even in the same scene. Why not use multiple pairs of images to compute the appropriate depth shift, or metrically ground the depth shift as in RealmDreamer?
* In order to go from a SAM instance to mask, a very complex pipeline is proposed. Essentially, it consists of inpainting with SD, followed by filtering with CLIP, followed by shape estimation with Shap-E, followed by a likely unstable alignment procedure. This seems error-prone. Why not just train an LRM-like model that accepts potentially off-center and partially occluded instance images and produces 3D objects aligned to the input camera’s location? This seems not too difficult to train, there are 10M+ object datasets publicly available to achieve it, and it would certainly be more robust than the proposed pipeline.
Technical Quality: 3
Clarity: 3
Questions for Authors: Since the method consists of chaining several foundation models together, it should be possible to show "zero-shot" performance on scenes not limited to the simple synthetic or indoor room scenes shown here. Can the proposed method work on more complex scenes, such as real images, outdoor scenes or scenes containing animals or people?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes, there is a thorough limitations discussion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate the reviewer osDy for the thoughtful feedback and time invested in evaluating our work. We respond to each question below.
**Q1:The robustness of DeepPriorAssembly.**
We demonstrate that the effective integration of additional large models does not compromise the robustness of our method. In contrast, this design enhances both its robustness and accuracy. For instance, we incorporate StableDiffusion to enhance and inpaint images, resulting in an improvement in CDL1 from 0.125 to 0.110, as shown in Table 2 of the ablation study. Additionally, we introduce the CLIP model to filter out poor samples, leading to more robust scene reconstruction results (0.118 to 0.110), as demonstrated in Table 3 of the ablation study. The other proposed strategies and constraints (e.g., RANSAC-like solution, 2D-Matching, 3D-Matching) are also designed to improve the robustness of DeepPriorAssembly. Please refer to Table 2 and Table 3 for comprehensive ablation studies on the effectiveness of each module in improving the robustness of our method.
**Q2:The heuristic selection of depth shift and RealmDreamer solution.**
We greatly appreciate the insightful advice from reviewer osDy on depth shift estimation. We fully agree that using multiple image-depth pairs to compute the appropriate depth shift will lead to more accurate and robust depth scale and shift. We used only one image-depth pair to reduce the requirement for ground truth depths, and found the results to be convincing. Additionally, we provide ablation studies on the number of image-depth pairs in Table A in the rebuttal PDF.
RealmDreamer metrically grounds the depth scale and shift by aligning relative depths with the metric depths predicted by DepthAnything. We agree that this solution could potentially be a better approach for directly solving depth shift. However, RealmDreamer was released on arXiv in April 2024, only one month before the NeurIPS deadline, by which time we had already completed the core development of DeepPriorAssembly. To evaluate the effectiveness of a RealmDreamer-like solution in our framework, we replace our depth solution with it and conduct experiments in complex outdoor scenes, as shown in Fig.A in the rebuttal PDF. The results indicate that the RealmDreamer-like solution integrates well into our pipeline. Nevertheless, we justify that our depth solution also performs well across different datasets, as demonstrated by the comprehensive experiments in Sec.4.
**Q3:The effectiveness of proposed pipeline for shape reconstruction from SAM instances.**
As far as we know, none of the previous approaches can reconstruct complete and high-fidelity shapes from occluded 2D instances. We propose the first solution for this difficult task from a new perspective by assembling deep priors from large models, without requiring task-specific data preparing, model designing or training. Comprehensive evaluations and ablation studies demonstrate the effectiveness of the proposed pipeline for recovering complete and high-fidelity shape reconstructions from occluded and low-resolution SAM instances.
**Q4:Why not train a LRM-like for shape reconstruction and pose estimation.**
The key motivation of our method is to explore an effective and robust solution that leverages existing expert large models, designed and trained for specific tasks, to address the extremely challenging task of 3D scene reconstruction from single images. We are committed to provide new insights for the society on assembling existing powerful large models at different domains and tasks for tackling another more challenging task without extra knowledge. That is, we aim at a zero-shot framework where no part of it necessitates extra data collection, preparation, or time-consuming data-driven training. Training a LRM-like model for solving the part of shape reconstruction from SAM instances is not our way, as it requires extensive task-specific efforts in model design, data preparation, and training.
Moreover, we justify that none of the previous approaches can reconstruct complete and high-fidelity shapes from occluded 2D instances produced by SAM. The difficulties that prevent LRM-like techniques from successfully solving this task lie in:
1) The manner, location, and ratio of occlusions in SAM instances are unpredictable and vary significantly from scene to scene and instance to instance, making it extremely difficult to collect corrupted instance-complete shape pairs and to stably train an LRM-like model.
2) The 2D instances often suffer from low resolution. For small instances or instances far from the camera, the resolution can be low, leading to difficulties for a LRM-like model to accurately capture the semantics in the instance image and reconstruct shapes with details.
3) Real-world images contain diverse categories of instances, requiring an extremely large annotated dataset for this task. The scale of available 3D data (10M) is still much smaller than publicly available 2D/language data.
Our proposed pipeline effectively utilizes large models pretrained on billions of 2D/language data, demonstrating superior performance in reconstructing high-fidelity shapes given only corrupted SAM instances containing occlusions and low resolution as inputs.
**Q5:Applying DeepPriorAssembly to real images, outdoor scenes or scenes containing animals or people.**
We justify that the ScanNet dataset is a real-captured dataset. We have shown the scene reconstruction results under real images of ScanNet dataset in Fig.14 of the Appendix. We additionally conduct experiments to evaluate DeepPriorAssembly on complex outdoor scenes and scene containing animals, as shown in Fig.A in the rebuttal PDF. The first image comes from KITTI dataset, others are collected from the Internet.
With the help of powerful large foundation models, DeepPriorAssembly demonstrates superior zero-shot scene reconstruction performance in these real-world outdoor scenes.
---
Rebuttal 2:
Title: Rebuttal
Comment: Dear reviewer,
Please read the author rebuttal and the other reviews and post a comment as to how your opinion has or has not changed and why.
---
Rebuttal Comment 2.1:
Comment: Thanks for the thoughtful rebuttal. I'll change my score to borderline accept.
I'm still unconvinced about the pipeline setup. You might consider https://zixuanh.com/projects/zeroshape.html (in particular how they handle their training data) as an alternative and competing approach which would be likely to have strong performance if trained in the specified way.
The results on in the wild images are impressive! I would definitely suggest featuring these (and more + other examples) more prominently in a revised version of the manuscript, since the existing datasets are rather monotonous and supervised methods would probably perform well, so the "zero-shot" aspect is not really emphasized.
---
Rebuttal 3:
Title: Response to Reviewer osDy
Comment: Dear Reviewer osDy,
Many thanks for all the helpful comments and positive assessment. We will include the new experiment results and more reconstruction sampls in the revised version of the manuscript.
Following your suggestions, we consider training a reconstruction model similar to ZeroShape for predicting complete shapes as a future work of DeepPriorAssembly. We will include the insights to Sec.5 of the revised paper and will conduct experiments to explore the effectiveness of this alternative.
We really appreciate you for upgrading the score.
Best regards,
Authors | Rebuttal 1:
Rebuttal: We are grateful to the reviewers for their invaluable feedback and the time they dedicated to evaluating our work. We are delighted that reviewers appreciated the representation and the significance of the paper.
We respond to each reviewer separately with detailed analysis, visualizations and ablation studies to solve all the raised questions. We upload a rebuttal PDF with some experimental results and visualizations. For the following rebuttals, we use “rebuttal PDF” to point to the provided PDF like “in Table A of the rebuttal PDF”.
Thank you again for your insightful feedback and we are looking forward to continuing the discussion.
Pdf: /pdf/b7cee3c7dceadfcc7c29cf1a97a28087e6a5973e.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Tree of Attacks: Jailbreaking Black-Box LLMs Automatically | Accept (poster) | Summary: Note: I have seen this paper before, and I have heard of the method. However, I have not previously read it in full, and I do not remember anything about the authors. So this should still be a fully blind review.
This paper introduces TAP -- a black-box method to develop attacks against LLMs. The attack process constructs a tree with a branching step based on refining a prompt using an LLM and a pruning step based on pruning attacks deemed off topic or assigned a low jailbreaking success score by another LLM. This method is thus able to search for attacks in a way that's automated and interpretable.
Strengths: S1: I think that the overall contribution is valuable to the field. TAP is kind of simple and arguably a glorified data-augmentation technique, but I think that TAP clearly belongs in the red-teaming toolbox.
S2: One might criticize this paper for being an incremental improvement on PAIR, but I think that would be a pretty lazy criticism, and I don't buy that that would make this not valuable. I think that tree method is a good insight, and it doesn't have to be complicated to be useful.
Weaknesses: W1: I think that the biggest potential problem with this work is the jailbreak evaluation method. Recent work like [https://arxiv.org/abs/2402.10260](https://arxiv.org/abs/2402.10260) has shown that LLM autograders for jailbreak success tend to be pretty problematic. Meanwhile, I'm very skeptical of a supposed 88% success rate against GPT-40 in Table 8. Overall, I don't think that what is being evaluated is probably an ideal proxy for "safety".
W2: The biggest limitation with TAP to me seems that its ability to help you find very novel jailbreaks seems very limited, and the initialization/prompting is doing a lot of work. I'd be appreciative of discussing this limitation or some more quantitative/qualitative analysis of it.
Technical Quality: 4
Clarity: 4
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We are happy that you appreciate our paper’s contribution and we take your concerns seriously. We respond to them below and will include our responses in the form of expanded discussions in the final version.
**"Recent work…has shown…LLM autograders for jailbreak success…to be…problematic."** We agree that LLM autograders for jailbreak success can be inaccurate and this is an important problem for the field of jailbreaking. However, at the same time, we believe this is orthogonal to the contribution of this paper as the proposed method (TAP) is compatible with any existing or future automated methods for evaluating jailbreak success.
**"I don't think that what is being evaluated [in Table 8] is probably an ideal proxy for "safety"."** Table 8 reports the success rate according to manual grading done by the authors. When grading, we anonymized the name of the method used to generate the jailbreak to avoid any inadvertent skew favoring our method. After this anonymization, we followed the guidelines in Wei et al. (NeurIPS, 2023) [52] to evaluate a successful jailbreak. We acknowledge that there can be many notions of safety. We would be happy to repeat this evaluation with other specific notions of safety and include a discussion on the current evaluation criteria and its limitations.
**"ability to…find…novel jailbreaks seems…limited…I'd be appreciative of discussing this limitation…"** Thanks, we will include a discussion on this in a limitations section in the final version.
---
Rebuttal Comment 1.1:
Title: Thanks, I think the paper should be accepted
Comment: Thanks, not too much more to say. I basically think that we are on the same page. I'll hold at a 7. But I would encourage the authors to mention explicitly the problems with autograders. I would also add that mentioning any differences in eval methodology that might make the numbers in your tables and the numbers in other papers' tables apples-to-oranges. | Summary: This paper presents an automated method, Tree of Attacks with Pruning, for generating jailbreak prompts to exploit vulnerabilities in LLMs. TAP uses a tree-of-thought reasoning approach to iteratively refine and prune candidate prompts, significantly reducing the number of queries needed to successfully jailbreak LLMs like GPT-4.
Strengths: 1. TAP's method of using tree-of-thought reasoning combined with pruning optimizes the search for effective jailbreak prompts, requiring fewer queries.
2. TAP demonstrates effectiveness across multiple LLMs and attack scenarios, including models with advanced protections like LlamaGuard.
The evaluations show TAP successfully jailbreaks most state-of-the-art LLMs for over 70%.
Weaknesses: 1. The format of this paper needs to be refined, such as Table 1,2,3.
2. The success of TAP heavily depends on the choice of evaluator LLM, with significantly reduced performance when using less powerful evaluators.
3. Much lower attack success rate on well aligned open-sourced model, Llama-2-7b-chat. Llama3 and gemma2 may be needed for evaluating the effectiveness of this method.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness part.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The running time of this method is long.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. We take it seriously and respond to your concerns below. We hope that based on our responses you will strengthen your support for the paper.
**“format of this paper needs to be refined.”** Thanks, we will refine the format.
**“The success of TAP heavily depends on the choice of evaluator LLM…”** Yes, the success of our method (TAP) does depend on the choice of the evaluator. However, this is not specific to our work: both Yu et al. [59] and Chao et al. [12] use LLMs to evaluate whether jailbreaking attempts were successful. Moreover, Yu et al. [59] need to fine-tune an existing model to achieve high classification accuracy and the performance of Chao et al. [12]’s method (PAIR) also depends on the choice of the evaluator LLM.
**“...lower attack success rate on well aligned open-sourced model, Llama-2-7b-chat.”** Yes, our method has a lower success rate on the Llama-2-7B-chat model. But this is expected as prior methods, including PAIR, also have a low success rate with Llama-2-7B-chat. One reason for this is that Llama-2-7B-chat can be overly protective: for instance, it has been observed (see Figure 5 here https://arxiv.org/pdf/2310.08419v2) that Llama-2-7B-chat refuses to respond to benign requests if they mention certain words or phrases. Further, the fact that Llama-2-7B-chat is resilient to several jailbreaking methods can be seen as a validation that its safety training is effective.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. I tend to keep my score as borderline according to the following reasons.
1. The dependence on a powerful and costly evaluator is one of the drawbacks of this method, others having the same problem is not a reasonable excuse for this.
2. The Llama3 and Gemma2 exps are not conducted and the limitation of running time and money cost.
---
Reply to Comment 1.1.1:
Title: Thank you for your response
Comment: Thank you for taking the time to write back.
Regarding the performance's dependence on the evaluator, we agree that this is a drawback. However, we do not agree that it affects the contribution of the method over existing ones, since achieving high performance with existing black-box methods (such as GPT-Fuzzer of Yu et al. [59]) requires fine-tuning existing models.
Regarding the low success rate on "well-aligned open-sourced model" (e.g., Llama-2-7B model), we would like to stress that the method has high performance on many well-aligned and state-of-the-art LLMs (such as GPT-4o, GPT-3.5-Turbo, and Gemini-Pro). We do not understand why jailbreaking small open-source LLMs should be a criterion for evaluation: indeed, simple LLMs such as Llama-2-7B can be hard to jailbreak as they cannot follow complex instructions. This makes them less useful but is also one reason why they can be overly protective. This makes these models hard to jailbreak as mentioned in our earlier response.
Finally, regarding the evaluations with Llama-3 and Gemma-2 as targets. We note that Llama-3 was released less than a month before the abstract submission deadline (see https://ai.meta.com/blog/meta-llama-3/) and Gemma-2 was released *after* the deadline (see https://ai.google.dev/gemma/docs/releases). This prevented us from thoroughly evaluating the method with these models. We would be happy to include these evaluations in the final version. | Summary: This paper propose Tree of Attacks with Pruning (TAP) to jailbreak blackbox LLMs automatically. TAP has four steps, including branching, pruning, attack and assess, and pruning. TAP leverage an attacker (an LLM) to generate variations of the provided prompt, and the evaluator (another LLM) decide which variations to send to the target model. TAP is an advanced version of PAIR, reaching a higher success rate more effectively.
Strengths: 1, TAP can reach a high success rate on GPT4, GPT4o and Gemini-Pro.
2, Compared to PAIR, TAP is more efficiency on most of the black box model, suggesting the effectiveness of the branching and pruning.
3, TAP shows a better transferability between models.
Weaknesses: 1, As a method built on PAIR, with branching and pruning, the novelty of the method is limited.
2, In Fig.1, a typo "branching".
Technical Quality: 2
Clarity: 3
Questions for Authors: 1, In table 7, the attacker’s system prompt is long and carefully designed. The prompts try to get the LLM to make up a story for making the harmful behavior more reasonable and align with human value. This well designed prompt may be a key to success in jailbreak. It would help if the paper includes replacing the prompt with other evaluator/attacker prompts and show some results to state the success rate comes mostly from pruning instead of prompt engineering.
2, In section 6 discussion and appendix A.2, TAP-No-Prune has a higher success rate than TAP Prune, because retains the w = 10 highest scoring prompts and deletes the rest. Does it suggest that the off-topic prompts get higher scores?
3, As the subtree in Figure 12-14, is the improvement of the attacker generated by evaluator? What's the prompts for generating these? And are the revised prompts share the same attacker's prompts as the original generation?
4, In the branching stage, does the length of the re-writing prompts affect the success rate? Perhaps adding some constraints on the generation length may also help the jailbreak success rate.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Please refer to questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review. We respond to your questions below and hope that you will strengthen your support for our submission.
**“a typo "branching"”** Thanks, we will correct this.
**“attacker’s system prompt is long and carefully designed…”** Yes, the attacker’s system prompt in Table 7 is long and carefully designed. It is the same prompt as used by PAIR (an existing method). We use it in our simulations to ensure that both our method and PAIR use the same system prompt. This ensures that any gains in performance compared to PAIR are due to our method and not due to the engineering of the system prompt.
**“...would help if the paper includes replacing the prompt with other evaluator/attacker prompts and show some results to state the success rate comes mostly from pruning instead of prompt engineering.”** That’s a great suggestion. We implemented a variant of our method that uses a simpler system prompt: this system prompt simplifies the prompt in Table 7 by removing the detailed examples. We evaluated this variant with GPT-4-Turbo as the target and, matching our other simulations, GPT-4 as the evaluator and Vicuna-13B as the attacker. We observe that this variant jailbreaks a significantly higher number of prompts than PAIR (82% vs 44%) with fewer queries (35.0 vs 47.1) even though PAIR uses a more sophisticated attacker prompt with detailed examples. We will include this result and the full text of the simpler system prompt in the final version.
**“In section 6 discussion and appendix A.2…Does it suggest that the off-topic prompts get higher scores?”** Yes, these results suggest that off-topic prompts can sometimes get higher scores than on-topic prompts.
**“is the improvement…generated by evaluator? What's the prompts for generating these?...”** Sorry for any confusion. The improvements are generated by the attacker. The attacker’s system prompt (in Table 7) is used for generating improvements. This system prompt requests the attacker to both generate an improvement and, based on the improvement, modify its previous attack.
**“...How do generation constraints on the revision prompts affect jailbreak effectiveness?"** That is a nice suggestion, thanks. Following the suggestion, we strengthened the generation constraints from 500 tokens per revised prompt to 250 tokens per revised prompt. We evaluated this variant–which has a stricter generation constraint–with GPT-4-Turbo as the target, GPT-4 as the evaluator, and Vicuna-13B as the attacker. It achieves a similar fraction of jailbreaks (82%) as the original implementation of our method (84%) with a similar number of average queries (24.8 compared to 22.5).
---
Rebuttal Comment 1.1:
Comment: Thanks authors for the reply. And my concerns are mainly addressed. From the reply we can see that the method is somehow not sensitive to the system prompts and the generation constraints, and I encourage the authors to include these in the final version. And I believe that this paper should be accepted. | Summary: This paper presents a novel jailbreaking attack based on the PAIR attack, enhanced with branching and pruning techniques. The attack generates multiple prompts through branching, then applies two pruning steps: removing off-topic prompts and eliminating prompts with low scores after testing them against the victim model. The method achieves a higher effectiveness with a lower number of queries than previous work. The study evaluates the attack's transfer performance, the effectiveness of the attack when LLaMA Guard is used as a defense, and includes an ablation study on the contribution of the different attack components.
Strengths: **Attack effectiveness** The attack is strong and uses less queries than previous work. The branching and pruning steps are effective in generating effective jailbreaks.
**Thorough evaluation** The evaluation is thorough and includes transfer performance and the effectiveness of the attack against LLaMA Guard. Moreover, the ablation study provides insights into the contribution of the different attack components.
**Clear writing** The paper is well-written and easy to follow (except for the redundancy mentioned in the Weaknesses below).
**Thorough experimental details** The paper provides a lot of details about the experiments in the appendix, which is helpful for reproducibility.
Weaknesses: **Redundancy in the paper content**. I find subsection 1.2 with the basic description of TAP redundant and makes the Introduction section overly long. There is a big overlap with the content of section 3. I suggest merging the two sections.
**Unclear calibration of the Judge model**. The judge model give scores that are between 1 and 10, however, in my experience while using LLMs, they are usually poorly calibrated when assigning scores. I have then two questions: 1) would it be feasible to look at how calibrated is the judge model compared to a human baseline? 2) how important is it that the model is actually calibrated when using it for the pruning step? It would be interesting to see how the pruning step would perform when selecting random prompts instead of those with the highest assigned score (except of course for the prompts that are considerered to be fully successful, with a score of 10). It seems to me that the score domain is excessively granular even for a human to assign scores, so using a coarser score scale might be more effective and meaningful. I understand that this is the same prompt as the one used in PAIR, but it seems to me that the score in TAP has a more crucial role than in PAIR.
Technical Quality: 3
Clarity: 2
Questions for Authors: - What average is used for the number of queries in table 2? Is it the arithmetic mean? Are there many outliers? What is the median?
- See my questions about model calibration above.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors discuss the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for appreciating our thorough evaluation and clear writing. We take your feedback seriously and, following your suggestion, will shorten Section 1.2 to reduce the overlap with Section 3. We answer your specific questions below and hope you continue to support the paper.
**“how calibrated is the judge model compared to a human baseline?”** Our judge model can indeed be miscalibrated. Unfortunately, accurately measuring the calibration seems hard as we have a small number of examples (only 50). In the spirit of comparing the judge model’s performance to a human baseline, we evaluate its false positive and false negative rates in labeling examples as jailbreaks (i.e., assigning them a score of 10): we find that its false positive and false negative rate are not too large–13% and 0% respectively (Section A.2). We will include a discussion on the potential of miscalibration and its consequences in the final version.
**“how the pruning step would perform when selecting random prompts instead of those with the highest assigned score”** Thanks for the suggestion. Following it, we evaluated the variant of our method where Phase 2 of pruning randomly selects $w=10$ prompts (instead of selecting the $w$ prompts with the highest score). We selected GPT-4-Turbo as the target and, matching the simulations in the paper, selected GPT4 as the evaluator and Vicuna-13B as the attacker. Surprisingly, this method only jailbreaks 62% of the prompts. The number of queries sent has a mean of 34.8 and a median of 24.5. This is significantly lower than the success rate of 84% of the standard variant of TAP (that retains the prompts $w$ with the highest scores) demonstrating that pruning based on the evaluator’s scores improves performance.
**“It seems to me that the score domain is excessively granular even for a human to assign scores, so using a coarser score scale might be more effective and meaningful.”** Thank you for the great suggestion. Following your suggestion, we evaluated a variant of TAP where the evaluator uses a coarser score scale, namely, binary scores. We fix GPT-4-Turbo as the target, GPT4 as the evaluator, and Vicuna-13B as the attacker. We find that this improves the success rate from 84% to 86% while sending a similar number of queries (23.4 with binary score scale vs 22.5 with finer score scale). We will include these results in the final version of the paper.
**“What average is used for the number of queries in table 2? Is it the arithmetic mean? Are there many outliers? What is the median?”** Yes, the average in Table 2 is the arithmetic mean. The medians of the queries are close to the arithmetic means, as shown below:
| | Vicuna | Llama-7B | GPT3.5 | GPT4 | GPT4-Turbo | GPT-4o | PaLM-2 | Gemini-Pro | Claude3 Opus |
|------------------|--------|----------|--------|------|------------|--------|--------|------------|--------------|
| Jailbreak % | 98% | 4% | 76% | 90% | 84% | 94% | 98% | 96% | 60% |
| Mean # Queries | 11.8 | 66.4 | 23.1 | 28.8 | 22.5 | 16.2 | 16.2 | 12.4 | 116.2 |
| Median # Queries | 8.0 | 79 | 18.0 | 23.5 | 23.0 | 14.5 | 9.0 | 8.0 | 120 |
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply. I am glad that some of my suggestions turned out to be useful, and I believe that this paper should be accepted. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Adapting to Unknown Low-Dimensional Structures in Score-Based Diffusion Models | Accept (poster) | Summary: The paper considers the problem of convergence of probability distribution learned by diffusion models to target data distribution when the data distribution lies has some low-dimensional structure. Previous work either provides a convergence rate bound in Wasserstein distance for low-dimensional distribution or provides a bound in total variation distance with polynomial dependence on the ambient dimension. This paper improves the convergence rate to logarithmically depend on the ambient dimension and polynomially on the intrinsic dimension.
Strengths: - It is interesting that the authors do not only consider the case of data on a k-dimensional manifold but consider a general version of a low-dimensional structure provided by the covering number of the support.
- The paper is overall well-written and easy to follow.
Weaknesses: - The result in the paper is proved under the assumption of the bounded support of the data distribution (i.e. $|| x || \leq R$ where $R = \textrm{poly}(T)$). The results in the previous works are under the assumption that the data has a bounded second moment therefore, the bounded support assumption is more restrictive than the assumptions considered in the previous work.
- The argument about the uniqueness of coefficients that leads to dimension-independent discretization error in Section 3.2 is not convincing because it proves a lower bound on the error incurred in one denoising step which is an upper bound on the total variation distance (the authors also mention it).
Technical Quality: 3
Clarity: 3
Questions for Authors: - I understand that showing the lower bound on the total variation error in section 3.2 might be difficult but can you experimentally show that the claimed coefficient in eq.(2.4) leads to a better convergence compared to other coefficients (e.g., coefficients used in practice) on some synthetic low-dimensional distribution?
- The final bound on T in theorem 1 seems to depend on polylogarithmically on the dimension of the distribution (instead of polynomially in previous works). Is the polylogarithmical dependence of dimension necessary?
Minor comment: it might be worth including a discussion in the paper on why earlier approaches get the dimension-dependent error.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes, the authors discuss the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We address your comments below.
**Bounded assumption on the target distribution.**
Thank you for raising this point!
- We agree that the bounded support assumption is stronger than, for example, the bounded second moment assumption, and it excludes Gaussian distributions. However, we argue that this condition is satisfied in most practical scenarios. Arguably, the most important applications of diffusion models are to generate new images from a target distribution, where image data is naturally bounded. For instance, the CIFAR and ImageNet datasets consist of images with pixel values ranging from 0 to 255 or $[0, 1]$, and it is common to normalize them to the range $[-1, 1]$. Then the $\ell_2$ norm of an image from, e.g., the CIFAR dataset, is typically below $60$. Moreover, for DDPM in image generative tasks, $T$ is typically around $100$. Hence, we believe it is reasonable to assume in theoretical analysis that the radius of the data distribution is bounded by $T^{c_R}$, where $c_R$ is any given fixed constant that can be very large.
- We acknowledge that it is unfortunate not to cover common distributions like the Gaussian. Although we believe the bounded support assumption is not essential, it greatly facilitates our analysis, especially since we use a novel set of analytical tools that characterize the dynamics of DDPM in a more deterministic manner to establish convergence guarantees in the presence of low-dimensional structure. While we can relax this assumption in some specific cases (e.g., for Gaussian distribution, as illustrated in the next paragraph), we find it challenging to achieve a clean and concise result that holds generally for any distribution with a finite second moment. We leave this to future investigation.
We will incorporate these discussions in our next revision.
**Regarding the gap between our lower bound and the TV error.**
Thank you for your suggestion! We conduct an experiment to examine whether our coefficient design (3.2) is indeed the unique coefficient design that leads to dimension-independent TV error $\mathsf{TV}(q_1,p_1)$, described as follows.
Using the degenerated Gaussian distribution considered in Theorem 2 as a tractable example, we ran the DDPM sampler with exact score functions (so that the error only comes from discretization) and plotted the error curves of $\mathsf{TV}(q_1,p_1)$ versus the ambient dimension $d$ under several different choices of $T$. We implemented both our coefficient design (2.4) and another widely-used design $\eta_t = \sigma_t = 1-\alpha_t$. The results demonstrate that the TV error $\mathsf{TV}(q_1,p_1)$ is independent of the ambient dimension $d$ under our coefficient design (2.4), while it grows with $d$ when using the other design. **Please refer to the figures in the attached PDF in the response to all reviewers.** This supports our key message that (3.2) represents a unique coefficient design for DDPM in achieving dimension-independent error.
**Dependency on $\log d$.**
Thank you for raising this point! The polylogarithmic dependency w.r.t.~the ambient dimension $d$ arises from the analysis tools that allows us to characterize the algorithmic dynamics in a more deterministic manner. This analysis framework is crucial for tackling the convergence of DDPM in the presence of low-dimensional structure.
In the proof, we identified a "typical" high-probability set where we are able to characterize the evolution of conditional density precisely (see our discussion at the beginning of Section 4). However, the algorithmic dynamic outside this set is difficult to track, and we have to bound some quantities related to the dynamics outside the typical set at the order of $d$. As a result, we have to set the radius of the "typical" high-probability set to be large enough (larger than some logarithmic factor of $d$), such that the exceptional probability is small (e.g., smaller than $O(1/\mathsf{poly}(d))$ in order to offset the impact of these terms. We think it might be possible to improve the polylogarithmic dependency on $d$ via more refined analysis, and we leave it to future investigation.
**Discussion on coefficient design.**
Thank you for the suggestion! We have extended Theorem 2 to a general lower bound that works for arbitrary low-dimensional data distributions. **Please see our response to all reviewers for this general lower bound, as well as an outline of the proof.** We believe that this general result can help us understand the role of our coefficient design (2.4), as well as why earlier designs get the dimension-dependent error. To facilitate discussion, we copy the lower bound here:
$$\\mathbb{E}_{x\_{t}\sim q\_{t}}[\\mathsf{KL}(p\_{X\_{t-1}|X\_{t}}(\\cdot| x\_{t})\\parallel p\_{Y\_{t-1}|Y\_{t}}(\\cdot| x\_{t}))] \\geq \\bigg(\\frac{\\sigma\_{t}^{\\star2}}{\\sigma\_t^2} + 2\\log\\frac{\\sigma\_t}{\\sigma\_t^\\star}- 1\\bigg)\\frac{d}{2} + \\frac{c\_0(\\eta\_{t}-\\eta\_{t}^{\\star})^2d}{2\\sigma\_{t}^{2}(1-\\overline{\\alpha}\_t)} - C\_{5}^2\\frac{k^{4}\\log^{6}T}{T^2}\\bigg(3+\\frac{\\sigma\_{t}^{\\star2}}{\\sigma_t^2}\\bigg) - C\_{5}\\frac{k^{2}\\log^{3}T}{T}\\bigg|\\frac{\\sigma\_{t}^{\\star2}}{\\sigma\_t^2} - 1\\bigg|\\sqrt{d} - \\exp(-c\_1k\\log T)$$
This bound is achieved by a novel set of analysis tools that characterize the algorithmic dynamics in a more deterministic manner, which together with our analysis in Theorem 1 provide a sharp characterization of the impact of coefficient design (i.e., $\eta_t$ and $\sigma_t$) in determining the error induced in each denoising step. It can be seen that, unless we take $\eta_t=\eta_t^\star$ and $\sigma_t=\sigma_t^\star$ as suggested in (2.4), there will be inevitable error that scales at least linear in $d$ incurred in each denoising step.
---
Rebuttal 2:
Comment: I thank the authors for providing the detailed rebuttal and also providing the additional experiment for the uniqueness of the coefficient.
- I understand some of the explanations provided by the authors but as authors mentioned, each pixel is normalized between 0 to 1 or -1 to 1. If we think of each pixel as a coordinate in the input, then the norm of the input is typically $O(\sqrt{d})$ but the result is proven under the assumption that norm of the input is $poly(T) = poly(k, log(d))$.
- The experiments on the synthetic data are interesting. Can you provide more details on the baseline choices for $\eta_t$ and $\sigma_t$? The original DDPM paper [1] uses different $\alpha_t$ and $\sigma_t^2 = \sqrt{\alpha}_t \beta_t$ (note that the definition of $\sigma_t$ differs in this work and the original work). The follow-up work [2] also seems to have different choices than $\eta_t$ and $\sigma_t$ than the one used in the above experiment so can you provide some of the references that use the choices ($\eta_t = \sigma_t = 1-\alpha_t$)?
[1] Denoising Diffusion Probabilistic Models
[2] Improved Denoising Diffusion Probabilistic Models
---
Rebuttal 3:
Comment: Thank you for your reply!
**On the $d$-dependence.** Thanks for the insightful comment. We agree that the norm of the input image $X_0$ is typically $\sqrt{d}$. However this won't affect our main message, and the reason is as follows:
* Our paper requires that, for some fixed (arbitrary) constant $c_R$, the norm of $X_0$ is bounded by $T^{c_R}$. This means that we need $T \geq d^{1/(2c_R)}$. Since $c_R$ can be chosen arbitrarily, this is weaker than any polynomial dependence on $d$.
* In fact, we can see that the $d$-dependence in this requirement can be removed at the price of an additional $\log d$ factor in the final error bound. Although we treat $c_R$ as a universal constant in the current paper and hide its dependence in Theorem 1, the final upper bound in Theorem 1 actually depends polynomially on $c_R$ as follows:
$$\\mathsf{TV}\\left(q\_{1},p\_{1}\\right)\\lesssim c\_R^2\\frac{\\left(k+\\log d\\right)^{2}\\log^{3}T}{\\sqrt{T}}+\\varepsilon\_{\\mathsf{score}}\\log T.$$
We will make this dependence explicit in the next revision. If we take $c_R \asymp \log d$, then our error bound becomes larger by a factor of $\log^2 d$, which is still nearly as good as the current one. However, notice that $d^{1/\Omega(\log d)} = O(1)$, the requirement $T \geq d^{1/(2c_R)} = O(1)$ is independent of $d$.
**On the choices of $\eta_t$ and $\sigma_t$.** Thanks for raising this point. We use the same $\eta_t$ as [1] and [2], and we agree that the choices of $\sigma_t$ in [1] and [2] are different with our choices in the experiment.
Here, [1] chose $\sigma_t^2 = \alpha_t\beta_t$, while our choice is $\sigma_t^2 = \beta_t$.
We use this type of $\sigma_t^2$ as our baseline, since it is commonly adopted in the theoretical literature (e.g., [3]) and has been shown to achieve $\mathsf{poly}(d)/T$ convergence.
In addition, we have also implemented the choice $\sigma_t^2 = \alpha_t\beta_t$ in [1] in the numerical experiments, and the performance is very similar to using $\sigma_t^2 = \beta_t$. We will include this result in our next revision.
Our lower bound based on the degenerated Gaussian distribution also show that this choice $\sigma_t^2 = \alpha_t\beta_t$ will incur some error that is linear in $d$.
Note that $\sigma_t^2$ in [2] is learned from the dataset in the hope of achieving optimal performance, which takes a different approach from our current work, where we intend to identify the coefficients based on the learning rates of the forward process (i.e., $\alpha_t$'s) in a non-data-driven way. will discuss this point more clearly in the next revision.
[3] Towards Non-Asymptotic Convergence for Diffusion-Based Generative Models, G. Li, Y. Wei, Y. Chen and Y. Chi, ICLR 2024.
---
Rebuttal Comment 3.1:
Comment: Thanks again for your efforts in reviewing our paper and for your helpful comments! We have carefully considered your questions and addressed them in our response. The discussion phase is due to conclude in less than 20 hours, and we would like to know whether our response has appropriately addressed your questions and concerns about our paper. If we have addressed your concerns, we would appreciate it if you consider increasing your score for our paper. Please let us know if you have further comments or concerns about our paper. Thank you! | Summary: In DDPM-style diffusion models, we need to discretize a continuous
process. Benton et al [3] showed that ~ d/eps^2 steps suffice over d
dimensions. But what if (as is typical) the data lie in a space of
lower intrinsic dimension (e.g. k-sparse or a k-dimensional manifold)?
This paper shows how to replace the d in the iteration complexity by
k^4.
Strengths: This paper shows that DDPM can be nearly dimension independent, up to
log factors only depending on the intrinsic dimension. This holds for
a very general definition of intrinsic dimension---just the log
covering number. I was pretty surprised to learn that such a result
is possible.
Doing so requires particular choices of step size, which they show is
necessary, at least for the typical Girsanov-style proof approach.
Weaknesses: Obviously it would be nice to see k rather than k^4. We don't
typically expect intrinsic dimension << d^{1/4}, so this isn't giving
a real quantitative improvement.
There's very little intuition for what's going on. Why are these the
right step sizes?
Technical Quality: 3
Clarity: 3
Questions for Authors: How does the schedule (2.4) compare to the suggestion from [3]?
What happens if X isn't exactly low dimensional, but is close? Say,
there is eps TV mass outside the cover.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We address your comments below.
**Intuition for our coefficient design.**
Thanks for raising this point. We have extended Theorem 2 to a general lower bound that works for arbitrary low-dimensional data distributions. **Please see our response to all reviewers for this general lower bound, as well as an outline of the proof.** We believe that this general result can help us understand the role of our coefficient design (2.4). To facilitate discussion, we copy the lower bound here:
$$\\mathbb{E}_{x\_{t}\sim q\_{t}}[\\mathsf{KL}(p\_{X\_{t-1}|X\_{t}}(\\cdot| x\_{t})\\parallel p\_{Y\_{t-1}|Y\_{t}}(\\cdot| x\_{t}))] \\geq \\bigg(\\frac{\\sigma\_{t}^{\\star2}}{\\sigma\_t^2} + 2\\log\\frac{\\sigma\_t}{\\sigma\_t^\\star}- 1\\bigg)\\frac{d}{2} + \\frac{c\_0(\\eta\_{t}-\\eta\_{t}^{\\star})^2d}{2\\sigma\_{t}^{2}(1-\\overline{\\alpha}\_t)} - C\_{5}^2\\frac{k^{4}\\log^{6}T}{T^2}\\bigg(3+\\frac{\\sigma\_{t}^{\\star2}}{\\sigma_t^2}\\bigg) - C\_{5}\\frac{k^{2}\\log^{3}T}{T}\\bigg|\\frac{\\sigma\_{t}^{\\star2}}{\\sigma\_t^2} - 1\\bigg|\\sqrt{d} - \\exp(-c\_1k\\log T)$$
This bound is achieved by a novel set of analysis tools that characterize the algorithmic dynamics in a more deterministic manner, which together with our analysis in Theorem 1 provide a sharp characterization of the impact of coefficient design (i.e., $\eta_t$ and $\sigma_t$) in determining the error induced in each denoising step. It can be seen that, unless we take $\eta_t=\eta_t^\star$ and $\sigma_t=\sigma_t^\star$ as suggested in (2.4), there will be inevitable error that scales at least linear in $d$ incurred in each denoising step.
**Comparison with other coefficient designs.**
Thank you for raising this point. In the prior work [3], the marginal distribution of the forward process is $X_{t_k} \sim e^{-t_k}X_0 + \sqrt{1-e^{-2t_k}}\,\overline{W}$ where $\overline{W}\sim\mathcal{N}(0,I_d)$.
This means that $e^{-2t_k}$ plays the role of $\\overline{\\alpha}\_t$ in our paper.
By examining their discretization rule (see Eq.~(4) therein), we find that they use the following update rule:
$$
Y_{t-1} = (1+\Delta_t)Y_t + 2\Delta_ts_t(Y_t) + \sqrt{\Delta_t}Z_t,
$$
where $\Delta_t = \frac{1}{2}\log\frac{1}{\alpha_t}$.
By applying a similar calculation as in Theorem 2, we can show that:
$$
\\mathbb{E}\_{x\_{t}\\sim q\_{t}}\left[\mathsf{KL}\left(p_{X_{t-1}|X_{t}}\left(\cdot| x_{t}\right)\,\Vert\,p_{Y_{t-1}|Y_{t}}\left(\cdot| x_{t}\right)\right)\right]
\gtrsim \frac{d}{T^2}.
$$
Hence, the coefficient design in [3] will also incur dimension-dependent error in each denoising step. We will incorporate these discussions in our next revision.
**Approximately low-dimensional structure.**
Thank you for raising this point. We first would like to clarify that we are not assuming that the data distribution is exactly low dimensional. Our characterization of low-dimensionality is based on the covering number of the support, as defined in Section 2, which accommodates approximately low-dimensional structure. For example, our setup includes the case that $\mathsf{supp}(p_\mathsf{data}) \in \cup_{i \le N_{\varepsilon}} \mathbb{B}(c_i, \varepsilon)$, the balls centered at $c_i$ with $\varepsilon$ radius, whose dimension is $d$ instead of $k$. Therefore even if the support of the data distribution is not exactly low-dimenaional, our results can still be applied. (We apologies if we misunderstood your question.)
It is an interesting question whether our result is stable against adding an $\varepsilon$-mass outside the cover. This setting is beyond our current result, since this mass can be spread out in the full space (thus having large covering number that depends on $d$). We conjecture that our result will be stable to this perturbation, namely it is possible to show an error bound similar to Theorem 1, but has an additional term proportional to $\varepsilon$ (or $\varepsilon d$). Currently, we don't know how to prove this result, and we leave this for future investigation.
**Improving the quartic dependency on $k$.**
Thank you for raising this point. Currently, we don't know how to improve this dependency, which we believe requires new analysis frameworks and tools. We leave this for future investigation.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. | Summary: This paper investigates score-based diffusion models when the underlying distribution is near low-dimensional manifolds in a higher-dimensional space. It addresses the gap in theoretical understanding of diffusion models, which are suboptimal in the presence of low-dimensional structures. For the DDPM, the error dependency on the ambient dimension $d$ is generally unavoidable during each denoising step in the existing literature. However, the authors identify a unique set of coefficients that yields a convergence rate of $O(k^2/\sqrt{T})$, where $k$ is the intrinsic dimension and $T$ is the number of steps. The analysis employs novel tools that characterize the algorithmic dynamics in a deterministic manner. Additionally, the paper establishes that the DDPM sampler's error, influenced by time discretization and score estimation, is nearly dimension-free, with the ambient dimension $d$ appearing only in logarithmic terms.
Strengths: **1. Theoretical Advancement in Understanding Diffusion Models.** The paper makes a good contribution to the theoretical understanding of score-based diffusion models in the context of low-dimensional manifolds. By identifying a unique set of coefficients that yield a convergence rate dependent on the intrinsic dimension $k$ rather than the ambient dimension $d$, the authors provide a refined theoretical framework that addresses previously suboptimal theoretical support.
**2. New Analytical Tools and Methodology.** The paper introduces a new set of analytical tools to characterize the algorithmic dynamics of the DDPM sampler in a deterministic manner. This innovative approach allows for a more precise analysis of the error sources—time discretization and score estimation errors.
**3.** In general, the paper is well-written and joyful to read.
Weaknesses: **1. Strict Assumption on the Data Distribution** In Line 126, the authors assume that the support set of the data distribution is bounded. This assumption is overly restrictive and may not hold for many real-world data distributions, such as Gaussian distributions, which have unbounded support. The authors should consider relaxing this assumption or providing justification for its necessity, as well as discussing the implications of this restriction on the generalizability and robustness of their findings.
**2. Lack of Empirical Demonstration.** Although this paper presents a solid and sharp theoretical analysis of the convergence rate of DDPM, the authors don't provide any experimental results to support their theory. Without experimental evidence, it is difficult to assess the real-world performance and robustness of the proposed coefficient design and convergence rate improvements.
Technical Quality: 3
Clarity: 3
Questions for Authors: **Q1.** Is it widely used in the literature to employ $\epsilon$-net and cover number to measure the intrinsic dimension of a data distribution? The authors should provide a more thorough discussion on this point.
**Q2.** In Line 126, the authors assume that the support of the data distribution is bounded. However, the support of the Gaussian distribution in Theorem 2 is not bounded. It is believed that this is not a good example to demonstrate the uniqueness of coefficient design.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Not applicable
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We address your comments below.
**Bounded assumption on the target distribution.**
- We agree that the bounded support assumption is stronger than, for example, the bounded second moment assumption, and it excludes Gaussian distributions. However, we argue that this condition is satisfied in most practical scenarios. Arguably, the most important applications of diffusion models are to generate new images from a target distribution, where image data is naturally bounded. For instance, the CIFAR and ImageNet datasets consist of images with pixel values ranging from 0 to 255 or $[0, 1]$, and it is common to normalize them to the range $[-1, 1]$. Then the $\ell_2$ norm of an image from, e.g., the CIFAR dataset, is typically below $60$. Moreover, for DDPM in image generative tasks, $T$ is typically around $100$. Hence, we believe it is reasonable to assume in theoretical analysis that the radius of the data distribution is bounded by $T^{c_R}$, where $c_R$ is any given fixed constant that can be very large.
- We acknowledge that it is unfortunate not to cover common distributions like the Gaussian. Although we believe the bounded support assumption is not essential, it greatly facilitates our analysis, especially since we use a novel set of analytical tools that characterize the dynamics of DDPM in a more deterministic manner to establish convergence guarantees in the presence of low-dimensional structure. While we can relax this assumption in some specific cases (e.g., for Gaussian distribution, as illustrated in the next paragraph), we find it challenging to achieve a clean and concise result that holds generally for any distribution with a finite second moment. We leave this to future investigation.
We will incorporate these discussions in our next revision.
**Truncating Gaussian distribution in the lower bound.**
We agree that the degenerated Gaussian distribution considered in Theorem 2 is not a good example because it is unbounded. As we mentioned in footnote 2 on page 5, this lower bound can be extended to the truncated Gauss distribution. We have changed the result to truncated Gaussian in the following.
Before presenting the details, we also want to highlight that we have extended Theorem 2 to a general lower bound that works for arbitrary low-dimensional data distributions, which we believe also helps addressing this comment. **Please see our response to all reviewers for this general lower bound, as well as an outline of the proof.** Now we continue presenting results for the truncated Gaussian.
Let $X_0 \sim \mathcal{N}(0,I_k)$, and define its truncated counterpart $\\widetilde{X}\_0$ as
$$
p\_{\\widetilde{X}\_0} (x_0) \\propto p_{X_0} (x_0) 1(\\|x_0\\|_{\infty} \le T).
$$
Define $\\widetilde{X}\_t = \\sqrt{\\alpha\_t}\\widetilde{X}\_{t-1} + \\sqrt{1-\\alpha\_t}Z\_t$, and construct the reverse process $\\widetilde{Y}\_t$ with score estimation of $\\widetilde{X}\_t$. Then we can establish exactly the same lower bounds for $\\widetilde{X}\_t$ and $\\widetilde{Y}\_t$.
Notice that $\\widetilde{X}\_t$ and $\\widetilde{Y}\_t$ have independent entries, hence it suffices to studying each entry separately (we use $\\widetilde{X}\_{t, i}$ and $\\widetilde{Y}\_{t, i}$ to denote their $i$-th entry), i.e.,
$$
\\mathsf{KL}(p\_{\\widetilde{X}\_{t-1}|\\widetilde{X}\_t}(\\cdot | x\_t) \\parallel p\_{\\widetilde{Y}\_{t-1}|\\widetilde{Y}\_t}(\\cdot | x\_t)) \\ge \\sum\_{i > k} \\mathsf{KL}(p\_{\\widetilde{X}\_{t-1, i}|\\widetilde{X}\_{t, i}}(\\cdot | x\_{t, i}) \\parallel p\_{\\widetilde{Y}\_{t-1, i}|\\widetilde{Y}\_{t, i}}(\\cdot | x\_{t, i})).
$$
The right hand side of the above inequality obeys
$$
\\text{RHS}= \\sum\_{i > k} \\mathsf{KL}(p\_{X\_{t-1, i}|X\_{t, i}}(\\cdot | x\_{t, i}) \\parallel p\_{Y\_{t-1, i}|Y\_{t, i}}(\\cdot | x\_{t, i}))\\ge \\frac{d}{4}\\left(\\eta\_{t}-\\eta\_{t}^{\\star}\\right)^{2}+\\frac{d}{40}\\left(\\frac{\\sigma\_{t}^{\\star2}}{\\sigma\_{t}^{2}}-1\\right)^{2},
$$
where the first relation holds since the truncation does not affect the entries with zero variance, and the second relation is proved in Line 550-552 in Appendix B.
**Empirical demonstration.**
Thank you for raising this point.
We have conducted a numerical experiment using the data distribution considered in Theorem 2. The results demonstrate that the TV error and KL divergence are independent of the ambient dimension $d$ under our coefficient design (2.4), while they grow with $d$ under other widely-used designs. **Please refer to the figures in the attached PDF in the response to all reviewers**, where we plot the error curves of $\mathsf{KL}(q_1\parallel p_1)$ and $\mathsf{TV}(q_1,p_1)$ versus $d$, with all other setups fixed. This supports the conjecture that (2.4) represents a unique coefficient design for achieving dimension-independent error.
While the lack of GPUs prevents us from examining this design in more complex, large-scale tasks, we believe that our theoretical findings and empirical observations are already meaningful and interesting.
**Our use of $\varepsilon$-covering.**
Thanks for raising this point! While we believe that it's a good idea to use covering number for characterizing low-dimensional structure in our problem, we are not aware of other existing literature that does the same. We choose to use $\varepsilon$-net and covering number to define approximate low-dimensional structure because we believe that this is less stringent and more general than assuming an exact low-dimensional structure (e.g., by assuming that the support of the data distribution lives in a low-dimensional subspace). As a sanity check, we showed in Section 2 that when the support of the data distribution lives in an $r$-dimensional
subspace, our intrinsic dimension $k$ defined through covering number is of order $r$, confirming that our definition is indeed more general. We will incorporate these discussions in our next revision.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reviewers' rebuttal. I have two further comments: (i) For the bounded assumption, can this proof be applied to a uniform distribution oversphere? (ii) Can you provide some literature on using $\epsilon$-net to characterize intrinsic dimension?
---
Reply to Comment 1.1.1:
Comment: Thank you for your response!
**Uniform distribution over sphere.** Yes, our proof and result can be applied to a uniform distribution over sphere, since the only assumption we imposed on the data distribution $p_{\mathsf{data}}$ is boundedness. However, the intrinsic dimension $k$ of, e.g., the unit sphere $\mathbb{S}^{d-1}$ in $\mathbb{R}^d$ is $d-1$, which is not a typical low-dimensional structure. Although our theory holds for general $k$, the most interesting regime is $k\ll d$, where our results significantly improve the convergence rate that has polynomial dependence on $d$.
**Using covering number to characterize intrinsic dimension.** Thank you for asking this. Here we provide some related literature and discussion on this issue.
* In fact, our definition of the intrinsic dimension $k$ is actually the metric entropy of $\mathcal{X}$, the support of $p_{\mathsf{data}}$. Metric entropy is defined using covering number, and is widely used in statistics and learning theory to characterize the complexity of a set/class in a metric space, which is useful in proving sample complexity and generalization bounds for algorithms; see e.g., Sections 5 and 14 in [1] for the reference. The low-dimensionality is also a concept of complexity, therefore we believe it is very natural to use covering number, or metric entropy to characterize the intrinsic dimension.
* Prior literature [2], which studied diffusion model on low-dimensional data, assumes that the data is supported on a low-dimensional linear subspace. More generally, another work [3] assumes that the distribution is supported on a union of low-dimensional linear subspace. As we discussed in Section 2 and in the rebuttal, our intrinsic dimension $k$ defined through covering number is of order $k$ for a $k$-dimensional linear subspace, and we can also easily seen that $\sum_{i=1}^{m} k_i$ for the union of $m$ linear subspace (each with dimension $k_i$). Therefore using covering number to characterize intrinsic dimension actually admits the setup in these prior literature as special examples, and is more general and robust.
The discussion phase is due to conclude in 20 hours, and we would like to know whether our response has appropriately addressed your questions and concerns about our paper. If we have addressed your concerns, we would appreciate it if you consider increasing your score for our paper. Please let us know if you have further comments or concerns about our paper. Thank you!
[1] High-Dimensional Statistics: A Non-Asymptotics Viewpoint, M. J. Wainwright, Cambridge University Press, 2019.
[2] Score Approximation, Estimation and Distribution Recovery of Diffusion Models on Low-Dimensional Data, M. Chen, K. Huang, T. Zhao, M. Wang, ICML 2023.
[3] Robust Subspace Clustering, M. Soltanolkotabi, E. Elhamifar, E. J. Candes, Annals of Statistics, 2014. | Summary: This paper discusses the DDPM sampler's capability to adapt to unknown low-dimensional structures in the target distribution, and studies how this informs the coefficient design.
Strengths: The paper is written clearly and has a nice structure in general. It contributes to an important topic in diffusion models about adapting to lower dimensional structure. In the first part of the paper, the authors show that, with a particular coefficient design (2.4), the TV error of the DDPM sampler has an upper bound that depends on the intrinsic dimension. In the second part of the paper, the authors exemplify the unique choice of the coefficients.
Weaknesses: The main weakness of the paper is the generality. The adaptivity to the lower dimensional structure is based on a particular coefficient design, however, this design is not shown to be unique for the TV error, or for general target data. It is this reviewer's opinion that it is better to write the contribution section more precisely in terms of the setup and the scope of the results. The concerns are specified in the questions section.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How does this paper's result compare with [1], which does not require knowing/estimating the manifold?
Can the result in the paper lead to a tighter bound when a subspace is given a priori? For example, how does it compare to [2]?
[1] Rong Tang and Yun Yang. Adaptivity of diffusion models to manifold structures. In International Conference on Artificial Intelligence and Statistics, 2024.
[2] Kazusato Oko, Shunta Akiyama, and Taiji Suzuki. Diffusion models are minimax optimal distribution estimators. arXiv preprint arXiv:2303.01861, 2023.
2. Section 3.2 about unique coefficient design is very interesting, however, the results lack generality.
- Theorem 2's result is derived in the case that the target data distribution is a standard Gaussian distribution. What can you get when considering a general data distribution?
- As the authors noted at the end of the section, the uniqueness shown is with respect to the upper bound of the TV error. How does one make use of the uniqueness with respect to the upper bound in practice? What are the possible ways to address the coefficient design for the actual TV error, or tighten the gap?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have disccussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We address your comments below.
**Comparison with prior works.**
Thank you for the reference. These two works and the current paper approach the diffusion model from different perspectives, making direct comparison challenging. Specifically, the two prior works establish error bounds for estimating density under the assumption that the true target density meets certain smoothness conditions. Their results rely on a score-matching procedure designed based on the level of smoothness, situating their findings within the context of nonparametric statistics.
In contrast, our paper approaches the problem from an applied mathematics perspective, decoupling the error of DDPM into discretization error and score matching error, and characterizing them separately, akin to prior works [3, 4, 6, 13]. Our results assume minimal conditions on the target distribution, avoiding smoothness assumptions, and accommodating arbitrary score-matching procedures. Therefore, our results cannot be directly compared with these prior works.
Our paper does not require knowledge or estimation of the manifold, and our error bound is expressed as the sum of discretization error and score matching error:
$$
\\mathsf{TV}(q_{1},p_{1})\leq \\underbrace{C\frac{\left(k+\log d\right)^{2}\log^{3}T}{\sqrt{T}}}\_{ \text{discretization error}}
+\\underbrace{C\varepsilon_{\mathsf{score}}\log T}\_{\text{score matching error}}.
$$
When a subspace is given a priori, we believe it will not help in improving the discretization error bound, as it already adapts to the low-dimensional structure automatically. While there might still be room for improvement (e.g., reducing the polynomial dependency on $k$), we believe this requires new analytical tools rather than assuming access to the low-dimensional structure. However, knowing the subspace can aid in improving the score-matching error, as it is possible to exploit the subspace information to design an efficient score-matching procedure that achieves a smaller $\varepsilon_{\mathsf{score}}$. This, however, is not the main focus of our paper, as our result accommodates any score-matching procedure, which we believe is more general. We leave this for future investigation.
We will incorporate and discuss these references in our next revision.
**Extending the lower bound to arbitrary data distribution.**
Thank you for raising this point. As far as we know, it is quite general to use the worst-case error (or risk) to characterize the inherent difficulty of a problem in areas like statistics (e.g., minimax lower bound) and optimization (e.g., algorithmic lower bound). We believe that establishing a lower bound for the (degenerated) standard Gaussian distribution effectively illustrates our point: if the algorithm cannot perform well without using the proposed coefficient design in probably the simplest case, we can hardly expect it to work well in more complicated examples. However, the good news is that for this problem, we can actually show a similar lower bound for arbitrary low-dimensional data distributions. **Please see our response to all reviewers for this general lower bound, as well as an outline of the proof.** We will include this result in our next revision.
**Regarding the gap between our lower bound and the TV error.**
Thank you for raising this point.
First, we would like to point out that we actually have an upper bound for the KL divergence between $q_1$ and $p_1$.
In the analysis, we control $\mathsf{TV}(q_1, p_1)$ by bounding $\mathsf{KL}(q_1\parallel p_1)$ (see Eq.~(3.2)).
Then the established KL lower bound in Theorem 2 is meaningful.
We will added the KL divergence bound in our main result.
In addition, the lower bound for KL divergence can also be extended to TV distance.
According to the calculation in Appendix B (see Line 545 and 547), we know $p_{X_{t-1}|X_t}$ and $p_{Y_{t-1}|Y_t}$ are two gaussian distributions.
Then with basic calculations, it can also be shown that
$$
\\mathsf{TV}(p_{X_{t-1}|X_{t}}, p_{Y_{t-1}|Y_{t}}) \\gtrsim \\min\\bigg\\{1, d\\left(\eta_{t}-\eta_{t}^{\star}\\right)^{2}+d\\left(\frac{\sigma_{t}^{\star2}}{\sigma_{t}^{2}}-1\\right)^{2}\\bigg\\}.
$$
We will also include this result in the next revision.
Finally, due to the lack of GPUs, we conducted a numerical experiment using the data distribution considered in Theorem 2. The results demonstrate that the TV error and KL divergence are independent of the ambient dimension $d$ under our coefficient design (2.4), while they grow with $d$ under other widely-used designs. **Please refer to the figures in the attached PDF in the response to all reviewers**, where we plot the error curves of $\mathsf{KL}(q_1\parallel p_1)$ and $\mathsf{TV}(q_1,p_1)$ versus $d$, with all other setups fixed. This supports the conjecture that (2.4) represents a unique coefficient design for achieving dimension-independent error.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the extensive rebuttal addressing the comments, and for the additional proof and numerical experiments. This comment again asks about the requirement on the target data, and the generality of the results. The generalization to a more general target data is not trivial as the probability measure of the target data is not absolute continuous with respect to the measure of the prior distribution. So, depending on the target data, the error in the denoising steps could explode. The choice of the time schedule would also be relevant here.
---
Rebuttal 2:
Comment: Thank you for the reply! We are not sure whether we understand the "prior distribution" in your comment correctly. Please let us know if we misunderstood anything.
- First, we would like to clarify that the only assumption we imposed on the target distribution $p_{\mathsf{data}}$ is that it has bounded support. We do not need any further assumption, e.g., absolute continuousness, in order to establish our results. In what follows, we will explain why we don't need to worry about the explosion or error, both in the upper bound (Theorem 1) and lower bound (Theorem 2).
- Regarding the upper bound in Theorem 1, our error metric is the TV distance between the distribution of $X_1$ and $Y_1$ (i.e., $q_1$ and $p_1$), which are both absolutely continuous w.r.t.~the Lebesgue measure (i.e., they both have densities). This circumvents any potential issue when the data distribution $p_{\mathsf{data}}$ of $X_0$ is not continuous. Since $X_0$ and $X_1$ are exceedingly close (since $\beta_1=T^{-c_0}$ is vanishingly small), this error metric also reflects the closeness of the generator distribution and the target distribution.
- The generalized lower bound stated in the rebuttal is established for the error incurred in each denoising step, which is defined as the expected KL divergence between two conditional distributions $p_{X_{t-1} | X_t}$ and $p_{Y_{t-1} | Y_t}$, for $2\leq t \leq T$ (again, it does not concern $t=1$ to circumvent the case when the data distribution is not absolutely continuous). These two distributions are both absolutely continuous w.r.t.~the Lebesgue measure (i.e., they both have densities), regardless of whether the target distribution $p_{\mathsf{data}}$ of $X_0$ is absolutely continuous or not.
- To further support our claim above, we would like to mention that prior works [3,4,6,13] showed that even without using our coefficient design (2.4), other reasonable coefficient designs also lead to convergence rates $\mathsf{poly}(d)/T^2$ for the error incurred in each of the denoising steps (and they sum up to an overall error $\mathsf{poly}(d)/T$). This also suggests that the error in the denoising steps will not explode even if $p_{\mathsf{data}}$ is not continuous -- they just suffer from dependence on the ambient dimension $d$.
- While our upper bound is established under the specific time schedule (i.e., $\beta_t$'s) in Section 2, the lower bound, including the one for degenerated Gaussian as well as the generalized one, holds for a reasonably large class of time schedules (only with the exception of some corner cases). We will specify this in our next revision.
We are happy to discuss more details with you in case you have concern on the generalized lower bound.
Nevertheless, as we discussed in the rebuttal, we believe that our original lower bound established for a simple example (degenerated Gaussian) already effectively illustrates our point. We present the generalized lower bound in the rebuttal just because we think it is a beautiful, concise, yet powerful result that can make the paper better.
---
Rebuttal Comment 2.1:
Comment: Thanks again for your efforts in reviewing our paper and for your helpful comments! We have carefully considered your questions and addressed them in our response. The discussion phase is due to conclude in less than 20 hours, and we would like to know whether our response has appropriately addressed your questions and concerns about our paper. If we have addressed your concerns, we would appreciate it if you consider increasing your score for our paper. Please let us know if you have further comments or concerns about our paper. Thank you! | Rebuttal 1:
Rebuttal: We thank all reviewers for their feedback. Here we address some common comments and questions.
**Extending the lower bound to arbitrary data distribution.** We agree with several reviewer's comments that the lower bound in Theorem 2 only covers Gaussian distribution, which can be restrictive. Here we generalize Theorem 2 to establish a similar lower bound for general low-dimensional distribution:
**Theorem.** Consider arbitrary data distribution $p\_{\mathsf{data}}$ satisfying the assumptions in Section 2. For the DDPM sampler (2.3) with perfect score estimation and arbitrary coefficients $\eta_t$ and $\sigma_t$, we have
$$\\mathbb{E}_{x\_{t}\sim q\_{t}}[\\mathsf{KL}(p\_{X\_{t-1}|X\_{t}}(\\cdot| x\_{t})\\parallel p\_{Y\_{t-1}|Y\_{t}}(\\cdot| x\_{t}))] \\geq \\bigg(\\frac{\\sigma\_{t}^{\\star2}}{\\sigma\_t^2} + 2\\log\\frac{\\sigma\_t}{\\sigma\_t^\\star}- 1\\bigg)\\frac{d}{2} + \\frac{c\_0(\\eta\_{t}-\\eta\_{t}^{\\star})^2d}{2\\sigma\_{t}^{2}(1-\\overline{\\alpha}\_t)} - C\_{5}^2\\frac{k^{4}\\log^{6}T}{T^2}\\bigg(3+\\frac{\\sigma\_{t}^{\\star2}}{\\sigma_t^2}\\bigg) - C\_{5}\\frac{k^{2}\\log^{3}T}{T}\\bigg|\\frac{\\sigma\_{t}^{\\star2}}{\\sigma\_t^2} - 1\\bigg|\\sqrt{d} - \\exp(-c\_1k\\log T)$$
for each $2\leq t\leq T$, where $c_0,c_1,C_5>0$ are some universal constants.
Notice the fact that $x^2 - 2\log x -1 \geq 0$ for any $x>0$, and the equality holds only if $x=1$. Therefore the above results suggests that, unless $\eta_t=\eta_t^\star$ and $\sigma_t=\sigma_t^\star$, the corresponding denoising step will incur an error that is linear in $d$, when $d$ is sufficiently large. We will include this result in our next revision.
**Proof sketch.** Following the equation around Line 245 in Section 4.4, we start with the following bound:
$$\\mathbb{E}_{x\_{t}\sim q\_{t}}[\\mathsf{KL}(p\_{X\_{t-1}|X\_{t}}(\\cdot| x\_{t})\\parallel p\_{Y\_{t-1}|Y\_{t}}(\\cdot| x\_{t}))]\\ge
\\int\_{x\_{t-1}, x\_{t}} p\_{X\_{t-1}|X\_{t}}\\left(x\_{t-1} | x\_{t}\\right)\\log\\left(\\frac{p\_{Y\_{t-1}^{\\star}|Y\_{t}}\\left(x\_{t-1} | x\_{t}\\right)}{p\_{Y\_{t-1}|Y\_{t}}\\left(x\_{t-1} | x\_{t}\\right)}\\right)p\_{X\_{t}}\\left(x\_{t}\\right)\\mathrm{d}x\_{t-1}\\mathrm{d}x\_{t}
=:\\mathcal{I}\_{t},$$
where we recall the definition of $Y\_{t-1}^{\\star}$ in (4.1). It boils down to understanding $\\mathcal{I}\_t$:
$$\\mathcal{I}\_{t}
= d\\log \\frac{\\sigma\_{t}}{\\sigma\_{t}^{\\star}} + \\int\_{x\_{t}, x\_{t-1}} p\_{X\_{t}}\\left(x\_{t}\\right)p\_{X\_{t-1}|X\_{t}}\\left(x\_{t-1} | x\_{t}\\right) \\bigg(\\frac{\\Vert\\sqrt{\\alpha\_{t}}z\_t-(\\eta\_{t}-\\eta\_{t}^{\\star})s\_{t}^{\\star}\\left(x\_{t}\\right)\\Vert\_{2}^{2}}{2\\sigma\_{t}^{2}} - \\frac{\\Vert\\sqrt{\\alpha\_{t}}z\_t\\Vert\_{2}^{2}}{2\\sigma\_{t}^{\\star2}}\\bigg)\\mathrm{d}x\_{t-1}\\mathrm{d}x\_{t},$$
where $\\sqrt{\\alpha\_{t}}z\_t := \\sqrt{\\alpha\_{t}}x\_{t-1}-x\_{t}-\\eta\_{t}^{\\star}s\_{t}^{\\star}(x\_{t}).$
Using similar analysis as in Lemma 6, the above integral on $\\mathcal{A}\_{t}^{\\mathrm{c}}$ (outside the typical set) can be controlled to the order of $\\exp(-\\Omega(k\\log T))$. According to Lemma 3, for $(x\_t,x\_{t-1})\\in \\mathcal{A}\_{t}$ we have
$$\\left|\\frac{p\_{X\_{t-1}|X\_{t}}\\left(x\_{t-1} | x\_{t}\\right)}{p\_{Y\_{t-1}^{\\star}|Y\_{t}}\\left(x\_{t-1} | x\_{t}\\right)}-1\\right|\\leq C\_{5}\\frac{k^{2}\\log^{3}T}{T}.$$
Therefore,
$$\\mathcal{I}\_{t} \\ge d\\log \\frac{\\sigma\_{t}}{\\sigma\_{t}^{\\star}} + \\int\_{x\_t}\\bigg(\\frac{\\Vert(\\eta\_{t}-\\eta\_{t}^{\\star})s\_{t}^{\\star}\\left(x\_{t}\\right)\\Vert\_{2}^{2}}{2\\sigma\_{t}^{2}} - C\_{5}\\frac{k^{2}\\log^{3}T}{T}\\frac{\\sigma\_{t}^{\\star}\\Vert(\\eta\_{t}-\\eta\_{t}^{\\star})s\_{t}^{\\star}(x\_{t})\\Vert\_{2}}{\\sigma\_{t}^2}\\bigg)p\_{X\_{t}}(x\_{t})\\mathrm{d}x\_{t} + \\bigg(\\frac{\\sigma\_{t}^{\\star2}}{\\sigma\_t^2} - 1\\bigg)\\frac{d}{2} - C\_{5}\\frac{k^{2}\\log^{3}T}{T}\\bigg|\\frac{\\sigma\_{t}^{\\star2}}{\\sigma\_t^2} - 1\\bigg|\\sqrt{d} - \\exp(-c\_1k\\log T).$$
Here, we make use of the observation that for $0 < \delta < 1$,
$$ \\mathbb{P}\\bigg(\\bigg|\\frac{\\Vert\\sqrt{\\alpha\_{t}}z\_t\\Vert\_{2}^{2}}{\\sigma\_{t}^{\\star2}} - d\\bigg|^2 \\ge 2d\\log \\frac{1}{\delta}\\bigg) \\le \delta.$$
Then the desired lower bound follows from the following facts:
$$\\int\_{x\_t}C\_{5}\\frac{k^{2}\\log^{3}T}{T}\\frac{\\sigma\_{t}^{\\star}\\Vert(\\eta\_{t}-\\eta\_{t}^{\\star})s\_{t}^{\\star}\\left(x\_{t}\\right)\\Vert\_{2}}{\\sigma\_{t}^2}p\_{X\_{t}}\\left(x\_{t}\\right)\\mathrm{d}x\_{t}
\\le \\int\_{x\_t}\\frac{\\Vert(\\eta\_{t}-\\eta\_{t}^{\\star})s\_{t}^{\\star}\\left(x\_{t}\\right)\\Vert\_{2}^{2}}{4\\sigma\_{t}^{2}}p\_{X\_{t}}\\left(x\_{t}\\right)\\mathrm{d}x\_{t} + C\_{5}^2\\frac{k^{4}\\log^{6}T}{T^2}\\frac{\\sigma\_{t}^{\\star2}}{\\sigma\_t^2},$$
as well as
$$ \\int\_{x\_t} \\|s\_{t}^{\\star}(x\_{t})\\|\_{2}^{2}p\_{X\_{t}}(x\_{t})\\mathrm{d}x\_{t} \\asymp \\frac{d}{1-\\overline{\\alpha}\_t}.$$
We will add the complete proof to our next revision.
**Setup of the numerical experiment.** We conduct some experiments to examine whether (2.4) is indeed the unique coefficient design that leads to dimension-independent error. We provide the setup for this experiment.
We use the degenerated Gaussian distribution $p_\mathsf{data}=\mathcal{N}(0,I_k)$ in Theorem 2 as a tractable example, and run the DDPM sampler with exact score functions (so that the error only comes from discretization). We fix the low intrinsic dimension $k=8$, and let the ambient dimension $d$ grow from $10$ to $10^3$. We implement the experiment for different number of steps $T \in\\{100,200,500,1000\\}$. We implemented both our coefficient design (2.4) and another widely-used design $\eta_t = \sigma_t = 1-\alpha_t$. The error curve of $\mathsf{KL}(q_1\parallel p_1)$ and $\mathsf{TV}(q_1,p_1)$ versus $d$ can be found in the attached PDF. This supports our key message that (2.4) represents a unique coefficient design for DDPM in achieving dimension-independent error.
Pdf: /pdf/8b3314fcca450594b6d68ef3114238ac67f55957.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Dual Encoder GAN Inversion for High-Fidelity 3D Head Reconstruction from Single Images | Accept (poster) | Summary: The paper proposes an encoder-based approach to GAN-inversion for full 3D head reconstruction from a single image. To address the challenge of reconstructing the entire 360° head, the method employs two encoders: one for high-fidelity reconstruction of the input image (typically near-frontal) and another for generating realistic invisible parts of the head (e.g., back of the head). Finally, the paper presents a method to combine the representations learned from these two encoders to reconstruct a complete 3D head.
Strengths: + The motivation for the problem and the proposed method seem reasonable.
+ Training a Discriminator directly on Triplane instead of the final images is an interesting and novel approach, differing from other GAN inversion work. This approach is also more relevant for full 360° reconstruction, rather than just near-frontal scenes.
+ The paper includes a good amount of qualitative comparison, ablation studies, and quantitative comparison.
Weaknesses: + The head geometry after editing (Figure 8) appears to be quite flat compared to other results presented in the paper.
+ I would appreciate it if the authors could provide visualizations of the geometry of the generated results, in addition to the final generated RGB images.
Technical Quality: 3
Clarity: 3
Questions for Authors: + In line 163-164, the paper mentioned: "This discrepancy may stem from the real samples originating from the generator, lacking the detailed feature characteristic of real-world images". Does this means that the reconstruction quality of the unobserved part is bounded by the capacity of Panohead, compared to Encoder 1 that is trained with real-world images?
+ How does the proposed method compare to the optimization-based inversion results presented in Panohead?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: + The authors have acknowledged and discussed the limitations of the societal impact of their work.
+ I would appreciate it if the authors could provide more details on potential directions for improving the quality of the results or on other future directions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their feedback and for highlighting several strengths of our paper.
1. (**W.1**) For the head geometry after editing, we realized that the example we share in the paper does not do justice; the reference images have more flat-looking hairs. We added more examples to rebuttal Fig. 7. It can be seen in those examples that the back of the head is as bumpy as the others. We will include more results in the revised paper. Thank you for bringing this to our attention.
2. (**W.2**) Based on the reviewers' suggestions, we added visualizations of the geometry in rebuttal Fig. 6. Our model outputs better geometry compared to the two best-competing methods. In many examples, PTI struggles; the most obvious is the last row results, where PTI outputs a flat backside of the head. TriplaneNetv2 also struggles with many realistic details and especially on the side and back-views.
3. (**Q.1**) The reviewer makes a great point. It is true that the reconstruction quality of the unobserved part is bounded by the capacity of Panohead since we rely on its generations while training a discriminator. This is also meaningful since we use the latent space of Panohead for our inversion, and input images can only provide faithful information on the visible parts. We will add this discussion to the revised version.
4. (**Q.2**) The reviewer asks about the comparisons with optimization-based methods provided by Panohead. Our main paper already included these comparisons as Panohead uses PTI optimization. We observe that PTI sometimes may achieve good results, as given in the Panohead paper; however, the method often fails for the novel views even though it takes significantly longer to run inference. It can also be seen in the geometry in rebuttal Fig. 6. Even though it is still worse than ours, PTI may achieve reasonable results, as shown in the first example. On the other hand, in many examples, it outputs very unrealistic results, as given in the last example in Fig. 6. Similar results exist in the main paper and Supplementary.
5. (**L.2**) We appreciate the reviewer asking for future directions. *Reviewer AnnS* mentioned the transformer-based architecture. We also believe using attention techniques can be a valuable research direction to distribute the knowledge of visible regions to others. Additionally, combining this method with diffusion-based techniques is another direction we would like to pursue. We will include these ideas in the revised manuscript to further stimulate research in this area.
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal and got all my questions answered.
---
Reply to Comment 1.1.1:
Comment: We are happy to hear that the rebuttal addressed reviewer's questions. We would like to thank the reviewer for their thorough review and constructive feedback. | Summary: This paper introduces an encoder-based method to do the GAN inversion task, especially for the full head inversion. The occlusion aware discriminator is interesting and reasonable. The results are good.
Strengths: 1. The idea is interesting, I like the occlusion-aware discriminator. Meanwhile, training the discriminator in the triplane domain is also reasonable.
2. The results are good.
Weaknesses: There is no significant weakness in this paper. I have a question about the training process. Are these two encoders trained on the face first and then on the occluded parts? Is the occlusion-aware encoder initialized with the facial encoder?
Technical Quality: 3
Clarity: 3
Questions for Authors: see the weakness
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: see the weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their feedback and finding our paper interesting and technically solid.
In the training process, E1 is trained with visible parts by using input-output paired losses of LPIPS, L2, and identity losses. On the other hand, E2 is trained to learn the occluded regions by learning from the discriminator. The occlusion-aware E2 is not initialized with a pre-trained encoder. E2 and D are trained jointly from scratch. We will elaborate on the training procedures. We would like to thank the reviewer for bringing this to our attention.
Additionally, we would like to highlight that Rebuttal Table 1 and Fig. 1 demonstrate our method's significant superiority over competing approaches on the multi-view MEAD dataset evaluation. We also show our method’s ability to generalize across diverse ethnicities, heavy makeup, and extreme camera viewpoints in Rebuttal Fig. 2, 3, and 4, respectively. The geometry of the generated results is detailed in Fig. 6. These findings will be included in the final paper, which we believe will further strengthen our work. | Summary: This paper addresses the challenge of 3D GAN inversion from a single image, focusing on a method called PanoHead designed for generating 360 views of human heads. Unlike optimization-based approaches, this work adopts an encoder-based technique to reduce inversion times. The authors observe that varying encoder training objectives result in different advantages and disadvantages in the inversion output. Consequently, the authors propose using two encoders and introduce a stitching technique in the triplane feature space to combine the strengths of each encoder.
Strengths: 1. The paper is well-written and easy to follow.
2. The ablation study effectively demonstrates the impact of each design choice, offering valuable insights.
3. The technique for stitching occluded and visible region features could potentially be useful in a broader context. e.g., in applications beyond human head modeling.
Weaknesses: 1. The claim on lines 168-169, stating that triplane features associated with visible regions (e.g., a frontal face) are excluded from discriminator training and thus do not receive gradients, is not entirely accurate. The method forms a mask by back-projecting the depth map onto the triplane coordinates, which limits the mask to points lying on the surface of the visible region. However, since the model uses triplane features for volume rendering, the features used for rendering the frontal view are not necessarily confined to the surface of the face. Features behind that surface may also contribute to color aggregation during volume rendering, potentially leaving some features used for rendering the visible regions uncovered by the mask.
2. Some details lack clarity, as noted in the questions below.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Line 177: How do you obtain the depth map D? Do you use a monocular depth estimation method, or do you extract it from the NeRF model?
2. Figure 5: Are E1 the encoder from section 3.2 and E2 the encoder from section 3.3? If so, why is the triplane feature of E1 masked by the occluded regions and E2 masked by the visible regions? According to the paper, the proposed method appears to use the visible regions of the triplane for E1 and stitch them with the occluded regions of E2. Is this a mistake in the figure?
3. Line 223: The FID is calculated between the rendered image from the encoded input and "1k randomly synthesized" images. Which method is used to generate these "1k randomly synthesized" images? Is it PanoHead? Are all methods compared against the same set of images synthesized by the same method?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors included discussions on both the limitation and broader impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their feedback and highlighting our paper’s several strengths.
1. About the claim on lines 168-169: We acknowledge that not receiving any gradients for the visible region may be theoretically incorrect and will make the necessary modifications to avoid confusion. Thank you for pointing this out. Meanwhile, although the occlusion masks do not 100% eliminate information flow between occluded and visible regions because of the volumetric rendering, we emphasize that our approach performs adequate disentanglement between two regions to produce realistic and coherent samples. This is evident by the occlusion-aware dual encoder combination. Figure 4 in the main paper makes this very visible, too. The dual encoder is able to take the non-occluded parts from Encoder 1 and the rest from Encoder 2. This is also shown by the occlusion-aware / non-occlusion-aware triplane D ablation (Table 2 on main paper, last two rows, specifically the improvements on ID). The occlusion masks can mask a significant portion, and experimental results show they are effective.
We emphasize that this does not hinder the overall claim of our work. Still, they must be acknowledged in the main paper to avoid confusion. Again, we thank the reviewer for such valuable feedback and will improve the relevant sections.
Answers to the questions:
1. We extract the depth maps from the NeRF model.
2. The figure has an unfortunate typo, and the E1 and E2 labels need to be switched. Thank you for your attention.
3. The competitive methods are based on PanoHead’s generator backbone; hence, they are compared against the same set of images synthesized by the same method. We will make this more clear in the revised paper. The reason we use these synthesized images is to ensure comprehensive coverage of all angles in our evaluation.
In the rebuttal, we also provide evaluations on the multi-view MEAD dataset as suggested by *Reviewer AnnS*. In this setting, we use paired images, where the input is at one angle, and the output is $\pm$60 degrees rotated. In this setting, for the FID evaluation, we use the ground-truth images. These results are given in rebuttal Table 1 and Fig. 1, which will be added to the main paper.
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal by Authors
Comment: I have read the rebuttal and thank you for the clarifications, they are helpful. I will take the responses into consideration and discuss with other reviewers before making the final decision on my rating.
---
Rebuttal 2:
Comment: We would like to thank the reviewer for their feedback and for considering our rebuttal. We appreciate their willingness to discuss our responses with the other reviewers. | Summary: This paper introduces a novel encoder-based method for 3D pano-head GAN inversion. Unlike previous 3D GANs focused on near-frontal views, this work designs a dual-encoder system tailored for both high-fidelity reconstruction and realistic generation from multiple viewpoints. The method incorporates a stitching mechanism within the triplane domain, ensuring optimal predictions from both encoders. This results in superior performance compared to existing methods, both qualitatively and quantitatively.
Strengths: - Investigating GAN inversion for 360-degree GANs diverges from prior research as it necessitates accounting for more complex viewing conditions.
- The dual-encoder design coupled with an occlusion-aware triplane discriminator proves to be intuitive and beneficial.
- The approach outperforms existing methods, both in qualitative and quantitative assessments.
Weaknesses: - The input images in the study are primarily front-view; it would be beneficial to include some from extreme viewpoints.
- Further assessment of the model's generalization is required for more complex input images, such as those with heavy makeup.
- The quantitative experiments currently focus on single-view cases; incorporating results from other viewpoints may provide a more comprehensive evaluation.
Technical Quality: 3
Clarity: 2
Questions for Authors: - As discussed in Weaknesses.
- Will the choice between a CNN-based or Transformer-based encoder affect the reconstruction results? Current works like GOAE, use transformer-based encoder for better different-view input instead of just front-view.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: - Discussion of failure cases is necessary.
- The formula symbols in the paper need to be clearly distinguished, such as E1 and E2. Clear differentiation of these symbols should be included in the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their feedback and suggestions, which further helped us improve the paper.
1. (**W.1**) The reviewer wanted more visual results from extreme viewpoints. We did not notice the visual results from the paper were all more from the frontal view. We apologize for that. LPFF dataset is a large-pose Flickr dataset; therefore, our quantitative results included such evaluation. In rebuttal Fig.4, we also provide visual images with challenging viewspoints. We will incorporate more examples into our final paper. Thank you for bringing this to our attention.
2. (**W.2**) Based on the reviewer’s suggestions, we include heavy make-up input results in rebuttal Fig. 3; additionally, diverse ethnicities are represented in Fig. 2 as suggested by Reviewer 9tZp.
3. (**W.3**) Thank you for the neat idea. We have generated a multi-view dataset from videos in MEAD (https://github.com/uniBruce/Mead) and quantitatively evaluated ours and competing methods. Specifically, we fed front MEAD images (0 yaw degree) to all methods, and rendered them from novel views of MEAD (from $\pm$60 to 0 yaw degrees). We compared them with corresponding ground truths. Our method performs better quantitatively for novel views (rebuttal Table 1 and Fig. 1). Figure 1 shows ground-truth input and output pairs from MEAD dataset. These experiments will be added to the revised paper.
4. (**Q.2**) About the choice of CNN versus transformers, in our experiments, TriplaneNetv2 (CNN-based) and GOAE (transformer-based) inversion methods performed similarly when evaluated on challenging samples. That being said, self and cross-attention-based approaches to capture similarities may be a good idea to investigate further. We thank the reviewer for encouraging further study directions.
5. (**L.1,2**) Regarding the limitations, in Rebuttal Fig. 5, we find that our method struggles when input images have head accessories and are treated as hair from the back-view. We will add those additional analyses to our final manuscript. Symbols in formulas and figures will be rewritten with clearer notation.
---
Rebuttal Comment 1.1:
Comment: I have read the author's response, and I will keep the score.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer for reading the rebuttal and for considering the additional experiments. If there are any concerns that have not been fully addressed, please let us know, and we would be happy to address them. | Rebuttal 1:
Rebuttal: We want to thank all reviewers for their valuable feedback. We have responded to each reviewer's questions in the rebuttal sections, and attached a PDF file with figures and tables for our additional results.
Pdf: /pdf/f85b902ed1656e416794f9d3cf0de33fde7af889.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper presents a framework for 3D GAN inversion aimed at reconstructing high-fidelity 3D head models from single 2D images. A dual encoder system that combines one encoder specialized in high-fidelity reconstruction of the input view with another focused on generating realistic representations of invisible parts. An occlusion-aware triplane discriminator that enhances both fidelity and realism.
Strengths: The qualitative results show improvements over existing methods, especially for novel viewpoints.
Weaknesses: 1. The writing and language of the paper need substantial improvement.
2. The innovation of this paper may be a little lacking. This paper proposes a new method, but the insights shown in the method are limited and it is difficult to generate insights into other problems.
3. There is also a lack of in-depth research on the proposed method.
4. While some limitations are mentioned, a more in-depth analysis of when and why the method fails would be valuable.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. From Figure 4, it can be seen that Encoder 2 may be the main solution to the problem in this paper. Is there any way to introduce the features of Encoder 1 into Encoder 2? It seems that Encoder 1 is more like a subsidiary of Encoder 2. For this reason, introducing two encoders is feasible, but not elegant enough.
2. I'm curious about more training details. For example, how much data was used for training?
3. The evaluation is primarily on face datasets (FFHQ, LPFF, CelebA-HQ). It's unclear how well the method generalizes to more diverse head shapes or ethnicities.
4. In Encoder 2, is the relevant information about the person's head implicitly included? Is there any experiment to prove this?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: While some limitations are mentioned, a more in-depth analysis of when and why the method fails would be valuable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their feedback, and we will improve the writing further based on the reviewer’s suggestion. The reviewer mentions that the innovation of the paper may be a little lacking, and there is a lack of in-depth research on the proposed method. However, it is crucial to note that our study addresses the novel challenge of GAN inversion for 360-degree GANs, a topic that has not been previously explored. We demonstrate that existing methods fail to address the intricacies of this problem. The dual-encoder design coupled with an occlusion-aware triplane discriminator proves to be intuitive, and the ablation study effectively demonstrates the impact of each design choice, offering valuable insights, as noted by other Reviewers *(AnnS, jtFz, Dd7h, XyTr)*. Furthermore, many triplane-based models have been emerging lately [Yang et al., Dreamcomposer, CVPR 2024], and the occlusion-aware discriminator approach may enable a new, unsupervised training direction for generating coherent 3D samples from single images. This is also appreciated by *Reviewer jtFz (Strength-3)*.
About limitations and failure cases; we have tested different scenarios. Firstly, as suggested by *Reviewer AnnS*, to include multi-view evaluation, we set up the MEAD dataset, which also includes more diverse ethnicities than the CelebA dataset, as shown in rebuttal Fig. 2. We show that the method also handles heavy make-up inputs, as given in Fig. 3. We also visualize results when the input views are extreme side-views, given in Fig. 4. Finally, in Fig. 5, we find that our method struggles when input images have head accessories and treats them as hair from the back-view. We will add those additional analyses to our final manuscript.
Answers to the questions:
1. In our experiments, removing Encoder 1 resulted in a significant decrease in the ID scores, as given in Table 4. Encoder 1 provides high fidelity to the input, a must in reconstruction. On the other hand, Encoder 2 excels in generating better representations of invisible regions. Thus, we observed a trade-off between these two critical requirements and found the best-achieving method to be a dual-framework. That said, we provide analysis, methods, and baselines for this new and challenging problem, which can pave the way for more elegant frameworks.
2. We combined FFHQ and LPFF for training (140k) and test (14k) sets and applied pose rebalancing proposed in LPFF. We also used the same dataset to train competitive methods for fair evaluation. We will detail the relevant section and elaborate more on the Supplementary.
3. Our method achieves 3D reconstruction on diverse ethnicities, as included in rebuttal Fig.2. More visual results will be added to the main paper and the Supplementary. Thank you for the suggestion.
4. The head information is implicitly encoded in PanoHead’s generator backbone, which we keep frozen. Following the reviewer's suggestion, we examined how Encoder 2 performs when given a silhouette of a face as input, as shown in rebuttal Fig. 8. Our observations confirm that, even with this out-of-domain input, Encoder 2 generates latent representations that produce a realistic head shape. This indicates that Encoder 2 retains an implicit representation. We appreciate the reviewer’s insightful suggestion and will include this finding in the revised paper.
---
Rebuttal 2:
Title: Response to the rebuttal
Comment: I have read the author's response, as well as the comments and discussions with other reviewers. The author has addressed my concerns. I will improve my score.
---
Rebuttal Comment 2.1:
Comment: We would like to thank the reviewer for reading our response and for considering the additional comments and discussions with other reviewers. We are pleased to hear that the concerns have been addressed and appreciate the revised score. | null | null | null | null | null | null |
Optimal Hypothesis Selection in (Almost) Linear Time | Accept (poster) | Summary: This paper studies the hypothesis selection problem: Given $n$ distributions $H_1, H_2, \ldots, H_n$ and samples from an unknown distribution $P$, the goal is to output $\hat H$ such that $\mathrm{TV}(P, \hat H) \le \alpha \cdot \mathsf{OPT} + \epsilon$, where $\mathsf{OPT} = \min_{i \in [n]}\mathrm{TV}(P, H_i)$ is the TV-distance of the best hypothesis.
It is known that achieving a guarantee with $\alpha < 3$ is impossible, if we want the sample complexity to be bounded by a function of $n$ and $\epsilon$ (and not the domain size). Prior work gave various algorithms that achieve $\alpha = O(1)$ (including some with $\alpha = 3$), but none of them achieves $\alpha = 3$ in near-linear time without making additional assumptions.
This work gives two improved algorithms. The first algorithm (Algorithm 1) achieves $\alpha = 3$ in $\tilde O(n\cdot s / \epsilon)$ time, where $s = \Theta((\log n) / \epsilon^2)$ is the sample complexity. The second algorithm (Algorithm 4) achieves $\alpha = 4$ in $\tilde O(n\cdot s)$ time.
Here is an overview of Algorithm 1, outlined by the authors in Section 3:
- Let $S_{i j}$ denote the subset of the domain that witnesses $\mathrm{TV}(H_i, H_j)$. A simple argument shows that finding $i \in [n]$ that minimizes the quantity $W(H_i) \coloneqq \max_{j \in [n]}|H(S_{i j}) - P(S_{i j})|$ gives the $\alpha = 3$ guarantee. However, the straightforward implementation takes $\Omega(n^2)$ time to compute $W(H_i)$ for all $i \in [n]$.
- The actual algorithm maintains estimates on $W(H_i)$s (that always underestimate). These are grouped into $\approx 1/\epsilon$ "buckets", where bucket $l$ contains indices with estimated $W(H_i) \approx l\cdot\epsilon$.
- In each iteration, we find the lowest non-empty bucket $l$, and tries to update the estimates. Concretely, we loop over all $j \in [n]$ and only sample a few indices $i$ from bucket $l$. For each $(i, j)$ pair, we try to update the estimate of $W(H_i)$ using $w_j(H_i)$. This might bump some $i$ into larger buckets.
- If there is a single $j \in [n]$ that bumps a substantial fraction of the sampled $i$s, we call such $j$ a "prompting hypothesis", and we use such $j$ to update all estimates in the current bucket. Note that if we keep finding prompting hypotheses, we will empty a bucket in $O(\log n)$ repetitions, and the whole process must end in $O((\log n)/\epsilon)$ iterations. Moreover, each iteration is almost linear-time.
- The harder part is to argue that, whenever we cannot find a prompting hypothesis, we are done. At a high level, this boils down to showing that the optimal hypothesis $i^*$ must be prompting, unless the majority of hypothesis in the lowest bucket are already good enough.
Strengths: - This work makes progress on improving the runtime for a fundamental problem in statistics / machine learning.
- The new algorithms are based on clever ideas and several new concepts, which might find applications in future work.
- The writing is very clear and I found the main paper relatively easy to follow. I especially liked that the authors spent most of the main paper on the proof overview, which is enjoyable to read and should be able to convince the readers why the proposed methods work.
Overall, this paper contains fairly significant results that are very well presented and should be of interest to most learning theorists. I vote for acceptance.
Weaknesses: The main weakness is that this work is unlikely to have said the last words on the problem---it remains open whether one can get $\alpha = 3$ in $\tilde O(n/\epsilon^2)$ time.
Minor writing suggestions:
- Equation (2): Missing extra space in $S_{ij}$ on the left-hand side? (The notational convention in the rest of the paper seems to be $S_{i j}$, though personally writing $S_{i,j}$ might be clearer.)
- Line 90: "a $O(\cdots)$ factor" --> "an $O(\cdots)$ factor"
- Line 270: "dependency" not capitalized.
Technical Quality: 3
Clarity: 3
Questions for Authors: Is there a reason why the "right" runtime should be $n \cdot s = \tilde O(n/\epsilon^2)$? For example, given that the algorithm needs $\Omega(n + s) = \tilde\Omega(n + 1/\epsilon^2)$ time to read the inputs---can we hope to get an $\tilde O(n + 1/\epsilon^2)$ runtime?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are adequately addressed in Section 1.2.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback.
In response to your question, there is no explicit lower bound for the time complexity of this problem. As you mentioned, it is possible that an algorithm could exist with a time complexity of just $\Theta(n + s)$. However, for algorithms that operate by querying the probabilities of various sets according to an unknown distribution $P$—such as querying semi-distances or similar concepts, as many previous algorithms have done—we speculate that $\Theta(n)$ queries to the semi-distances are both necessary and sufficient.
This would result in an overall time complexity of $\Omega(n \cdot s)$. We will clarify this in the future versions of our paper.
Evidence suggesting that $O(n)$ queries might be sufficient comes from the upper bound of [MS08]. They demonstrated that there exist $O(n)$ sets to query, whose responses are sufficient to solve the problem. In their work, these sets and the order in which to query them are determined through an exponential time preprocessing step over the class $\mathcal{H}$. It would be valuable to explore whether this problem can be solved with $\Theta(n)$ queries without relying on the preprocessing step.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply! I have no further questions, and my overall evaluation of the paper remains positive. | Summary: This paper studies the hypothesis selection problem, where given a set of candidate distributions $\mathcal{H}=\{H_1,\dots,H_n\}$ and samples from a distribution $P$, the learner wants to approximately select a distribution $H_i \in \mathcal{H}$ such that $||H_i-P||_{TV} \le \alpha opt+\epsilon$ for some constant $\alpha$, where $opt$ is the TV distance of the closest distribution in $\mathcal{H}$ to $P$. Information theoretically, $\alpha=3$ is the optimal constant one could achieve with optimal sample complexity $s=\log n/\epsilon^2$. Previous efficient algorithms either have a running time that has a bad dependence on $n$ or run in nearly linear time but achieve a suboptimal approximate factor $\alpha \ge 5$. In this paper, the authors design an algorithm that achieves the optimal approximate factor $\alpha=3$, with running time $\tilde{O}(ns/\epsilon)$. This is the first algorithm that achieves the optimal approximate factor with a running time nearly linearly dependent on $n$. This paper also presents another algorithm that achieves an approximate factor $\alpha = 4$ but has a better running time $\tilde{O}(ns)$
Strengths: This paper makes progress on a fundamental problem, hypothesis selection. The two algorithms designed by the authors are highly non-trivial and involve many novel techniques. The paper is in general written well and gives a good intuition of why the algorithms work. Due to the time limit, I was not able to check the proofs in the appendix but given the discussions in the main body, I believe the results given by the paper are correct.
Weaknesses: I want to give some minor suggestions.
1. It would be nice to give some more applications or motivations at the beginning of the introduction. In the current version of the paper, the problem sounds mathematically interesting but lacks motivation.
2. Since there is a long line of works that design nearly linear time algorithms for the problem. It would be helpful to give a brief review of the techniques used in prior works in the introduction.
3. In line 327, I believe there is a typo in the equation $||H_{i^*}||_{TV}=...$
Technical Quality: 3
Clarity: 3
Questions for Authors: I only have one question about the model. It seems that the algorithms in the paper crucially rely on estimating the semi-distance defined based on the Scheffé set. But I believe in many applications it would be very hard to check whether an example falls into the Scheffé set of two hypothesises. For example, many learning algorithms might first improperly learn a list of candidate hypothesises and use hypothesis selection to select a good one. Computing pdf of these candidate hypothesises may not be realistic in general. Would it be possible to relax the dependence on the knowledge of the pdf of the candidate hypothesises in the model?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. We will include the motivations and applications of hypothesis selection, as well as an overview of previous algorithms, in our paper. In this rebuttal, some applications of hypothesis selection are discussed in our response to Reviewer fMmw, for your reference.
Regarding your question about relaxing the model, the short answer is yes; our algorithms remain effective with some minimal adjustments as long as approximations of Scheffe sets are provided.
Currently, in our paper, we compare the PDF of the known distributions, $H_i(x) > H_j(x)$, to determine whether $x$ belongs to $S_{ij}$, the Scheffe set of $H_i$ and $H_j$. However, this assumption can be relaxed if we can identify an alternative set $S'\_{ij}$ that captures most of the discrepancy between $H_i$ and $H_j$. In particular, it is sufficient to have:
$$\|H\_i - H\_j \|\_{TV} \leq (1+ \Theta( \epsilon)) \cdot |H_i(S'\_{ij}) - H_j(S'\_{ij}) | \,.$$
Even with these imperfect sets, $w_{i^*}(H_i)$ is a good proxy for the quality of hypothesis $H_i$ (up to additive error $\epsilon$). See the equations after Line 180 in the paper.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. | Summary: This paper introduces proper approximation algorithms for optimal hypothesis selection in the context of distribution learning.
The first algorithm achieves an optimal approximation factor of $\alpha=3$ in time approximately linear in the number of hypotheses. The second achieves the slightly looser $\alpha = 4$ approximation factor in exchange for significant reduced dependence
on $\epsilon$ and $\delta$, the standard learning parameters governing the error and confidence of the learning algorithms. The paper focuses on explanation of the algorithmic techniques involved.
Strengths: The paper introduces a novel learning algorithm for optimal approximation (Algorithm 1) which achieves a significantly reduced dependence (quadratic to near-linear) on the number of hypotheses in the hypothesis class in the computational complexity. This is notable given that the problem has been studied for a while, and practical, given that distribution learning over a discrete space requires a number of samples at least proportional to the cardinality of the domain on which the hypotheses are supported.
Weaknesses: I found the writing to be unpolished, and not publication-ready. Much of the body attempts to furnish intuition to the algorithmic techniques introduced, but some portions seem underdeveloped, and occasionally read like direct translations from math to spoken language (e.g. line 358). For example, an extremely important concept in this paper (introduced in important prior works) is that of ``semi-distances'' $w_i(H_j)$. Seemingly the best intuition the reader is provided with regarding this concept comes at line 176: "One suggestion for readers to internalize the semi-distances is to view them as a distance between $H_i$ and $P$ that is measured from the perspective of $H_j$". I think this deserves more illuminating wording, and personally got a much better understanding by staring longer at the definition. Overall, I think the lack of attention to writing is somewhat a shame, as there seems to be a lot of nice algorithmic thought here which deserves a better exposition in my opinion.
On a more technical level, the dependence in the computational complexity of Algorithm 1 on the learning parameters $\epsilon$ and $\delta$ is heavy -- these enter as $1/\delta^3\epsilon^3$. One would really hope for something like $\log(1/\delta)polylog(1/\epsilon)/\epsilon^2$. It seems there will be many regimes in which the guarantees of Algorithm 1 -- despite the linear dependence on the number of hypotheses -- will be looser than the quadratic algorithm of MS08.
Technical Quality: 3
Clarity: 1
Questions for Authors: -Table 1: Is the dependence on $1/\delta$ in $s$ just the standard $\log(1/\delta)$?
-line 180: The order of logical quanitifiers can be reversed to get a slightly stronger and more illustrative statement here, correct?
-line 240: It seems that to do this random sampling to decide if you have a prompting hypothesis, you need to check a number of hypotheses which is proportional to $1/\delta^2$. I guess there is a naiver version of this algorithm which just checks all of the hypotheses at this step, incurring some quadratic dependence in $n$. Am I wrong in feeling like any sort of search for something like a promping hypotheses will always incur some sort of undesirable multiplicative interaction with $n$?
Confidence: 3
Soundness: 3
Presentation: 1
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Presentation:**
Thank you for your feedback regarding the presentation of our paper. Our overview was intended to provide a high-level description of our algorithm to avoid obscuring the main technical ideas with detailed specifics. We will certainly focus on enhancing the clarity and quality of our write-up in future versions.
**Dependency on $\epsilon$ and $\delta$:** Please see our global rebuttal.
Below is our answer to your questions:
- In Table 1, for [ABS23] the dependence on $\delta$ is $1/\delta$, while for the rest it is $O(\log(1/\delta))$ (or unspecified by the authors).
- Correct. There exists an $i^*$ such that for every $H_i$, $w_{i^*}(H_i)$ determines the quality of $H_i$.
- Correct. To ensure that a random hypothesis in a bucket is not too far, with a probability of $1-\delta$ in a single round, we have to sample $O(\log(n)/\delta)$ hypotheses and check if $H_{i^}$ is prompting them. Or, we can try all hypotheses in the bucket. Hence, relying on this structural property of $H_{i^*}$ makes a polynomial dependency on $\delta$ inevitable. We speculate that improving the dependency on $\delta$ for this algorithm would require new algorithmic ideas. The main difficulty here is that there is no (known) general technique for boosting the confidence parameter while keeping $\alpha$ the same. In many settings, the success probability of a learning algorithm can be amplified from a constant, say 2/3, to at least $1-\delta$ at a cost of at most $\log(1/\delta)$ in running time and sample complexity. However, in hypothesis selection, choosing the best output from several runs of a given algorithm requires executing a second hypothesis selection algorithm, which introduces another factor of $\alpha$ in the approximation—leading to a total factor of at least $9$. As a result, these kinds of two-phase algorithms are not sufficient in the low $\alpha$ regime. Some previous results, such as [ABS23], also suffer from a polynomial dependency on $\delta$. Our second algorithm circumvents this polynomial dependence at the cost of a slightly worse accuracy parameter $(\alpha = 4)$.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. I do have a concern that the algorithmic techniques for the $\alpha=3$ case may not be very informative for future algorithmic development re: the discussion on prompting hypotheses above. However, in retrospect, I think this is probably a bit unfair, as I'm very far removed from the study of this particular problem. Thus, I revise my score to 5.
I do strongly encourage the authors do improve the presentation in the next version. Good work deserves good presentation.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comment and for raising your score.
We understand your concern regarding the direct usability of our algorithms in future work. While we cannot guarantee their applicability, our hope is that the novel perspective we introduced, along with the new algorithmic ideas, will lead to improvements in the time complexity of this problem. We will invest more effort in distilling the new algorithmic ideas and structural results into a form that is widely useful. | Summary: This paper looks at proper distribution learning: given samples from
some distribution p, and a set of n candidate distributions H_i,
output one H_i that is close in TV; in particular, alpha * OPT + eps.
Surprisingly, this is possible with a sample complexity independent of
the domain size (as would be needed to actually estimate the TV).
There's been a long line of work aiming to improve the approximation
factor alpha. alpha = 2 is possible for *improper* learners, but
proper learners can only hope for alpha = 3. Getting alpha=3 in n^2 s
time is known; this paper gets that down to O~(ns) time (although an
extra eps factor in time, which they can avoid for alpha=4, and worse
delta dependence).
Strengths: It gives much better running time for a natural problem.
The approach is fairly clean.
Weaknesses: The algorithm overview is a bit vague, and could be written more
clearly.
The failure probability dependence isn't good.
The alpha=3 approach is a fairly simple extension of prior work, and
alpha=4 isn't so exciting.
I'm not sure that the constant here matters for the applications of
hypothesis selection. Like, in the application where we choose a
cover for the class, presumably we can just make a finer cover?
Technical Quality: 4
Clarity: 3
Questions for Authors: Are there applications of hypothesis selection where the constant alpha matters?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Fine.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Presentation**
Thank you for your feedback regarding the presentation of our paper. Our overview was intended to provide a high-level description of our algorithm to avoid obscuring the main technical ideas with detailed specifics. We will certainly focus on enhancing the clarity and quality of our write-up in future versions.
**Novelty of our techniques**
The essence of our first algorithm indeed stems from the minimum distance estimate [DL01]. However, the primary challenge we faced was implementing this approach in linear time. Specifically, it is not feasible to compute the maximum semi-distances, $W(H_i)$, for all hypotheses in linear time. To address this, we developed an efficient and advanced method to estimate these values by taking the maximum over a small subset of semi-distances, while ensuring the algorithm's correctness. This approach allows us to achieve an algorithm that runs in (almost) linear time in $n$.
Our second algorithm is completely novel to the best of our knowledge. It achieves the desired time complexity (up to logarithmic factors) with $\alpha = 4$, an accuracy parameter that surpasses all previously known results. Our work introduces a novel algorithmic approach to this problem, marking a significant departure from existing techniques. We hope these new ideas will inspire future research.
**Dependency on $\epsilon$ and $\delta$**
Please see our global rebuttal.
**Applications of hypothesis selection**
Density estimation is a fundamental problem in statistics, with hypothesis selection being an important special case. It involves choosing the best distribution from a set of known models that represent potential underlying data distributions. For example, this set might include Poisson and gamma distributions with various parameters to model the number of arrivals per time unit. This technique is applicable in areas such as denoising, anomaly detection, selecting interpretable data distributions, strategy selection, and more.
That said, we view our results as a fundamental theoretical tool. Hypothesis selection has been instrumental in learning structured distributions (e.g., learning mixture of Gaussians [DK14, SOAJ14]). For additional references, see Section 1.3. Another significant aspect of hypothesis selection is its agnostic nature, allowing for learning even when the unknown distribution is not within the considered class. Hence, hypothesis selection is applicable even when data is noisy.
**Addressing the importance of improving $\alpha$ by a constant factor**
In most learning algorithms, the error guarantee decreases polynomially as the number of samples increases, so constant factors may not be as crucial. However, this is not the case in hypothesis selection. The output hypothesis is guaranteed to be $(\alpha \cdot OPT + \epsilon)$-close to $P$ in total variation distance. While increasing the number of samples can reduce $\epsilon$ to negligible levels, it does not improve the term $\alpha \cdot OPT$. $\alpha$ is an inherent property of the algorithm and directly impacts the best achievable error guarantee. Therefore, even a constant improvement in $\alpha$ is significant.
Some may argue that focusing on improving $OPT$ is more beneficial than refining $\alpha$. For instance, in the cover method (as you mentioned), using a finer $\gamma$-net can ensure that $OPT < \gamma$. However, this approach can drastically increase the algorithm's running time, as the size of the net can grow super-polynomially with $\gamma$. For example, in the case of mixtures of $k$ Gaussians, the dependence of the net size on $\gamma$ is roughly $O(\gamma^{-3\cdot k})$ (see [SOAJ14]). Thus, reducing $\gamma$ by a factor of three could increase the size of $\mathcal{H}$ by an exponential factor in $k$, and consequently, the running time, leading to an inefficient algorithm.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and for your feedback.
**Presentation:**
Thank you for your comments regarding the presentation of our paper. We will incorporate all your editorial suggestions regarding the presentation of the paper. We will certainly focus on enhancing the clarity and quality of our write-up in future versions.
**Dependency on $\epsilon$ and $\delta$:**
We acknowledge that, ideally, an algorithm should achieve a running time of $O(n \log(1/\delta)/\epsilon^2)$. Our first algorithm, with $\alpha = 3$, does not meet this ideal due to suboptimal dependencies on $\delta$ and $\epsilon$. However, it marks a significant step forward as it is the first in two decades to achieve a time complexity linear in $n$ for any $\alpha < 5$. To address the shortcomings, our second algorithm achieves the desired dependencies on $\epsilon$ and $\delta$, up to polylogarithmic factors, with a slight increase in the accuracy parameter ($\alpha = 4$). Given the significant departure from previous algorithmic approaches required by our work, we hope that our techniques will inspire further progress in this area. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Understanding Generalizability of Diffusion Models Requires Rethinking the Hidden Gaussian Structure | Accept (poster) | Summary: This paper compares diffusion models trained on natural image datasets with their Gaussian approximations. It evaluates the quality of this approximation in both the memorization and generalization regime by studying the influence of the training set size, model capacity, and training time.
Strengths: This paper makes several novel and interesting empirical observations:
- according to the authors' linearity measure, denoisers are surprisingly well-aproximated by linear functions (but more on this below),
- reducing model capacity or training time can bring diffusion models into the generalization regime even with very small training sets, though at the expense of image quality.
It is also clearly written, and studies the very important problem of characterizing the inductive biases of diffusion models/network denoisers which allow them to generalize.
Weaknesses: I have two main issues with this paper:
- First, a large part of its content (most of section 3) is rather obvious for people with a signal/image processing background. Indeed, Theorem 1 is well known, usually under the name of "Wiener filter" (which is simply a consequence of linear regression): see, e.g., Theorem 11.3 from _A Wavelet Tour of Signal Processing, Third Edition: The Sparse Way_, Stephane Mallat, Elsevier/Academic Press, 2009. It immediately implies that differences between the "linear" and "Gaussian" denoisers can only come from suboptimality of the network denoiser (or optimization errors of the linear denoiser), and are thus irrelevant for this study. A second observation that the authors did not make is that any linear denoiser leads to Gaussian generated images (as is evident from equation (4) since $x_{i+1}$ is then a linear function of $x_i$ and $x_0$ is Gaussian). Further, the Gaussian denoiser produces samples from the Gaussian distribution with the same mean and covariance as the training set, which is thus a very simple (but crude) model for which samples can easily be produced without any diffusion procedure.
- The main claim of the paper then boils down to whether Gaussian distributions are a good approximation of natural image distributions (and in particular, a better approximation than the empirical distribution of the training set). These two examples are interesting to contrast, as they lie on the two extremes of the quality-diversity tradeoff (Gaussian distributions maximize entropy but have very low quality, while the a sum of delta functions at the training images has perfect quality but essentially no diversity). However, Gaussian models are extremely crude and have been extensively studied in the past, and we therefore have few things left to learn from them. It is not surprising that good diffusion models are capable of capturing the covariance of their training data, but this is well understood. As the authors acknowledge in the discussion, what we do not understand is the rest (higher-order moments captured with non-linear denoisers). The visual similarity between faces and Gaussian images with matched first and second moments results from the fact that the faces are centered, so that they are well-approximated with a PCA, also known as "eigenfaces": _Low-dimensional procedure for the characterization of human faces._ L. Sirovich; M. Kirby (1987), Journal of the Optical Society of America A. 4 (3): 519–524.). This similarity breaks down for more complex datasets such as LSUN-churches which are more translation-invariant, leading to stationary Gaussian samples which look like textures. I therefore disagree with the main claim that "the image contents generated with such Gaussian denoisers [...] closely resemble those produced by well-trained diffusion models." For the same reason, the inductive biases of network denoisers cannot be reduced to the Gaussian structure only, as they are capable of learning much more complex structures despite the curse of dimensionality.
Specific points:
- The sentence at lines 164-166 is wrong, as shown in Figure 1: the denoiser is linear for very small $\sigma$ (close to identity) and very large $\sigma$ (close to its Gaussian approximation as noted in [19]), but less linear in the middle range.
- Root mean square error in equation (8) should have the square root outside the expected value.
- The RMS plots in Figures 2, 6, 7, and 8 could be improved in several ways. First, they are difficult to interpret due to the lack of a reference point. I suggest normalizing them by the expected RMS norm of the network denoiser, so as to have values in $[0,1]$. Second, I suggest to use a continuous colormap when a single parameter is being varied (such as the size of the training set or number of epochs), in order to facilitate visual analysis. Third, why are the numerical values for these varied parameters chosen arbitrarily and irregularly? It would be easier to see trends with regularly (e.g., linearly or logarithmically) spaced parameters, chosen at round-ish numbers.
- The authors should mention in section 3.3 that the Gaussian denoisers compared in Figure 6 use the empirical covariances of the smaller training sets, this is implicit and confused me for a while.
- Two observations relative to memorization and generalization made by the authors are straightforward to me. First, any generative model which perfectly optimizes its training loss will only reproduce its training set, so it is clear that trained network denoisers are suboptimal for their training loss, _and in a specific way_ that enables them to generalize "correctly". Second, a Gaussian model will strongly generalize in the sense of [15] when the two training sets have similar first and second-order statistics, which then happens quite rapidly (and can be studied precisely with random matrix theory analyses).
Minor:
- Typo line 286: "diffsuion"
- Typo caption of Figure 9: "Gausisan" (twice)
- Typo line 316: "covaraince"
- Equations (9) and the first of Appendix C should are missing $c_{\rm out}(\sigma(t))$
- line 504: footnote -> subscript
Technical Quality: 3
Clarity: 3
Questions for Authors: - Given the points above, the surprisingly high cosine similarities reported in Figure 1 look suspicious to me. If denoisers were truly linear, then they would generate Gaussian images, and we know that they learn much better models than this. As noted by the authors in appendix A, the linearity measure $\texttt{LS}(t)$ evaluates the denoiser out-of-distribution since linear combinations of natural images are not natural images (they are superpositions of two unrelated images). Doesn't this mean that these cosine similarities are not really meaningful and deceiving? Another factor that could influence these high values is that they are computed for the denoiser function $\mathcal D_\theta(x)$, as opposed to the denoiser residual $x - \mathcal D_\theta(x)$ (or equivalently the score). As noted in Appendix A, the denoisers are very close to the identity function at low noise and therefore are expected to be close to linear. It would be more interesting to evaluate the linearity of the score, which I suspect is much smaller (I predict it would only decrease as the noise level decreases).
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: As stated above, the main limitation of this paper is that it studies the Gaussian case which is well-understood. As a result, a large fraction of its results are straightforward (and some even long known), and it misses the point which resides entirely in the non-Gaussian/linear structure learned by these models. I encourage the authors to study the literature in denoising (e.g., Chapter 11 of the aforementioned book by Mallat) and image modeling (for a brief introductory review, see, e.g., Simoncelli, Eero P. _Statistical modeling of photographic images._ Chapter 4.7 of Handbook of Video and Image Processing 9 (2005).)
However, these topics are unfortunately not well-known by the broader machine learning community. I thus think that this paper could provide a service to the community by reviewing what is known about the Gaussian case (after major changes, e.g., to remove the linear denoiser and only mention the Gaussian denoiser). It also makes several novel and interesting observations that are valuable to the community (see strengths).
In the current state of the paper, I recommend to reject, and encourage the authors to resubmit their work after another iteration.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. First, most of section 3 is rather obvious for people with a signal/image processing background; e.g., Theorem 1 is well known.**
We sincerely appreciate the review for pointing us to the Wiener filter. We agree that Theorem 1 is well studied (we will add citations on the Wiener filter to acknowledge this), but we believe many findings of our work are not obvious as the reviewer suggested. Please see A.2. in the global response.
**Q2: A second observation that the authors did not make is that any linear denoiser leads to Gaussian generated images. Further, the Gaussian denoiser produces samples from the Gaussian distribution with the same mean and covariance as the training set, which is thus a very simple model for which samples can easily be produced without any diffusion procedure.**
Firstly, in this work, we are interested in the mapping from the noise space z to the image space x. An arbitrary linear denoiser does not produce similar images as the actual diffusion models therefore it is worth no interest. Furthermore, we cannot compare diffusion mapping with the Gaussian mapping if we directly sample from the Gaussian distribution without the reverse diffusion procedure.
**Q3. It is not surprising that good diffusion models are capable of capturing the covariance of their training data**
Again, this point is not well understood as well. First of all, we not only show that diffusion models capture the covariance of their training data but more importantly, we show that the obtained diffusion denoisers share similar function mapping with the Gaussian denoisers. The first observation doesn’t explicitly imply the second. Since even the optimal denoisers under the multi-delta distribution assumption captures the covariances, after all, they are able to reproduce the whole training dataset. On the contrary, it is not well understood how gradient descent on deep networks arrive at a denosier that is close to the Gaussian denoisers. To understand this requires rigorous analysis on the gradient dynamics of the deep network and to our best knowledge, no such analysis exists in the current literature. Furthermore, in the current literature on diffusion models, this problem is circumvented by either assuming an infinite number of training data or directly restricting the architectures of the diffusion models.
**Q4. The similarity breaks down for more complex datasets. The inductive biases of network denoisers cannot be reduced to the Gaussian structure only, as they are capable of learning much more complex structures despite the curse of dimensionality.**
Our results in Figure 15 in the appendices and Figure 2 (b) of our newly uploaded PDF demonstrate that this similarity also persists in more complex datasets (e.g., LSUN-Churches, AFHQ datasets and Cifar-10 dataset). This similarity indicates that the first and second order statistics of the finite training dataset is heavily utilized by diffusion models. Otherwise, we should not be able to observe the similarity. However, we do agree the generation power cannot be reduced to the Gaussian structure only, but it indeed plays a critical role.
**Q5. The surprisingly high cosine similarities reported in Figure 1 look suspicious to me.**
We conduct experiments to measure the linearity of the score functions as the reviewer suggested. The results are shown in Figure 3 of our newly uploaded PDF, where we see that measuring linearity of score is not very meaningful. This is because the noise magnitude can be much higher compare to the denoising output (in the range of [-1,1]), therefore subtracting the denoising outputs from the noisy image does not change the noisy image significantly except in the low noise variance regime. For this reason, we always see a high linearity of the scores for most of the noise variances. In the figure, denoised_img_1 and dneoised_img_2 correspond to $D(x_1|\sigma_t)$ and $D(x_2|\sigma_t)$, denoised_img_1+denoised_img_2 corresponds to $\frac{1}{\sqrt{2}} D(x_1|\sigma_t)+\\frac{1}{\sqrt{2}}D(x_1|\sigma_t)$ and denoised_image_additive corresponds to $D(\frac{1}{\sqrt{2}}x_1|\sigma_t)+D(\frac{1}{\sqrt{2}}x_2|\sigma_t)$. $x_1$ and $x_2$ are two randomly sampled noisy image.
**Q5. Two observations relative to memorization and generalization made by the authors are straightforward to me...**
First of all, this point is not well perceived by the current literature. Current theorems normally assume you can directly sample from the ground truth training data distribution instead of just a finite number of training data. In this case, exactly minimizing the score denoising matching loss indeed results in the ground truth score function. However, less work focuses on the finite training setting, in which minimizing the score denoising matching loss results in overfitting. As you mentioned, the trained network is suboptimal in a specific way and enables them to generalize correctly. Our results indicate that this specific way is to be close to the Gaussian denoiser and why this happens is not well understood.
Secondly, our discussion on strong generalizability focuses on the actual diffusion model rather than the Gaussian model. Yes, a Gaussian model will strongly generalize in the sense of [15] when the two training sets have similar first and second-order statistics. We use this fact to explain why diffusion models exhibit strong generalizability. Since the diffusion models learn similar function mappings as their corresponding Gaussian denoisers in the generalization regime, strong generalization of the Gaussian models leads to the strong generalization of the actual diffusion models. This is supported by the fact that we can direct the model towards strong generalization by either early stopping or decreasing the model scale. Remember these two actions prompts the emergence of the Gaussian structure, which further highlights the necessity of the Gaussian structure in diffusion model’s generalizability.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed response.
Q1: I agree that in the memorization regime, the linear and Gaussian denoisers studied in this work become different. Thank you for making this point.
Q5: Thank you for the additional experiments, which are interesting and puzzling. I do not understand why the scores appear linear except for very small noise levels. Is the noise variance compared to images with values in [0, 255] or [0, 1]? In the latter case, then I agree that linearity of the score for noise larger than 1 is not meaningful, just like linearity of the denoising function for noise smaller than 0.1 is not meaningful. But the interesting range is then a variance smaller than 1 which is hidden in the choice of axis limits here. This would indicate than there is non-negligible non-linearity in the score, which are the objects that appear in the reverse SDE/ODE.
Other questions: I agree that capturing the Gaussian/linear structure of the score is the first-order necessary condition for generalization, and it goes a relatively long way in producing images that are correlated with the images generated by a generalizing network. However, I maintain that pointing this out does not teach us a lot about how diffusion models generalize. When I said that this was well understood, I meant that very simple and classical approaches lead to generative models that capture the linear/Gaussian structure and generalize in the sense of [15]. The mystery lies in how non-Gaussian/linear structure can still be estimated from limited samples despite the curse of dimensionality (by diffusion models or other approaches).
---
Rebuttal 2:
Comment: **1. I do not understand why the scores appear linear except for very small noise levels... This would indicate than there is non-negligible non-linearity in the score, which are the objects that appear in the reverse SDE/ODE.**
Yes, the linearity of the score for noise larger than 1 is not meaningful since the pixel range of our images lie in [-1,1]. In fact, we've already implicitly measure the linearity of the diffusion denoisers, as shown in Figure 2 (left). Notice that the difference between the actual diffusion denoisers and the corresponding linear denoisers are the largest for $\sigma$ in the range of [0.4, 10]. This aligns with our original measure of linearity shown in Figure 1, where we see the most nonlinear part lies in the range of [0.4,10] as well. Therefore, we believe our original linearity measure is good enough. Let's then go back to the reviewer's original question: *"Given the points above, the surprisingly high cosine similarities reported in Figure 1 look suspicious to me. If denoisers were truly linear, then they would generate Gaussian images, and we know that they learn much better models than this. "* Please notice that we never claim the denoisers are exactly linear. However, we do see the trend that the denoisers become increasingly linear as it transitions from memorization to generalization. In fact, as we've shown in the Figure 2 (a) of our newly uploaded PDF, the denoising outputs of the actual diffusion denoisers are highly similar to those from the Gaussian denoisers. Because of this similarity, the linear models obtained through distillation match the Gaussian models. Please let us know if you have further doubt about the emergence of linearity observed in our paper.
**2.** **I agree that capturing the Gaussian/linear structure of the score is the first-order necessary condition for generalization, and it goes a relatively long way in producing images that are correlated with the images generated by a generalizing network. However, I maintain that pointing this out does not teach us a lot about how diffusion models generalize. When I said that this was well understood, I meant that very simple and classical approaches lead to generative models that capture the linear/Gaussian structure and generalize in the sense of [15]. The mystery lies in how non-Gaussian/linear structure can still be estimated from limited samples despite the curse of dimensionality (by diffusion models or other approaches).**
Could the reviewer provides us with some references on *"very simple and classical approaches lead to generative models that capture the linear/Gaussian structure and generalize in the sense of [15]?"* We are sincerely eager to learn more about this. Furthermore, we want to emphasize that our findings don't just show the diffusion models capture the first and second order statistics, but more importantly, diffusion denoisers are quite close to the Gaussian denoisers. We want to emphasize that capture first and second order statistics doesn't mean the latter, since even in the case of memorization, the model still capture the second and first order statistics, but the diffusion denoisers in this case are not close to Gaussian.
Furthermore, for other generative models such as VAE and GAN, they can also be interpreted as denoisers since they generate images by mapping noise space to the image space. Moreover, they also capture the first and second order statistics of the training dataset since they have a strong generation power. However, their function mappings don't share any similarity with the Gaussian denoisers. For this reason, we believe our results are intriguing and meaningful since we demonstrate that the function mapping of the diffusion models share high similarity with the Gaussian models. To be more specific, our results demonstrate that the best linear approximation of the nonlinear diffusion models are nearly identical to the Gaussian models.
Please share your thoughts about this, we are open to any criticize, in the hope that we can make the paper better.
---
Rebuttal Comment 2.1:
Comment: Thank you for your answers.
1. I found statements like "generalizing denoisers are mostly linear" deceptive. While this is true in an MSE sense (as measured by the linearity metric), I wanted to emphasize that there is a lot we do not understand that is hidden by the "mostly" (in particular for the noise range [0.4,10] which is critical in practice for sample quality). However, as you pointed out, one contribution of the paper is that it evidences that _memorizing denoisers are less linear than their generalizing counterparts_. This is definitely a novel and interesting observation. I suggest putting more the emphasis on this latter phrasing of the results rather than the former.
2. Gaussian models of images goes back at least to the 40s with Kolmogorov on turbulence and the 60s with Julesz on texture. But these references are now part of the general folklore. They are a maximum entropy model of images conditioned on the first and second-order moments, which leads to modeling the image distribution as a Gaussian distribution with mean and covariance given by the empirical statistics of the training set (for stationary image distributions, it is sufficient to estimate the global spatial mean and the power spectrum rather than the full mean and covariance). This gives a generative model (just sample from this Gaussian distribution) that naturally reproduces the mean and covariance of the training set. Now if one splits a sufficiently large training set in two, the resulting generative models will be close to identical as soon as the two halves have the same first and second order moments (the number of samples should be at least of the order of the image size or its square, depending on whether the distribution is assumed stationary or not). This can be checked by sampling one $z \sim \mathcal N(0, \mathrm{Id})$ and comparing $\mu_1 + \Sigma_1^{1/2} z$ with $\mu_2 + \Sigma_2^{1/2} z$ to reproduce the setting of [15].
As you pointed out, memorizing diffusion models also reproduce the first and second-order statistics of their training set. As they generalize, they move away from this strategy, and it is interesting that they become closer (but not equal) to the Gaussian/linear denoisers. In a way, this natural, as the entropy (diversity) of the generative model increases while still capturing the first and second-order moments of the data, it becomes more similar to the maximum-entropy Gaussian model.
I thank the authors for pointing out some of the more subtle points of their work. I hope that this discussion may help the authors make these points clearer in the paper. As a result of this discussion, I have decided to increase my score. This paper makes interesting contributions to our understanding of generalization in diffusion models and should be accepted.
---
Reply to Comment 2.1.1:
Comment: We sincerely appreciate the insightful comments and suggestions provided by the reviewer as they have been extremely helpful in improving the submitted paper. We will revise our paper based on our discussion.
Sincerely,
Authors | Summary: The work analyzes the behavior of a diffusion-based generative model from the perspective of ``Gaussian structures''. In particular, it checks the linearity of scores at single-time steps and compares scores against a Gaussian model. The analysis shows that the Gaussian structure plays a main role in image generation.
Strengths: The work has compared diffusion-based generative models against a Gaussian model. It has discovered convincing evidence that linear score plays an important role in image generation.
It has done extensive experiments with different configurations. By controlling irrelevant variables, the study highlights factors that affect generalization. In particular, it discovers that the linear score mimicking a Gaussian model plays an important role.
Weaknesses: The study is only limited to a few generative models and a face dataset. The observation might be different on different datasets. The face dataset has a more obvious Gaussian distribution. For example, the mean of all faces is a face, and correlations between pixels are relatively consistent. Therefore, a Gaussian distribution is somewhat reasonable for such a dataset. It is unknown whether a Gaussian distribution is still a reasonable choice for a set of diverse images. In such a case, does the diffusion model still need to mimic the Gaussian model?
The current analysis still could not explain all observations. For example, the behavior in noise levels between 20 and 80 is not well explained in Figure 6. These observations might be worth more in-depth discussion.
Technical Quality: 3
Clarity: 4
Questions for Authors: Have you considered using standard Gaussian distribution as the convergent distribution of the diffusion process?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The scope of the study could be wider.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. The study is only limited to a few generative models and a face dataset. The observation might be different on different datasets. The face dataset has a more obvious Gaussian distribution. For example, the mean of all faces is a face, and correlations between pixels are relatively consistent. Therefore, a Gaussian distribution is somewhat reasonable for such a dataset. It is unknown whether a Gaussian distribution is still a reasonable choice for a set of diverse images. In such a case, does the diffusion model still need to mimic the Gaussian model?**
A1: Although the Gaussian structure is the most evident on face images, the same linear phenomenon persists in other datasets as well. As shown in Figure 15 in the appendix, for more complicated dataset such as AFHQ and LSUN-Churches, there is still a clear similarity between the generated images from the diffusion models and those from the Gaussian model. Furthermore, we conducted extra experiments on Cifar-10 during the rebuttal, with results in Figure 2 (b) of our newly uploaded PDF. We can observe that the generated images for Cifar-10 dataset exhibits similarity with Gaussian models.
**Q2. The current analysis still could not explain all observations. For example, the behavior in noise levels between 20 and 80 is not well explained in Figure 6. These observations might be worth more in-depth discussion.**
A2: For the noise variance region between 20 and 80, the behavior of the learned diffusion denoisers are relatively stable. From Figure 6, we see that the difference between the diffusion denoisers and the Gaussian denoisers in the high noise region does not change as much compared to the intermediate noise variance region when varying the dataset size. Furthermore, from Figure 7 we see that the difference between the diffusion denoisers and the Gaussian denoisers seems to decrease consistently as the model scale increases. These observations indicate that with sufficient training, the diffusion denoisers for the high noise variance region converge to the Gaussian denoisers without overfitting. This observation might be attributed to the fact that for the high noise variance region, the Gaussian denoisers are approximately the global minimizer of the denoising score matching objective under finite training dataset assumption (see Equation (5)); this is proved in [19]. Therefore, we should expect that with a large enough model capacity, deep networks converge to such a minimizer. Since it is a global minimizer, we observe less overfitting.
We will include this discussion in our revised paper and make those points clear. Nevertheless, this region doesn’t influence the final generated images as much as the intermediate noise regions, which is why we mainly focus on the discussion of the intermediate noise region.
**Q3. Have you considered using standard Gaussian distribution as the convergent distribution of the diffusion process?**
A3: We start the sampling process from a standard Gaussian distribution with std = 80. Since the image lies in the range of [-1,1], the noise magnitude is much larger, which means the convergent distribution is roughly a standard Gaussian distribution with std equals to 80.
**We thank the review for the insightful comments, please let us know if you have further questions.**
---
Rebuttal Comment 1.1:
Title: Thank you for your responses
Comment: I have checked other reviews and your responses. I have no more questions for now.
---
Reply to Comment 1.1.1:
Comment: Thanks for taking the time to go through our work and provide insightful comments ! | Summary: The paper investigates the generalization properties of diffusion models by examining the learned score functions, which are denoisers trained on various noise levels. It shows that nonlinear diffusion denoisers exhibit linearity when the model can generalize, leading to the idea of distilling these nonlinear mappings into linear models. The findings suggest that these linear denoisers closely align with optimal denoisers for a multivariate Gaussian distribution, indicating an inductive bias towards capturing Gaussian structures in the training data. This bias becomes more pronounced with smaller model sizes and might provide insights into the strong generalizability seen in real-world diffusion models.
Strengths: 1. The paper is well-motivated and written.
2. The Gaussian structure hypothesis for generalization in diffusion models is interesting.
Weaknesses: - Although the paper's focus is on generalization in diffusion models, it lacks any quantitative measure of generalization within the experiments presented.
- Similarly, no quantitative measure of memorization is reported.
- The work primarily investigates the linearity of the learned score functions across various noise levels. However, the connection between these experiments and the necessity of linearity for generalization in diffusion models is unclear and not well substantiated.
- The term "inductive bias" is frequently used throughout the paper, but it is never clearly defined. It remains ambiguous whether this bias pertains to the model architecture, the parameterization of the forward diffusion process, or the denoising score matching loss.
- The paper lacks detailed information on the experimental setup, including training procedures and the hyperparameters of the architectures used.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Can the authors explain how from their set of experiments we can conclude that the emergence of Gaussian structure leads to strong generalizability?
1. The paper shows how the linear (distilled) model resembles the optimal denoiser for a multivariate Gaussian distribution. Would the same phenomena be observed if we would linearize the trained model instead of distilling it? Have the authors conducted any experiments in this direction?
1. Could the authors provide a more precise definition of inductive bias in their work? Specifically, is it a property of the architecture, the input-output mapping parameterized by the network, the forward diffusion process, or the denoising score matching loss?
1. The authors refer to edm-ve, edm-vp, and edm-adm as different "*model architectures*". Could they clarify this terminology? My understanding is that all these models use a U-net based architecture to parameterize the score function, and the difference lies in the parameterization of the forward process SDE.
1. Do VE and VP stand for variance exploding and variance preserving? How do they differ from ADM?
1. Can the authors provide experiments that measure generalization compared to memorization in their setup?
1. In Figure 1, the authors show that when the model is memorizing (yellow dashed curve), the score function is not approximately linear (at least for some noise levels). However, there is no evidence reported that the yellow curve corresponds to memorization compared to the other curves. Could the authors provide empirical evidence to support this?
1. Can the authors provide details of their experimental setup, including training and architectures?
1. Could the authors better discuss the comparison with [1] in more detail? They claim, "*We propose an alternative explanation: the networks capture certain common structural features inherent across the non-overlapping datasets*". How is this claim in contrast with the hypothesis that due to the inductive bias of the neural network, the score and therefore the density are learned? I fail to understand how learning "common structural features" differs from learning the density. Could the authors elaborate on this?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: - The description of the experimental setup lacks essential details, making it difficult to replicate the results.
- The experiments exclusively focus on the RMSE as the performance metric across various noise scales. Alternative performance measures are neither reported nor discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Although the paper's focus is on generalization in diffusion models, it lacks any quantitative measure of generalization within the experiments presented.**
A1: The memorization and generalization can be clearly observed in our experimental results by comparing the generated image with the nearest neighbor (NN) in the training dataset (see Figure 6 to Figure 9). To measure the memorization and generalization quantitatively, we can empirically define the generalization score as follows
\begin{align}
\text{GL Score} := \frac{1}{k}\sum_{i=1}^k\frac{||x_i-\text{NN}_Y(x_i)||_2}{||x_i||_2}
%\mathcal{L}(\mb x_k,\text{NN}_Y(\mb x_i)),
\end{align}
where $[x_1, x_2, ..., x_k]$ are sampled images from the diffusion models, and $Y:=[y_1, y_2, ..., y_N]$ denote the training dataset. We used this metric to assess generalization versus memorization, with the results presented in Figure 1 of our global response. As shown in Figure 1 (d) to (g), diffusion models start to exhibit memorization behavior when the number of training images becomes smaller than 8750 (GL Score around 0.5). Therefore we choose 0.6 as the threshold to distinguish between generalization and memorization. As shown in Figure 1(a) and (b), the diffusion denosiers exhibit increasing linearity as the diffusion models shift from memorization to generalization.
**Q2. The work primarily investigates the linearity of the learned score functions across various noise levels. However, the connection between these experiments and the necessity of linearity for generalization in diffusion models is unclear and not well substantiated.**
A2. Please refer to A.3. of our global response.
**Q3. The term "inductive bias" is frequently used throughout the paper, but it is never clearly defined.**
A3: Here we give a precise definition of the inductive bias in our work: “When the model size is relatively small compared to the training dataset size, training diffusion models with the score matching loss (Equation (3)) results in diffusion denoisers that behave similarly to the linear Gaussian denoisers (though with certain amount of difference especially in the intermediate noise variance region). Furthermore, even when the model is overparameterized, such similarity emerges in the early training epochs." Our finding is consistent across various architectures including VE, VP and ADM (see Figure (13)).
Though in the paper, we mainly test on EDM configuration, which specifies specially designed forward process, and denosier parameterization (Equation (9)), we expect the inductive bias will manifest for other forward process and parameterization as well since recent work [18] shows that diffusion models trained with different forward process and parameterization generate almost identical images starting from the same random seed, which suggests that even in those cases, the diffusion denoiser share similar function mappings.
**Q4. The paper lacks detailed information on the experimental setup, including training procedures and the hyperparameters of the architectures used.**
A4: In the revision, we will include more experimental details (e.g., the model architectures, hyperparameters, training procedures) in our paper for reproducibility of our results.
**Q5. Can the authors explain how from their set of experiments we can conclude that the emergence of Gaussian structure leads to strong generalizability?**
A5: Firstly, based upon our studies in Section 3 and 4 we find strong correlation between generalization and Gaussian structures. Furthermore, in section 5 we show that decreasing model scale and early stopping are able to prompts strong generalization. Remember those two activations prompts the emergence of the Gaussian structure.
**Q6. Could the authors better discuss the comparison with [15] in more detail? I fail to understand how learning "common structural features" differs from learning the density.**
A6: Our work offers complementary explanations based on the common Gaussian structures from non-overlapping datasets and the inductive bias towards these structures. In contrast, [15] hypothesized that the models learn the same underlying distribution. Notice that learning the ground truth distribution is only a sufficient condition not a necessity for the strong generalization. Just because two models produce the same images doesn’t mean they learn the underlying distribution. It only implies certain common structures of the two datasets are captured but such a common structure might not be the optimal. Our experiment shows that such a common structure is highly related to the Gaussian structure.
**Q7. The paper shows how the linear (distilled) model resembles the optimal denoiser for a multivariate Gaussian distribution. Would the same phenomena be observed if we would linearize the trained model instead of distilling it?**
A7: Yes, our experiment results show that directly training a linear diffusion model based on the denoising score matching loss results in the Gaussian denoisers. However, this is not the main point of the paper since what is interesting is that diffusion models behave closely to the Gaussian denoisers even without this explicit linear constriant.
**Q8. The authors refer to edm-ve, edm-vp, and edm-adm as different "model architectures". Could they clarify this terminology?**
A8: The EDM paper [4] proposes a novel diffusion configuration (a specially designed forward process, time schedule, network parameterization). This configuration can be adapted to various network architectures. EDM-VE [3], EDM-VP [2] and EDM_ADM [23] are all trained with the EDM configuration but with different network architectures. Here VE stands for the architecture proposed in the paper [3], VP stands for the architecture proposed in the paper [2] and ADM corresponds to the architecture proposed in [23]. We refer the reviewer to the EDM [4] paper for more detail.
---
Rebuttal 2:
Title: Looking forward to your response
Comment: Dear Reviewer 6yNJ,
We have tried our best to address your concerns in our response. Since the deadline for the discussion period is approaching, we'd like to know if you have further questions so that we can do our best to respond further. Please feel free to raise any questions. Thanks for your insightful feedback.
Due to the space limitation of the rebuttal, we could not add the experimental setup of our paper. Here, we include it below for your reference:
Here we provide a more detailed description of our experiment setup:
* Section 3: we train linear models to distillate the actual diffusion models (including EDM-VE, EDM-VP, EDM-ADM). The actual diffusion models are trained on the FFHQ dataset (70000 images in total) for around 1000 epochs. The details of training of linear models are in the appendix B. We use the default hyperparameters (including learning rate, detailed network parameters) provided in the EDM code base.
* Section 4: we study the impact of (i) dataset size (ii) model scale and (iii) training time on diffusion models’s generalizability. For figure 6, we train the same models on datasets with various sizes. The datasets are randomly sampled from the FFHQ dataset. The diffusion model is trained with EDM-VE configuration. All models are trained for around 2 days to ensure convergence. For figure 7, we fix the dataset size as 1094. We still use the same EDM-VE training configuration but vary the model scales [4,8,16,32,64,128,128]. For Figure 8, we train a diffusion model with EDM-VE (scale 128)] configuration on FFHQ dataset with 1094 images. We early stop the model at various epochs specified in the paper.
* Section 5: we study the strong generalizability. We randomly split FFHQ dataset into non overlapping datasets with size 35000 and 1094. All the models are trained with EDM-VE configurations.
In the revision, we will make sure to include those experimental details (e.g., the model architectures, hyperparameters, training procedures) in our paper for the reproducibility of our results. We will make our code public upon publication to ensure reproducibility.
Best,
Authors
---
Rebuttal Comment 2.1:
Comment: Dear Reviewer 6yNJ,
We have worked diligently to address your concerns.
As the rebuttal period draws to a close, please feel free to reach out if you have any last-minute questions or need further clarification.
Best regards,
The Authors
---
Rebuttal 3:
Comment: Thanks for your reply, we are happy to further address your concerns.
**Q1. Inductive bias**
As we've stated in our last response (please see A.3), a more precise interpretation of the inductive bias in this paper should be:
*When the model size is relatively small compared to the training dataset size, training diffusion models with the score matching loss (Equation (3)) results in diffusion denoisers that behave similarly to the linear Gaussian denoisers (though with certain amount of difference especially in the intermediate noise variance region). Furthermore, even when the model is overparameterized, such similarity emerges in the early training epochs.*
Please notice that our findings don't just show the diffusion models capture the first and second order statistics, but more importantly, when diffusion models transition from generalization to memorization, the corresponding diffusion denoisers progressively get closer to the Gaussian denoisers. We want to emphasize that capturing first and second order statistics doesn't mean the latter. Consider diffusion models in the memorization regime, they also capture the first and second order statistics of the training dataset since they can perfectly reproduce the training data, however, in this case the diffusion denoisers are not close to the Gaussian denoisers.
**Q2. Why this finding is surprising and considered as a bias ?**
Remember that we are training diffusion models on finite number of training data. The optimal solution to the training objective (equation 3) in this scenario has the form of equation 8, which has no generalizability. On the contrary, our findings show that in practice, when the model capacity is relatively small compared to the training dataset size, training diffusion models with the score matching objective (equation 3) leads to diffusion denoisers that share high similarity with the Gaussian denoisers. Furthermore, we also show that even when the model capacity is sufficiently large, such similarity emerges in the early training epochs. We considered this behavior as a bias since the diffusion models in generalization regime **bias** towards learning similar diffusion denoisers as the Gaussian models. Our findings are interesting since they are not well understood in the current literature.
We encourage the reviewer to take a look at our discussion with reviewer N1bc, who asked questions similar to Q1. and Q2. We hope the discussion there can help the reviewer understand our paper better.
**Q3. The use of RMSE.**
We use RMSE because it is able to accurately characterize the difference between the Guassian denoisers and the actual diffusion denoisers. To address your concerns on normalization, we have conducted experiments using normalized MSE, where the trend in Figure (2,6,7,8) remains the same and does not change our conclusions. This is because the pixel values of the outputs of the diffusion denoisers for different time steps consistently lie in the range [-1,1].
In the final version, we will definitely include these results and clarification as suggested.
**Q4. Novelty in Theorem 1.**
Although we agree that Theorem 1 is well studied (we will add citations also suggested by Reviewer N1bC), we state the result mainly to establish the connection between the linear denoisers and the Gaussian denoisers which is not our main contribution. Our main novelty and contribution mainly instead amounts to:
(i) Establishing the inductive bias that diffusion models exhibit emerging linearity as they transition from memorization to generalization, and the corresponding diffusion denoisers become progressively closer to the Gaussian denoisers (section 3).
(ii) Showing such inductive bias is governed by the relative model capacity compared to the dataset size. Furthermore, in the overparameterized setting, such inductive emerges in early training epochs.
(Iii) Showing that generalization of diffusion models can happen with a small training dataset size.
---
Rebuttal 4:
Comment: **Q4. Gaussian Structure and Strong Generalizability**
First of all, we agree with the reviewer that generating high quality images is important. However, we believe that our experiment results which show that diffusion models trained on non-overlapping dataset generate nearly identical images (figure 9 (c)) is also generalization. As we’ve discussed in our last response to you (please see Q.6), strong generalization happens when the diffusion models capture certain common information between two non-overlapping datasets. Such common information might be the underlying ground truth image distribution, might be something else. We don’t know that yet in the current literature. However, please notice that the generated images of diffusion models trained on 1094 (figure 9 (c)) and 35000 (figure 9 (a) bottom) images are highly similar. This high similarity indicates that much of (more than first and second order statistics) of the information in the large dataset, which is essential for generalization, is already there in the smaller dataset. Our experiments in section 5 are meaningful since we demonstrate we can exploit this important structural information in a small dataset by either using a small model or applying early stopping. As we’ve demonstrated in section 4, by applying these two actions, we are prompting the diffusion models to learn similar function mappings as the corresponding Gaussian models, which indicates that the Gaussian inductive bias is important for the emergence of strong generalizability.
From a function mapping perspective, in the strong generalization regime, the high similarity between generated images of diffusion models trained on 1094 and 35000 images indicates that their diffusion denoisers share high similarity. But what are these function mappings? According to our experiments in section 4, the function mappings of these models are also close to the Gaussian denoisers. This again implies the Gaussian inductive bias plays an important role in the strong generalization of diffusion models. Notice that the Gaussian denoisers generate images that share similar structure as those generated by actual diffusion models in the strong generalization regime.
However, we agree with the reviewer that larger dataset contains more information which is essential for generating images with higher quality. The differences among the diffusion denoisers (1094, 35000) and the Gaussian denoisers play an important role in generating finer image structures. We really appreciate the reviewer for pointing this out and we will emphasize this in our revision. Nevertheless, our main message in section 5 is to show that we can prompt generalization on small datasets by steering the diffusion denoisers to get closer to the Gaussian denoisers.
**Q5. Model Linearization.**
The reference provided by the author is performing Taylor expansion, which can only approximate a function locally. For this reason, it doesn't serve our purpose since in this work we aim to find a linear model that can approximate the nonlinear deep network globally, i.e., we want to well approximate the input output mapping of the deep network for inputs that are widely spread. We will cite the paper and carefully discuss it.
---
Rebuttal Comment 4.1:
Comment: Please let us know if you have any more questions.
Sincerely,
Authors | null | null | Rebuttal 1:
Rebuttal: To all Reviewers:
We thank all the reviewers for their insightful and constructive comments. Most of the reviewers find our work well written (6yNJ, VCtP, N1bC), well-motivated (6yNJ, N1bC), interesting and novel (6yNJ, N1bC, VCtP) with convincing evidence (VCtP).
We summarize our main findings as follows:
**Inductive bias towards Gaussian structures.** Here, through linear distillation, we demonstrated that when the model generalizes, practically learned diffusion models exhibit an inductive bias toward Gaussian/linear structures without any explicit linear constraints. In contrast, the linear/Gaussian structures are less prominent in the memorization regime. Furthermore, we study how training dataset size, model capacity and training time effect such inductive bias. These insights are neither obvious nor well understood in previous works.
In this response, we address the common concerns and questions raised by most reviewers and we will include the revisions into our manuscript. Unless otherwise specified, all reference numbers refer to those in the bibliography of the NeurIPS submission.
**Q1. Does the phenomenon appear on more complex datasets?**
A1. Although the Gaussian structure is the most evident on face images, the same linear phenomenon persists in other datasets as well. As shown in Figure 15 in the appendix, for more complicated dataset such as AFHQ and LSUN-Churches and Cifar-10 (Figure 2 (b) of our newly uploaded PDF), there is still a clear similarity between the generated images from the diffusion models and those from the Gaussian model. This similarity implies that though natural images distributions are far from the Gaussian distribution, the Gaussian structure of the finite datasets play an important role in the generation ability of diffusion models in the generalization regime.
**Q2. The main limitation of this paper is that it studies the Gaussian case which is well-understood/most of section 3 is rather obvious for people with a signal/image processing background.**
A2. Our work is not studying the Gaussian case itself, but to study the relationship between generalization and the Gaussian structures. We demonstrate that in the generalization regime, diffusion denoisers share high similarity with the Gaussian denoisers. This result suggests that the Gaussian structure (first and second order statistics) of the finite training dataset plays an vital important role in the generation process. More specifically, notice in section 3, our linear models are not trained using the score matching loss, rather, it is trained to regress the input and output pairs of the actual diffusion networks (we made this clear in line 150-155 of the main text and also in the appendix B). For this reason, the high similarity between the linear model and the Gaussian model is not trivial. It requires the deep network to be regularized. In this case, this similarity can be attributed to the fact that for diffusion models in the generalization regime, the corresponding diffusion denoisers share similar function mappings as the Gaussain denoisers (though with a certain level of difference especially in the intermediate region). We refer the review to the Figure 2 (a) of our newly uploaded PDF, where we directly visualize the denoising outputs of the Gaussian model and the linear model. Notice that when the diffusion model generalizes (model scale = 4 or 8), the denoising outputs of the Gaussian model and the diffusion model are quite similar. This similarity in denoising outputs holds even if you directly input the noise to the denoisers. However, for diffusion models that memorize models (scale = 64 or 128), the similarity between Gaussian denoisers and the actual diffusion models breaks. In this case, as shown in Figure 1(c) of our newly uploaded PDF, the linear model trained with the linear distillation can no longer approximate the diffusion models as well as in the case of generalization and the similarity between the linear model and the Gaussian model breaks as well. The reason behind the similarity between the diffusion denoisers in the generalization regime and the Gaussian denoisers is not well understood.
**Q3. The work primarily investigates the linearity of the learned score functions across various noise levels. However, the connection between these experiments and the necessity of linearity for generalization in diffusion models is unclear and not well substantiated.**
A3. Our work has shown strong connections between the necessity of linearity and generalization in the following aspects.
*In Section 3*, in the generalization regime we demonstrate that diffusion models produce images that are highly similar to those generated by the distilled linear denoisers (as well as the Gaussian denoisers) when sampled starting from the same random noise. This means that in the generalization regime, the learned diffusion denoisers share similar function mappings as the Gaussian denoisers. On the contrast, this similarity between the linear model and Gaussian model breaks when diffusion model memorizes, meaning that the Gaussian linear structure is ubiquitous in the generalization regime. This shows the necessity of linearity and generalization.
*In Section 4*, we demonstrate that as the diffusion models transition from memorization to generalization, the similarity between the diffusion denoisers and the Gaussian denoisers increases as well. This strong correlation implies the necessity of the Gaussian structure in the diffusion model’s generalizability.
*In Section 5*, As shown in Figure 9 of our paper, we can bring memorized models into strong generalization by either decreasing the model capacity or applying early stopping. Remember from section 4, these two actions basically prompts the diffusion denoisers to get closer to the Gaussian denoisers. This once again highlights the necessity of Gaussian structure in the generalization of diffusion models.
Pdf: /pdf/ee8e3227d0b724263fbd0c1a05da25cf4ed94af3.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
CoBo: Collaborative Learning via Bilevel Optimization | Accept (poster) | Summary: In this paper, the authors introduce a novel approach to collaborative learning by framing it as a bilevel optimization problem, enhancing the training efficacy of multiple clients. In conventional collaborative learning paradigms, clients engage in mutual model training through information exchange; however, pinpointing beneficial collaborators presents a challenge and can incur significant computational overhead. To address this, the authors model client selection and model training as two interrelated optimization issues, proposing the COBO algorithm—a scalable and elastic Stochastic Gradient Descent (SGD) type alternating optimization algorithm. Theoretically guaranteed for convergence, COBO demonstrates superior performance over popular personalization algorithms.
Strengths: 1.The authors formulate collaborative learning with an innovative bilevel optimization framework that capitalizes on the intrinsic collaborative structure, yielding more universally applicable solutions.
2.The authors introduce COBO, a scalable and elastic SGD-type alternating optimization algorithm that efficiently addresses the bilevel problem challenge, scaling with an increasing number of clients while maintaining elasticity in client count.
3.The authors establish theoretical convergence guarantees for COBO in collaborative learning scenarios with clustered client structures.
4.The authors demonstrate that COBO surpasses established personalized federated learning benchmarks, especially in heterogeneous federated learning environments and when fine-tuning Large Language Models (LLMs).
Weaknesses: According to equation (1), the collaboration weights are updated by applying projected gradient descent with step size . However, an estimate of the step size is not provided.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1.Could you provide an estimate of the step size in equation (1)? (see weaknesses)
2.The paper mentions the application of the COBO algorithm in federated learning environments where privacy is a critical consideration. What privacy protection measures have been incorporated into the COBO algorithm to ensure the security of client data, and how is the balance between privacy protection and model performance achieved in the algorithm design?
3.The COBO algorithm and the DITTO algorithm are formulated within the framework of bilevel optimization. Compared to the DITTO algorithm, what additional theoretical guarantees or advantages does COBO provide? Furthermore, does COBO demonstrate greater flexibility and robustness when dealing with diverse tasks?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: 1.The proof of the COBO algorithm proposed in the paper is based on some simplified theoretical assumptions, such as the use of full gradient information in the inner problem and the assumption that the minimization problem is solved exactly. These may not fully reflect the complexities encountered in practical applications.
2.In the cross-device federated learning experiment involving 80 clients, each algorithm was executed only once. This implies that the results may not fully reflect the robustness of the algorithms under different random seeds or data partitions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful questions and comments.
**Regarding Question 1**
We appreciate the reviewer's interest in the step size selection for the projected gradient descent in CoBo. We choose $O(1/LT)$ as the step size, one intuitive explanation is motivated by the similarity between our formulation and the Frank-Wolfe algorithm. As noted in the Frank-Wolfe literature, step sizes of the order $O(1/(t+2))$ are commonly used.
**Regarding Question 2**
We acknowledge the importance of privacy in federated learning and appreciate the reviewer's concerns. While we do not incorporate specific privacy measures into CoBo in this paper, we note that differential privacy can be combined with CoBo directly. However, achieving output privacy may require additional cryptographic tools. We consider this a promising direction for future work. We hope the reviewer will consider this a natural next step rather than a drawback of our paper.
**Regarding Question 3**
We are glad to highlight the key differences between Ditto and CoBo, as described in Section 2 (between lines 57 and 62). In scenarios where clients are drawn from diverse distributions, possibly with conflicting labels, CoBo's flexible formulation allows clients to identify fine-grained cluster structures, leading to better local models and improved performance. In contrast, Ditto's formulation may penalize the distance between local models and a meaningless global model, potentially harming local model performance. We believe this example illustrates the advantages of CoBo in handling diverse and complex client distributions. | Summary: In this paper, the authors model collaborative learning as a bilevel optimization problem, and propose CoBo, an SGD-type alternating optimization algorithm, to solve this problem. Theoretical convergence guarantees are provided, and experiments are conducted to evaluate the performance of CoBo.
Strengths: In this paper, the authors model client-selection and model-training as two interconnected optimization problems, proposing a bilevel optimization problem for collaborative learning. They introduce CoBo, an SGD-type alternating optimization algorithm designed to address this problem. CoBo is proven to have theoretical convergence guarantees for collaborative learning with cluster structures. In the experiments, CoBo outperforms popular personalized federated learning baselines.
Weaknesses: 1. Although this paper provides theoretical convergence guarantees, the proof of Theorem I is given for a simplified scenario where the inner problem uses the full gradient, which is difficult to compute in practice.
2. This paper does not provide comparisons of the theoretical convergence performance between CoBo and state-of-the-art algorithms.
2. Repeat experiments are not conducted to ensure the robustness of the results.
Minor:
Line 120: "They leads to eventual" is not a full sentence.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. In Line 7 of Algorithm 1, why use this output rather than return $\\{ x_0^T,\cdots \cdots ,x_N^T \\}$ and $W^T$?
2. The collaborative learning scenario used in this paper doesn't involve a central server. Why do the authors only compare CoBo with personalized federated learning baselines? Why don't they also compare it with some fully decentralized algorithms?
3. The probability 1/n in Line 11 of Algorithm 1 is an important parameter. How does it influence the performance of CoBo?
Confidence: 2
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors mention that the limitations (see Weaknesses 1 and 3) are due to time limitations. This suggests that the paper may not be fully ready for submission.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank you for your thoughtful reviews and suggestions on our paper. We appreciate your feedback and are happy to address the concerns raised.
Regarding Weaknesses:
**Theorem I proof:** We agree with the reviewer that using full gradients for the inner problem simplifies the proof. In practice, we still use mini-batch SGD as the unbiased estimate of the full gradient. We plan to update the proof in the future to reflect this practical consideration.
**Theoretical convergence performance comparison:** We provide a comparison of the theoretical convergence performance between CoBo and state-of-the-art algorithms in our response. Specifically, we show that CoBo enjoys linear scalability with respect to cluster size, whereas federated clustering requires O(n^2) gradient computations per iteration. Ditto gives a coarse grained O(1/T) convergence rate for local models.
**Robustness of results:** We acknowledge the importance of repeat experiments for ensuring the robustness of our results. We have repeated the experiments, and the results are available in the global PDF file.
**Regarding Minor Issues:**
Line 120: We apologize for the mistake and will ensure that the sentence is complete in the final version of the paper.
Questions:
**Output of Algorithm 1:** Non-convex optimization usually tries to prove that the time-averaged gradient norm, such as $\frac{1}{T} \sum_{t=0}^{T-1} \lVert \nabla f(x^t) \rVert_2^2$, is upper bounded by a very small value. However, the theoretical results do not guarantee that the last iterate $x^T$ is also upper bounded. Instead, the $\frac{1}{T} \sum_{t=0}^{T-1} \lVert \nabla f(x^t) \rVert_2^2$ can be seen as the expected gradient norm of a uniform randomly drawn $x^t$, therefore this drawn iterate has the upper bound in expectation. This is a well-established technique in the literature (see [Fang et al. 2018] for an example).
Comparison with fully decentralized algorithms: We appreciate your suggestion to compare CoBo with fully decentralized algorithms. However, our baseline federated clustering algorithm is already fully decentralized and does not rely on a central server. In contrast, personalized federated learning with a central server can be seen as a fully-connected decentralized graph, which is a more challenging baseline to beat.
**Influence of sampling rate:** You are correct that the sampling rate of 1/n is an important parameter. We have carefully chosen this value to minimize the pairwise computation overhead while preserving the quality of the solution.
a. Complexity: Minimizing outer n objectives for $T$ iterations require computing n * T gradients. Naively computing pairwise gradients for the inner objective requires n^2 T gradients computation which dominate the complexity. With a sampling rate of O(1/n), the overall complexity of the bilevel formulation remain the same as single level optimization.
b. Preserved quality: uniform randomly sampling O(1/n) of the edges do not harm the performance of CoBo. As the underlying connectivity matrix is fixed throughout training,each edge will be sampled $O(T/n)$ times to determine the connectivity. As the number of iterations $T$ is usually much larger than the number of clients, it is enough to determine the connectivity.
Fang C, Li C J, Lin Z, et al. Spider: Near-optimal non-convex optimization via stochastic path-integrated differential estimator[J]. Advances in neural information processing systems, 2018, 31.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the rebuttal. Since Weakness 1 has not yet been addressed, I would like to keep my current score.
---
Reply to Comment 1.1.1:
Title: Addressing weakness 1
Comment: Our changes mainly applies to the proof of Lemma 3, starting from line 390 in the original submission. The current proof only consider the randomness of stochastic gradient $g_1^t, \ldots, g_n^t$ for updating model $x_i$.
Now we additionally consider using stochastic gradients with batch size $b$ for the inner product. That is, let $h_i^t$ and $h_j^t$ be independent and unbiased estimates of
$\langle\nabla f_i(z_{ij}^t), \nabla f_j(z_{ij}^t)\rangle$. The variance of ${h}_i^t$ has a variance of $\frac{\sigma^2}{b}$.
Let's denote $\mathbb{E} _ g:=\mathbb{E} _ {g _ 1,\ldots, g _ n}$ and $\mathbb{E} _ h:=\mathbb{E} _ {h _ 1,\ldots, h _ n}$ and replace the expecation $\mathbb{E}$ in the proof with $\mathbb{E} _ h\mathbb{E} _ g$. Notice that the only difference is that
\begin{align*}
\mathbb{E} _ h\mathbb{E} _ g\left[\tilde{f} _ {ij}\left(z _ {ij}^{t+1}\right)\right]
\le& \tilde{f} _ {ij}\left(z _ {ij}^{t}\right) +
\left\langle \nabla \tilde{f} _ {ij}\left(z _ {ij}^{t}\right), \mathbb{E} _ h\mathbb{E} _ g\left[z _ {ij}^{t+1} - z _ {ij}^{t}\right] \right\rangle + \frac{L}{2} \lVert\mathbb{E} _ h\mathbb{E} _ g[z _ {ij}^{t+1} - z _ {ij}^{t}]\rVert _ 2^2
+ \frac{L}{2} \mathbb{E} _ h\mathbb{E} _ g\left[\lVert z _ {ij}^{t+1} - z _ {ij}^{t} - \mathbb{E} _ h\mathbb{E} _ g[z _ {ij}^{t+1} - z _ {ij}^{t}] \rVert _ 2^2 \right].
\end{align*}
The last quantity can be expanded as follows:
\begin{align*}
\mathbb{E} _ h\mathbb{E} _ g \left[\lVert z _ {ij}^{t+1} - z _ {ij}^{t} - \mathbb{E} _ h\mathbb{E} _ g[z _ {ij}^{t+1} - z _ {ij}^{t}] \rVert _ 2^2 \right]
= \mathbb{E} _ h\mathbb{E} _ g \left[\lVert z _ {ij}^{t+1} - z _ {ij}^{t} \pm \mathbb{E} _ h[z _ {ij}^{t+1} - z _ {ij}^{t}] - \mathbb{E} _ h\mathbb{E} _ g[z _ {ij}^{t+1} - z _ {ij}^{t}] \rVert _ 2^2 \right]
= \mathbb{E} _ h\mathbb{E} _ g \left[\lVert \mathbb{E} _ h[z _ {ij}^{t+1} - z _ {ij}^{t}] - \mathbb{E} _ h\mathbb{E} _ g[z _ {ij}^{t+1} - z _ {ij}^{t}] \rVert _ 2^2 \right]+ \mathbb{E} _ h\mathbb{E} _ g \left[\lVert z _ {ij}^{t+1} - z _ {ij}^{t} - \mathbb{E} _ h[z _ {ij}^{t+1} - z _ {ij}^{t}] \rVert _ 2^2 \right].
\end{align*}
Plug in the above equality to the above inequality
$$
\begin{align*}
\mathbb{E} _ h\mathbb{E} _ g\left[\tilde{f} _ {ij}\left(z _ {ij}^{t+1}\right)\right]
\le& \tilde{f} _ {ij}\left(z _ {ij}^{t}\right) +
\left\langle \nabla \tilde{f} _ {ij}\left(z _ {ij}^{t}\right), \mathbb{E} _ h\mathbb{E} _ g\left[z _ {ij}^{t+1} - z _ {ij}^{t}\right] \right\rangle + \frac{L}{2} \lVert\mathbb{E} _ h\mathbb{E} _ g[z _ {ij}^{t+1} - z _ {ij}^{t}]\rVert _ 2^2
+ \mathbb{E} _ h\mathbb{E} _ g \left[\lVert \mathbb{E} _ h[z _ {ij}^{t+1} - z _ {ij}^{t}] - \mathbb{E} _ h\mathbb{E} _ g[z _ {ij}^{t+1} - z _ {ij}^{t}] \rVert _ 2^2 \right]
+ \mathbb{E} _ h\mathbb{E} _ g \left[\lVert z _ {ij}^{t+1} - z _ {ij}^{t} - \mathbb{E} _ h[z _ {ij}^{t+1} - z _ {ij}^{t}] \rVert _ 2^2 \right].
\end{align*}
$$
Among the 5 terms on the right hand side of the above inequality, the first 4 terms is bounded the same way as the original submission. The effect of using stochastic gradient for inner product is limited to
\begin{align*}
\mathbb{E} _ h\mathbb{E} _ g \left[\lVert z _ {ij}^{t+1} - z _ {ij}^{t} - \mathbb{E} _ h[z _ {ij}^{t+1} - z _ {ij}^{t}] \rVert _ 2^2 \right] \\
=\mathbb{E} _ h\mathbb{E} _ g \left[ \left\lVert \frac{\eta}{2} (g _ i^t + g _ j^t) + \frac{\eta\rho}{2}\sum _ {k=1}^n (w _ {ik}^{t+1} (x _ i^t - x _ k^t) + w _ {jk}^{t+1} (x _ j^t - x _ k^t)) \right.\right.
\qquad\left.\left. - \mathbb{E} _ h \left[\frac{\eta}{2} (g _ i^t + g _ j^t) + \frac{\eta\rho}{2}\sum _ {k=1}^n (w _ {ik}^{t+1} (x _ i^t - x _ k^t) + w _ {jk}^{t+1} (x _ j^t - x _ k^t)) \right] \right\rVert _ 2^2 \right]
=\mathbb{E} _ h\left[ \left\lVert \frac{\eta\rho}{2}\sum _ {k=1}^n ((w _ {ik}^{t+1} - \mathbb{E} _ h[w _ {ik}^{t+1}]) (x _ i^t - x _ k^t) + (w _ {jk}^{t+1}- \mathbb{E} _ h[w _ {jk}^{t+1}]) (x _ j^t - x _ k^t)) \right\rVert _ 2^2 \right]
\end{align*}
Then
\begin{align*}
\mathbb{E} _ h\mathbb{E} _ g \left[\lVert z _ {ij}^{t+1} - z _ {ij}^{t} - \mathbb{E} _ h[z _ {ij}^{t+1} - z _ {ij}^{t}] \rVert _ 2^2 \right] \\
=\frac{\eta^2\rho^2}{4} \sum _ {k\neq i,j} \left(\mathbb{E} _ h \left[ \left\lVert w _ {ik}^{t+1} - \mathbb{E} _ h[w _ {ik}^{t+1}] \right\rVert^2 _ 2 \right] \lVert x _ i^t - x _ k^t\rVert _ 2^2 + \mathbb{E} _ h \left[ \left\lVert w _ {jk}^{t+1} - \mathbb{E} _ h[w _ {jk}^{t+1}] \right\rVert^2 _ 2 \right] \lVert x _ j^t - x _ k^t\rVert _ 2^2 \right)
\end{align*}
where we use the independence of random variables in the last equality. | Summary: The paper proposes bi-level training for heterogeneous federated learning, where heterogeneity is due to underlying clustered clients. The two levels of optimizations are model training and determining the client similarity. The authors prove a convergence result and prove empirical results on training vision and LLM models.
Strengths: - Paper is very easy to follow and understand due to the flow of language.
- The presented results indicate the advantages compared to other works.
- The high level idea is sound.
Weaknesses: - For each experiment the setting is fixed, i.e. there are no ablations on how e.g. number of clusters, data samples per client, number of clients per cluster affects the performance. There is a sparsity in experiments.
- FedAvg+fine-tuning should be compared to as an additional method.
- The convergence result on f_ij instead of on f_i, I think this should be fixed since the optimization problem is on f_i's.
- The algorithm requires pairwise computations across clients which might be huge overhead for practical applications where there might be millions of clients.
- More discussion is needed for the theorem, e.g. what does it shows us, how are different setting parameters like number of clusters, clients per clusters etc. affect the convergence.
Technical Quality: 2
Clarity: 3
Questions for Authors: - What does "each cluster clients share stationary points" mean?
- Why is the convergence result is given on f_ij
- What is the performance of method when clustering assumption does not hold, for instance there is only one cluster?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful feedback on our paper. We address each of the comments below:
1. Sparsity in experiments and ablations:
We have added more experiments to investigate the effect of different settings, such as the number of clusters, data samples per client, and number of clients per cluster, on the performance of our method. This provides a more comprehensive understanding of the robustness and generalizability of CoBo. The full experiment is available in the global PDF file. As expected, the total amount of data available to each cluster has a direct influence on the performance of the method. Additionally in the experiments, CoBo is almost invariant to the number of clusters. In the experiment for different number of clients per cluster, the dataset is partitioned among clients in each cluster, meaning that larger cluster sizes result in less data available to each individual client. Nevertheless, CoBo can utilize collaboration to preserve the performance for larger cluster sizes, and remain invariant to cluster size as well.
2. Comparison to FedAvg+fine-tuning:
We have included a comparison of our method with FedAvg+fine-tuning in the revised manuscript, allowing for a more complete evaluation of the performance of CoBo relative to existing methods. The performance of this baseline could be found in the table available in the global PDF file. We confirm that CoBo can outperform this method in all experiments.
3. Convergence result on f_i:
We agree with the reviewer that the results should be given for $f_i$. We note that the convergence results for $f_i$ can be easily derived by combining the last equation of Theorem I with Equation (5). That is, by adding
$\lVert \nabla f_i(x) + \nabla f_j(x) \rVert_2^2$ to both sides of the Equation (5), we have that
$$2(\lVert \nabla f_i(z_{ij}) \rVert_2^2 + \lVert \nabla f_j(z_{ij}) \rVert_2^2) \le 4 (1+ M_{ij}^2) \lVert \nabla f_i (z_{ij}) \rVert_2^2 $$
Then using the upper bound of $M_{ij} < ⅕$ in Theorem 1 and average over t and i,j yields
$$\frac{1}{c^2T} \sum_{t=0}^{T-1} \sum_{ij\in c} \lVert \nabla f_i(z_{ij}) \rVert_2^2 \le (1+1/25) RHS$$
where RHS refers to the right hand side of the last equation of Theorem 1. The convergence of $\lVert \nabla f_i(x_i) \rVert_2^2$ can also be derived using Cauchy-Schwarz inequality and apply the consensus distance inequality in Theorem 1.
4. Pairwise computation overhead:
We agree with the reviewer that the pairwise gradient computation in the inner problem can be expansive. In fact, we had already addressed this issue in Section 2.1 of the manuscript by uniformly sampling only O(1/n) percent of edges in each iteration, leaving the computation complexity of the inner problem the same order as the outer problem. We also included an experiment to empirically compare different sampling strategies, available in the global PDF file. The experiment demonstrates that CoBo is robust to different sampling strategies, while proposing a sampling method that slightly increases the performance.
5. Discussion of the theorem:
We thank the reviewer for the insightful suggestion. We will incorporate the discussions into the paper. As M_ij measures how well i,j collaborate, in fact, smaller M_{ij} leads to better consensus distance, with M_ij=0 leading to always identical as expected. The gradient norms does not scale linearly with the number of clients because we are considering the norm of individual gradients. (This scaling will appear when we consider the averaged gradient among all clients in this cluster. We will add the proof in the future version.)
Regarding questions:
1. Clients sharing stationary points:
The statement "each cluster clients share stationary points" means that, despite having different data distributions, the clients within a cluster can simultaneously reach a stationary point. This assumption is indeed a relaxation of the i.i.d. assumption, allowing for more realistic and heterogeneous data distributions.
3. Performance without clustering assumption:
Our algorithm still applies when there is only one cluster as it does not require knowledge of the exact number of clusters or balanced clusters.
We hope that our revisions have addressed the concerns raised by the reviewer, and increase the score if possible.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal, comparison to FedAvg+ft is a good addition. Point 3. is very hard to follow as you have not utilized markdown. Despite sampling, I still think the pairwise computations are an important overhead in practice; also the theoretical results does not take into account such a sampling occurs. Hence, I would like to keep my score.
---
Reply to Comment 1.1.1:
Title: Response
Comment: We are glad to have addressed your concern about FedAvg+Fine-tuning. Regarding the other points, we would like to clarify that:
- CoBo, as shown in Algorithm 1, **only computes O(n) gradients per iteration**, instead of pairwise O(n^2), making it very efficient in practice. In a setup with n clients, if we sample the pairs of clients with probability O(1/n), the expected number of selected pairs in each timestep is n, which is the same as the number of local computed gradients in all baselines. Hence, the complexity of our algorithm is the same as other collaborative learning algorithms. As is shown in the experiments sections of the submission pdf and rebuttal pdf, CoBo is able to reach **higher accuracies** compared to other methods **in all experiments** with **same order of complexity**.
- As for point 3, we are not sure why the reviewer says that “we have not utilized markdown” as the equations render correctly to us. Could you try again or try other browsers?
- As for theoretical results, the technical challenges of the convergence proof are already addressed in the no-sampling theorem while the sampling variant is a simple corollary. We will add the simple corollary to the main text.
We hope that our responses clear the reviewer’s main concern about complexity. | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful reviews and feedback. **The attached PDF**, contains additional experiments allowing us to gladly address the following concerns:
**1. FedAvg+fine-tuning:** A new baseline is included for all experiments. This baseline is similar to FedAvg for the first 80\% of the training, and then it fine-tunes each client on their local dataset for the remainder of the iterations. Although this method performs better than FedAvg, CoBo's performance is still superior in all experiments.
**2. Ablation study on the Cross-silo experiment:** We have added new experiments to see how CoBo behaves with different experimental setups. As expected, CoBo is sensitive to the total data available to each cluster. We also found out that increasing the number of clusters does not affect the performance of CoBo significantly. We also experimented with the different number of clients per clusters, maintaining the fixed number of 4 clusters. Since the data is partitioned among clients of each cluster, larger cluster size means lower data available to each individual client. However, we are glad that CoBo can also preserve the accuracy for larger clusters, by enabling collaboration within clusters.
**3. Sampling strategies for updating collaboration matrix:** We agree that the sampling method is a very important part of our algorithm. We therefore conduct experiments on different sampling strategies in the Cross-silo experiment. Our proposed methods almost perform similarly, suggesting the robustness of CoBo on its sampling method. We observed that CoBo can find collaborators in the early stages of the training, leading us to propose a new sampling method based on sampling more frequently in the early stages and reduce the frequency in the later stages. We show that this mixed strategy can slightly perform better than other strategies in the Cross-silo experiment.
**4. Repeated Experiments:** We acknowledge the importance of the repeated experiments to examine the robustness of methods, and we thank the reviewers for raising this point. We have added the confidence intervals for the Cross-device experiment as well in order to address this matter.
Pdf: /pdf/2248bcdd9742cea0235a3d70360bbf9e95217220.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Inexact Augmented Lagrangian Methods for Conic Optimization: Quadratic Growth and Linear Convergence | Accept (poster) | Summary: This paper presents new theoretical results on the convergence of primal iterates in inexact augmented Lagrangian methods (ALMs) for conic optimization. The idea is to use strict complementarity (which is a standard assumption in conic optimization) to establish quadratic growth and error bound conditions for the primal and dual programs. These bounds are then used to show linear convergence for both the primal and dual iterates in inexact ALMs. The linear convergence of the primal iterates is shown in (brief) experiments.
Strengths: * The work provides a thorough introduction to the problem background
* The theory is presented rigorously!
Weaknesses: I found the main weakness of the paper to be in its presentation of results.
* The paper takes too long to reach the main results. Perhaps an informal version of Theorem 3 could be stated in the introduction?
* There is a lot of notation/specific terminology in the paper, which makes it difficult to follow. One suggestion to reduce the complexity would be to focus on a particular problem class (LP, SOCP, or SDP) in the main paper. For example, if focusing on LPs, the authors could move the discussion of strict complementarity of LPs to earlier in the paper, which would give more intuition for the assumptions in the paper. Furthermore, the theorems in the main paper could be simplified to address LPs, which would make the results easier to understand for a wider audience. The general results for LPs, SOCPs, and SDPs could then be presented in the appendix.
* I would have liked the paper to expound upon how their work relates to machine learning (outside of some brief citations in the introduction). Ideally, this could be done by showing linear convergence of primal iterates on a conic program from machine learning. One idea could be to solve an SVM problem (QP, which is a special case of SDP).
* While Figure 1 suggests that primal iterates are converging linearly, I would like to see more iterations (say 20) in the plots to really verify this claim.
* I found it hard to follow how the author’s results compare to previous convergence results in the literature. I’d suggest providing a more in-depth comparison in lines 341-349.
* The paper takes too long to discuss inexact augmented Lagrangian methods (these are not discussed in-depth until page 7). This makes the flow of the paper confusing — I’d suggest covering inexact ALMs closer to the introduction.
Minor comments:
* Discussion of scalable first-order methods (line 28) should cite PDLP [1, 2]
* “Lipschitz” is misspelled on lines 100 and 103
[1] Applegate, David, Mateo Diaz, Oliver Hinder, Haihao Lu, Miles Lubin, Brendan ODonoghue, and Warren Schudy. 2021. “Practical Large-Scale Linear Programming Using Primal-Dual Hybrid Gradient.” In Advances in Neural Information Processing Systems, edited by M. Ranzato, A. Beygelzimer, Y. Dauphin, P. S. Liang, and J. Wortman Vaughan, 34:20243–57. Curran Associates, Inc.
[2] Applegate, D., Hinder, O., Lu, H. et al. Faster first-order primal-dual methods for linear programming using restarts and sharpness. Math. Program. 201, 133–184 (2023).
Technical Quality: 3
Clarity: 2
Questions for Authors: * Are there problems for which the dual iterates can converge linearly, but the primal iterates converge sublinearly? If so, could this be demonstrated in an experiment?
* Line 265: What exactly is meant by “nice self-dual structure”?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Please see “Weaknesses”
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The authors are very thankful to the reviewer for taking the time and effort to review our manuscript. We sincerely appreciate all your valuable comments. All your comments are carefully addressed below.
> **Suggestion focusing on a specific cone, for example, LP. This will make the paper easier to read**
Thanks for the suggestion. This will definitely make the paper more accessible. In this paper, we aim to make the paper as general as possible. In addition, there are some differences in terms of the construction of the penalty function and region/power term for the error bound. In our current manuscript, we have incorporated LP, SOCP, and SDP in our main text. Detailed proofs and individual discussions for each case are provided in the Appendix.
However, we do agree with the reviewer the it will improve the paper clarity by focusing on one problem class. If the paper is accepted, we will follow the reviewer' suggestions and focus on the general problem class—SDP—in the main manuscript. This will not only improve clarity but also give us more space to ensure that Theorem 3 is well-integrated into the main discussion.
> **Expound upon how their work relates to machine learning.**
We have added more numerical experiments in the attached PDF file under the section of Auther rebuttal. We consider the application of linear SVM and Lasso. Both the linear SVM and Lasso can be reformulated as standard conic programs with free, nonnegative, and second-order cone variables. For each application, we randomly generate three instances and run ALM for those three instances with various augmented penalty parameters $\rho>0$. The numerical results show that the three quantities (primal cost value gap, dual cost value gap, and the KKT residuals) all converge to zero with a linear rate. The convergence speed of the residuals also depends on the value of the penalty parameter $\rho$. The larger the penalty parameter $\rho$, the faster the convergence of the residuals. We believe the oscillated or flattening behavior that happens in the tail (when the iterates are close to the solution set) is due to the inaccuracy of the subproblem solver and computational errors.
> **While Figure 1 suggests that primal iterates are converging linearly, I would like to see more iterations (say 20) in the plots to really verify this claim.**
Thanks for pointing this out. However, as we use the modeling package YALMIP to formulate the subproblem and call the conic solver MOSEK to solve the subproblem. It will be difficult to control the subproblem solution accuracy. There exists oscillated and flattening behavior when the iterates are close to the solution set due to the computational error. This is reflected in our additional numerical experiments.
> **I found it hard to follow how the author’s results compare to previous convergence results in the literature. I’d suggest providing a more in-depth comparison in lines 341-349.**
Thanks for the comments. We will revise the comparison in lines 341-349. Specifically, we mean that
* [24, Theorem 5] requires two things:
1. The Lagrangian function is Lipschitz continuous at the origin, which requires both the primal and the dual solution to be unique. This assumption can easily fail, as pointed out in [27, Section 344 3].
2. One more subproblem stopping criterion in addition to $A^{'}$ and $B^{’}$
Our result in Theorem 3, however, suggests that the linear convergence of the primal iterates also can happen under the standard assumption of strict complementarity and bounded primal solution set.
* As strict complementarity is a generic property of semidefinite programs [42, Theorem 15] and a bounded primal solution set happens when a dual slater point exists, which is a common assumption, therefore, our result suggests that the linear convergence of the primal iterates is likely to happen under many nicely behaved problems.
[24] R Tyrrell Rockafellar. Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Mathematics of operations research, 1(2):97–116, 1976.
[27] Ying Cui, Defeng Sun, and Kim-Chuan Toh. On the R-superlinear convergence of the KKT residuals generated by the augmented Lagrangian method for convex composite conic programming. Mathematical Programming, 178:381–415, 2019.
[42] Farid Alizadeh, Jean-Pierre A Haeberly, and Michael L Overton. Complementarity and nondegeneracy in semidefinite programming. Mathematical programming, 77(1):111–128, 1997.
> **The paper takes too long to discuss inexact augmented Lagrangian methods (these are not discussed in-depth until page 7). This makes the flow of the paper confusing — I’d suggest covering inexact ALMs closer to the introduction.**
Thanks for your nice suggestion. We will revise the introduction and incorporate further discussions about Theorem 3.
> **Minor comments:**
> 1. Discussion of scalable first-order methods (line 28) should cite PDLP [1, 2]
> 2. “Lipschitz” is misspelled on lines 100 and 103
Thanks for the detailed examination of our paper. We will add the reference in the final version and fix the typo.
> **Are there problems for which the dual iterates can converge linearly, but the primal iterates converge sublinearly? If so, could this be demonstrated in an experiment?**
Thanks for this insightful question. We believe it is a difficult task as strict complementarity holds for almost all conic problems; thus, for many instances, both primal and dual iterates are expected to converge linearly. Right now, we are struggling to find an explicit example for which the dual iterates can converge linearly, but the primal iterates converge sublinearly.
> **Line 265: What exactly is meant by “nice self-dual structure”?**
We simply mean the cones of nonnegative orthant, second-order cone, and the positive semidefinite cones are self-dual. As a result, both the primal and the dual problem are the same class of problems.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Thank you for taking the time to address my concerns. I will raise my score from 3 -> 5. | Summary: The paper presents the convergence rate of the primal iterates of the augmented Lagrangian Methods (ALMs) which are widely employed in solving constrained optimizations. The authors develop new quadratic growth and error bound properties for primal and dual conic programs under the standard strict complementarity condition and then reveal that both primal and dual iterates of the ALMs converge linearly contingent upon the assumption of strict complementarity and a bounded solution set. The specific contributions include:
1. Under the standard strict complementarity assumption, the quadratic growth and error bound for both primal and dual conic programs (P) and (D) over any compact set containing an optimal solution (see Theorems 1 and 2) are established.
2. A new characterization of the preimage of the subdifferential of the exact penalty functions is unveiled.
3. They provide new and simple proof for the growth properties in the exact penalty functions and clarify some subtle differences in constructing exact penalty functions.
4. They show the linear convergence for both the primal and dual iterates of ALM to solve either the primal or dual conic programs.
Strengths: Originality: The paper introduces novel quadratic growth and error bound properties for primal and dual conic programs, addressing long-standing challenges in the field. The development of symmetric inexact ALMs for primal and dual problems is a creative extension of existing methods.
Quality: The paper is of high quality, featuring a robust theoretical framework and rigorous mathematical proofs.
Clarity: The paper is clearly written and well-organized.
Significance: By addressing open questions and providing new theoretical insights, the work could influence future research.
Weaknesses: 1. The authors claim in Remark 1 that the results in Theorems 1 and 2 are more general and unified compared with known results since they allow for any compact set $\mathcal{U}$. However, let $\mathcal{U}=\mathcal{U}_1\cap \mathcal{U}_2$ where $\mathcal{U}_1$ and $\mathcal{U}_2$ are compact, and there is not any optimal solution in $\mathcal{U}_1$. If it is known that the error bound is constrained with a pair $\gamma, \kappa$ for $x\in\mathcal{U}_2$. For $x\in\mathcal{U}_1$, it is natural to get a $\kappa$ through the minimizatin of the fraction $$\frac{f(x)-p^*+\gamma\|\mathcal{A}(x)-b\|}{dist^p(x,\Omega)}$$ since $\mathcal{U}_1$ is compact and $dist^p(x,\Omega)>0$. We can just take the minimizer of two $\kappa$(s). The advantages of this error bound form in this paper is doubtable.
2. Aftrer Definition 2, the authors claim that the notion of strict complementarity is not restrictive, and it holds generically for conic programs. After that, the authors also presented: that it has been revealed that many structured SDPs from practical applications also satisfy strict complementarity. There are contradictions and misunderstandings in these two assertions.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Lines 184-185: If the linear constraints $\mathcal{A}(x)=b$ is empty, how to define the exact penalty function?
2. Line 226: Why is there $\mathbb{F}\times \mathbb{E}$ instead of $\mathbb{F}$?
3. Please consider citing the paper "Rockafellar, R.T. Convergence of Augmented Lagrangian Methods in Extensions Beyond Nonlinear Programming. Math. Program. 199, 375–420 (2023). https://doi.org/10.1007/s10107-022-01832-5," and compare it with your work.
Typos:
1. Line 238: Should (8b) be replaced by (8a)?
2. The notation $\mathbb{R}^n$ should be changed to $\mathbb{R}_{+}^n$.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's time and effort in evaluating our manuscript. Your comments helped further improve the quality of our work. We provide detailed responses below.
> **The authors claim in Remark 1 that the results in Theorems 1 and 2 are more general and unified compared with known results since they allow for any compact set $\mathcal{U}$**
Thanks for this insightful comment. However, the error bound is usually more informative when a point is close to the solution set, as it characterizes the relationship between the distance to the complex solution set and the computable residuals. Ideally, a useful error bound should always be defined on a region containing an optimal solution so that when the computable residuals go to zero, we can deduce the distance to the solution is also zero. Our result for any compact set $\mathcal{U}$ containing an optimal solution is particularly useful in estimating the iteration number for local convergence. For example, suppose we have an algorithm that has a sublinear convergence rate of $\mathcal{O}(1/\epsilon)$ for any convex functions and linear convergence $\mathcal{O}(\log(1/\epsilon))$ for convex functions satisfying quadratic growth. Given the optimization problem, $\min_x f(x)$ with $f$ being convex and satisfying quadratic growth in the sublevel set $f_\nu = \\{ x \mid f(x) \leq \min_x f(x) + \nu \\} $ with $\nu > 0$. In this case, one can infer the number of iterations that guarantee the iterate to reach the sublevel set $f_\nu$ using the $\mathcal{O}(1/\epsilon)$ convergence rate and claim the local linear convergence after iterates are within $f_\nu$.
> **Aftrer Definition 2, the authors claim that the notion of strict complementarity is not restrictive, and it holds generically for conic programs. After that, the authors also presented: that it has been revealed that many structured SDPs from practical applications also satisfy strict complementarity. There are contradictions and misunderstandings in these two assertions.**
Thanks for pointing out our contradicting statements. We will rewrite the paragraph to avoid misunderstandings in our updated version. For the statement regarding the generic property of conic programs, we mean that strict complementarity holds for almost all (in the sense of the Lebesgue measure) linear conic problems. For the statement regarding the structured SDPs, we mean that strict complementarity has been shown to hold for almost all SDPs with a specific structure, such as SDP relaxation of Max-cut and Matrix completion.
> **Lines 184-185: If the linear constraints $\mathcal{A}(x) = b$ is empty, how to define the exact penalty function?**
Thanks for this nice question. If the linear constraint $\mathcal{A}(x) = b$ is empty, then any penalty parameter will work for the penalty function for the conic constraint as the problem is infeasible.
> **Line 226: Why is there $\mathbb{F} \times \mathbb{E}$ instead of \mathbb{F}?**
Here, we follow our definition of the optimal solution set (1b). We write the dual solution set into the two spaces, so, in Line 226, $\mathbb{F}$ denotes the space where $y$ lives, and $\mathbb{E}$ denotes the space where $c-\mathcal{A}^*(y)$ lives.
> **Please consider citing the paper "Rockafellar, R.T. Convergence of Augmented Lagrangian Methods in Extensions Beyond Nonlinear Programming. Math. Program. 199, 375–420 (2023). https://doi.org/10.1007/s10107-022-01832-5," and compare it with your work.**
This paper discusses a more general setting of ALM, including nonconvex problems and convergence to a local minimum. However, as their setting is more general, the linear convergence of the primal variables also requires a strong assumption about the uniqueness of the primal and dual solutions.
> **Typo: Line 238: Should (8b) be replaced by (8a)?**
Thank you. This is a typo. It should be replaced by (8a).
> **Typo: The notation $\mathbb{R}^n$ should be changed to $\mathbb{R}^n_+$.**
We will double-check our paper and fix the typo. Thanks for your careful reading of our work.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors' responses, I have no further questions and will keep my score. However, I agree with the other reviewers that the paper's presentation could be improved. | Summary: This paper develops Inexact Augmented Lagrangian methods (ALM) for conic optimization problems with focus on Linear Programming (LP) or the non-negative orthant cone, Second-Order Cone (SOCP) problems and Semidefinite programming (SDP) given a linear objective. Under assumptions of strict complementarity and strong duality, the authors show local linear convergence for these problems by first showing quadratic growth property within a compact subset around the optimal solution for both the primal and dual programs and then leveraging the well-established concept that running ALM for the primal is equivalent to running Proximal Point Method (PPM) for the dual. The experiments show linear convergence for the SDP relaxation for the MaxCut problem, the SDP relaxation for matrix completion, and the moment relaxation of binary quadratic program (BQP) given the following metrics: primal distance, dual cost gap and the KKT-based residuals.
Strengths: The biggest strength of the paper is the theoretical contribution to understanding of local linear convergence of inexact ALM methods for a variety of conic problems. Previous works have only shown local linear convergence of dual iterates but this work shows local linear convergence for both primal and dual iterates. The authors provide a thorough theoretical discussion and provide all the proofs in the appendix section.
Weaknesses: One weakness of the paper is the lack of characterization of how inexact evaluation of the proximal operator affects convergence. The authors use the property of linear convergence for Proximal Point Method (PPM) given inexact evaluation of proximal operator and quadratic growth assumption to show similar linear convergence for Inexact ALMs but there are no guarantees based on chosen error tolerance schedule (i.e. choice of $\epsilon_k$ and $\delta_k$ for all k). Another weakness of the paper is the lack of experimental evaluation of conic problems other than SDPs. The work shows theoretical guarantees of local linear convergence under strict complementarity and strong duality for three different types of conic problems: LPs, SOCPs and SDPs but experimental evaluation of LPs and SOCPs are lacking. It maybe because SDPs are a more general class than LPs or SOCPs but it would be good to have a variety of experiments that cover both convex/non-convex constraints as well as different LPs and SOCPs in addition to SDPs. There is also a lack of experimental evaluation to claim that there is local linear convergence given inexact evaluations of the subproblem in Equation (15a).
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why were LPs and SOCPs not evaluated as part of the experimental evaluation but discussed in the theoretical sections and appendix?
2. Have you considered showing the local linear convergence given different random starting points for Inexact ALM?
3. Did you use inexact evaluation of the subproblem in Equation (15a)?
4. Have you considered showing the local linear convergence given different error tolerance schedules for $\epsilon_k$ and $\delta_k$ for all k?
5. Have you considered choosing the $\mathcal{A}$ operator to hold non-convex constraints in the experimental evaluation? The MaxCut SDP and Matrix Completion SDP both have linear constraints and the BQM has a convex constraint of $x_i^2 = 1$ but your guarantees only only assume that $\mathcal{A}$ is surjective.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: There were no discussions of limitations in the work or adequate discussion of future directions of work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's time and effort in evaluating our manuscript. Your comments helped further improve the quality of our work. We provide detailed responses below
> **Lack of characterization of how inexact evaluation of the proximal operator affects convergence.**
Thanks for the valuable comment. It will be interesting to see how the inexactness ($\epsilon_k$ and $\delta_k$) propagates to the guarantee on the primal cost value gap, affine feasibility, and conic feasibility, similar to the work [1]. Our analysis considers the ALM as the dual side of the Proximal Point Method (PPM). Despite the lack of an analysis of the error propagation, the analysis based on PPM is simpler and more intuitive. We will incorporate the discussion on this part into the updated version of our paper.
[1] Xu, Yangyang. "Iteration complexity of inexact augmented Lagrangian methods for constrained convex programming." Mathematical Programming 185 (2021): 199-244.
>**Lack of experimental evaluation of conic problems other than SDPs.**
Motivated by this comment, we have added more numerical experiments other than SDPs in the attached PDF file under the section of Auther rebuttal, l. The results also support our theoretical contributions.
In particular, we consider two classical ML applications: linear SVM and Lasso. Both the linear SVM and Lasso can be reformulated as standard QP or SOCP. For each application, we randomly generate three instances and run ALM for those three instances with various augmented penalty parameters $\rho>0$. The numerical results show that the three quantities (primal cost value gap, dual cost value gap, and the KKT residuals) all converge to zero with a linear rate. The convergence speed of the primal feasibility and conic constraints also depends on the value of the penalty parameter $\rho$. The larger the penalty parameter, the faster the convergence of the feasibilities. We believe the oscillated or flattening behavior that happened in the tail (when the iterates are close to the solution set) is due to the inaccuracy of the subproblem solver and computational errors.
[43] Ding and udell. On the simplicity and conditioning of low rank semidefinite programs. SIAM Journal on Optimization, 31(4):2614–2637, 2021.
>**Lack of experimental evaluation to claim that there is local linear convergence given inexact evaluations of the subproblem**
The linear convergence does happen under the inexactness (see conditions A' and B’ in the paper). This has been empirically verified in SDPNAL+ [20]. Instead of directly dealing with the non-implementable conditions A’ and B’ (as A’ and B’ require the knowledge of the true cost value), they propose other alternative and implementable conditions and incorporate them into their subproblem solver.
As our main focus is to establish the theoretical convergence guarantee for the primal iterates, we use the package YALMIP to formulate the subproblem and call MOSEK to solve the subproblem. It IS difficult to control the subproblem solution quality using a third-party package (MOSEK), as the subproblem is also solved to $\epsilon$-accurately by the conic solver. This is also revealed in our additional numerical experiments in the attached PDF. Specifically, the numerical experiments show some flattening behaviours in the tail when the iterates are close to the optimal solution set. We believe this is due to the computational error and the inexact subproblem solution.
[20] Liuqin Yang, et al. Sdpnal+: a majorized semismooth Newton-CG augmented Lagrangian method for semidefinite programming with nonnegative constraints. Math. Program. Computation, 331–366, 2015.
>**Have you considered showing the local linear convergence given different random starting points for Inexact ALM?**
Throughout our numerical experiments, different random starting points all lead to the same convergence behavior as guaranteed by our theoretical result Theorem 3 which works for any initial points.
>**Did you use inexact evaluation of the subproblem in Eq. (15a)?**
We use the package YALMIP to formulate the subproblem and call the conic solver MOSEK to solve the subproblem. Therefore, the subproblem is solved to $\epsilon$-accuracy by the conic solver. Note that we can not guarantee the $\epsilon$ goes to zero as the ALM proceeds.
>**Have you considered showing the local linear convergence given different error tolerance schedules for $\epsilon_k$ and $\delta_k$ for all $k$?**
We did not investigate this part, as our main focus is the theoretical guarantee. We believe this will be more important in the algorithm development for solving the subproblem.
>**Have you considered choosing the operator $\mathcal{A}$ to hold non-convex constraints in the experimental evaluation? The BQM has a convex constraint of $𝑥_𝑖^2=1$ but your guarantees only assume that $\mathcal{A}$ is surjective.**
We did not solve the nonconvex BQM directly. Instead, we apply the moment relaxation with degree two to turn the nonconvex BQM into an SDP relaxation [61], and then solve the resulting SDP.
[61] Jean B Lasserre. Global optimization with polynomials and the problem of moments. SIAM Journal on optimization, 11(3):796–817, 2001.
>**Limitations**
Thanks for pointing out this issue in our writing. Our paper first establishes the growth property and error bound for three commonly used self-dual cones. Building upon the established growth property and error bound, our paper established the convergence rate of the primal iterates. The current theoretical guarantees do not apply to other cones. It will be interesting to see how to use the current proof techniques to establish the growth property and error bound for other non-symmetric cones. In addition, as solving the subproblem is the main step in the ALM, it will be interesting to develop an efficient subproblem tailored to the problem structures, such as sparsity or manifold structure.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their detailed responses to my comments and questions. It is good to see the additional experimental results for linear SVM and Lasso, as this addresses the main concern about inadequate experimental evaluation, particularly with LPs and SOCPs. As far as showing how the error tolerance schedule $\epsilon_k$ and $\delta_k$ propagates to the guarantees, it is still a missing part of the analysis and useful to consider since achieving low error tolerance is difficult and/or costly in some applications. It is good that you will consider adding that analysis. I understand it is difficult to control the subproblem quality using a third-party solver like MOSEK but there are ways to set them. See this as an example for MOSEK: https://docs.mosek.com/latest/cxxfusion/parameters.html. Still, the additional experimental results merits an increase in my rating from 5 to 6.
---
Rebuttal 2:
Title: Subproblem solution quality in Mosek
Comment: The authors would like to thank the reviewer for positive feedback and for directing us to the MOSEK documentation. We appreciate the reviewer's understanding of the challenges involved in conducting numerical experiments with varying schedules for $\epsilon_k$ and $\delta_k$.
- Following the reviewer's suggestion, we conducted new numerical simulations to assess the impact of subproblem accuracy by varying the accuracy settings of the MOSEK solver.
- We ran the inexact ALM on one instance of SVM and one instance of LASSO, adjusting the MOSEK accuracy settings (primal feasibility, dual feasibility, and relative complementarity gap) from $10^{-2}$ to $10^{-8}$.
- Since we cannot include figures in this response, we present the numerical results in the following tables. The results indicate that higher accuracy in the subproblem solver leads to faster convergence of the residuals. For instance, with an accuracy setting of $10^{-2}$, the inexact ALM fails to converge to a high-accuracy solution after 20 iterations. With an accuracy setting of $10^{-4}$, the residuals converge to the order of $10^{-3}$. At $10^{-6}$ for the subproblem accuracy, the residuals decrease to the order of $10^{-6}$, and at $10^{-8}$, they decrease to around 4.62$\mathrm{e}$-08 after 20 iterations, showing linear convergence as expected by our theoretical analysis.
- A similar effect of subproblem accuracy was observed in the LASSO experiments.
Some detailed numerical results are shown below. We will add the corresponding figures to the final version of our work. We believe these extra numerical experiments further validate our theoretical analysis.
> # **SVM - Primal cost value gap**
| **Accuracy/Iteration** | **1** | **5** | **10** | **15** | **20** |
|------------------------|----------------|----------------|----------------|----------------|----------------|
| **1.00$\mathrm{e}$-02** | 1.78$\mathrm{e}$-01 | 2.48$\mathrm{e}$-01 | 2.48$\mathrm{e}$-01 | 2.48$\mathrm{e}$-01 | 2.48$\mathrm{e}$-01 |
| **1.00$\mathrm{e}$-04** | 4.29$\mathrm{e}$-01 | 1.58$\mathrm{e}$-02 | 6.07$\mathrm{e}$-03 | 5.98$\mathrm{e}$-03 | 5.97$\mathrm{e}$-03 |
| **1.00$\mathrm{e}$-06** | 4.36$\mathrm{e}$-01 | 1.08$\mathrm{e}$-02 | 1.01$\mathrm{e}$-04 | 2.81$\mathrm{e}$-06 | 1.48$\mathrm{e}$-06 |
| **1.00$\mathrm{e}$-08** | 4.37$\mathrm{e}$-01 | 1.07$\mathrm{e}$-02 | 2.70$\mathrm{e}$-04 | 7.29$\mathrm{e}$-06 | 4.62$\mathrm{e}$-08 |
> # **Lasso - Primal cost value gap**
| **Accuracy/Iteration** | **1** | **5** | **10** | **15** | **20** |
|------------------------|----------------|----------------|----------------|----------------|----------------|
| **1.00$\mathrm{e}$-02** | 6.65$\mathrm{e}$+00 | 2.46$\mathrm{e}$-02 | 6.36$\mathrm{e}$-03 | 1.26$\mathrm{e}$-01 | 1.24$\mathrm{e}$-01 |
| **1.00$\mathrm{e}$-04** | 6.78$\mathrm{e}$+00 | 1.13$\mathrm{e}$-03 | 1.27$\mathrm{e}$-03 | 3.37$\mathrm{e}$-03 | 2.14$\mathrm{e}$-03 |
| **1.00$\mathrm{e}$-06** | 6.78$\mathrm{e}$+00 | 3.56$\mathrm{e}$-03 | 5.45$\mathrm{e}$-07 | 4.97$\mathrm{e}$-05 | 4.90$\mathrm{e}$-05 |
| **1.00$\mathrm{e}$-08** | 6.78$\mathrm{e}$+00 | 3.50$\mathrm{e}$-03 | 4.97$\mathrm{e}$-05 | 6.89$\mathrm{e}$-07 | 1.06$\mathrm{e}$-08 | | Summary: The paper addresses the convergence of primal and dual iterates in Augmented Lagrangian Methods (ALMs) for conic optimization, particularly under quadratic growth assumptions. The authors establish that both primal and dual iterates of ALMs demonstrate linear convergence solely based on the strict complementarity assumption and a bounded solution set. This addresses a significant gap in the literature concerning the linear convergence of primal iterates, providing a theoretical foundation that has been previously unresolved.
Strengths: Originality: The paper attempts to establish the conditions under which both primal and dual iterates of Augmented Lagrangian Methods (ALMs) for conic optimization exhibit linear convergence.
Clarity: The paper is well-structured and written with a clear narrative that guides the reader through the complex theoretical developments. Definitions, theorems, and proofs are delineated, making the complex content accessible to readers unfamiliar with the subject area.
Significance: The findings have potential implications for both the theoretical and practical aspects of optimization.
Weaknesses: Presentation of Key Contributions: One significant issue with the paper is the placement and treatment of one of its key contributions, Theorem 3, which appears towards the very end of the document. This theorem, which is central to the paper's claims and theoretical framework, is not proven within the main text, and the appendix provided does not sufficiently validate or elaborate on it. This placement and lack of rigorous proof within the main discourse may detract from the impact of the contribution, as it might be overlooked or undervalued by readers. To improve, the authors could consider integrating Theorem 3 more prominently within the main discussion and ensuring that a complete and accessible proof is included either in the main text or more comprehensively in the appendix.
Numerical Validation: Another area where the paper falls short is in its numerical section. The section dedicated to numerical validation of the theoretical results is very brief and lacks depth/sufficient validation through several diverse problems. A more extensive set of numerical experiments is crucial to demonstrate the practical effectiveness and robustness of the proposed methods under varied conditions. The current numerical validation does not sufficiently cover the scope of the potential applications discussed in the paper. Readers are referred to Appendix H, where additional testing is still missing.
Technical Quality: 3
Clarity: 3
Questions for Authors: Theorem 3 Proof: Can the authors provide a detailed proof of Theorem 3 within the main text or an enhanced appendix to help readers understand its foundational role in the paper's theoretical framework?
Numerical Experiments: Could the authors expand on the range and depth of the numerical experiments presented? Specifically, how do the proposed methods perform under varying conditions and parameter settings?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper does not explicitly discuss the limitations or potential negative societal impacts of the research, which is a critical component of scholarly communication.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The authors are very thankful to the reviewer for taking the time and effort to review our manuscript. We sincerely appreciate all your valuable comments. All your comments are carefully addressed below.
We have provided [some general responses](https://openreview.net/forum?id=Sj8G020ADl¬eId=WyA13MOOMr) in the Author Rebuttal by Authors above to two of your major concerns: 1) Presentation of Key Contributions; 2) Numerical Validation. In the following, we provide more specific responses to your comments on Theorem 3 Proof and Numerical Experiments.
> **Theorem 3 Proof: Can the authors provide a detailed proof of Theorem 3 within the main text or an enhanced appendix to help readers understand its foundational role in the paper's theoretical framework?**
Thanks for your nice suggestion. We will revise the introduction and incorporate further discussions about Theorem 3. Currently, Theorem 3 has two parts (“Dual iterates and KKT residuals” and “Primal iterates”). The first part “Dual iterates and KKT residuals” can be considered a primal version of [27, Theorem 1]. Due to the page limit, we provide the proof for the first part in Appendix E.2. The second part “Primal iterates” is one of our main contributions, and the proof relies on the error bound property for the primal conic problem established in Theorem 1 - (7b). In the updated version, we will highlight the key steps of the proof in the main text and provide a comprehensive and self-contained proof in the appendix.
Moreover, we would like to point out that we have two main contributions to this work
- **Problem Structures:** We establish new quadratic growth and error bound properties for primal and dual conic programs (Theorem 1 and Theorem 2).
- **Algorithm Analysis:** Utilizing our error bound properties, we prove that both primal and dual iterates of the ALMs converge linearly under mild assumptions (Theorem 3).
Thus, Theorem 3 represents only one of the main theoretical contributions. The quadratic growth properties in Theorems 1 and 2 are equally important, and their proofs may be of independent interest (as noted in Remark 1 of the manuscript). The lack of presentation of Theorem 3 in the main content may be due to the page limit. To enhance the readability of the paper and better present Theorem 3, we will follow the reviewers' suggestions and focus on the general problem class—SDP—in the main manuscript. This will not only improve clarity but also give us more space to ensure that Theorem 3 is well-integrated into the main discussion.
[27] Ying Cui, Defeng Sun, and Kim-Chuan Toh. On the R-superlinear convergence of the KKT residuals generated by the augmented Lagrangian method for convex composite conic programming. Mathematical Programming, 178:381–415, 2019.
> **Numerical Experiments: Could the authors expand on the range and depth of the numerical experiments presented? Specifically, how do the proposed methods perform under varying conditions and parameter settings?**
Thank you for this suggestion. We have added more numerical experiments in the attached PDF file under the section of Auther rebuttal, and the results also support our theoretical contributions.
In particular, we consider two classical machine learning applications: linear SVM and Lasso. Both the linear SVM and Lasso can be reformulated as standard conic programs with free, nonnegative, and second-order cone variables. For each application, we randomly generate three instances and run ALM for those three instances with various augmented penalty parameters $\rho>0$. The numerical results show that the three quantities (primal cost value gap, dual cost value gap, and the KKT residuals) all converge to zero with a linear rate. The convergence speed of the primal feasibility and conic constraints also depends on the value of the penalty parameter $\rho$. The larger the penalty parameter $\rho$, the faster the convergence of the residuals. The oscillated or flattening behavior that happens in the tail (when the iterates are close to the solution set) is due to the inaccuracy of the subproblem solver and computational errors.
One key difference between the experiments on semidefinite programs (SDP) and the added experiments is the report of primal solution quality. In the considered SDP applications, “the SDP relaxation of maximum cut (Max-Cut) problem” and “the SDP relaxation of matrix completion” have been shown to have a unique primal solution with high probability [43]. Therefore, we can report the primal distance for SDP applications. On the contrary, the uniqueness of the primal solution for linear SVM and Lasso is not guaranteed, so we report the primal cost value gap instead.
[43] Lijun Ding and Madeleine Udell. On the simplicity and conditioning of low rank semidefinite programs. SIAM Journal on Optimization, 31(4):2614–2637, 2021.
> **Limitations: The paper does not explicitly discuss the limitations or potential negative societal impacts of the research, which is a critical component of scholarly communication.**
Thanks for pointing out this issue in our writing. Our paper focuses on theoretical analysis of the structures of conic programs and linear convergence of ALM. We do not expect negative societal impacts in these aspects.
Regarding limitations, our paper first establishes the growth property and error bound for three commonly used self-dual cones. Building upon the established growth property and error bound, our paper established the convergence rate of the primal iterates. The current theoretical guarantees do not apply to other cones. It will be interesting to see how to use the current proof techniques to establish the growth property and error bound for other non-symmetric cones. In addition, as solving the subproblem is the main step in the ALM, it will be interesting to develop an efficient subproblem tailored to the problem structures, such as sparsity or manifold structure.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer K87V,
Are you satisfied with the rebuttal? Did the authors adequately address your concerns?
Best,
AC | Rebuttal 1:
Rebuttal: The authors would like to thank the four reviewers for their valuable time reading our manuscript and providing helpful comments. While the scores from the reviewers are mixed (3, 5, 8, 3), we find that their comments are generally positive and some are very constructive. In particular, all four reviewers appreciated the soundness of our work (all rated as good), and three reviewers (K87V, Vd1D and X2mJ) rated the contributions of our work as good or excellent.
Two major concerns raised by reviewers (K87V, Vd1D and rMRz) are
- **Presentation of one main theoretical result** (the linear convergence of inexact ALM in Theorem 3). Two reviewers (K87V and rMRz) pointed out that one key contribution in Theorem 3 appears very late in the paper (Page 7). Reviewer K87V commented that the proof for Theorem 3 in the main text is not easy to follow. Reviewer rMRz also suggested that we could use a particular problem class (LP, SOCP, or SDP) to improve paper clarity.
- **Lack of sufficient numerical validations**. Reviewers K87V, Vd1D and rMRz requested more in-depth numerical validations using problems from a diverse range of applications. Reviewer Vd1D specifically suggested that experimental evaluations of LPs and SOCPs would be beneficial in validating the theoretical results.
In addition, Reviewer X2mJ provided many detailed comments which are useful to clarify our contributions further (the comment on Remark 1 and the suggestion of Rockafellar’s recent paper are very appreciated). All the four reviewers’s comments helped us further improve the quality of our work.
Here, we would like to respond to the two major concerns mentioned above (all other comments from the four reviewers have been carefully addressed below).
**Regarding the paper presentation**, we would like to emphasize that the contributions of this work have two key aspects:
1. *Problem Structures*: We establish new quadratic growth and error bound properties for primal and dual conic programs (**Theorem 1** and **Theorem 2**).
2. *Algorithm Analysis*: Utilizing our error bound properties, we prove that both primal and dual iterates of the inexact ALMs converge linearly under mild assumptions (**Theorem 3**).
Thus, Theorem 3 represents only one of the main theoretical contributions. The quadratic growth properties in Theorems 1 and 2 are equally important, and their proofs may be of independent interest (as noted in Remark 1 of the manuscript). In the paper presentation, we have attempted to balance these contributions. However, we agree with the reviewers that it would be beneficial to integrate Theorem 3 more prominently within the main discussion and provide a more complete and accessible proof in the appendix. In the updated version, we will follow the reviewers' suggestions and focus on the general problem class — SDP — in the main manuscript. This change will not only improve clarity but also give us more space to ensure that Theorem 3 is well-integrated into the main discussion. We will revise the proof of Theorem 3 to make it more accessible in the appendix.
**Regarding the numerical validations**: as the reviewers correctly pointed out, the major contributions of this work are theoretical. Due to the page limit, we have provided experiments on three problem classes (Max-Cut, matrix completion, binary quadratic program) to validate the practical convergence performance of ALM. We agree with the reviewers that more in-depth numerical validations will further demonstrate our theoretical contributions. To address this, we have conducted new experiments on two classical machine learning applications—linear SVM and Lasso. The results also supported our theoretical findings, and they are summarized in the attached PDF. In the final version, we will incorporate these new numerical results in the appendix.
We hope that the changes and responses will be sufficient to address the reviewers’ concerns. Please let us know if you have any additional questions or require further clarification. Any feedback will be highly appreciated.
Pdf: /pdf/e995ba559bb30fe56e408fd9754db65dbb6e06ba.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Provably Safe Neural Network Controllers via Differential Dynamic Logic | Accept (poster) | Summary: This work addresses the challenge of verifying the safety of NNCSs for CPS, especially for infinite time horizons. To tackle this problem, the author(s) introduce VerSAILLE, a novel approach that leverages differential dynamic logic to derive specifications for NNs, which are then proven using NN verification tools.
This allows proving infinite-time safety of the NNCS via dL. To bridge the gap between nonlinear arithmetic constraints arising from hybrid systems and linear constraint support in NN verification tools, they also present Mosaic, an efficient, sound, and complete verification method for polynomial real arithmetic properties on piecewise linear NNs. Evaluation shows the effectiveness of VerSAILLE and Mosaic by proving infinite-time safety on benchmarks and enumerating counterexamples in unsafe scenarios.
Strengths: 1. The theoretical novelty of this work is sufficient.
1.1. The proposed VerSAILLE provides sound proof of infinite-time safety for the NNCSs.
This method reuses safety proofs from control-theory literature in the form of dL for the first time and supports a large class of feed-forward NNs.
1.2. The framework Mosaic enables the adaptation of existing open-loop NNV tools to handle polynomial queries with arbitrary logical structures while maintaining completeness to polynomial constraints.
2. The experiments are very thorough, especially the comparison and combination with different methods and tools, which proved the effectiveness of the proposed method.
3. This paper is well-written and the running example throughout the article makes it easier to follow.
Weaknesses: My main concern is the gap between the theoretical contributions and the implementation. As mentioned in lines 71-73, "the implementation (N3V) supports NNs with Relu" and "theoretical contribution (VerSAILLE) reaches far beyond this". What are the reasons for this gap and what are the difficulties in overcoming it?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How is the scalability of N3V? What is the structure of the largest neural network it can handle?
2. what is the time overhead of Mosaic i.e., lifting NNV tools for linear, normalized open-loop queries to polynomial queries of arbitrary logical structure?
Please answer the questions in "Weaknesses*" and "Questions*"
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: There are no concerns of negative broader impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad to see you found our experimental setup to be thorough and consider the paper easy to follow. We are also thankful for your feedback and address your questions and comments below:
## (A) Other architectures & constraints (Weaknesses)
Like many prior works [40,52,54] we focus our implementation on ReLU NNs.
This is also the architecture of the NNs by Julian et al. [51,53] -- demonstrating applicability to NNs from the literature.
Concerning extensibility:
- For piece-wise linear NNs we can extend our current implementation. Though an engineering challenge, there are no theoretical hurdles. Some piece-wise linear activation functions can be "compiled" to ReLUs as done by DNNV [84] for max-pool layers.
- For most non-polynomial input/output constraints, we lose completeness due to undecidability. However, Mosaic's framework may help in separating decidable piece-wise linear reasoning from undecidable nonlinear reasoning.
- For other activation functions (e.g. polynomials or sigmoid, tanh etc.), there exist no complete verifiers. We believe further research will be necessary before our theoretical foundations will manifest in scalable tooling for this setting. Nonetheless, VerSAILLE tells us how to verify such systems once suitable tools become available.
Concerning the latter point, Mosaic fails to generalize to this setting, because we can only simplify the nonlinear arithmetic analysis due to the piece-wise linear behavior of the NN. Without this assumption, the doubly exponential nature of nonlinear arithmetic SMT solving makes the problem very hard to solve without further innovation as we would have to rely on SMT solving (see also comparison in Table 8 Apx. E.2).
## (B) Scale of NNs (Q1)
The largest NNs evaluated in our case studies were the NNs by Julian et al. [51,54] which stem from a proposal for a real-world airborne collision avoidance system. The NNs had 6 layers with 45 neurons each, i.e. 270 nodes. This is significantly larger than prior literature on infinite-time safety (e.g. [23, Appendix] consideres single-layer linear NNs analyzed via dReal or [62] considered NNs with approx. 30 neurons; see also comparison to dReal in Appendix E.2). In some NNCS scenarios such as ACAS, medium-size NNs are chosen precisely because they can encode complex control policies while offering a compact representation.
Moreover, we want to emphasize that the verification performed by our approach is not comparable to local verification properties such as, e.g., local robustness: Mosaic has to verify properties w.r.t. the NN's *entire* input space. Naturally, in this context, verifiers scale differently than when "only" verifying epsilon-balls around individual datapoints. In particular, because the verifier must explore larger quantities of feasible ReLU-phase configurations. This difference in scale scale for global properties is also visible in related fields: For example, a very recent State-of-the-Art paper on verifying global robustness properties [R7] is evaluated on NNs with only 50 ReLU neurons.
## (C) Overhead of Mosaic (Q2)
Out of the box all evaluated NN verification tools provide a significantly weaker specification language (linear, normalized Open-Loop NNV queries). Consequently, we cannot provide a quantiative measurement for overhead since the properties cannot be verified by the previous tools. If we look at the runtime of the NN verification tool in comparison to the verifier's overall runtime (which then includes all nonlinear reasoning) the share greatly varies by benchmark depending on satisfiability status, query complexity and nonlinearity and can range from as low as 30% to as high as 70%. It is also important to mention that the overhead computation is essential to keeping the NN verification time low: For example, the time spent enumerating disjoint input regions (azulejos) avoids duplicate computations for the NN verification tool (see also our conceptual comparison to DNNV in E.2).
[R7] Athavale, Anagha, et al. "Verifying global two-safety properties in neural networks with confidence." CAV 2024
---
Rebuttal Comment 1.1:
Comment: Thank you for clarifications, I have no further questions. | Summary: This paper introduces VerSAILLE, a new method using dL contracts to ensure the safety of Neural Network Controlled Systems with piece-wise Noetherian Neural Networks. Mosaic, implemented in N^{3}V for ReLU NNs, efficiently verifies properties across various case studies, including complex applications like airborne collision avoidance. The approach demonstrates scalability and effectiveness compared to traditional closed-loop techniques, offering a promising path for developing safe and goal-oriented NNCSs in practical settings.
Strengths: The authors have written an good paper with promising contributions which VerSAILLE establishes a formal foundation enabling sound proof of infinite-time safety for NNCSs using dL models from control-theory literature, and Mosaic introduces an efficient, sound, and complete technique for verifying properties in polynomial real arithmetic on piece-wise linear NNs, enhancing existing open-loop NNV tools. Futhermore, Mosaic supports exhaustive characterization of unsafe state spaces, demonstrated effectively in real-world case studies including adaptive cruise control and airborne collision avoidance ACAS X.
Weaknesses: The structure of the paper is well-organized, with tight logic and thorough arguments, and is overall well-written. However, the introduction and related work sections seem unclear and lack readability. It is recommended to simplify the text or organize it into clearer paragraphs.
Technical Quality: 4
Clarity: 4
Questions for Authors: (1) The authors propose a VerSAILLE NNCS verification method, which leverages the efficiency of neural network verification tools while maintaining the rigor of differential dynamic logic. However, the method is only applied experimentally to networks with ReLU activation functions, and most related work focuses solely on this single type of network. Have other types of networks been considered, and has the method been tested for effectiveness across different types NNs?
(2) Additionally, they introduce an effective, sound, and complete method, Mosaic, for verifying polynomial real arithmetic properties on piece-wise linear neural networks, extending existing linear constraint tools to nonlinear settings while preserving completeness. How well does this method work for non-polynomial cases?
(3) Furthermore, in experimental evaluations, the authors demonstrate infinite-time safety in certain scenarios, such as the classical Vertical Airborne Collision Avoidance NNCS verification benchmark. This benchmark appears frequently in related work; are there additional benchmarks available to showcase, and are there more advanced works for a more comprehensive comparison and validation of the proposed approach?
Confidence: 2
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors have sufficiently addressed the limitations, explicitly stating the assumptions underlying the theoretical results, particularly in Section 3, and discussing the exponential worst-case runtime of Mosaic in Section 4.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for providing us with the valuable feedback -- we are happy to see you find our approach promising. We address your questions and comments below and are particularly thankful for your feedback on readability:
## (A) Introduction & Related Work (Weaknesses)
We address the readability concerns in common answers (A) and (B) and would be happy to hear your feedback on our proposed improvements.
## (B) Other architectures & constraints (Q1, Q2)
Like many prior works [40,52,54] we focus our implementation on ReLU NNs.
This is also the architecture of the NNs by Julian et al. [51,53] -- demonstrating applicability to NNs from the literature.
Concerning extensibility:
- For piece-wise linear NNs we can extend our current implementation. Though an engineering challenge, there are no theoretical hurdles. Some piece-wise linear activation functions can be "compiled" to ReLUs as done by DNNV [84] for max-pool layers.
- For most non-polynomial input/output constraints, we lose completeness due to undecidability. However, Mosaic's framework may help in separating decidable piece-wise linear reasoning from undecidable nonlinear reasoning.
- For other activation functions (e.g. sigmoid, tanh etc.), there exist no complete verifiers. We believe further research will be necessary before our theoretical foundations will manifest in scalable tooling for this setting. Nonetheless, VerSAILLE tells us how to verify such systems once suitable tools become available.
## (C) Evaluation (Q3)
We address this concern in common answer (C).
In particular, we propose to add a summary on our additional experiments to the paper's main section.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response and promise to clarify the points in the revision. I have no further questions. | Summary: This paper tackles the challenge of formal verification of neural-network based control systems. While scalability of the existing methods can still be improved, the authors provide an alternative approach via reusing safety proofs from control theory, open-loop neural-network verification, and differential dynamic logic. The authors are mainly concerned with collision avoidance as proxy for safety. Proposed N3V and Mosaic when compared to closed-loop verification tools showed improvement in terms of runtime.
Strengths: The work proposes a novel theoretical framework based on a nontrivial combination of existing theoretical results, previously unemployed by the state of the art. The authors also contribute a technique Mosaic based on a novel algorithmic approach to avoiding repeated checks of the same input regions and instead reusing verification results (azulejos).
The impact of the theoretical framework Versaille extend to the general community of NNCS verification since the core idea of reusing verification proof is generally valuable. The evaluation is extensive enough under the stated assumptions on the activation functions, architecture, input, and safety specification. It would be interesting to see if it extend to other properties and systems. It is a promising step forward to improving verification scalability.
The paper presents technical and theoretical contributions supported by rigorous proofs and extensive evaluation. The evaluation is quite detailed and the authors comprehensively analyze the results in comparison to the state-of-the-art tools.
Overall, the paper proposes a strong theoretical framework for NNCS verification and sound and complete algorithm for the open-loop NN verification. The paper is self-contained and of high quality.
The paper is excellently organized. The authors evidently invested into clear narrative, logical connections, and guidance for the reader.
Weaknesses: One recommendation could be to shortly summarize the take-away from the extended evaluation in the main bod of the paper, so that all evaluation questions are answered without necessarily consulting the appendix. For instance, the Zeppelin insights are quite interesting and probably deserve to to be mentioned in the main paper.
Minor:
- Although "control envelopes" may be a well-understood term in control theory, it would be important to briefly introduce it on the concept level the first time it occurs in the introduction (same applies to differential dynamic logic).
- Illustrative examples and descriptive color names are highly appreciated.
- aircraft
- non-Portuguese speakers would not know how to read "azulejos", would be good to transcribe.
- some abbreviations are not introduced, e.g., DNC (although there original work is cited, it would make the paper self contained to include the definitions).
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Mosaic provides an exponential improvement in worst-case runtime. The authors mention that their approach performs particularly well for neural-network control systems with "small" input dimensions. Do the authors have any insights on how small these have to be to perform well, how well is meant here?
2. It seems safety is analyzed here as "collision avoidance" (as it is the case for all considered benchmarks). Do the results generalize to other notions of safety, e.g., stability (can be expressed via reachability as well)?
3. For "unsafe" level flight scenarios, the author mention using manual approximation. To what extend did that contribute to the resulting times in Table 3?
4. Why was the tool compared to ARCH Comp 2022 and not the 2023 results?
5. What is the overhead in preparing system and specifications for N3V and Mosaic? How easily extendable is the implementation?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The addressed problem setting is guarded by multiple assumptions, however, common for the NNCS verification domain and evaluated extensively. The check list says the software is open source, however, the authors promise to make it open source, rather than making it open-source anonymously at the time of submission.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for providing us with the valuable feedback -- we are happy to see that you like our proposed approach and found the contents to be well organized.
We address your questions and comments below:
## (A) Evaluation (Weaknesses)
Thank you for the suggestion, we made a proposal for a suitable text in common response (C)
## (B) Minor Comments (Weaknesses)
Thank you for the detailed comments which we will integrate in the final version of the paper.
Concerning your first point, we hope that our updated version of the paragraph from lines 44-53 will help clarify this (see common response (A) ).
## (C) Complexity of Mosaic (Q1)
You correctly observed that Mosaic provides an exponential improvement for runtime in comparison to native SMT encoding (p. 7). This is underscored by experimental results (see Table 8 in Apx. E.2) and is what we mean by "well": The problems we solve are beyond the reach of pure SMT.
Since our worst-case complexity is still doubly exponential in the input dimension (but no longer in the NN size), this raises the question how small is "small enough". To this end, there are two answers: To apply the full approach including VerSAILLE, we require an input space which is well-enough understood to be modeled in dL. To apply Mosaic stand-alone, scalability depends on the number of variables and the degree of polynomials. For our case studies, the input spaces had between 2 (ACC) and 4 (Zeppelin/ACAS) dimensions. The output space resp. had dimension 1 (ACC), 2 (Zeppelin) and 9 (ACAS). Due to the unpredictability of SMT solver performance for real-arithmetic we find it difficult to provide reliable guidance in this respect.
## (D) Supported Properties (Q2)
The VerSAILLE approach is agnostic to the particular safety property considered:
We support any property expressible with the box modality ("for all executions ...") in differential dynamic logic.
Collision freedom is often the most important property in NNCS applications, which is why the examples focus on that.
Some mistakes in ACAS NN verification in related work stem from settling for partial properties such as
"If the intruder is distant and is significantly slower than the ownship, [Clear-of-Conflict is among the top choices]" [56].
While that is a significantly simpler sanity check it does not ensure collision freedom (see also [10]).
## (E) Manual Approximation (Q3)
We assume you refer to the following sentence:
"For unsafe level flight scenarios, we exhaustively characterize unsafe regions going beyond (non-exhaustive) point-wise characterizations of unsafe advisories via manual approximation [54]."
We did not perform manual approximation in our case study and the times in Table 3 are therefore not impacted by this. Instead, this refers to a characterization of unsafe points (not regions) in a prior work [54] which was based on manual approximation.
To clarify this, we propose to update the sentence as follows:
"For unsafe level flight scenarios, we exhaustively characterize unsafe regions. This characterization goes far beyond characterizations in prior work [54] which were generated using manual approximation and resulted in (non-exhaustive) point-wise characterizations."
## (F) ARCH Comparison (Q4)
During the initial preparation of the comparison the report on ARCH Comp 2023 was not yet released. Yet the participants of ARCH 2023 are a subset of the participants of ARCH Comp 2022 so we already compare with all participants of ARCH Comp 2023. We will clarify this in the paper's final version.
## (G) Extensibility of Implementation (Q5)
Our implementation is modular to support exchanging of different components (e.g. for the use of different NN/SMT solvers). During our implementation, we have already gained some experience with supporting different SMT solvers (though we ultimately decided to stay with Z3 for now) and would be interested to support other tool combinations in the future. We would also be happy to collaborate with interested parties on implementation extensions.
## (H) Overhead Query Generation (Q5)
Given a dL model, the generation of the verification property amounts to combining the loop invariant and ModelPlex condition into
one verification query via an implication. NCubeV supports the syntax produced by ModelPlex (which in turn is the same used for loop invariants).
Thus, the process is very straight forward.
## (I) Open-Source Tool (Limitations)
We provide our implementation with a GPL License in the supplementary materials.
For the final version, we will link to a GitHub repo containing our implementation with a GPL License. To preserve anonymity we omitted this URL in the draft under review and anonymized the code in the supplementary materials.
---
Rebuttal Comment 1.1:
Comment: Thank you for clarifications, I don't have further questions. (The proposed paragraphs in the general response contain some tautology.) | Summary: Authors introduce VerSAILLE (Verifiably Safe AI via Logically Linked Envelopes), a method of verifying neural network based control systems using a provably safe control envelope in dL. VerSAILLE relied on nondeterministic mirrors, which allows authors to reflect a given neural network and reason within and outside dL simultaneously, ultimately having outside dL specifications imply the safety of the mirrored dL model. Authors also introduce Mosaic to leverage off-the-shelf open-loop neural network verifiers on polynomial queries, in addition to their linear ones. Mosaic is both sound and complete, making it a strong fit of critical applications.
Strengths: The paper is comprehensive in its approach and the authors propose a novel method that seems relatively stand-alone. The paper is organized well and the logic of the method flows well.
Weaknesses: Generally, the paper is difficult to read and understand. Readability would be improved from strong definitions and standardized notation. Paragraphs tend to be very long (notably Related Work), it would be helpful to have more stand alone definitions and methodology.
Page 2, lines 46-53: confusingly written, jumping from point to point without definitions for the terms that are cited. Not sure what the “subsymbolic reasoning of an NN” or “infinite-time control-theory reasoning” are.
Page 2, line 50: “an NN” → “a NN”
Evaluation is conducted only on vertical airborne collision avoidance, and the experiments and parameters are not well defined. On page 8, Table 3, “CE” is not defined.
The use of DPLL(T) is not clearly described in the paper.
POST REBUTTAL:
The authors have cleared up their use of DPLL(T) in their response, which now seems to be an interesting contribution of the work.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Page 2, line 56: Is it trivial to assume “abstract, nondeterministic control envelope has already been verified in dL”?
- What are the complexity limitations of Mosaic?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Not properly discussed. It would be helpful to understand trade-offs between their methods and other ones.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for providing us with valuable feedback on readability concerns with the paper's draft. We were pleased to hear, that you nonetheless found our approach novel and comprehensive.
## (A) Strong Definitions
> Readability would be improved from strong definitions and standardized notation.
Concerning Introduction and Related Work, see (A) and (B) in our common response.
We are a little at a loss what you mean by "strong definitions". While we require some new definitions due to the novelty of our work, we do not see this as a deficiency, but as progress and have put significant effort into presenting our approach in a novel unified mathematical framework. Consequently, we provide formal definitions and proofs for all results throughout the paper and appendix while illustrating the concepts via our running example. We strive for standardized notation and appreciate any concrete feedback on aspects that remained unclear. To this end, are there any particular definitions/notations that you would like us to explain and improve upon?
## (B) Experimental Section
"CE" stands for counterexamples (will be clarified for Camera-Ready). Similarly we will add explanations for "DNC"="Do Not Climb" etc.
We will also make use of the additional page for the final version to move appendix details on the experiments into the paper's main section; see common answer (C). The Appendix reports on two additional case studies (adaptive cruise control, zeppelin steering). Additionally, we provide comparisons to other approaches from the literature (e.g. closed-loop techniques and verification via SMT solving). Concerning the well-definedness of experiments and parameters, we would be happy to clarify any open questions. Are there any specific points that you would like clarification on? For the section's final sentence we propose an improved phrasing in our response to Reviewer H7zS; see answer (E) there.
## (C) Usage of DPLL(T) within Mosaic
Our original description left all technical details to Apx. B.2.
We propose to amend llns. 236-251 as follows:
On the technical side, Mosaic proceeds by executing a regular DPLL(T) loop until a satisfiable conjunction of input constraints is found. At this stage, we fix the conjunction's linear constraints (i.e. the azulejo) and an inner loop enumerates conjunctions over mixed/output constraints that are satisfiable in combination with the fixed azulejo. For each such conjunction we save the conjunction of linear mixed/output constraints. This results in a linear, normalized Open-Loop NNV query (conjunction over input, disjunctive normal from over output). We employ a similar inner loop to enumerate satisfiable conjunctions of nonlinear constraints to later check counterexamples via SMT solving (see Retaining completeness). At each step, we interleave propositional and theory solving to discard conjunctions unsatisfiable in real arithmetic as early as possible.
## (D) dL assumptions (Q1)
For our approach, dL provides a language to specify desired safety properties together with a model of the system's dynamics and action space. Contrary to other specification languages such as LTL etc., dL directly comes with a technique to *prove* safety w.r.t. the chosen model. Our approach would be equally applicable without a dL proof -- while this yields a less rigorous safety guarantee, it would result in a similar approach to modeling desired requirements in LTL and proving those requirements without a formal safety proof on the abstract level. The proof automation of KeYmaera X [36,86] and recent advancements in control envelope synthesis such as CESAR [55] make it easier to prove correctness for dL control envelopes.
Nonetheless, the triviality of assuming provably safe control envelopes depends on the considered case study: This work aims to make control theory results applicable to NN verification. There are numerous analyses which provide control envelopes for CPS use cases (e.g. [48, R2-R5]). However, our approach cannot escape the undecidability of control system analysis [R1] including NNCS. VerSAILLE makes a large, previously untapped collection of research available to the NNCS verification community, which we believe is a valuable contribution.
## (E) Trade-Offs between approaches (Limitations)
We plan to add a summary of our comparative experiments in the paper's main section (see common response). We have made a first attempt at providing insights on the trade-offs between different techniques in this amendment:
- Closed-Loop tools are very useful for *bounded* time guarantees
- Our approach can be more efficient by avoiding a reachability analysis for the system's dynamics
- For prior techniques, approximation-errors make safety proofs impossible close to the controllable region's edge
- SMT solvers are precise but do not scale to the NNs nor can they alone handle the differential equations
We would be happy about feedback on which further insights would be helpful to understand the literature landscape.
## (F) Complexity of Mosaic (Q2)
Mosaic provides an exponential improvement for runtime in comparison to native SMT encoding (see p. 7/ Review H7zS).
This is underscored by experimental results (see Table 8 in Apx. E.2):
The problems we solve are beyond the reach of pure SMT. For futher details see also answer (C) for Reviewer H7zS.
[R1] Platzer, André et al. "The image computation problem in hybrid systems model checking." HSCC 2007
[R2] da Silva, Rafael Rodrigues et al. "Formal design of robot integrated task and motion planning." IEEE CDC 2016
[R3] Mitsch, Stefan, et al. "Formal verification of obstacle avoidance and navigation of ground robots." IJRR 2017
[R4] Wu, May, et al. "A Formally Verified Plasma Vertical Position Control Algorithm." FMICS 2020
[R5] Selvaraj, Yuvaraj et al. "Formal development of safe automated driving using differential dynamic logic." IEEE TIV 2022
---
Rebuttal Comment 1.1:
Comment: Thank you for your response which answered some of the concerns I had.
I have increased my score, though I still have concerns about the limited evaluation of the tool with only 3 case studies.
The DPLL(T) description as described is the standard approach that is used, and the description does not indicate the novelty of the proposed solution. If the contribution is an application of existing techniques, then I would expect a much more extensive evaluation (using say all the relevant benchmarks from VNN-COMP).
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
thank you for updating your review based on our answers -- we are happy to hear that we were able to already lift most of your concerns.
Concerning the usage of DPLL(T) there is a subtle but important difference between classical DPLL(T) and the approach proposed by us:
Classical DPLL(T) takes a formula with arbitrary propositional structure and uses SAT solving on its boolean skeleton to generate *conjunctions* of atoms over a theory which are then handed to a theory solver.
While this approach is extremely useful for classical theory solvers (as they are then not required to analyze complicated propositional formulas), it becomes prohibitively inefficient when used in combination with modern NN verification tools.
The reason for this inefficiency lies in the fact that many NN verifiers perform (somewhat costly) reachablity analysis w.r.t. the NN's input space.
Thus, when a solver supports an interface that allows for a *single* conjunction $C_{\text{in}}$ of constraints in the input space and a *disjunction* of conjunctions $C_{\text{out}}^{1} \lor \dots \lor C_{\text{out}}^{n}$ in the output space,
the verifier can perform the costly reachability analysis w.r.t. $C_{\text{in}}$ *once* and can then use it to simultaneously verify all conjunctions $C_{\text{out}}^{i}$ over the output space.
Therefore, instead of enumerating the conjunctions $\left(C_{\text{in}} \land C_{\text{out}}^{1}\right), \dots, \left(C_{\text{in}} \land C_{\text{out}}^{n}\right)$, our extension of DPLL(T) is capable of generating a *single* query $C_{\text{in}} \land \left(C_{\text{out}}^{1} \lor \dots \lor C_{\text{out}}^{n}\right)$ (see also lns. 238-241) which can make use of this special property of NN verification and, given the cost of NN verification, even an exponential difference.
The reason we developed this technique lies in the fact that the NNV queries we generate have very complicated propositional structures, e.g. the logical structure of some ACAS specifications have 112 distinct atoms with formula syntax trees of depth up to 9 (see llns.324-325 or also the file `/NCubeV-Artifact-FINAL/test/parsing/examples/acas/property-sdes2500-compressed` of the code submission for a concrete example).
This kind of structure is required for NNCS but otherwise still relatively uncommon in NN verification -- in fact VNN-COMP last year explicitly enforced that all specifications had to be in disjunctive normal form [1, p. 4 "Format"] and we are not aware this rule has been changed in the current iteration.
This also makes a comparison on VNN-COMP benchmarks less informative, as these benchmarks:
- Only consider linear arithmetic specifications
- Are forced to have an extremely simple propositional structure
Conversely, our approach was specifically tailored for NNCS benchmarks which:
- contain polynomial arithmetic constraints
- contain very complicated propositional structures
We hope this can clarify the novelty of our Mosaic procedure in comparison to classical DPLL(T) and thank the reviewer once again for taking the time to review our work.
If there are any further questions we would be happy to follow up.
[1] https://arxiv.org/pdf/2312.16760v1 | Rebuttal 1:
Rebuttal: We thank the reviewers for their comprehensive reviews and their detailed feedback on the paper. We were excited to hear that the reviewers found our "comprehensive approach" (rNvs) to be "novel" (rNvs,H7zS), "promising" (JF4H) and "well-written" (CSRh). Furthermore we are glad you agree that VerSAILLE is "a promising step forward to improving verification scalability" (H7zS).
Concerning the presentation issues, we plan to augment the presentation with the reviewers' helpful feedback using the additional page granted for the NeurIPS Camera-Ready. To this end, we propose to extend the manuscript as outlined below and look forward to any feedback on our suggestions.
## (A) Readability of Introduction (rNvs,JF4H)
We propose to turn lines 44-53 into an own, updated, paragraph which provides a high-level summary of our approach:
As an alternative to the three outlined approaches, we propose to verify NNCSs based on the rigorous mathematical foundations of differential dynamic logic (dL). dL is a program logic allowing the proof of infinite-time safety for abstract, nondeterministic control strategies (often called *control envelopes*). Due to its expressiveness and its powerful proof calculus, dL even allows the derivation of such guarantees for continuous-time systems or systems whose differential equations have no closed-form solution. By grounding our verification approach in dL, we can reuse safety results from the control theory literature for NN verification -- especially for cases where characterizations of safe behavior and controllable/invariant regions are known (e.g. airborne collision avoidance [48]). How this knowledge can be reused is a non-trivial question: While dL is an excellent basis for reasoning about symbolic control strategies, the numerical/subsymbolic reasoning of NNs at their scale is far beyond the intended purpose of dL's proof calculus. Conversely, open/closed-loop NNV tools and barrier certificates lack the infinite-time or exact reasoning available within dL. This work demonstrates how open-loop NNV can be combined with dL reasoning to combine their strengths while cancelling out their weaknesses. Consequently, by relying on results from the control theory literature, we prove infinite-time safety guarantees for NNCSs that are not provable through either technique alone.
Furthermore, we propose to split the Overview paragraph in line 62 to delineate the contributions of VerSAILLE from the complementary contributions of Mosaic.
## (B) Readability of Related Work (rNvs,JF4H)
We propose to structure the section into the following paragraphs:
- Shielding (lns. 342-346)
- Barrier Certificates (lns. 346-356)
- Open-loop NNV (lns. 356-367)
- Pre-image computation (lns. 367-369)
- Related Techniques (lns. 369-374)
- Closed-loop NNV (lns. 374-379)
If there are further unclear aspects, we would appreciate your feedback.
## (C) Case Studies & Evaluation Section (rNvs,JF4H,H7zS)
Next-generation Airborne Collision Avoidance (ACAS) NNs are standard benchmarks for NN verification, but despite positive verification results [52] counterexamples exist in these NNs even for the vertical case. This underscores that comprehensive closed-loop analysis is crucial for guaranteeing safety. VerSAILLE and Mosaic are first in providing a comprehensive analysis approach for this setting -- despite its complicated nonlinear dynamics. Currently, a major bottleneck for the application of the approach to further case studies is the availability of *safe* neural networks, because sound techniques cannot positively verify unsafe NNCS but merely falsify them. We believe the evaluation in our paper's main part is particularly interesting, because both the NNs [51,53] as well as the dL formalization [48] are from prior literature. That being said, we will add a summary of our results on the ACC and Zeppelin Case Studies from the appendix to the paper's main section. We propose to summarize the additional experiments as follows (moving the Camera-Ready's Appendix to arXiv):
We provide additional experimental results in Appendix E. First, we demonstrate the feasibility of our approach for the running example of Adaptive Cruise Control. Depending on NN size and chosen linearization, our approach can verify or exhaustively enumerate counterexamples for the NNCS in 47 to 300 seconds. For a case study on Zeppelin steering under (uniformly sampled) wind perturbations, we adapt a differential hybrid games formalization [74] to analyze an NN controller trained by us. Here, we encountered a controller which showed very positive empirical performance while being provably unsafe in large parts of the input space: While performing very well on average, the control policy was vulnerable to unlikely wind perturbations -- an issue we only found through our verification. For ACC, we also perform a comparison to other techniques: While Closed-Loop techniques are useful for the analysis of bounded-time safety, their efficiency greatly depends on the system's dynamics and the considered input space. Our infinite-time horizon approach can be more efficient than Closed-Loop techniques as it evades the necessity to analyze the system's dynamics along with the NN (see Table 6). Usually, it is desirable to show infinite-time safety on the entire (controllable) state space. However, the approximation errors incurred via prior closed-loop NNV techniques prohibit this as they will either ignore states inside the controllable region or allow unsafe actions pushing the system outside its controllable region. Conversely, SMT based techniques do not have these approximation issues, but cannot scale to NNs of the size analyzed in this work. We also provide a conceptual comparison demonstrating the efficiency of the Mosaic procedure for normalized query generation over DNNV's expansion based algorithm. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Single Image Reflection Separation via Dual-Stream Interactive Transformers | Accept (poster) | Summary: This work introduces ADI, a new interactive dual-stream transformer framework for single image reflection separation. It incorporates a dual-attention interaction to explicitly model the dual-stream correlation for improved reflection separation, achieving impressive performance compared to other SOTA methods.
Strengths: 1. This paper addresses the limited interaction of previous dual-stream transformer frameworks for the SIRS task by presenting the ADI module, which simultaneously considers inter-window and intra-window fusion.
2. The quantitative results are impressive compared to previous SOTA works.
3. The discussion and motivation are well-explained and intuitively correct.
Weaknesses: 1. The design is quite plain, involving inter-patch/intra-patch attention.
2. The choice of using gradient and feature loss should be analyzed in ablation studies to verify their contributions.
3. Model complexity and inference time are missing in the main manuscript, making it difficult to determine whether the main contribution comes from the larger model capacity or the proposed algorithm.
Technical Quality: 3
Clarity: 3
Questions for Authors: see the weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing the motivation, technical soundness, and state-of-the-art performance of our method. Below, we address the key concerns you raised:
**Q1**: Why the design of inter-patch/intra-patch attention is quite plain?
**A1**: Keeping the design simple is beneficial to validate its effectiveness. When the pipeline becomes complicated, it can be hard to determine which part of the design really works. Also, more powerful and complicated attention designs can be developed in future work, which can potentially enhance the performance further.
Through the derivation of LaCA (Layer-aware Cross-Attention) provided in the global Author Rebuttal, it is evident that even with a straightforward token concatenation operation and corresponding modifications to the relative position bias, we naturally introduce inter- and intra-layer attention for dual-stream information. This demonstrates the inherent relationship between dot-product self-attention mechanisms and dual-stream interactive modeling.
In the meantime, our strategy can also energize other multi-stream transformer designs. For instance, in high-resolution image generation scenarios where a single GPU cannot support the entire image generation process, multiple information streams of the model can be distributed across different GPUs. Each stream would handle a specific image patch, and inter-stream feature interactions would ensure the coherence of the generated patches, enabling them to be seamlessly stitched together into a complete image (like in [1]).
**Q2**: What about the contributions of gradient and feature losses?
**A2**: Both gradient and feature losses contribute positively to our method. The importance of gradient relationship has been emphasized by early works in SIRS [2],[3],[4]. Feature (or perceptual) loss, on the other hand, significantly aids in modeling natural images by ensuring that the reconstructed images are perceptually similar to the ground truth [5],[6].
As suggested by the reviewer, to confirm their contributions, we additionally conducted ablation experiments, as shown in the table below. We evaluated the impact of removing Gradient Loss (DSIT (w/o Gradient Loss)) and Feature Loss (DSIT (w/o Feature Loss)) on the performance of our model. The results indicate a noticeable decrease in performance when these losses are disabled, underscoring their necessity. Specifically, the gradient loss helps preserve edge details, while the feature loss enhances the perceptual quality of the outputs, both of which are beneficial for effective reflection separation.
| Models | Real20 | Objects | Postcard | Wild | Average PSNR/SSIM |
|:-----------------------:|:---------------:|:---------------:|:---------------:|:---------------:|:-----------------:|
| DSIT (w/o Gradient Loss) | 24.13/0.814 | 27.54/0.922 | 23.38/0.898 | 26.72/0.906 | 25.55/0.906 |
| DSIT (w/o Feature Loss) | 23.13/0.780 | 26.00/0.919 | 23.80/0.896 | 25.88/0.900 | 24.94/0.901 |
| DSIT(Ours) | **25.06/0.836** | **26.81/0.919** | **25.63/0.924** | **27.06/0.910** | **26.27/0.917** |
|
**Q3**: Please provide the model complexity and inference time.
**A3**: We have provided a comparison in model parameters, GFLOPs (for $384 \times 384$ resolution inputs), inference time (for $384 \times 384$ resolution inputs, tested on an RTX 3090 GPU), and the average PSNR/SSIM across 4 datasets (Real20, Objects, Postcards, and Wild) in the table below. The number of parameters of our model is comparable to RAGNet and DSRNet. Due to our multi-scale and non-recursive network structure, we achieve superior GFLOPs efficiency compared to all other methods. Despite being based on a Transformer architecture, our method demonstrates a slightly faster inference time than DSRNet, highlighting the efficiency of our design. The above results show that the improvement of our model is non-trivial. As further illustrated in the additional cases provided in the global rebuttal PDF, our method exhibits impressive generalization performance in real-world scenarios compared to previous state-of-the-art approaches. This reflects the strong capability in effectively modeling reflection scenes of our method.
| Models | ERRNet | IBCLN | RAGNet | YTMT | Dong _et al._ | DSRNet | Ours |
|:-------------------:|:-----------:|:-----------:|:-----------:|:-----------:|:-------------:|:-----------:|:---------------:|
| Parameters (M) | 18.58 | 24.37 | 146.72 | 76.90 | **10.93** | 124.60 | 131.76 |
| GFLOPs | 820 | 708 | 610 | 838 | 666 | 743 | **517** |
| Inference Time (ms) | 72 | 75 | **56** | 101 | 123 | 256 | 215 |
| Average PSNR/SSIM| 23.53/0.879 | 24.10/0.879 | 24.90/0.886 | 24.05/0.886 | 24.21/0.897 | 25.75/0.910 | **26.49/0.922** |
|
[1] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models. CVPR 2024.
[2] User Assisted Separation of Reflections from a Single Image Using a Sparsity Prior. ECCV 2004.
[3] Single Image Layer Separation Using Relative Smoothness. CVPR 2014.
[4] A Generic Deep Architecture for Single Image Reflection Removal and Image Smoothing. ICCV 2017.
[5] Image style transfer using convolutional neural networks. CVPR 2016.
[6] Perceptual losses for real-time style transfer and super-resolution. ECCV 2016.
---
Rebuttal Comment 1.1:
Title: rating
Comment: Thanks for your response. I'd like to keep my rating as 'Borderline accept'.
---
Reply to Comment 1.1.1:
Comment: Thank you for your efforts in reviewing our manuscript. If you have any further concerns, we welcome continued discussion and would be happy to address them. | Summary: The paper proposes a transformer-based network for the single image reflection separation task. It focuses on the network architecture design, proposing several tailored designs, including dual-attention interactive block (DAIB), dual-stream self-attention (DS-SA), and dual-stream cross-attention (DS-CA).
Strengths: 1. The introduction is well-written.
2. The proposed method demonstrates state-of-the-art (SOTA) performance on the traditional single image reflection separation datasets.
Weaknesses: 1. The method section is poorly written, making it difficult to follow due to the extensive use of abbreviations, some of which are not explained upon first mention, such as DS-SA and DS-CA.
2. Despite a thorough reading of the methods and implementation details, it is unclear where the authors have applied the pretrained transformer. Specifically, I mean "pretrained transformer." Additionally, the placement and role of the proposed CNN-based network within the DSIT framework are not clearly clarified.
3. For Eq. (3), the authors should clarify where the 'cross' is present. The equation seems to suggest a self-attention mechanism, as the query, key, and value all derive from the same features.
4. In Fig. 3, it is unclear if the increased intensity of red indicates regions of higher attention. If so, I cannot distinguish the differences between the local and global priors as revealed by the figure.
5. I think the current setting of single image reflection separation does not align well with typical real-world scenarios. Reflections in real scenes typically occur in localized regions, such as the reflection of a building with a glass facade. Therefore, relying heavily on synthetic reflections with minimal real-scene data might not be practical. The authors should consider focusing on real-scene reflections.
Overall, I think the novelty of the paper is limited, and the writing is poor.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please refer to the weaknesses for details.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Given the advancements in deep learning for computer vision, the proposed reflection separation task focuses on synthetic pairs. I suggest the authors shift their research focus towards reflections occurring in real-world scenes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback. We appreciate your comments and try our best to address the issues as follows:
**Q1**: The writing of the method section.
**A1**: We apologize for any difficulties caused by the writing in the method section. In the next version, we will thoroughly proofread and clarify our paper. Specifically, we will ensure that the terms DS-SA and DS-CA are properly defined before their first use, as follows:
"Following the LayerNorm, we apply Dual-Stream Self-Attention (DS-SA) to $\textbf{X}^{\text{LN}}\_0$ and Dual-Stream Cross-Attention (DS-CA) to $\textbf{X}^{\text{LN}}\_1$, obtaining $\textbf{X}\_{\text{SA}}$ and $\textbf{X}\_{\text{CA}}$, respectively. The design of these two attention mechanisms will be detailed later in this section."
**Q2**: (1) The appliance of the pre-trained transformer. (2) The CNN-based network placement.
**A2**: (1) We utilize a pretrained Transformer architecture for the Global Prior Extractor, as shown in Figure 2 (a) in the main paper. The transformer blocks in GPE can load pretrained weights before training, which can leverage rich priors learned from large-scale datasets to aid in reflection separation. This practice is consistent with previous reflection separation methods, which have similarly used pretrained models to enhance feature extraction, such as the HyperColumn used in Zhang et al. [1] and YTMT [2], and the DSFNet in DSRNet [3].
(2) As depicted in Figure 2 (a), the Local Prior Extractor, referred to as the CNN-based Network, is implemented using a convolutional dual-stream interactive network structure. In our implementation, each DSLP Block specifically utilizes the MuGI Block from the DSRNet [3] to highlight our main contribution and avoid other possible factors influencing performance. Additionally, more advanced convolutional modules can be developed in future work.
We will highlight these points in our revision to avoid misunderstanding.
**Q3**: Where is the "cross" presented?
**A3**: We explain in detail how explicit correlation assessment works and where is the "cross" presented in the A1 of common issues in the global rebuttal. We will clarify this point in our revised version.
**Q4**: Elaboration on Figure 3.
**A4**: In the figure, red regions in the feature visualization indicate high attention values, while blue regions represent low attention values. We will add a color bar in the revised version to clarify this.
Please note that the pretrained model used in transfer learning is task-agnostic. We do not expect the priors alone to distinguish between the reflection and transmission layers clearly. Instead, the priors provide varying degrees of attention to different components of the image. For instance, the global prior $\mathbf{F}^{\text{GP}}$, extracted using a pretrained Swin Transformer model, shows higher attention towards reflection components. The shifted window-based self-attention contributes to the coherent attention across the image. Additionally, the Local Prior Extractor captures local prior information for both the transmission stream $\mathbf{F}^{\text{LP}}\_\mathbf{T}$ and the reflection stream $\mathbf{F}^{\text{LP}}\_\mathbf{R}$. These local priors exhibit a "granular" texture and may not accurately attend to the correct components of transmission and reflection. After the Cross-Architecture Interaction with the global prior $\mathbf{F}^{\text{GP}}$, the attention of $\mathbf{F}^{\text{LP}}\_\mathbf{T}$ and $\mathbf{F}^{\text{LP}}\_\mathbf{R}$ towards the transmission and reflection layers becomes more appropriately focused. Through further interaction and optimization of Dual-Attention Interactive Blocks, the dual-stream features $\mathbf{F}^{\text{DAIB}}\_\mathbf{T}$ and $\mathbf{F}^{\text{DAIB}}\_\mathbf{R}$ turn out to be more fine-grained and accurate. This improvement has also been recognized by Reviewer k1uS (Strengths 3).
**Q5**: Does the current setting align with real-world scenarios?
**A5**: Please note that all the testing data used in our paper are actually captured in the real world, including Real20, Nature20, Wild, Objects, and Postcard. These datasets encompass a variety of indoor and outdoor scenes, different glass thicknesses, and varying distances, thereby covering a wide range of real-world reflection scenarios. Again, we emphasize that our method does *NOT* focus on synthetic scenarios, as the reviewer criticized.
Meanwhile, during the rebuttal period, we captured several reflection scenes in the wild according to what the reviewer described as typical real-world scenarios. We compared the inference results of state-of-the-art methods and our approach in the global rebuttal PDF. In Figure 1 of the rebuttal PDF, the first 3 examples primarily contain localized reflection regions. The 4-th example showcases the reflection phenomenon on a water surface, which is not covered in the "minimal real-scene data" during training mentioned by the reviewer. Our results are visually impressive, demonstrating our model's generalization capability. Besides, Figure 2 of the rebuttal PDF presents the predicted reflection layers from our method on these real-world images, along with the corresponding binarized maps. From these visualizations, it is evident that our method can effectively identify localized reflection regions.
Inspired by the suggestion of the reviewer, in future work, we may explore the introduction of a lightweight "reflection localizer". This would allow us to bypass the processing of non-reflective regions (like the manner of early exiting [4]), thereby accelerating the reflection separation process.
[1] Single Image Reflection Separation with Perceptual Losses. CVPR 2018.
[2] Trash or Treasure? An Interactive Dual-Stream Strategy for Single Image Reflection Separation. NeurIPS 2021.
[3] Single Image Reflection Separation via Component Synergy. ICCV 2023.
[4] Adaptive Patch Exiting for Scalable Single Image Super-Resolution. ECCV 2022.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' efforts in their rebuttal, providing clearer explanations of the details and additional experimental results. These revisions have significantly addressed my major concerns regarding the novelty and contribution of the work, as well as the experimental design.
Furthermore, the authors' commitment to refining the writing, particularly in Section 3, in the final version is commendable. Given these improvements, I feel positively about the manuscript and have decided to raise my final score to 5: Borderline Accept.
---
Reply to Comment 1.1.1:
Comment: We are glad that our responses have helped you gain a better understanding of our work. We are committed to improving the clarity and readability of our manuscript in the revised version. Thank you for your professional review and timely feedback. The constructive comments have significantly contributed to polishing our paper. | Summary: The authors propose the Dual-Stream Interactive Transformer (DSIT) for single image reflection separation. DSIT is a dual-stream method designed for complex scenarios. Specifically, the authors design the Dual-Attention Interaction (DAI) to achieve feature aggregation of different dimensions through dual-stream self-attention and layer-aware cross-attention. They also propose the Dual-Architecture Interactive Encoder (DAIE) to leverage pre-trained Transformer models. Experiments demonstrate that the proposed method outperforms comparative methods.
Strengths: 1. The authors use DAI to enhance feature aggregation of transmission and reflection information, which is a reasonable approach. The ablation study also proves its effectiveness.
2. The authors introduce a pre-trained Transformer to provide global information and improve feature modeling. As shown in Figure 3, the final F_T^{DAIB} and F_R^{DAIB} have clearer detailed features.
3. Testing on multiple datasets shows that the proposed method outperforms state-of-the-art methods.
4. The authors conduct thorough experiments on various modules in the ablation study, further proving the effectiveness of the proposed method.
Weaknesses: 1. The implementation of CAI is not clearly explained. In L194, CAI seems to be through DAIB(F^{GP}, F_T^{LP}) and DAIB(F^{GP}, F_R^{LP}), while the proposed DAIB in the method deals with F_T and F_R. It is recommended that this be clarified in Figure 2 and the text to improve clarity. Additionally, the implementation of CAI is overly complex. Considering the high computational complexity of DAIB, a simpler implementation should be considered, such as directly using F GP in DAIB(F_T^{LP}, F_R^{LP}).
2. The authors do not provide detailed model settings, such as N_T, N_W, and the detailed structure of DSLP.
3. There is a lack of comparison between FLOPs and Params. The proposed method uses multiple attentions to aggregate features, which introduces high computational complexity.
4. In Figure 2, some parts of the Dual-Stream Information Flow overlap excessively, reducing distinguishability.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Clarify the implementation of CAI and other model settings, such as N_T, N_W, and the structure of DSLP.
2. Provide a comparison of FLOPs in Tables 2 and 3.
3. The proposed DSIT uses a Transformer as a pre-trained model and claims to introduce prior knowledge. How does the model perform without using a pre-trained model? Additionally, why does DSLP not use pre-trained models?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors discuss the method's limitations and societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback on our work. Below, we address your main concerns:
Q1: (1) Clarify the implementation of CAI, (2) N_T, N_W, and (3) DSLP.
A1: (1) CAI is a general design introduced to enable interaction and fusion of features extracted from different network architectures during the feature extraction phase. A specific implementation of this design is the dual-stream feature interaction module. We tested 3 different implementations of CAI as in the ablation study shown in Table 3 of the main paper. It turned out that our current choice (DAIB) is significantly superior to other alternatives, which exhibits the effectiveness of DAIB for both cross-architecture and cross-layer interactions.
(2) In our experiments, the training image size is fixed at $384\times384$. The window size of attention mechanisms, $N_W$, is fixed to $12\times12$. The number of windows, $N_T$, varies depending on the spatial scale of the features. For instance, if the spatial scale is $384\times384$, then $N_T = (384/12)\times(384/12)=1024$.
(3) The DSLP can be any convolutional dual-stream network module, such as the YTMT Block [1] and MuGI Block [2]. To highlight our main contribution and avoid other possible factors influencing performance, we simply adopt the MuGI Block in our experiments. Also, more powerful convolutional modules can be developed in future work.
We will clarify these model settings in our revised version and thoroughly proofread our paper to eliminate other writing problems.
Q2: Simpler Implementation of DAIB.
A2: In Table 3 of the main paper, we provided an ablation study on CAI. The "Add" variant represents the removal of the DAIB, directly adding the global priors on the dual-stream local priors. Although this setting reduces the complexity of the model (from 517 GFLOPs to 471 GFLOPs), it decreases the average PSNR from 26.27 to 25.74. To ensure superior performance, we retained the DAIB as the interaction module for CAI. However, considering broader applications, exploring more lightweight model designs can be more attractive, which may use a more lightweight backbone (MobileViT for example), reduce the number of channels and blocks, and employ knowledge distillation techniques. Thank you for your valuable suggestion, which inspires us to optimize the model for more practical use in the future.
Q3: Provide a comparison of FLOPs and Params.
A3: As shown in the Table below, we present a comparison in terms of GFLOPs (obtained via fvcore package at $384\times384$ input resolution) and learnable parameters between previous state-of-the-arts and ours. It can be seen that the parameter amount of our model is comparable to that of RAGNet and DSRNet, but our model is of the lowest computational complexity among all the compared methods. This efficiency is achieved by our multi-scale and non-recurrent model design. Moreover, our method demonstrates the best average performance across all datasets, as evidenced by the average numerical metrics on all testing images from Real20 and SIR^2.
|Models|ERRNet|IBCLN|RAGNet|YTMT|Dong _et al._|DSRNet|Ours|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Parameters (M)|18.58|24.37|146.72|76.90|**10.93**|124.60|131.76|
|GFLOPs|820|708|610|838|666|743|**517**|
|Performance|23.53/0.879|24.10/0.879|24.90/0.886|24.05/0.886|24.21/0.897|25.75/0.910|**26.49/0.922**|
|
Q4: Some parts in Figure 2 overlap excessively.
A4: We will definitely polish the figures in our revised version, for better visual appeal.
Q5: How does the model perform without using a pre-trained model?
A5: As shown in the table below, DSIT (w/o Pretrain) refers to our model with the Transformer backbone randomly initialized, and trained from scratch alongside the main network. The results reflect a remarkable performance drop, which comes from the data-hungry nature of Transformer architectures; training from scratch on a relatively small dataset can negatively impact the performance of models [3]. This outcome also underscores the importance of high-semantic pretraining in addressing the ill-posed nature of the reflection separation task, which has been demonstrated in previous methods like the HyperColumn in YTMT [1] and DSFNet in DSRNet [2]. In future work, improvements of the pre-training technique, such as incorporating same-task pre-training strategies like HAT [4], could further enhance the model performance.
Q6: Why does DSLP not use pre-trained models?
A6: We have already employed the pretrained Global Prior Extractor to extract high-semantic prior, the information flow of which is task-agnostic and requires task-specific guidance to adapt to our current task. The proposed dual-stream Local Prior Extractor (LPE) serves this purpose, guiding the task-agnostic information to be more relevant to reflection separation. Therefore, there is no need to further introduce a pre-trained task-agnostic LPE.
To further validate this, we experimented as shown in the table below. In the DSIT with CNN Pretrain, we replaced the original LPE with a pre-trained RepVGG-B3 model. We used CAI to interact and fuse features extracted by the pre-trained models from both architectures. However, we observed that this approach not only increased the model's overall GFLOPs but also resulted in weaker performance, which confirms our opinion above that using a pre-trained CNN network in place of our LPE does not improve performance. Thank you for your insightful question.
|Models|Average PSNR/SSIM|GFLOPs|
|:-|:-:|:-:|
|DSIT(w/o Pretrain)|25.07/0.897|517|
|DSIT(CNN Pretrain)|25.83/0.911|525|
|DSIT(Ours)|**26.27**/**0.917**|**517**|
|
[1] Trash or Treasure? An Interactive Dual-Stream Strategy for Single Image Reflection Separation. NeurIPS 2021.
[2] Single Image Reflection Separation via Component Synergy. ICCV 2023.
[3] Training data-efficient image transformers & distillation through attention. ICML 2021.
[4] Activating More Pixels in Image Super-Resolution Transformer. CVPR 2023.
---
Rebuttal Comment 1.1:
Title: After Rebuttal
Comment: Thanks for the rebuttal. The authors clarify the design details and compare Params/FLOPs.
Overall, the authors address my concerns, so I raise my score to 6.
---
Reply to Comment 1.1.1:
Comment: We are honored that our response helped the reviewer better understand our paper. We sincerely appreciate the thoughtful review of the reviewer and will further refine our manuscript accordingly. | Summary: This paper introduces a Dual-Stream interactive transformer to tackle the single image reflection removal task. The motivation of the proposed method is based on the drawbacks of existing methods that the dual-stream methods cannot assess the effectiveness of the information flowing between the two streams. Based on this analysis, this paper proposed . The proposed method constructs the encoder and decoder in dual-stream structure and crossover the outputs of the two sub-networks. Comprehensive experiments show that the proposed method outperforms the baseline methods.
Strengths: 1. This paper demonstrates the proposed method consistently outperforms the baseline methods in benchmarking datasets on image reflection separation task.
2. The motivation of the proposed method is based on observations, which is one of the existing problems of the reflection separation task.
Weaknesses: 1. The methodology section is not described clearly. First, in line 146, the superscript symbol k on feature parameter F_T^{k-1} is neither explained in the text nor in the figures. Does this number k indicate the network is running in recurrent manner? And in line 149, the superscript is used to indicates layer normed features. Also in Figure 1, the superscript of feature F is using l instead. Second, I would assume F_T represents transmission features and F_R represents reflection features. Third, in line 151, the DS-SA and DS-CA should be explained or at least come with citations.
2. Line 155 and Figure 2, how does DSLP FNN extract local features? How are local features defined? What is the difference between the input of DSLP and the input of the Transformer block?
3. In the introduction section, this paper criticizes the existing dual-stream methods about the insufficient assessment on the information flowing between the two streams. However, I could not see the assessment of the data or any experiments analyzing the information. To make the claim of the paper valid, the authors should show evidences that the proposed method is able to make the information positively effective for reflection separation.
Technical Quality: 2
Clarity: 1
Questions for Authors: Generally, the paper is not well written. The equations, symbols and definitions are not clearly described in the text and therefore the idea of the methodology becomes confusing. The authors are high advised to polish the presentation of the manuscript and re-submit it. In addition, the paper does not provide enough evidence to its claim. It lacks novelty in that sense.
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: The limitation is addressed in the appendix: the proposed method fails to deal with the regions with dominant reflections.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First of all, we sincerely appreciate your comments, which can help us further improve the quality of our paper. We also apologize for any confusion or inconvenience caused by writing issues. Below, we address the main concerns.
**Q1**: What does the superscript symbol $k$ mean? It is not consistent with the superscript of $\mathbf{F}$ in Figure 1.
**A1**: $\mathbf{F}$ denotes the feature flow in our dual-stream architecture. Specifically, $\mathbf{F_T}$ and $\mathbf{F_R}$ represent the feature flows for the transmission and reflection layers, respectively. When the superscript of $\mathbf{F}$ is a lowercase letter, such as in $\mathbf{F}^k_\mathbf{T}$, it indicates the feature flow of the transmission layer after passing through $k$ DAIBs (Dual-Attention Interactive Blocks). The superscripts $l$ in Figure 1 and $k$ in the methodology section have the same meaning. Moreover, when the superscript is formed by uppercase letters, such as in $\mathbf{F}^{\text{LN}}_\mathbf{T}$, it designates the feature flow after passing through a specific layer within a DAIB (in this case, LayerNorm). We appreciate your attention to this detail. We will definitely unify the notations in the next version for clarity.
**Q2**: The DS-SA and DS-CA should be explained or at least come with citations.
**A2**: The DS-SA (Dual-Stream Self-Attention) and DS-CA (Dual-Stream Cross-Attention) mechanisms are introduced in Section 3.1 of our paper, which are explained in the subsequent paragraphs following their first mention. In the revision, we will ensure that a brief explanation is provided at their first mention to clarify their roles as follows:
"Following the LayerNorm, we apply Dual-Stream Self-Attention (DS-SA) to $\mathbf{X}^{\text{LN}}\_{0}$ and Dual-Stream Cross-Attention (DS-CA) to $\mathbf{X}^{\text{LN}}\_{1}$, obtaining $\mathbf{X}\_{\text{SA}}$ and $\mathbf{X}\_{\text{CA}}$, respectively. The design of these two attention mechanisms will be detailed later in this section."
**Q3**: (1) How does DSLP FFN extract local features? (2) How are local features defined? (3) What is the difference between the input of DSLP and the input of the Transformer block?
**A3**: (1) Extraction of local features by DSLP FFN: The DSLP FFN (Dual-Stream Locality-Preserving Feed-Forward Network) and DSLP Block can be any convolutional dual-stream network modules, such as the YTMT Block [1] and MuGI Block [2]. In our experiments, we employ the MuGI Block. Since reflection decomposition is a dense prediction task, one of the primary functions of the DSLP is to maintain local information. DSLP achieves this goal by using convolutional structures to extract local features, like other hybrid Transformer architectures [4],[5],[6].
(2) Definition of local features: Local features refer to the local correlation among adjacent pixels that often have similar values. CNNs capture the local structures through limited receptive fields, weight sharing, and/or spatial sub-sampling [3],[4].
(3) Differences between inputs to DSLP and Transformer blocks: Both the features for the Transformer block and the DSLP block originate from the input image $\mathbf{I}$. However, the features fed into the Transformer block first undergo a PatchEmbed process. Before being input into the DSLP FFN, these features are reshaped after the LayerNorm to fit the convolution operation, like other hybrid Transformer architectures [5],[6].
To avoid any confusion, we will provide a detailed explanation in the next version to address these mentioned points thoroughly.
**Q4**: How does the proposed method assess the information flow between the two streams?
**A4**: We explain in detail how explicit correlation assessment works in the Common Issue in the Global Rebuttal. Additionally, the ablation study on DAIB in Table 3 of the main body of our paper shows that removing the cross-attention mechanism ("w/o DS-CA") leads to significant performance degradation on real-world datasets. This also demonstrates that our proposed method is able to make the information positively effective for reflection separation.
**Q5**: Does the proposed method fail to deal with the regions with dominant reflections?
**A5**: In the situation mentioned by the reviewer (Figure 13), some regions in the image are dominant with strong reflections (in other words, information of the transmission layer has been largely suppressed), which poses challenges not only to our method but also to previous state-of-the-arts. While other methods struggle to discriminate the reflection component from the entangled layers in such scenarios, our method removes significant portions of the reflections, way more effectively, which corroborates our generalization capability.
[1] Trash or Treasure? An Interactive Dual-Stream Strategy for Single Image Reflection Separation. NeurIPS 2021.
[2] Single Image Reflection Separation via Component Synergy. ICCV 2023.
[3] Object recognition with gradient-based learning. CGCV 1999.
[4] CvT: Introducing Convolutions to Vision Transformers. ICCV 2021.
[5] Incorporating Convolution Designs into Visual Transformers. ICCV 2021.
[6] Uformer: A General U-Shaped Transformer for Image Restoration. CVPR 2022.
---
Rebuttal 2:
Title: Comments on rebuttal
Comment: The authors' rebuttal works have addressed most of the weaknesses raised in the initial review. Generally, the main concern is addressed and I've raised the score to 5.
---
Rebuttal Comment 2.1:
Comment: Thank you for your professional review, which has been instrumental in helping us improve our paper. We will further enhance the presentation of our revised version to eliminate the writing problems. | Rebuttal 1:
Rebuttal: **Common Issue**:
**Q**: Illustration of Explicit Correlation Assessment and the Cross-Attention mechanisms in the proposed method.
**A**: We utilize the Layer-aware Cross-Attention (LaCA) mechanism for explicit correlation assessment between the two streams. A formal derivation of LaCA has already been provided in Appendix A.2 of our initial submission. We here again explain its principle as follows:
Given the feature streams of transmission layer $\textbf{T} \in \mathbb{R}^{N\times C}$ and reflection layer $\textbf{R} \in \mathbb{R}^{N\times C}$ , we concatenate them along the first dimension to form a matrix $\textbf{X} \in \mathbb{R}^{2N\times C} = \begin{bmatrix} \textbf{T} \\\ \textbf{R} \end{bmatrix}$, where $N$ represents the number of tokens and $C$ denotes the number of channels in each token, respectively.
The query $\mathbf{Q} \in \mathbb{R}^{N\times D}$, key $\textbf{K} \in \mathbb{R}^{N\times D}$, and value $\textbf{V} \in \mathbb{R}^{N\times D}$ matrices can be computed by applying the linear projections to the merged stream $\mathbf{X}$ via:
$$\textbf{Q} = \textbf{X}\textbf{W}_q , \quad \textbf{K} = \textbf{X}\textbf{W}_k, \quad \textbf{V} = \textbf{X}\textbf{W}_v ,$$
where $\textbf{W}_q, \textbf{W}_k, \textbf{W}_v \in \mathbb{R}^{C \times D} $ are the weight matrices of linear projections, changing the number of channels of each token from $C$ to a hidden dimension $D$.
The attention score matrix $\mathbf{A}\in\mathbb{R}^{2N\times 2N}$ is computed by:
$$
\textbf{A} = \text{Softmax}(\frac{\textbf{Q}\textbf{K}^T}{\sqrt{D}}) = \text{Softmax}(\frac{1}{\sqrt{D}}\begin{bmatrix}
\textbf{T} \\\\
\textbf{R}
\end{bmatrix}\textbf{W}_q\textbf{W}_k^T\begin{bmatrix}
\textbf{T}^T & \textbf{R}^T
\end{bmatrix}) = \text{Softmax}( \frac{1}{\sqrt{D}}\begin{bmatrix}
\textbf{T}\textbf{W}_q\textbf{W}_k^T\textbf{T}^T & \textbf{T}\textbf{W}_q\textbf{W}_k^T\textbf{R}^T \\\\
\textbf{R}\textbf{W}_q\textbf{W}_k^T\textbf{T}^T & \textbf{R}\textbf{W}_q\textbf{W}_k^T\textbf{R}^T
\end{bmatrix} ),$$
where the intra-layer terms $\textbf{T}\textbf{W}_q\textbf{W}_k^T\textbf{T}^T$and $\textbf{R}\textbf{W}_q\textbf{W}_k^T\textbf{R}^T$ represent interactions within the transmission stream $\textbf{T}$ and the reflection stream $\textbf{R}$, respectively. The inter-layer terms $\textbf{T}\textbf{W}_q\textbf{W}_k^T\textbf{R}^T$and $\textbf{R}\textbf{W}_q\textbf{W}_k^T\textbf{T}^T$ indicate interactions between the transmission stream $\textbf{T}$ and the reflection stream $\textbf{R}$.
By denoting the Softmax function with a scaling factor $\frac{1}{\sqrt{D}}$ as $\mathcal{S}(\cdot)$, the output matrix $\textbf{Y}$ is then calculated as:
$$\textbf{Y} = \textbf{A}\textbf{V} = \begin{bmatrix}
\mathcal{S}(\textbf{T}\textbf{W}_q\textbf{W}_k^T\textbf{T}^T) \textbf{T} \textbf{W}_v + \mathcal{S}(\textbf{T}\textbf{W}_q\textbf{W}_k^T\textbf{R}^T) \textbf{R} \textbf{W}_v \\\\
\mathcal{S}(\textbf{R}\textbf{W}_q\textbf{W}_k^T\textbf{T}^T) \textbf{T} \textbf{W}_v + \mathcal{S}(\textbf{R}\textbf{W}_q\textbf{W}_k^T\textbf{R}^T) \textbf{R} \textbf{W}_v
\end{bmatrix}.$$
We further simplify the form of $\textbf{Y}$ by introducing a function $\mathcal{G}(\textbf{A},\textbf{B})=\mathcal{S}(\textbf{A}\textbf{W}_q\textbf{W}_k^T\textbf{B}^T)\textbf{B}\textbf{W}_v$, where $\mathbf{A}\in\mathbb{R}^{N\times C}$ and $\mathbf{B}\in\mathbb{R}^{N\times C}$ can be chosen between $\mathbf{T}$ and $\mathbf{R}$, yielding the follows:
$\textbf{Y} = \begin{bmatrix}
\mathcal{G}(\textbf{T},\textbf{T})+\mathcal{G}(\textbf{T},\textbf{R}) \\\\
\mathcal{G}(\textbf{R},\textbf{T})+\mathcal{G}(\textbf{R},\textbf{R})
\end{bmatrix}.$
We finally obtain the output of the dual streams as $\textbf{T}_o=\mathcal{G}(\textbf{T},\textbf{T})+\mathcal{G}(\textbf{T},\textbf{R})$ and $\textbf{R}_o=\mathcal{G}(\textbf{R},\textbf{R})+\mathcal{G}(\textbf{R},\textbf{T})$.
Specifically, the output features of the transmission stream, $\mathbf{T}_o$, consist of two parts: intra-layer explicit correlation assessment $\mathcal{G}(\mathbf{T},\mathbf{T})$ and inter-layer explicit correlation assessment $\mathcal{G}(\mathbf{T},\mathbf{R})$. Similarly, the output features of the reflection stream, $\mathbf{R}_o$, include intra-layer explicit correlation assessment $\mathcal{G}(\mathbf{R},\mathbf{R})$ and inter-layer explicit correlation assessment $\mathcal{G}(\mathbf{R},\mathbf{T})$.
Additionally, to facilitate a more intuitive understanding of LaCA, we have included an illustrative example in Figure 3 of the **Global Rebuttal PDF** (attachment below), which sequentially displays the transmission stream $\mathbf{T}$, the reflection stream $\mathbf{R}$, the concatenated matrix along the token channel $\textbf{X}$, and 4 internal blocks of the attention matrix $\mathbf{A}^* = \begin{bmatrix}
\textbf{T}\textbf{W}_q\textbf{W}_k^T\textbf{T}^T & \textbf{T}\textbf{W}_q\textbf{W}_k^T\textbf{R}^T \\\\
\textbf{R}\textbf{W}_q\textbf{W}_k^T\textbf{T}^T & \textbf{R}\textbf{W}_q\textbf{W}_k^T\textbf{R}^T
\end{bmatrix}$.
*The diagram clearly shows that the two submatrices along the main diagonal represent the self-attention of the layers, while the other two along the off-diagonal represent the cross-attention of the two layers*.
Pdf: /pdf/5ee4088ec4ff371f212d79e8f396679e9e6f0e17.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Doob's Lagrangian: A Sample-Efficient Variational Approach to Transition Path Sampling | Accept (spotlight) | Summary: The paper is concerned with sampling trajectories with a terminal condition. For stochastic processes governed by a Brownian Motion, Doob's h-transform gives an posterior SDE that leads to samples with the final condition. However, estimating the h-function that is needed for the posterior SDE usually involves simulating trajectories, which is inefficient if the terminal condition is rarely reached.
The authors propose a simulation-free variational optimization method to estimate the h-function based on a least action principle and Gaussian approximations to the marginal densities.
Strengths: 1. The authors provide a solid and clear background on Doob's h transform that gets supported by the provided proofs in the appendix.
2. The paper clearly highlights the challenges of the optimization problem and proposes an efficient solution adressing these challenges.
3. The related work section gives a good overview and nicely connects to related topics.
Weaknesses: 1. I found the path histograms in Figure 2 to be too cluttered. I would propose adding less samples.
2. The paper misses a learning curve. It would in general be interesting to have more training details.
Minor Weaknesses:
In Chapter 3 and in the appendix, the authors change from trajectory length T to the unit interval. There should be a sentence that explains this change.
Technical Quality: 3
Clarity: 3
Questions for Authors: I could not totally follow the derivation in the appendix. Can you explain how to get to Equation 25 from the previous equation in line 548?
I do not see why second term in equation 25 gets subtracted, while in line 548 only the last term has a minus.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors adequately adressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1. I found the path histograms in Figure 2 to be too cluttered. I would propose adding less samples.
Thank you for your feedback and the concrete suggestion. Our goal was to illustrate the diversity of the ensemble of transition paths. We will revise Figure 2 by reducing the number of paths and will highlight example trajectories in different colors. This will help visualize the converged behavior while allowing for the investigation of individual transitions.
> Q2. The paper misses a learning curve. It would in general be interesting to have more training details.
We agree with you and have uploaded a PDF containing training curves to provide more insight. While the loss itself may not be very revealing, we showcase the quality of paths (i.e., max energy) at a certain compute budget (i.e., number of potential evaluations).
Additionally, we will include two algorithms in the revised manuscript to clarify details on the training and inference. These algorithms are also included in the rebuttal PDF.
> Q3. In Chapter 3 and in the appendix, the authors change from trajectory length T to the unit interval. There should be a sentence that explains this change.
Thank you for pointing out this inconsistency. We have corrected this error by unifying the notation throughout the paper to consistently consider trajectories of length $T$.
> Q4. I could not totally follow the derivation in the appendix. Can you explain how to get to Equation 25 from the previous equation in line 548? I do not see why second term in equation 25 gets subtracted, while in line 548 only the last term has a minus.
After the substitution of (24) into the equation in line 548, the third term becomes
$$\int dtdx\ s_t \langle\nabla,q_t(b_t + 2G_t\nabla s_t)\rangle = \int dtdx\ s_t \langle\nabla,q_t b_t\rangle - 2\int dtdx\ q_t\langle\nabla s_t,G_t\nabla s_t\rangle,$$
where the last equality is due to the integration by parts. The last term in this equation, together with the first term in line 548, yields the second term in equation (25).
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I will increase my original score by one. | Summary: This paper proposes a variational formulation of Doob's h-transform, which characterizes the distribution over paths with a given endpoint. Instead of relying on potentially wasteful sampling approaches, the authors propose directly optimizing a tractable variational distribution over transition paths which satisfy the initial and terminal conditions by design. This approach reduces the search space over trajectories and avoids trajectory simulation. Experiments on real-world molecular simulation and protein folding tasks demonstrate the applicability of the approach.
Strengths: The paper clearly motivates the problem it tackles, describes the challenges well and nicely introduces the idea behind their method in an illustrative fashion. Overall the paper is very well-written and structured. In particular, it builds up the method piece-by-piece explaining the choices along the way. As far as I can judge the related work seems to be exhaustive. The experiments are done on interesting problems as far as I can tell and I also appreciated that the authors made their code publicly available.
Weaknesses: My main concern comes from trying to interpret the experimental results and in particular judging the performance compared to MCMC. It seems quite clear from Tables 1 and 2 that the variational approach requires fewer calls to the potential energy function than MCMC, however their performance differences are harder to judge in my opinion.
More specifically, in Table 1 the standard deviations are so large that there is basically no meaningful statistical difference between the shown results especially for the Max Energy, but also the Log-likelihood. Now this might mean that MCMC and the presented method both perform well, but its surprising to me that there is that much inherent variation. Similarly in Table 2, the variance for the Max Energy of MCMC in the first line is huge.
I'm also a bit confused by the Max Energy increases when using a mixture in Table 2.
To better understand how the performance of the variational approach improves during training, it would be nice to see a plot that shows the Max Energy as a function of the training epochs.
Finally, I would have liked to see a discussion of the limitations of the approach. Section 6 has Limitations in the title, but does not actually discuss them in any way beyond extensions of the proposed method.
Technical Quality: 3
Clarity: 4
Questions for Authors: - How do you choose the number of mixture components in practice?
- Why does the MaxEnergy increase when using a mixture distribution as a variational approximation in Table 2?
- How do you explain the huge variance of the Max Energy of MCMC (variable length) in the first row of Table 2?
- What are the main limitations of the current approach?
Confidence: 2
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Section 6 has "Limitations" in the title, but limitations are not discussed, only future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1. To better understand how the performance of the variational approach improves during training, it would be nice to see a plot that shows the Max Energy as a function of the training epochs.
Please see the supplementary PDF for the plot of max energy as a function of energy evaluations. We will include this plot in the final version of the paper.
> Q2. More specifically, in Table 1 the standard deviations are so large that there is basically no meaningful statistical difference between the shown results especially for the Max Energy, but also the Log-likelihood.
The variance of the max energy (and log-likelihood) is measured over independently sampled paths from the target path-measure (the Brownian motion conditioned on the endpoint). Note that the large variance is not due to inaccurate measurements or poor performance of the model but due to the diffusion coefficient of the reference measure. Thus, we expect our model to match these numbers requiring fewer energy evaluations rather than surpass the MCMC results in terms of energy or likelihood. Therefore, the performance measure in Table 1 is the number of evaluations.
> Q3. How do you explain the huge variance of the Max Energy of MCMC (variable length) in the first row of Table 2?
In Table 2, the variance of the max energy is measured across paths of different lengths, which gives very different estimates of the maximum energy. Indeed, when the path’s length is not optimal (too short or too long paths), the maximum energy is very high (the trajectory either has to take shortcuts or can wander into high energy regions). Note that these paths do not affect the estimate of the minimum energy but contribute to the estimate of its mean and variance, which results in the enormous value of the latter.
> Q4. Why does the MaxEnergy increase when using a mixture distribution as a variational approximation in Table 2?
For a single component, our algorithm samples from a low-energy transition path (mode-seeking behavior). When we add more components into the mixture, it starts including less likely, higher-energy paths, covering other modes and increasing the variance and mean. Note that min-max energy does not change with the introduction of several components.
> Q5. Finally, I would have liked to see a discussion of the limitations of the approach. Section 6 has Limitations in the title, but does not actually discuss them in any way beyond extensions of the proposed method.
Thank you for the suggestion. Indeed, currently, the discussion is focused mostly on the future work rather than limitations. We will address this issue in the revised manuscript: (1) computational inefficiency in learning a mixture of Gaussian paths, (2) as already pointed out in part of our future work, the rigidity in defining states A and B to be a point mass with Gaussian instead of any arbitrary set, and (3) as also mentioned in the future work section, our method is limited to a fixed length of transition path instead of varied-length.
> Q6. How do you choose the number of mixture components in practice?
The number of mixture components is a hyperparameter that should be chosen based on the complexity of the system and the available computational budget. Increasing the number of Gaussian mixtures enhances expressivity and improves the model's ability to capture diverse transition paths.
When using learnable weights $w$, less-dominant reaction channels receive smaller weights during training. In our toy experiment in Figure 3, we observed that increasing the number of mixtures beyond the number of reactive channels resulted in only the first two mixtures having significant weights (around 0.5), while higher mixtures had weights close to zero. This suggests that the weights can be used as a proxy to determine the optimal number of components, similar to how principal component analysis identifies significant components and discards less-likely channels.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I feel my original score is still appropriate. | Summary: The Authors of this paper tackle the problem of sampling conditioned SDEs with a specific interest in "transition path sampling", i.e. sampling a Langevin-type SDE undergoing a transition between an initial state (or set of states) $A$ and a final target set $B$. Sampling transition paths efficiently can provide a tremendous boost to research in catalysis or drug design. In this paper, the Authors model the transition path distribution with either (i) a parametrized Gaussian process or (ii) a parametrized mixture of Gaussians processes. These families of priors are then optimized by leveraging a variational formulation of the problem developed by the Authors starting from Doob's $h$-transform.
Strengths: The Authors tackle a challenging problem with a novel variational formulation, which is apt to be optimized by leveraging techniques developed by the generative modeling community within ML. A clear strength of the paper is the sound theoretical analysis justifying the proposed method. Interestingly, the choice of Gaussian process (or a mixture of Gaussian processes) variational priors allows the Authors to simplify the algorithm using analytical results and sidestep lengthy calculations at the deployment phase.
Weaknesses: I find the paper overall clearly written, but I had to go through section 3.2 (Computational Approach) several times to grasp how the method can be deployed in practice. Specifically, I think that it can be improved the explanation of the fact that, because of the Gaussian prior, you only need to model the transition path probability and not the $h$-transform itself, as well as the explanation of the actual optimization step (Reparametrization of Gradients).
The subscript $0,T$ is introduced for the first time in Eq. (6) without explanation and used throughout the manuscript to label variables related to the conditioned process. It might be useful to clarify this notation explicitly.
The experimental evaluation is somewhat limited, especially concerning the Chignolin protein. Over the years, many transition path sampling strategies have been developed, as well as very much related enhanced sampling techniques. It would be nice to have a comparison also to different baselines, as well as a discussion on the computational complexity and actual running times of the baselines.
Technical Quality: 3
Clarity: 3
Questions for Authors: Is the $h$-transform related to the committor function in some way? If so, how does your Thm 1 relates to the variational formulation of the committor (see e.g. Eq. 20 of "Transition Path Theory and Path-Finding Algorithms for the Study of Rare Events" by Weinan E and Eric Vanden-Eijnden)
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limited experimental evaluation
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1. I find the paper overall clearly written, but I had to go through section 3.2 (Computational Approach) several times to grasp how the method can be deployed in practice. Specifically, I think that it can be improved the explanation of the fact that, because of the Gaussian prior, you only need to model the transition path probability and not the h-transform itself, as well as the explanation of the actual optimization step (Reparametrization of Gradients).
Thank you very much for the concrete suggestion. We will improve the presentation to emphasize that we only optimize for $q$ due to the Gaussian parameterization. In the revised manuscript, we also introduce two algorithms (which can be found in the PDF uploaded during the rebuttal) detailing the training and inference procedure. These additions should make our approach more understandable. We welcome any further suggestions you might have on making the paper more accessible.
> Q2. The subscript 0, T is introduced for the first time in Eq. (6) without explanation and used throughout the manuscript to label variables related to the conditioned process. It might be useful to clarify this notation explicitly.
Thank you for pointing this out. We have updated the manuscript to explicitly introduce and clarify the subscript notation. We hope this makes the notation clearer and easier to follow throughout the manuscript.
> Q3. The experimental evaluation is somewhat limited, especially concerning the Chignolin protein. Over the years, many transition path sampling strategies have been developed, as well as very much related enhanced sampling techniques. It would be nice to have a comparison also to different baselines, as well as a discussion on the computational complexity and actual running times of the baselines.
Thank you for raising your concerns. There is minimal difference among MCMC methods regarding the quality of paths, as they all guarantee convergence to the true distribution. We are unaware of any non-MCMC-based transition path sampling procedures that significantly outperform existing techniques in terms of runtime and trajectory quality.
Thus, we view MCMC results not as baselines to surpass but as a gold standard to approximate. Comparing with different variations of MCMC (e.g., improved shooting-point selection [1,2], machine-learned MCMC [3]) might reduce the number of evaluations needed but would not affect the theoretical guarantees. Since the effectiveness of these techniques varies greatly across different systems, we did not include such comparisons. However, we will discuss the computational complexity and running times in the revised manuscript to provide a clearer picture of the performance trade-offs involved.
As for Chignolin, we showcased we could find plausible paths with a reasonable amount of energy evaluations. However, due to the already high dimensionality of Chignolin it is challenging to sample a meaningful ensemble of transition paths with existing methods. This highlights the advantage of our approach.
> Q4. Is the h-transform related to the committor function in some way? If so, how does your Thm 1 relates to the variational formulation of the committor (see e.g. Eq. 20 of "Transition Path Theory and Path-Finding Algorithms for the Study of Rare Events" by Weinan E and Eric Vanden-Eijnden)
The committor functions defined in eq. (18) of [4] is different from the h-transform. Indeed, the committor function $q_{+}$ is time-independent and satisfies eq. (18) in [4], while the PDE for the h-transform (eq. (8b) of our paper) includes the time-derivative. The committor function can be obtained as the integration of $h$ over different event times $T$, however, this is beyond the scope of the current paper.
References
[1] P. G. Bolhuis, D. Chandler, C. Dellago, and P. L. Geissler, 2002, “TRANSITION PATH SAMPLING: Throwing Ropes Over Rough Mountain Passes, in the Dark” Annual Review of Physical Chemistry, vol. 53, no. 1. Annual Reviews, pp. 291–318.
[2] J. Juraszek and P. G. Bolhuis, 2008, “Rate Constant and Reaction Coordinate of Trp-Cage Folding in Explicit Water,” Biophysical Journal, vol. 95, no. 9. Elsevier BV, pp. 4246–4257..
[3] H. Jung et al., 2023, “Machine-guided path sampling to discover mechanisms of molecular self-organization,” Nature Computational Science, vol. 3, no. 4. Springer Science and Business Media LLC, pp. 334–345.
[4] Vanden-Eijnden, Eric. "Transition-path theory and path-finding algorithms for the study of rare events." Annual review of physical chemistry 61 (2010): 391-420.
---
Rebuttal Comment 1.1:
Comment: I thank the Authors for their thorough rebuttal and clarifications. I will keep my evaluation to "Accept". | Summary: The submitted manuscript presents a variational formulation of Doob’s h-transform, leading to a novel (simulation-free) computational approach for rare event sampling in transition paths. The task of interest involves conditioning a dynamical system driven by Brownian motion with a known drift term to reach a given endpoint. In theory, this terminal condition can be addressed using Doob’s h-transform, resulting in an associated SDE for the conditional dynamic. However, practical implementation requires knowledge of the h-transform. The authors propose a variational problem formulation that provides the necessary information about the h-transform as its solution. Solving this variational problem results in a computational approach for simulating the conditional dynamic. Compared to existing methods, it is claimed that the proposed approach avoids importance sampling estimators and expensive trajectory simulations. The method is tested on both synthetic and real datasets.
Strengths: The paper introduces a novel variational objective to describe Doob’s h-transform, leading to a promising and innovative computational approach for simulating the conditional dynamics of transition paths. Although I appreciate the general approach presented in Section 3.1, I am not convinced by the computational approach proposed in Section 3.2 due to several reasons pointed out below. If these issues can be addressed, I believe that the approach can be further utilized to construct highly efficient sampling strategies.
Weaknesses: - The proposed approach in Section 3.2.1 introduces certain issues. The connection to Doob’s h-transform established in Theorem 1 is only guaranteed if an exact solution to the optimization problem (9) or (11) is found. However, by introducing the Gaussian parametrization of $q_{t|0,T}$, this guarantee is lost. There is a lack of discussion regarding the implications and potential effects of this parametrization on the overall accuracy and validity of the method.
- Moreover, the formulation of the task of solving equation (12) given $q_{t|0,T}$ is misleading. In reality, you are solving an inverse problem here. Given the solution of the PDE in (12), your goal is to reconstruct $u_{t|0,T}$, which is generally an ill-posed problem. For instance, introducing uncertainties into the model description via the drift $b_t$ or diffusion $\Xi_t$ can lead to significant challenges. This scenario is likely when applying the approach in practice under model misspecification. A critical question to address is the robustness of the proposed framework against model misspecification, such as small perturbations in $b_t$.
- The claim that the approach is simulation-free is not entirely clear. In line 173, it is stated that the Gaussian parametrization allows for the generation of arbitrary samples. However, this assumes that the states $x_{t|0,T}$ are independent for all $t$, which is not true when $x_{t|0,T}$ is a solution of the SDE in (10). The dependence between the states needs to be accounted for in the sampling process, and this aspect seems to be overlooked.
- I am wondering what the measured quantities in the numerical experiments actually reveal about the correctness of the proposed approach. Evaluating the maximum energy and the likelihood might not provide sufficient information about the accuracy of the estimated distribution. If the sampled paths only follow high likelihood regions, this does not necessarily indicate that the correct distribution has been captured. The objective should be to estimate quantities of interest related to the conditioned SDE in (6). However, it is not clear whether the simulated paths correspond to accurate simulations of (6). For instance, how does the method perform when estimating rare event probabilities or other Monte Carlo estimators with respect to (6)? This aspect needs to be thoroughly addressed to validate the effectiveness of the proposed approach.
Technical Quality: 2
Clarity: 3
Questions for Authors: - To enhance the practical applicability of the proposed framework, it is crucial to study the impact of inexactly solving the variational problems (9) or (11). Specifically, how do errors propagate to the conditioned dynamical system driven by the SDE (6) when an approximate solution of (9) or (11) is used? Understanding this error propagation is essential for assessing the robustness and reliability of the proposed method in practical scenarios.
- Is the Optimization problem in (9) well-posed? This means, is the optimal solution unique?
- In several instances, there are missing commas in the notation for the inner product (e.g., equations (9b) and (12)).
- Many important mathematical details and assumptions are missing. Most importantly, what are the assumptions on the drift vector field $b_t$ to ensure the well-posedness of the proposed scheme?
- Regarding Section 3.2.2: Instead of introducing $\xi_{\min}$ I suppose that one could also directly work with a pseudoinverse when defining $G_t^{-1}$.
- In Proposition 4, should it be $u_{t|0,T}^{(k)}$ on the right hand side of the equation for $u$?
- When referring to probability density functions, it's important to use proper notation. Instead of $\rho(x_t=x)$, it would be clearer to use $\rho_t(x)$ depending on the context.
- In the abstract, you claim that no "inefficient" importance sampling estimators are required. However, it would be beneficial to compare this approach with such estimators to demonstrate its advantages more clearly.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: - There is no discussion about error propagation arising due to the Gaussian parametrization.
- The approach is limited to conditioning the sample path on point sets $x_T = B$.
- Limited statistical evidence is presented in the numerical experiments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >Q1: The connection to Doob’s h-transform established in Theorem 1 is lost due to the Gaussian parametrization of q_t. Lack of discussion regarding the implications of this parametrization on the accuracy and validity of the method.
We highlight the potential lack of expressivity for the Gaussian parameterization through experiments in Fig. 3 and lines 283-286, where a single Gaussian parameterization fails to capture several modes of the transition path. In this case, our mixture of Gaussian parameterization (proposed in Sec. 3.2.3) is necessary.
The approach of finding an approximate solution by optimizing within a tractable family is common in variational methods [1]. Indeed, our contributions include deriving a variational objective for Doob’s h-transform in Thm 1. Analysis of the accuracy of particular variational families is difficult to characterize for general problems and is beyond the scope of this work.
>Q2: The formulation of the task of solving equation (12) given q_t is misleading. Given the solution of the PDE in (12), your goal is to reconstruct u_t, which is generally an ill-posed problem.
As we point out in line 152, indeed, numerous vector fields $u_{t|0,T}$ can satisfy eq. (12) for the given densities $q_{t|0,T}$. In Proposition 3, our goal is to simultaneously parametrize both $q_{t|0,T}$ and $u_{t|0,T}$ such that they satisfy (9b) (thus avoiding constrained optimization problem).
>Q3: For instance, introducing uncertainties into the model description via the drift b_t or diffusion \Sigma_t can lead to significant challenges.
We agree that analyzing the robustness of the proposed approach against model misspecification is crucial for practical applications. However, this analysis is beyond the scope of the current work.
>Q4: The claim that the approach is simulation-free is not entirely clear.
The sampling of independent $x_{t|0,T}$ for a given time $t$ is justified by parameterizing marginal densities rather than the entire path measure. Indeed, our parametrization in eq. (15) defines the parameters of the marginals that change continuously over time (the SDE that has these marginals can be obtained via Proposition 1). The optimized objective in Theorem 1 relies only on the samples from these marginals, which means that during training, no simulation is needed. When sampling paths, the drift term does not need to be evaluated, allowing for efficient sampling.
We added a pseudocode of the proposed training and inference algorithm in the rebuttal. We will attempt to clarify this further in the manuscript.
>Q5: Evaluating max energy and the likelihood might not measure the accuracy of the estimated distribution of the conditioned SDE in (6).
Indeed, the likelihood on its own does not capture the distribution of sampled paths. However, transition path sampling is an extremely highdimensional problem (D=66\*2\*1000), and thus it is difficult to characterize accuracy of matching the full distribution. We use conventional metrics, such as describing paths by the transition state (point with the highest energy) [2,3], or estimating the likelihood of trajectories [4].
>Q6: It is crucial to study the impact of inexactly solving the variational problems (9) or (11).
In this paper, we learn the target path-measure $P^*$, which corresponds to the Doob’s h-transform. Corollary 2 shows that the optimized objective corresponds to the KL-divergence between the parameterization and the reference measure $D_{\text{KL}}(Q:P^{\text{ref}})$. Using the Pythagorean relation (see, e.g., Theorem 3.3 in [5]), one can show
$$D_{\text{KL}}(Q:P^{\text{ref}})=D_{\text{KL}}(Q:P^*)+D_{\text{KL}}(P^*:P^{\text{ref}}),$$
where the last term is a constant. Thus, the minimized objective is the KL-divergence between the parameterization and the target measure. Thank you for your suggestion, we will add the corresponding discussion in the final version.
>Q7: Is the Optimization problem in (9) well-posed? This means, is the optimal solution unique?
As stated in Theorem 1, the optimization problem in (9) has a unique solution (see proof in Appendix A.2).
>Q8: In several instances, there are missing commas in the notation for the inner product (e.g., equations (9b) and (12)).
Both equations (9b) and (12) contain the divergence operator rather than an inner product. The notation for the divergence operator is $\langle\nabla_x, \cdot\rangle = \text{div}(\cdot)$, which is introduced in line 98.
>Q9: Missing mathematical details and assumptions. Most importantly, what are the assumptions on the drift vector field b_t to ensure the well-posedness of the proposed scheme?
For the mathematical details and assumptions, we refer the reader to [6] which defines rigorously the necessary conditions for the Doob’s h-transform and the corresponding PDEs. The necessary assumptions for our result are stated at the beginning of Appendix A.2.
>Q10: Instead of introducing \xi_{min} I suppose that one could also directly work with a pseudoinverse when defining G_t^-1.
We will discuss this option in the final version of the paper.
>Q11: In Proposition 4, should it be u_t on the right hand side of the equation for u?
Yes, we will correct it.
>Q12: Justification of the “inefficiency” of importance sampling is missing.
For example, the recent work [7], which relies on importance sampling, requires 120M energy evaluations to output a reasonable transition path.
>Q13: The approach is limited to conditioning the sample path on point sets x_T = B.
In the transition path sampling, the rare event is usually represented by the point $B$. If several rare events are given, we can run our method several times using different values of $B$. Including conditioning on other sets is a direction of future studies.
>Q14: When referring to probability density functions, it's important to use proper notation. Instead of \rho(x_t = x), it would be clearer to use \rho_t(x).
We will clarify the notation.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: Thank you very much for the detailed response.
I might be misunderstanding the entire simulation-free approach, but don’t you sample independently at each time $t$ from the marginal distributions to generate paths of the conditioned SDEs? (Please correct me when I am misunderstanding something) While this sampling scheme might be consistent under certain assumptions on the drift and diffusion coefficients, it certainly isn’t universally applicable. Generally, how do you ensure the continuity of the sampled paths? This concern remains relevant regardless of whether you employ Gaussian or Gaussian mixture parameterizations.
I would be less critical of my concern if there were clear empirical experiments to support the correct sampling distribution. In particular, the synthetic data example presents an opportunity to conduct a statistically robust case study.
Additionally, I remain concerned about the lack of explicit mathematical assumptions and the challenges associated with conditioning on point sets. Doesn’t it impose technical challenges when applying Doob‘s h-transform for conditioning on null sets? For example, the application of Jamison (1975) requires a positive function h (as also noted in line 514).
While I find the proposed framework very promising, I am maintaining my score due to the concerns I've outlined.
---
Reply to Comment 1.1.1:
Title: Clarifying Misunderstanding of Test vs. Train Sampling
Comment: **Testing vs. Training**
> I might be misunderstanding the entire simulation-free approach, but don’t you sample independently at each time 𝑡 from the marginal distributions to generate paths of the conditioned SDEs?
Simulation-free refers to our *training* method, where our objective in Thm 1 only requires samples from the time-marginals $q_{t|0,1}$ of the conditioned SDE. This justifies neglecting full trajectory information and sampling directly from $q_{t|0,1}$ (without SDE simulation) during training.
See Algorithm 1 of the general-response PDF or cell 20 of the [anonymized code](https://anonymous.4open.science/r/TPS-Doob-843E/notebooks/tps_gaussian.ipynb).
> … don’t you sample independently at each time $𝑡$ from the marginal distributions to generate paths of the conditioned SDEs? Generally, how do you ensure the continuity of the sampled paths?
We **do not** sample each marginal independently to generate paths of the conditioned SDE. Instead, our Algorithm 2 in the general-response PDF outlines our sampling approach.
For a given set of time marginals $q_{t|0,1}$, we can calculate the appropriate drift $u_{t|0,1}$ for an SDE with given diffusion coefficients. **At generation (test) time, we simulate this SDE**: $dx_{t|0,1} = u_{t|0,1}(x_{t|0,1}) dt + \Xi_t dW_t$ starting from $x_0 \sim \mathcal{N}(A, \sigma_{\text{min}} \mathbb{I})$. Thus, our sampled paths are continuous (up to the SDE solver error).
Note that $u_{t|0,1}$ implies a control drift term $v_{t|0,1}$ (in Eq. 10) as in Eq. 14, but we do not need to evaluate the costly drift term $b_t$ at generation time.
See cells 24-25 of [anonymized code](https://anonymous.4open.science/r/TPS-Doob-843E/notebooks/tps_gaussian.ipynb).
> (Please correct me when I am misunderstanding something)
Thank you for pinpointing this misunderstanding! We hope that the above points help to address these concerns.
Based on your feedback, we agree that the following changes will greatly improve the final manuscript:
- Discussion of the sampling procedure (and Algorithm box) will be included in the main text
- Distinctions between the time marginals $q_{t|0,1}$ of the conditioned process and the full path measure or SDE will be carefully emphasized throughout.
- We will visualize individual trajectories in Figure 2 to show that the paths are continuous and similar to MCMC trajectories.
- We will modify line 47 to “our training method is simulation-free”. It appears that all other usage of “simulation-free” explicitly refers to training or optimization, rather than generation.
**Evaluating Sampling Trajectories**
> I would be less critical of my concern if there were clear empirical experiments to support the correct sampling distribution.
Thank you for this suggestion. For the Müller-Brown experiment in Figure 2, we report Wasserstein-1 distances between (i) samples from expensive MCMC simulation (which we treat as ground-truth) and (ii) sample trajectories generated using Algorithm 2 with our learned model. We report the mean and std of the W1 distance across discrete $t$:
| Wasserstein W1| Value |
|-----------------------|-------|
| Mean | 0.1251 |
| Std | 0.0392 |
| Median | 0.1130 |
| Min | 0.0393 |
| Max | 0.2115 |
We will report further numbers and plots in the revised version of the manuscript.
**Conditioning on Dirac Deltas**
> …challenges associated with conditioning on point sets. Doesn’t it impose technical challenges when applying Doob‘s h-transform for conditioning on null sets?
In the case of the conditioning on a point mass $\delta(x-B)$, the h-function becomes a density, i.e.
$h(y,t) := \rho(x_T = B | x_t = y)$ is the density of the transition probability $\mathbb{P}(x_T \in dx| x_t = y)$.
Conditioning on point sets is commonly used [1 Thm 7.11,2,3], and indeed, all the derivations in Appendix A hold if we take $h(y,t):=\rho(x_T = B | x_t = y)$. We apologize for the confusion, we will clarify it in the next version.
[1] Särkkä, Simo, and Arno Solin. Applied stochastic differential equations. Vol. 10. Cambridge University Press, 2019.
[2] Heng, J., De Bortoli, V., Doucet, A. and Thornton, J., 2021. Simulating diffusion bridges with score matching. arXiv preprint arXiv:2111.07243.
[3] Liu, Xingchao, Lemeng Wu, Mao Ye, and Qiang Liu. "Let us build bridges: Understanding and extending diffusion generative models." arXiv preprint arXiv:2208.14699 (2022).
---
Rebuttal 2:
Title: References
Comment: [1] Blei, Kucukelbir, McAuliffe. “Variational Inference: A Review for Statisticians”, 2016.
[2] Jónsson, H., Mills, G. and Jacobsen, K.W., 1998. Nudged elastic band method for finding minimum energy paths of transitions. In Classical and quantum dynamics in condensed phase simulations (pp. 385-404).
[3] Weinan, E., Ren, W. and Vanden-Eijnden, E., 2004. Minimum action method for the study of rare events. Communications on pure and applied mathematics, 57(5), pp.637-656.
[4] C. Dellago, P. G. Bolhuis, and P. L. Geissler, 2006, “Transition Path Sampling Methods,” Computer Simulations in Condensed Matter Systems: From Materials to Chemical Biology Volume 1. Springer Berlin Heidelberg, pp. 349–391.
[5] Brekelmans, Rob, and Kirill Neklyudov, 2023, "On Schrödinger Bridge Matching and Expectation Maximization." In NeurIPS 2023 Workshop Optimal Transport and Machine Learning.
[6] B. Jamison, 1975, „The Markov processes of Schrödinger“, Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, Bd. 32, Nr. 4. Springer Science and Business Media LLC, S. 323–331.
[7] Holdijk, L., Du, Y., Hooft, F., Jaini, P., Ensing, B. and Welling, M., 2024. Stochastic optimal control for collective variable free sampling of molecular transition paths. Advances in Neural Information Processing Systems, 36. | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for their diligent review and valuable comments, which helped us to improve the manuscript. In this general response, we would like to address shared concerns raised by more than one reviewer.
To make the implementation and training procedure for our method clearer, we have included pseudocode for the training and inference processes in the revised manuscript. We hope that this addition makes the approach more accessible and easier to follow. We have also uploaded this information as a PDF in the rebuttal.
Further, some reviewers expressed interest in the training behavior (e.g., loss, training curves) of our model. To address this, we have included additional plots showcasing the model's performance in the manuscript and have uploaded these as a PDF in the rebuttal. The figures provided are:
* Loss vs training iterations.
* Transition states (i.e., maximum energy) of newly sampled paths vs number of energy evaluations.
We are providing individual responses to the questions raised by each reviewer.
Pdf: /pdf/a5ca541bc7f95bf5310b7d0cbff700ec7aad47e6.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper proposes a variational formulation of the Doob's h-transform to change the problem from the expensive simulations of trajectories to an optimization problem over possible trajectories from given initial point and end point. The model parameterization introduced imposes the desired boundary conditions by design and uses a mixture of Gaussian as the variational family. The variational formulation offers an alternative to expensive simulations and MCMC which is hard to scale. The proposed framework is applied on a simulated case study and real world problem in material science/computational chemistry of transition path sampling which is a study of molecule transitions betweeb local energy minima and metastable states under random fluctuations. The results are along expected lines where it is possible to scale the solution on computationally intractable problems with MCMC and other sampling based methods.
Strengths: 1. The paper is well written, with sound math, a lot of references on recent work and older literature on the subject. I found the introduction to be a great summary of the work.
2. The idea is sound and through the use of variational framework to solve a complex problem of estimating rare event probabilties by sampling forward trajectories brings out the desired qualities of sample efficiency, reducing search space and matching a complex target distribution with an easier to sample variational distribution. As also observed in Table1 and 2, MCMC cannot be scaled to certain large scale experiments where Variational inference can give good results.
3. The work finds application for solving transition path sampling problem which is important in material science and chemistry from what it seems(I am not so well acquanited with those domains.) The case study on protein is well documented and explained.
4. The figures and illustrations are clean and support the narrative.
Weaknesses: Minor things
- Increase font sixe for Table1, bold the important results.
- I would have liked to see a bit more dicussion on the dimensionality, what are the typical values of D ..
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. I am curious, similar to Figure 3, the authors can show an experiment with familiy of dtistributions other than Gaussian. How does the expressivity then come to play ?
2. as for any SDE problem solution, what is the effect of discretization values and parameters in practice as given in line 184-185.
3. How big is the constraint of fixed length(T) on transition paths in practice as solved in this method ?
4. What is the behaviour of the problem when there are multiple basins of attraction ?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors have addressed the challenges quite well and what the future research directions could be taken.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: I would have liked to see a bit more discussion on the dimensionality, what are the typical values of D ..
Thank you for your valuable feedback. We will discuss this in the next revision of the manuscript. The dimension $D$ depends on the specific system being modeled. For instance, alanine dipeptide with its $22$ atoms results in $D = 22 \times 3 = 66$. If modeled with second-order dynamics, $D$ doubles to $132$. Chignolin has $166$ atoms and results in $D = 996$ in our experiments. Large molecular systems, such as proteins with explicit solvent environments, can exceed $10,000$ atoms, with some even reaching $100,000$ atoms. As such, TPS approaches must accommodate these high-dimensional systems.
> Q2: I am curious, similar to Figure 3, the authors can show an experiment with familiy of dtistributions other than Gaussian. How does the expressivity then come to play ?
Note that the crucial restriction on the family of marginals is the availability of the analytic form of the vector field satisfying the Fokker-Planck equation in order to avoid the min-max formulation in Corollary 1. We conducted the experiments using Corrolary 1 in toy settings but found it unstable when scaled to real-world systems.
> Q3: as for any SDE problem solution, what is the effect of discretization values and parameters in practice as given in line 184-185.
We do not discretize time during training, the time variable is drawn uniformly $t \in [0,T]$ in order to estimate the time-integral. We will clarify this in the final version of the manuscript.
These training details can also be seen in the training algorithm uploaded in the rebuttal PDF.
> Q4: How big is the constraint of fixed length(T) on transition paths in practice as solved in this method?
In general, there are ways to estimate the transition time $T$ [1,2,3], and this problem is much easier than sampling the path itself. However, we consider fixed trajectory length to be one of the limitations of our algorithm and a direction of future work.
> Q5: What is the behaviour of the problem when there are multiple basins of attraction?
Our algorithm is designed to solve the concrete problem statement that trajectories end in a particular state $B$. However, due to the stochastic nature of the dynamics, there are multiple pathways to reach B. The most vivid example is presented in Figure 3, where we consider a symmetric potential, and the sampled paths follow two different ways. Similarly, in the Müller-Brown experiment, despite local minima attracting points, transitions still end in the target state. This illustrates that the algorithm can handle multiple basins of attraction by sampling diverse paths that all converge to the designated end state.
> Q6: Increase font size for Table1, bold the important results.
We will improve the formatting of the tables and the presentation of our results in the revised manuscript.
References
[1] G. H. Taumoefolau and R. B. Best, 2021, “Estimating transition path times and shapes from single-molecule photon trajectories: A simulation analysis,” The Journal of Chemical Physics, vol. 154, no. 11. AIP Publishing.
[2] H. S. Chung, J. M. Louis, and W. A. Eaton, 2009, “Experimental determination of upper bound for transition path times in protein folding from single-molecule photon-by-photon trajectories,” Proceedings of the National Academy of Sciences, vol. 106, no. 29. Proceedings of the National Academy of Sciences, pp. 11837–11844..
[3] E. Suárez, J. L. Adelman, and D. M. Zuckerman, 2016, “Accurate Estimation of Protein Folding and Unfolding Times: Beyond Markov State Models,” Journal of Chemical Theory and Computation, vol. 12, no. 8. American Chemical Society (ACS), pp. 3473–3481. | null | null | null | null | null | null |
Time-Varying LoRA: Towards Effective Cross-Domain Fine-Tuning of Diffusion Models | Accept (poster) | Summary: This paper introduces a low-rank adapter, Terra, for effective cross-domain modelling through the construction of a continuous parameter manifold. This approach facilitates knowledge sharing across different domains by training only a single low-rank adaptor. The expressiveness of the model was analyzed theoretically and bounds for approximation error were given. Extensive experiments on various UDA and UG benchmarks demonstrate the effectiveness of the proposed method. Ablation studies verify the robustness of the proposed framework to variations of key components.
Strengths: **(S1)** The proposed Terra is a simple yet effective PEFT method for diffusion models fine-tuning, which successfully generates various images in a customized domain flow.
**(S2)** This paper is well-written and easy to follow. The proposed framework facilitate effective and flexible knowledge sharing across different domains while maintaining parameter efficiency.
**(S3)** Terra involves constructing a continuous parameter manifold using a time variable, with its expressive power theoretically analyzed and smooth interpolation empirically verified.
**(S4)** Terra can serve as a plugin for existing UDA and DG methods to help alleviate domain shifts, achieving state-of-the-art performance on various benchmarks.
**(S5)** Codes are available for reproducibility. (Thanks for providing the codes, resolved some key questions on the implementations of the method)
Weaknesses: **(W1) Qualitative Evaluation**: Although Figure 3 is interesting as it demonstrate how Terra can handle morphing under different scenarios, it is suggested that the authors perform more qualitative samples to evaluate the interpretability of Terra more comprehensively.
**(W2) Comparative analysis**: Although the paper includes extensive comparisons of Terra with two baseline methods in the UDA experiments, it would be beneficial to extend comparisons with recent CNN-based and ViT-based UDA methods, such as CoVi [1] and PMTrans [2].
**(W3) Clarity of Technical Details**: The rationale behind randomly sampling the value of "t" for the DG experiments to generate diverse domains is unclear. While Figure 6 shows that the learned time variables cluster within the existing source domains, the distribution of the target domain is not adequately explained.
[1] Contrastive vicinal space for unsupervised domain adaptation, ECCV 2022.
[2] Patch-mix transformer for unsupervised domain adaptation: A game perspective, CVPR 2023.
Technical Quality: 4
Clarity: 4
Questions for Authors: In addition to the above weaknesses, I have the following questions regarding the theoretical part:
**(Q1) Scalability to multiple domains:** To my understanding, Terra can express knowledge of multiple domains (e.g., in the DG setting). However, the paper only demonstrates its equivalence with two LoRAs in Theorem 1. I am curious about the scalability of Terra when applied to more domains or tasks (since this greatly enhances the potential of Terra under real-world scenarios (i.e., multiple environments)).
**(Q2) Comparison with other LoRA variants**: Could the authors provide an analysis of how Terra differs from other LoRA variants, such as MoLE [3], particularly on the expressiveness of these models?
[3] Mixture of LoRA Experts, ICLR 2024.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors discuss the limitations adequately in the Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: #### [W.1] Qualitative Evaluation
> We appreciate the reviewer's suggestion. we have included additional qualitative samples in Fig. r2 of the Rebuttal-PDF. These samples further demonstrate Terra's ability to handle morphing under various scenarios.
#### [W.2] Comparative Analysis
> We thank the reviewer for suggesting additional comparisons. In response, we have conducted experiments with CoVi and PMTrans, and the results are presented in Tab. r4 of the Rebuttal-PDF. Notably, Terra consistently improves performance in all tasks with those UDA methods, further verifying the effectiveness of our method.
#### [W.3] Clarity of Technical Details
> Thank you for your comment. In the DG experiments, we first use contrastive learning loss to train a network $g(\cdot)$ to predict sample-level $t$ for Terra. This approach allows us to better capture inter-domain and intra-domain styles differences. After training, randomly sampling the value of $t$ can generate more diverse samples between domains. Specifically, we use two-dimensional $t$, with each dimension sampled from -2 to 2 at intervals of 0.1 to generate diverse samples. We will include these details in the revision.
> Moreover, we present the distribution of the **learned time variable** of the target domain in Fig. r4 of the Rebuttal-PDF. As illustrated, the random sampling of $t$ effectively covers the target domain, offering a clearer understanding of the rationale behind our approach that the generated samples may bring useful information for the target domain under the DG setting.
#### [Q.1] Scalability to Multiple Domains
> We appreciate the reviewer's interest in the scalability of Terra. Our theoretical analysis indeed extends to multiple domains or tasks.
>
> **Theorem 1 can be generalized to multiple matrices using higher-order generalized singular value decomposition (HO GSVD)** [r1]. Specifically, given K low-rank adapters $\Delta W_k \in \mathbb{R}^{m \times n}$ for K domains/tasks, we can concatenate the matrices horizontally and vertically to form $H$ and $V$, and let $k = \max\\{rank(H), rank(V)\\}$. Then, similar to Eqs. (14) and (15), based on HO GSVD, each matrix can be exactly factored as $\Delta W_i = Y K_i X$, where $Y \in \mathbb{R}^{m \times k}$, $K_i \in \mathbb{R}^{k \times k}$, and $X \in \mathbb{R}^{k \times n}$. We can then construct $\mathcal{K}(t)$ to satisfy $\mathcal{K}(t_i) = K_i$ for fixed $t_i$. In the case of multiple domains or tasks, we propose using a vector $\symbfit{t}_i$ with a small increase in parameters. The matrix $K_i$ can be constructed using interpolation methods, such as polynomial interpolation or spline interpolation, or non-linear time-varying matrices as those in Table 6.
>
> In summary, this approach allows Terra to use one adapter structure to represent multiple LoRAs with fewer parameters. When the domains share knowledge—implying $k$ is small—the required parameters are further reduced. Additionally, for unknown tasks, we can determine the matrices $\mathcal{K}(t_i)$ using least squares or other optimization algorithms [r2], enabling a meta-learning approach.
>
>
>[r1] Ponnapalli SP, Saunders MA, Van Loan CF, Alter O. A Higher-Order Generalized Singular Value Decomposition for Comparison of Global mRNA Expression from Multiple Organisms. PLOS ONE, 2011.
>
>[r2] Friedland S, Torokhti A. Generalized rank-constrained matrix approximations. SIAM Journal on Matrix Analysis and Applications, 2007.
#### [Q.2] Comparison with Other LoRA Variants
> For cross-domain learning based on MoLE, it can be seen as respectively training LoRAs on different domains first and then training a gating function to combine the trained LoRAs. While both MoLE and Terra are designed for customization of diffusion models, they differ in several key aspects:
> - **Objective**: MoLE focuses on combining multiple pre-trained LoRAs to achieve multi-concept customization, whereas Terra aims to learn a single adapter structure that can capture multiple domains and construct a domain flow for generation.
> - **Training**: MoLE only optimizes the gating function to preserve the characteristics of trained LoRAs on different domains, whereas **Terra participates in the diffusion fine-tuning stage** and aims to learn domain-general knowledge and domain-specific knowledge, allowing for control over different domains through a time variable.
> - **Expressiveness**: MoLE uses a separate gating function for each LoRA layer, which requires entropy-based balancing to resolve conflicts when combining multiple LoRAs. In contrast, Terra achieves domain adaptation through a single time variable $t$, making it more stable. For two-domain interpolation, Terra and MoLE have similar expressiveness. Considering two domains with time variables $t_1$ and $t_2$, we have
> $$
> \begin{equation}
> \Delta W(\alpha t_1 + (1-\alpha)t_2) = B\mathcal{K}(\alpha t_1 + (1-\alpha)t_2)A =(\alpha t_1 + (1-\alpha)t_2)BWA+ BA = \alpha \Delta W(t_1) + (1-\alpha) \Delta W(t_2).
> \end{equation}
> $$
> This is equivalent to the linear arithmetic composition in MoLE. As shown in the response to [Q.1], this conclusion can be extended to interpolation with three or more LoRAs.
>
> Finally, the relation between MoLE and Terra is similar to that between **Gaussian Mixture Model (GMM)** and **Gaussian Process (GP)**. GMM composes a complex distribution by multiple Gaussian distributions, and GP is a distribution over functions within a continuous domain (such as time). Analogously, MoLE excels at the composition capabilities, while Terra excels at constructing a manifold.
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: Thanks the authors for their detailed response. The rebuttal addressed my concerns very well. I am happy to raise my scores. I think this is a technically strong paper with solid theoretical and empirical results, and it is is ready to be accepted. | Summary: The paper presents a variant of LoRA, with additional time variable conditioned low-rank square matrix, for fine-tuning a diffusion model for unsupervised domain adaptation and domain generalization. Besides, the paper also study how to better apply the proposed Terra on UDA and DG tasks. Compared to prior arts, the proposed method achieve better performance.
Strengths: - The paper presents an interesting idea on introducing a "time" condition on the LoRA matrix, which paves a path from source domain towards target domain.
- The paper also presents in depth analysis, providing interesting insight
- The paper is generally well-written and technically sound
Weaknesses: - I feel the term "time" and "t" contradicts to the widely used time step t in diffusion model. Although I do see the authors used another symbol for diffusion models' timestep, it would be much better to use another term for the proposed "time" term to avoid confusion.
- I appreciate the provided possible forms of Terra in Tab 6. Yet, I am wondering if the authors can provide more insights/principles in terms of how to choose the function form. Besides, there seems missing an ablation on the different functions for Terra, to justify why "Linear" is used for generative interpolation/UDA and "cosine-sine" is used for DG?
Technical Quality: 3
Clarity: 3
Questions for Authors: Generally I feel the paper propose a simple yet interesting method for the domain generalization problem. Please see weakness for my questions.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No other limitation as far as I can see.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: #### [W.1] I feel the term "time" and "t" contradicts to the widely used time step t in diffusion model. Although I do see the authors used another symbol for diffusion models' timestep, it would be much better to use another term for the proposed "time" term to avoid confusion.
> We appreciate the reviewer's concern. As you mentioned, we have used the symbol $\tau$ to differentiate our notation from the timestep in diffusion models. We intend to retain the term "time" because Terra draws inspiration from multiple research fields, including **Fluid Dynamics** and **Control Theory**, where "time" and "t" are also commonly used notations.
>
> - In the context of fluid dynamics, for each time variable $t$, the matrix updates of Terra is $\Delta W(t)$, enabling a time-dependent velocity field $\frac{d}{d t} \Delta W(t)$. Terra constructs a "LoRA flow" in the parameter space based on the Lagrangian and Eulerian descriptions [r1]. Therefore, after the cross-domain diffusion fine-tuning, Terra demonstrates the ability to generate domain flow.
>
> - From a control theory perspective, Terra can be viewed as solutions to time-varying systems. For instance, consider a linear system $\dot x(t) = A x(t)$, where the "state vector" $x(t)$ can be seen as the continuous image feature across different domains, and a closed-form solution is $x(t) = e^{A(t)} x_0$ [r2]. Here is the "exponential" form Terra with the time-varying matrix function.
>
> To clarify, we will provide a more detailed explanation of $\tau$ used in the diffusion model. Specifically, "since $t$ in this paper refers to the time variable in Terra, we use $\tau$ here to represent the timestep $t$ in the diffusion model".
>
> [r1] Villani C. Optimal transport: old and new. Berlin: springer, 2009, Pages 26.
>
> [r2] Williams R L, Lawrence D A. Linear state-space control systems. John Wiley & Sons, 2007, Pages 52-55.
#### [W.2a] I appreciate the provided possible forms of Terra in Tab 6. Yet, I am wondering if the authors can provide more insights/principles in terms of how to choose the function form.
> We appreciate the reviewer's interest in the possible forms of Terra presented in Table 6. To provide more insights, we elaborate on the guiding principles behind the choice of the three forms:
>
> - **Linear**: The $tW+I$ is the simplest form, related to a straight and steady flow, which is sufficient for two domains according to Theorem 1 and 2. Its constant velocity of weight changes ensures smooth morphing and is suitable for simple interpolating between two domains under the UDA setting.
>
> - **Cosine-Sine**: This form is adopted because of the bounded range and non-linearity of trigonometric functions, preventing image collapse during generation and enabling a complex parameter manifold to capture relationships between multiple domains. We recommend using this form in complex scenarios, such as interpolating multiple domains in DG.
>
> - **Exponential**: $e^{tW} = I + \sum_{k=1}^{\infty} \frac{t^k}{k!} W^k$, implemented using `torch.matrix_exp`, also defines a smooth curve in a high-dimensional manifold. This form is more expressive and suitable for handling multiple domains, as it enables feature transformations as mentioned in our response to [W.1]. Notably, it related to three types of transformations: scalings, rotations, and shears [r3].
>
> [r3] Ronald N. Goldman, VII.3 - Transformations As Exponentials, Graphics Gems II, 1991, Pages 332-337.
#### [W.2b] Besides, there seems missing an ablation on the different functions for Terra, to justify why "Linear" is used for generative interpolation/UDA and "cosine-sine" is used for DG?
> We appreciate the reviewer's comment and would like to provide clarification on this point.
> - For tasks involving only two domains, such as generative interpolation and UDA, the expressive abilities of the "Linear" and "cosine-sine" functions are equivalent. Specifically, when $t=0$, both terms of the middle matrix reduce to the identity matrix $I$, and when $t=1$, the learnable matrices are constrained equally.
> - However, for tasks involving more domains, such as DG, we have conducted an empirical ablation study to investigate the effectiveness of the "cosine-sine" form compared to the "linear" form. The results, presented in Table 7 of the manuscript and reproduced below, demonstrate the superiority of the "cosine-sine" form on the PACS dataset in DG scenarios.
> \begin{array}{lcccccl}
\hline
\textbf{Method} & \text{A} & \text{C} & \text{P} & \text{S} & \text{Average} \newline
\hline
\text{Linear} & 87.47 & 80.17 & 97.85 & 77.16 & 85.66 \newline
\text{Cosine-sine (dim=1)} & 88.29 & \textbf{82.36} & 97.53 & 73.31 & 85.37 \newline
\text{Cosine-sine (dim=2)} & \textbf{89.51} & 79.66 & \textbf{98.20} & \textbf{78.64} & \textbf{86.50} \newline
\hline
\end{array}
---
Rebuttal Comment 1.1:
Title: post-rebuttal
Comment: Thanks for the clarification. Most of my concerns are addressed, and I would like to keep my final rating as weak accept. | Summary: This article introduces Terra, a simple time-varying low-rank adapter based on LoRA for domain flow generation. Terra efficiently bridges the source and target domains using a parameter-efficient method. By generating data with smaller domain shifts, Terra effectively improves performance in incorporation, UDA, and DG tasks.
Strengths: - The writing is clear and easy to follow.
- The method provides an intuitive and effective approach to enhancing LoRA for domain flow generation, with a theoretical analysis of its expressive power.
- Terra shows promising results in interpolation tasks.
- The idea of generating data with smaller domain shifts to the training set is innovative and enhances model performance in UDA and DG.
Weaknesses: - The method essentially adopts the LoRA approach and constructs a low-rank parameter manifold through F(W,t)=tW+I. This can be seen as an interpolation version of LoRA with fewer parameters, which might limit the novelty of the model.
- There is a lack of more direct comparative experiments, as mentioned in the Questions section.
- As a method that fine-tunes SD, the paper would benefit from directly evaluating the quality of generated images using different fine-tuning methods (such as those mentioned in lines 58-61).
- Lack of discussion of limitations and failure cases.
Technical Quality: 3
Clarity: 3
Questions for Authors: This article uses Terra-finetuned SD XL to generate more training data for UDA and DG, which is quite similar to other works that enhance classifier performance through data generation (e.g., [1,2]). Both approaches leverage the pre-trained SD's prior knowledge to generate images as a form of data augmentation. Hence, I recommend adding a comparison with these methods to demonstrate Terra's design advantages, given that both utilize the SD prior. If direct integration into the current evaluation framework is not feasible, a simple approach could be to generate corresponding data augmentations by changing prompts, showcasing the improvements brought by the SD prior itself. Alternatively, I encourage the authors to illustrate the respective contributions of Terra and the SD prior to the performance boost through other reasonable means.
Additionally, I noticed a minor issue in Table 1: DGP is based on a GAN model, while Terra uses SD XL. Therefore, a direct numerical comparison is unfair. I suggest the authors include the base model for clarity.
References:
[1] Synthetic data from diffusion models improves imagenet classification.
[2] Active Generation for Image Classification
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors didn't discuss the limitations in the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: #### [W.1] The method essentially adopts the LoRA approach and constructs a low-rank parameter manifold through F(W,t)=tW+I. This can be seen as an interpolation version of LoRA with fewer parameters.
> Terra constructs a "LoRA flow" in the parameter space, and is NOT an interpolation version of LoRA with fewer parameters. Here are three key differences:
>
> - **Formulation**: $tW+I$ is just one instance of Terra. Other possible forms are listed in Table 6 of the Appendix, i.e., $\exp(t{W})$ and $\cos(t{W})$.
>
> - **Training**: Unlike LoRA interpolation, which needs training separate LoRAs for different domains, our method only needs to train one adapter for multiple domains. In our training, the middle time-varying matrix $\mathcal{K}(t)$ is domain-specific, while the matrices $W_{up}$ and $W_{down}$ are shared across domains, enabling domain-general knowledge learning.
>
> - **Application**: The interpolation between two domain is just one application of our Terra. More importantly, based on Terra, we can design effective frameworks for UDA and DG:
> - For UDA, our framework can learn the domain-general subjects and domain-specific styles due to the Terra structure, enabling the generation of target-like samples from both source samples and random noise, thereby reducing the domain gap.
> - For DG, Terra constructs a domain manifold in the parameter space, facilitating random interpolation to generate diverse samples, which enhances the model's generalization ability.
#### [W.2] There is a lack of more direct comparative experiments, as mentioned in the Questions.
> Please refer to our following response to [Q.1a].
#### [W.3] As a method that fine-tunes SD, the paper would benefit from directly evaluating the quality of generated images using different fine-tuning methods (such as those mentioned in lines 58-61).
>- The methods mentioned involve customized image generation tasks, such as image editing and multi-concept generation, focusing primarily on single-domain image generation. In contrast, our approach enhances the LoRA structure to create a continuous parameter manifold, allowing for **image generation across a domain flow**.
>
>- A related work is Diffmorpher, which trains two LoRAs on image pairs and introduces techniques for **generating a continuously interpolative sequence of images**, referred to as image morphing. We compare Diffmorpher with our method in the manuscript.
>
>- To demonstrate that the paper's potential, we conduct an additional experiment by replacing the two LoRAs with our proposed Terra. The combined method "Terra + Diffmorpher" yields improved FID and PPL scores. The qualitative and quantitative results are presented in Fig. r1 and Tab. r1 of the Rebuttal-PDF.
#### [W.4] Lack of discussion of limitations and failure cases.
> - **Limitations**: We have thoroughly discussed the limitations in Appendix D. Additionally, while we have adapted to downstream domains through fine-tuning, our model may still be influenced by the prior of the foundation model to some extent.
>
> - **Failure cases**: We acknowledge that a small number of generated images may exhibit poor quality due to the conflict between SD prior knowledge and the knowledge required for downstream tasks. We showcase some failure cases in Fig. r3 of the Rebuttal-PDF. However, the number of those poor-quality images is small, and it does not affect the overall performance of the model.
#### [Q.1a] I recommend adding a comparison with data augmentation with SD's prior knowledge to demonstrate Terra's design advantages. Alternatively, I encourage the authors to illustrate the respective contributions of Terra and the SD prior to the performance boost through other reasonable means.
> We appreciate the reviewer's suggestion to explore the prior of foundation models. To address this concern, we design several methods to synthesize data based on the SDXL model and evaluate their effectiveness on UDA tasks:
> - (1) **SDXL (random)**: We use the prompt `A [CLASS]` to generate samples for each class, where [CLASS] denotes the placeholder for the label.
> - (2) **SDXL (styles)**: We first use the prompt `Generate 50 prompts describing diverse styles for image generation` to ask GPT-4, and then use the prompt `A [CLASS], an everyday object in office and home, in the style of [STYLE]` to generate samples, where [STYLE] denotes the placeholder for style prompts generated by GPT-4 (e.g. "Classic", "Modern").
> - (3) **SDXL (target)**: Based on (2), we use the name of target domain (e.g. "Clipart") to replace the [STYLE] as the new placeholder for exploring the SD prior on the target domain.
> - (4) **SDXL (target styles)**: We use the prompt `Generate 50 prompts describing [TARGET] style for image generation` to ask GPT-4 and obtain more detailed style prompts for synthesis.
> - (5) **SDXL (selected)**: Inspired by [2], we use a confidence-based activate learning method to filter out poor-qulity and misclassified samples generated in (4) and select valid samples.
>
> The comparison results on Office-Home for UDA are shown in Tab. r3 of the Rebuttal-PDF. Terra outperforms the comparison methods, indicating that despite the boost in accuracy from target style design and active learning, the prior knowledge is insufficient to align with the downstream tasks. This issue can be furthur mitigated through finetuning with Terra, which demonstrates Terra's design advantages.
#### [Q.1b]: I noticed a minor issue in Table 1: DGP is based on a GAN model, while Terra uses SD XL. Therefore, a direct numerical comparison is unfair. I suggest the authors include the base model for clarity.
> Thank you for pointing out this issue. Here, we compare with GAN-based model DGP and the stable diffusion-based methods DDIM, LoRA Interpolation, and DiffMorpher. In our revision, we will include the base models for clarity. You can find the updated Tab. r1 and Fig. r1 of the Rebuttal-PDF.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' detailed response! My concern is well-addressed and I will raise the scores. | Summary: This paper proposes Terra, a time-varying low-rank adapter based on the Low-Rank Adapter (LoRA) method, designed to enable continuous domain shifts from one domain to another. The core idea is to incorporate a time-dependent function between the LoRA low-rank matrices, using time t to control the interpolation of generated samples between the source and target domains. Qualitative results demonstrate that Terra facilitates continuous changes when transferring images across domains. Quantitative experiments indicate that Terra can serve as a foundation for generalization-based unsupervised domain adaptation and domain generalization tasks, thereby improving performance.
Strengths: - The proposed method is well-motivated, straightforward and sounded.
- The authors conduct systematically experiments to verify the usage of Terra in multiple domains (by combining with most of the off-the-shelf methods).
- The code is provided.
Weaknesses: - The presentation can be improved. I suggest elaborate and provide more details on the section 3.3 and to move Fig. 7 in the appendix to the main paper, since this part should be the most crucial part as how it constructs the evolving visual domains and serve as the basis to the following applications. Besides, the implementation details of morphing between style and subject should also be explained here.
- The experimental comparisons should be improved. For Sec 4.2, instead of only simply stating that "following the setting employed in DiffMorpher", the authors should also provide qualitative comparisons. Terra also does not show performance improvement against DiffMorpher or LoRA Interp. in terms of PPL. As a result, for Sec 4.3 and 4.4, it is expected that off-the-shelf UDA/DG + Terra can improve against those without Terra, but the comparisons should include off-the-shelf UDA/DG + other similar morphing works (e.g. DiffMorpher) for a fair comparison with prior arts.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please refer to the weaknesses.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Limitations are discussed in the paper. There are no potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: #### [W.1] The presentation can be improved. I suggest elaborate and provide more details on sec. 3.3 and to move Fig. 7 in appendix to the main paper. The implementation details of morphing between style and subject should be explained here.
>We will follow your suggestion by (a) relocating Fig. 7 to Sec. 3.3 and (b) enhancing the organization of Sec. 3.3 to better illustrate how to construct evolving visual domains.
> - **Stage 1: Fine-tune the parameters of Terra** (i.e., $\Delta_\theta=W_{up}\cup W_{mid}\cup W_{down}$) using the loss function defined in Eq. (5), where the first part with $t=0$ uses source samples $\mathcal{D}_S$ and the second part with $t=1$ uses target samples $\mathcal{D}_T$.
> - **Stage 2: Generate an intermediate domain** by (a) uniformly sampling $t$ from [0,1] and (b) inputting the text prompt and a random noise into the fine-tuned diffusion model corresponding to domain $t$ (i.e., $\theta_0 + \Delta W(t)$ where $\theta_0$ is the pre-trained diffusion model) for the backward process.
>
>Regarding the implementation details of generative interpolation tasks, including those involving morphing between image pairs, styles and objects:
> - We will follow the reviewer's suggestion to **introduce a new section** on Generative Interpolation via Terra between Secs. 3.3 and 3.4. This section will list **generative interpolation as one of the three concurrent applications** of the proposed domain flow generation framework of Terra. It will present the details originally provided in Line 241-256 and 604-614.
> - For convenience, we also provide the details here.
> - Morphing in image pairs: In Stage 1, we instantiate the loss function in Eq. (5) by setting (a) $\mathcal{D}_S$ to include **one image of the pair**, (b) $\mathcal{D}_T$ to include **the other image**, and (c\) the text prompt to describe the images. In Stage 2, we generate intermediate images by uniformly transiting $t$ from 0 to 1 using the same text prompt as in Stage 1; each value of $t$ results in an interpolated image.
> - Morphing in styles/objects: This differs from morphing in image pairs only in $\mathcal{D}_S$ and $\mathcal{D}_T$ used in Stage 1, which are **a group of images in one style/object and a group of images in another style/object**, respectively.
#### [W.2a] The authors should provide qualitative comparisons.
>We provide qualitative comparisons of our method with all other baselines in Fig. r1 of the Rebuttal-PDF, which shows that Terra could generate intermediate images that are visually smooth and natural.
#### [W.2b] Terra does not show performance improvement over DiffMorpher or LoRA Interp. in PPL. Comparisons should include off-the-shelf UDA/DG+other similar morphing works for a fair comparison.
>We'd like to humbly clarify the potential misunderstandings.
>
>First, Terra is a **general framework that constructs a continuous parameter manifold** and thus generates a domain flow. To illustrate, in this paper, we show the three representative applications of Terra, including image morphing, UDA, and DG.
>
>Second, image morphing, UDA, and DG are three parallel applications of Terra, so that **the roles of Terra in them differ**.
> - In image morphing, similar to DiffMorpher or LoRA Interpolation, Terra generates a continuously interpolative sequence of images.
> - In UDA, as shown in Fig. 2(a), Terra bridges the gap between the source and target domains by generating more target-like samples from source samples and random noise. Thus,
> - in UDA, image morphing, whose output is an interpolative sequence of images, cannot be directly applied into off-the-shelf UDA methods.
> - for a fair comparison with prior arts in Terra's effectiveness in **generating more target-like samples**, we add a comparison against **off-the-shelf UDA + direct generation of target style samples** using prompts on Office-Home:
> - **SDXL (target)**: We use the prompt `A [CLASS] in the style of [TARGET]` to generate samples, where [CLASS] and [TARGET] denote the placeholders for the label name and target domain name, respectively.
> - **SDXL (target styles)**: We prompt GPT-4 with `Generate 50 prompts describing [TARGET] style for image generation` to obtain more detailed style descriptions for synthesis.
>
> The results in Tab. r3 of the Rebuttal-PDF verify Terra's effectiveness in generating more target-like samples.
> - In DG, as shown in Fig. 2(b), Terra expands the source domains by (a) learning a $t$ predictor that maps each source domain to [-1,1] and (b) randomly sampling values of $t$ to generate more domains.
> - Image morphing can be adapted to off-the-shelf DG methods by including interpolative sequences of all pairs of images from different source domains, despite being computationally expensive.
> - For a fair comparison in Terra's effectiveness in **expanding source domains that generalize better**, we include the comparison against **off-the-shelf DG + morphing works** on Office-Home. That is, we train a LoRA for each domain and adopt **LoRA Interp./DiffMorpher** to interpolate. The results in Tab. r2 of the Rebuttal-PDF verify Terra's effectiveness, since Terra interpolates between domains instead of images and thus better models distributions in two domains.
>
>Third, even on the image morphing, Terra outperforms the DiffMorpher, which specifically designed for morphing by customized techniques such as attention interpolation, adaptive normalization, and a new sampling schedule.
> - Terra supports morphing of styles and objects, which DiffMorpher seems incapable of.
> - Terra enjoys better rationality and fidelity (measured by FID) than DiffMorpher.
> - Terra is competitive in smoothness and consistency (**measured by PPL**) with DiffMorpher, despite being simple and general without any specific design. Moreover, **equipped with those customized techniques used in DiffMorpher, Terra is even better than DiffMorhper** (see Tab. r1 of the Rebuttal-PDF).
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' efforts in providing additional clarifications and experimental results to address all of my concerns and I am happy with these answers. I will raise my final score to accept. | Rebuttal 1:
Rebuttal: Dear Reviewers and ACs,
We sincerely thank all the reviewers and ACs for your diligent efforts and high-quality reviews. If you have any additional question or require further clarification, please feel free to let us know. Your insights are highly valued.
We are delighted to note that reviewers find that:
- our method is innovative (`Reviewer 76UZ`), well-motivated, straightforward, and sounded (`Reviewers Y8y3 and LKTK`), with clear and easy-to-follow writing (`Reviewers 76UZ, LKTK, and STii`).
- our paper is providing an intuitive and effective approach to construct a continuous parameter manifold for domain flow generation (`Reviewers 76UZ, LKTK, and STii`) and synthesize data to help alleviate domain shifts (`Reviewers 76UZ, LKTK, and STii`).
- our method includes a theoretical analysis of its expressive power (`Reviewers 76UZ, LKTK, and STii`), achieving promising results on various benchmarks (`Reviewers Y8y3, 76UZ, and STii`), and is supported by reproducible results with provided code (`Reviewers Y8y3 and STii`).
In responce to your valuable suggestions, we have conducted additional experiments and included the new results in the supplementary Rebuttal-PDF for your convenience:
- **Figure r1 & Table r1**: We have added the **qualitative** and **quantitative** comparison results of image morphing (suggested by `Reviewer y8Y3`) and introduced a combined method "Terra + Diffmorpher" (suggested by `Reviewers y8Y3 and 76UZ`).
- **Figure r2**: Supplementary samples for qualitative evaluation of the image morphing tasks (suggested by `Reviewer STii`).
- **Figure r3**: Some failure cases in generated samples (suggested by `Reviewer 76UZ`).
- **Figure r4**: We present the learned time variables to provide rationale behind our generation approach (suggested by `Reviewer STii`).
- **Table r2**: A comparison with **morphing works** (LoRA Interp. and DiffMorpher) that expand source domains under the DG setting (suggested by `Reviewer y8Y3`).
- **Table r3**: A comparison with **target-like samples augmentation methods** using the SDXL prior under the UDA setting (suggested by `Reviewers 76UZ and y8Y3`).
- **Table r4**: A **comparative analysis** with two state-of-the-art baseline methods (suggested by `Reviewer STii`).
Finally, due to character limits, we have condensed some reviews from `Reviewers y8Y3, 76UZ, and STii` in our responses.
Best regards,
The Authors
Pdf: /pdf/cecc6dc77572da2daab8cc65e76bb2330d1c7d36.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Removing Length Bias in RLHF is not Enough | Reject | Summary: The authors considers methods for removing bias in RMs, specifically the bias towards long responses and the bias certain prompts might have to generate high rewards (this stems from the BardleyTerry model being underspecified.). For the second problem the authors proposed PBC which adds a linear layer to the last token of the prompt, the output of which predicts the average reward of completions from the prompt. For the first problem the authors propose to combine PBC with existing length bias correction methods which adds a correlation term to the loss. For experimental results the authors considers RLHF training LLama-7B on the RM-static dataset. They find that their method outperforms baselines on academic benchmarks (Table 2) and in head-to-head comparisons (Fig 4). They also consider hyperparameter stability and ablations in Fig 5.
Strengths: 1. Bias in RLHF can potentially have a large impact if addressed correctly.
Weaknesses: 1. Academic metrics like MMLU are not a good fit for RLHF. MT-bench is better.
2. There are no error bars, unclear how strong the signal is.
3. The writing is rather handwavy at times, e.g. the motivation in section 3.1. is very qualitative.
4. The novelty is low.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Is the method in section 3.3 novel? It seems like not. The intro states “We show that the developed PBC method can be flexibly combined with existing methods 53 of removing length bias”. Please clarify this.
2. The Bradley–Terry model is indeed underspecified as in eq (3). Do you have any quantitative evidence that this is a problem in practice? Section 3.1. is rather handwavy.
3. What prompt is used for GPT4 evaluation is Figure 4? Please clarify in paper.
4. Can you add error bars to Table 2 and Figure 4?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: na
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing the impact of our research direction.
### W1
As your suggestion, we have compared our method with other baselines on MT-bench.
The results have been exhibited in the following.
| MT-Bench | Turn 1 | Turn 2 | Average Score |
|----------------------------|----------------------|----------------------|----------------------|
| RLHF |3.95 |2.22 |3.09
| ODIN |3.98 |2.26 |3.12
| PBC |3.61 |2.35 |2.98
| ODIN+PBC |4.22 |2.20 |3.21
| LPBC |4.53 |2.81 |3.67
From the results, we can find that our developed LPBC method still ourperforms other baselines on the evaluation of MT-bench.
### W2&Q4
Thanks for your suggestion, we have reported error bars in Tables 2 and Fig. 4.
| Method | MMLU | DROP | BBH | TQA |
|----------------------------|----------------------|----------------------|----------------------|----------------------|
| RLHF | 43.82 ± 0.63 | 29.53 ± 0.39 | 31.65 ± 0.08 | 36.57 ± 0.17
| ODIN | 42.29 ± 0.15 | 29.82 ± 0.37 | 32.01 ± 0.52 | 39.43 ± 0.66
| PBC | 43.84 ± 0.28 | 31.61 ± 0.02 | 30.99 ± 0.01 | 38.50 ± 0.22
| ODIN+PBC | 45.56 ± 0.14 | 32.04 ± 0.33 | 31.32 ± 0.33 | 40.80 ± 0.72
| LBPC | 45.94 ± 0.48 | 31.57 ± 0.26 | 32.04 ± 0.10 | 38.75 ± 0.12
| LBPC $v.s.$ | PBC | ODIN | ODIN+PBC |
|----------------------------|----------------------|----------------------|----------------------|
| Win | 45.33 ± 2.43 | 36.67 ± 1.02 | 27.33 ± 2.50 |
| Tie | 52.67 ± 1.90 | 56.00 ± 1.73 | 57.67 ± 1.95 |
| Loss | 2.00 ± 1.51 | 7.33 ± 1.89 | 15.00 ± 1.42 |
### W3
Sorry for the inconvenience caused in reading. We will improve the writing quality to ensure better understanding.
The brief statement of the motivation is that the prompt-template bias learned by RM will result in LLMs preferring to generate responses in a specific format after RLHF fine-tuning, regardless of the format requested in the prompt. Thus, we develop a method to estimate the prompt-template bias so that we can remove it in the following RLHF process.
### W4&Q1
Thank you for giving us the opportunity to clarify the novelty of our work.
The main contribution of our work is revealing the cause of prompt-template bias in reward modeling through theoretical analysis in Section 3.1 and propose PBC method to address this issue in Section 3.2.
In Section 3.3, we propose a novel method to simultaneously estimate prompt-template bias with PBC method and length bias with existing method, e.g. ODIN.
We did not simply stack these two methods, that is, we did not just remove the prompt-template bias on an RM that had already been adjusted for length bias (ODIN+PBC in our paper).
We chose a more refined modeling approach by decomposing the prompt-template bias into quality and length components for separate estimation, as shown in Eq.(14) (LPBC in our paper).
Note that we have inlcuded the comparison beween ODIN+PBC and LPBC in our experiments.
For novelty in the aspect of technical, we admit the bias estimation and combination method is a little straightforward, but the benefit is that it won’t bring too much computation burdens and can be easily deployed on the original implementation of RLHF, which is particularly important given the significant computational resources required for LLM training today.
### Q2
As mentioned in W3, our work mainly focuses on addressing the issue of prompt-template bias.
As shown in Fig.3, the reward distributions on different categories show that the RM trained with vanilla preference loss tend to assign higher reward scores on the response in a specific format, e.g. tech article.
The quantitative experimental results shown in Table 2 show that our developed PBC method for removing prompt-template bias can lead to significant performance improvements compared to the original implementation of RLHF.
### Q3
The prompt we used for GPT-4 evaluation is listed below, following the same experimental settings as in ODIN for a fair comparison.
"[System Prompt]
You are a helpful and precise assistant for checking the quality of the answers.
[User Prompt]
{prompt}
[The Start of Assistant1’s Answer]
{response_a}
[The End of Assistant1’s Answer]
[The Start of Assistant2’s Answer]
{response_b}
[The End of Assistant2’s Answer]
We would like to request your feedback on the performance of two AI assistants in response to the user question displayed above. Please rate the helpfulness, relevance, accuracy, level of details of their responses. Each assistant receives an overall score on a scale of 1 to 10, where a higher score indicates better overall performance. Please first output a single line containing only two values indicating the scores for Assistant 1 and 2, respectively. The two scores are separated by a space. In the subsequent line, please provide a comprehensive explanation of your evaluation, avoiding any potential bias and ensuring that the order in which the responses were presented does not affect your judgment."
We wish the reivewer prioritize RLHF techniques that are truly practical and implementable, rather than those that may appear fancy but remain confined to papers. Our work is driven by real-world problems, identified the underlying causes through theoretical analysis, and effectively addressed the issue in practice.
---
Rebuttal 2:
Comment: Dear reviewer
Sorry to bother you, as the discussion period is nearing its end, we hope that our response has adequately addressed your concerns regarding our paper.
If not, we kindly ask you to list your remaining concerns, so that we can improve the quality of our paper in the next round.
Your comments on our paper will be extremely important to us. Thanks.
Best wishes | Summary: This paper studies the prompt bias in RLHF, especially the reward modeling --- beyond the length bias that might exist.
Alleviating reward hacking is an important topic in RLHF, however, with the current paper, some details or contributions are not very clear. I'll elaborate in the following sections.
Strengths: The problem studied is important. The illustrative figures are helpful.
Weaknesses: Some notations do not make sense, for example, in Equation (5), averaging over y does not make sense. Would not it be better to average over the C rather than y.
The presentation of the problem itself is not yet clear to me. Although the authors keep using examples in the context to anchor their ideas (which I appreciate), it is still unclear what is the problem this work aims to solve. I like the general idea of Figure 1, however, what does the red color highlighting mean? This figure makes a good contrast between your RM and conventional RM, yet it fails to illustrate the problem your RM aims to solve.
The experimental results are not supportive enough.
Technical Quality: 2
Clarity: 1
Questions for Authors: In Equation (6) the authors compare reward values between two different prompts. What is the motivation for making such comparisons --- as RLHF only ranks/compares responses within a template?
In experiments, what is the performance of different reward models? The authors didn't report the accuracy or other quantitative performance information on the RMs.
Real-world case studies would be very helpful in understanding the paper's contribution: would it be possible for the authors to show some reward hacking examples (not the ones generated by GPT4)?
The proposed algorithm seems fragile w.r.t. its hyper-parameters. There is no clear trend in the heat maps of Figure 5. Could the authors also report the standard deviation of each evaluation?
Error bars are missing in the reported results of Table 2.
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: Please see weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing the impact of our research direction.
### **W1**
In tems of notation, we assume it is correct to perform the averaging operation on the variable $y$ rather than the function symbol $C$.
### **W2**
Thanks for your suggestion. We admit that Fig.1 is used to illustrate the difference between our method and vanilla RM training, where the word highlighted in red color indicates the template request in the prompt.
The statement of the problem is that the prompt-template bias learned by RM will result in LLMs preferring to generate responses in a specific format after RLHF fine-tuning, regardless of the format requested in the prompt. Thus, we develop a method to estimate the prompt-template bias so that we can remove it in the following RLHF process.
The cause of prompt-template bias is that the datasets for RM training usually only collect responses that satisfy the template/format requests in the prompt. Because it will be time-consuming and expensive to construct responses in various formats for each prompt in practice.
### **Q1**
As stated in W2, the main cause of prompt-template bias issue is that the datasets for RM training usually only collect responses that satisfy the template/format request in the prompt.
Our method developed in this paper can allevaite this issue.
The comparison in Eq.(6) is used to illustate that there is a chance that $C(x_a, \overline{y}_a)>>C(x_b, \overline{y}_b)$ will lead to $C(x_b, y_{ab}) > C(x_b, y_b)$, leading to the issue in Eq.(7).
### **Q2**
Actually, we have compares the accuracy of various RMs on Fig.5(a).
From the results, we can find that the constraint terms introduced by our method won't significantly affect the RM's accuracy.
Moreover, we have included a comparison of other quantitative performance metrics on the RM benchmark, which is likely the most popular benchmark for evaluating trained RMs.
| Metric | Chat | Chat Hard | Safety | Reasoning|
|----------------------------|----------------------|----------------------|----------------------|----------------------|
| Vanila RM | 89.66 ± 0.60 | 41.89 ± 0.18 | 31.34 ± 0.00 | 52.16 ± 1.30
| ODIN | 85.20 ± 0.13 | 37.94 ± 0.27 | 30.96 ± 0.20 | 47.94 ± 1.59
| PBC | 73.97 ± 1.18 | 34.43 ± 1.19 | 34.40 ± 2.19 | 55.35 ± 3.10
| PBC+ODIN | 89.11 ± 0.23 | 40.35 ± 0.52 | 30.60 ± 0.26 | 49.39 ± 0.89
| LPBC($\eta_l=\eta_c=0.01$) | 90.50 ± 0.26 | 42.54 ± 0.36 | 28.79 ± 0.32 | 45.80 ± 1.20
| LPBC($\eta_l=\eta_c=0.05$) | 88.24 ± 1.50 | 45.39 ± 1.07 | 28.69 ± 0.25 | 51.30 ± 1.70
| LPBC($\eta_l=\eta_c=0.10$) | 85.94 ± 0.39 | 45.83 ± 0.45 | 27.76 ± 0.67 | 49.80 ± 1.09
Based on the RM benchmark comparison, our method (LPBC) does not significantly impact RM performance and even enhances it in certain areas, such as Chat and Chat Hard, where the focus is on the quality of generated responses.
### **Q3**
Due to the page limitation, we have attached showcases of reward hacking in the pdf file, sorry for the inconvenience.
### **Q4**
Thanks. Actually, Fig.5 mainly aims to illustrate that the performance of our method is not sensitive to the selection of hyperparameters, as there is no clear trend in the heat maps, as you mentioned.
As you suggested, we also report the standard deviation of each evaluation in the table below
| MMLU | $\eta_c=0$ | $\eta_c=0.01$ | $\eta_c=0.05$ | $\eta_l=0.1$|
|----------------------------|----------------------|----------------------|----------------------|----------------------|
| $\eta_l=0$ | 43.82 ± 0.63 | | |
| $\eta_l=0.01$ | | 40.30 ± 0.65 |45.47 ± 0.52 |45.91 ± 0.55
| $\eta_l=0.05$ | | 44.57 ± 0.52 | 45.94 ± 0.48 |43.89 ± 0.67
| $\eta_l=0.1$ | | 43.93 ± 0.77 | 42.25 ± 0.67 |36.54 ± 0.98
| DROP | $\eta_c=0$ | $\eta_c=0.01$ | $\eta_c=0.05$ | $\eta_c=0.1$|
|----------------------------|----------------------|----------------------|----------------------|----------------------|
| $\eta_l=0$ | 29.53 ± 0.39 | | |
| $\eta_l=0.01$ | | 30.63 ± 0.19 | 31.57 ± 0.22 | 31.47 ± 0.25
| $\eta_l=0.05$ | | 32.52 ± 0.15 | 31.57 ± 0.26 | 27.41 ± 0.41
| $\eta_l=0.1$ | | 31.06 ± 0.28 | 32.60 ± 0.32 | 30.96 ± 0.33
### **Q5**
Thanks for your suggestion, we have also included error bars in Table.2.
| Method | MMLU | DROP | BBH | TQA |
|----------------------------|----------------------|----------------------|----------------------|----------------------|
| RLHF | 43.82 ± 0.63 | 29.53 ± 0.39 | 31.65 ± 0.08 | 36.57 ± 0.17
| ODIN | 42.29 ± 0.15 | 29.82 ± 0.37 | 32.01 ± 0.52 | 39.43 ± 0.66
| PBC | 43.84 ± 0.28 | 31.61 ± 0.02 | 30.99 ± 0.01 | 38.50 ± 0.22
| ODIN+PBC | 45.56 ± 0.14 | 32.04 ± 0.33 | 31.32 ± 0.33 | 40.80 ± 0.72
| LBPC | 45.94 ± 0.48 | 31.57 ± 0.26 | 32.04 ± 0.10 | 38.75 ± 0.12
---
Rebuttal 2:
Comment: Dear reviewer
Sorry to bother you, as the discussion period is nearing its end, we hope that our response has adequately addressed your concerns regarding our paper.
If not, we kindly ask you to list your remaining concerns, so that we can improve the quality of our paper in the next round.
Your comments on our paper will be extremely important to us. Thanks.
Best wishes | Summary: This paper introduces the Prompt Bias Calibration (PBC) method to address prompt-template bias in reward training of RLHF. The proposed PBC method is validated through extensive empirical results and mathematical analysis, showing its effectiveness in combination with existing length bias removal methods.
Strengths: 1. Good Writing: The paper is well-written and easy to follow.
2. Innovative Methodology: Introduces Prompt Bias Calibration (PBC) to address prompt-template bias in RLHF.
3. Strong Empirical Evidence: Demonstrates significant performance improvements through comprehensive evaluations.
Weaknesses: see questions
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why Eq (6) would happen? In general training, $C(x_a, \bar{y}_a)$ and $C(x_b, \bar{y}_b)$ may exhibit some gaps but should be in a reasonable area as there is no Bellman process in the training process.
2. For Eq (7), suppose there are two elements in the prompt: a (paper type) and b (themes). The original sample in the dataset is (b, b) = (brief on theme b) and (a, a) = (academic paper on theme a). The authors mentioned the margin sample of $y_{ab}=(a, b)$ = (academic paper on theme b). There are two fundamental assumptions: (1) $r_{\theta'}(x_b, y_{ab}) = r_{\theta'}(x_b, y_b)$ for the same theme (b). (2) $C(x_b, \bar{y}_a) \approx C(x_a, \bar{y}_a)$ due to formulate preference.
2.1 Further clarification is necessary for these two assumptions: (1) The first assumption is too strict focusing on text creation, where the reward function only depends on $b$ rather than $a$. For example, a (code languages) b=(Neural networks), the $y_{ab}$ is heavily dependent on (a). (2) I think the second assumption is wrong as the prompt bias is heavily associated with $x$ and we cannot make this assumption. The following algorithm in Eq (8) aligns with my intuition.
2.2. There are also two more comments on this: (1). joint distribution is approximated as the margin distribution for the reward function is generally unacceptable for me, which means the $x_1$ is not important. So, the optimization function could be: $r(x_2, y) + C(x_2, \bar{y})$ in Eq 5. If this is true, $C$ will not be associated with $x_1$, i.e., template in this work. (2) The $x_{a, b}$, as well as $y_(a, b)$ is the out-of-distribution sample, and the accuracy for the reward estimation should exhibit high bias.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: see the above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First of all, we greatly appreciate your responsible review of our theoretical analysis on the issue of “prompt-template bias” and also thanks for acknowledging the performance of our method.
We assume that our greatest disagreement lies in the theoretical analysis part, so we will try to address your concerns one by one in the following.
### **Q1**
The training of reward modeling is not related to RL, nor is it related to the Bellman process.
$C(x_a, \overline{y}_a)$ and $C(x_b, \overline{y}_b)$ exhibits gaps due to the absence of constraints addressing prompt-template bias in the original preference loss, typically the value of $C(x_a, \overline{y}_a)$ won't affect the preference order within the set of prompt-response pairs with the same prompt $x_a$ and response format $\overline{y}_a$.
We assume that you argee with the point that $C(x_a, \overline{y}_a)$ and $C(x_b, \overline{y}_b)$ will exhibit some gaps, but believe it should be within a reasonable range.
However, it is very difficult to define what is a reasonable range in practice.
From the experimental results shown in Fig.3(a), the RM trained with original preference loss does assign higher scores on responses formatted as Tech Article, leading to the misordering of prompt-response pairs exhibited in Table. 1.
As long as this gap causes the RM to assign higher reward scores to certain marginal samples (possibly OOD samples), like $r_\theta(x_b,y_{ab}) > r_\theta(x_b, y_b)$ in Eq.(7), it will guide the LLM to generate responses in a specific format after RLHF fine-tuning.
### **Q2.1**
Thank you for your insightful comments on Eq.(7). However, we believe there may be significant misunderstandings regarding the assumptions underlying Eq.(7).
**To assumption 1**, we only claim $r_{\theta'}(x_b, y_{ab}) \approx r_{\theta'}(x_b, y_b)$ in our paper rather than strictly constraining $r_{\theta'}(x_b, y_{ab}) = r_{\theta'}(x_b, y_{b})$.
We absolutely agree with your comment that the reward function should depend on the whole response $y_{ab}$ rather than only its template $\overline{y}_b$.
It is also the reason why we only assume $r_{\theta'}(x_b, y_{ab}) \approx r_{\theta'}(x_b, y_b)$ because it can be possibly achieved on the reward distribuion $r_{\theta'}(x,y)$ learned to approximate "gold standard" reward model $r_{\theta*}(x,y)$
**To assumption 2**, we didn't assume $ C(x_b, \overline{y}_{a}) \approx C(x_a, \overline{y}_a) $, but only demonstrate there is a chance that
$ C(x_a, \overline{y}_a) >> C(x_b, \overline{y}_b) $ leads to $ C(x_b, y_ab) > C(x_b, y_b) $ .
Please notice the difference. $ C(x_a, \overline{y}_a) >> C(x_b, \overline{y}_b) $ is achievable because the original preference loss does not impose any constraints on prompt-template bias and this gap can cause trained RM to prefer either prompt $ x_a $ over $x_b$ or format $ \overline{y}_a $ over $ \overline{y}_b $.
As long as there is a tendency that RM prefers format $ \overline{y}_a $ over $ \overline{y}_b $, we can have
$C(x_b, y_{ab}) > C(x_b, y_b)$ to lead to $r_\theta(x_b, y_{ab}) > r_\theta(x_b, y_b)$ in Eq.(7)
### **Q2.2**
Firstly, there are two templates in your comment: 1) template request in the prompt, denoted as $x_1$; 2) response templates, denoted as $\overline{y}$.
In our understanding, your question is why the generated response $y$ will be unrelated to the template request $x_1$ during RM training.
Actually, it is the shortcoming of the original preference loss that we want to highlight.
Assume there is a set of prompt-response pairs with the sample prompt $x_a$ and responses in the same template $\overline{y}_a$.
Notably, during training RM with the preference loss, these characteristics (or elements) shared across all prompt-responses pairs won't affect the preference order, e.g. prompt $x_a$, response template $\overline{y}_a$ and the correlation between $x_a$ and $\overline{y}_a$.
The most effective way to address this issue is to construct responses in various formats for the prompt $x_a$, helping the reward model to distinguish whether the response template meet the requirement of the prompt $x_a$. However, constructing responses in various formats for each prompt will be time-consuming and expensive in practice.
For OOD sample, it seems that we have reached a consensus on this point that reward estimation should exhibit high bias on OOD samples, no matter it is $x_{ab}$ or $y_{ab}$.
So, we wonder if you agree with the fact that Eq.(7) can be probably achieved if $(x_b, y_{ab})$ is an OOD sample?
### **Summary**
At last, we thank you for your effort in improving the quality of our paper, even if it is a rejected score.
**This issue actually stems from the process of deploying RLHF technology in our text creation product, so there is no doubt that it will occur in practice (even if we used nearly 190k human-annotated preference pairs for RM training).**
In this paper, we aim to analyze and explain this phenomenon, and propose a solution that can be practically deployed in the product, rather than a flashy but impractical algorithm.
---
Rebuttal Comment 1.1:
Title: Official Comments by Reviewer ehkZ
Comment: Thanks for your response! I would appreciate the authors providing further clarifications for 2.1 and 2.2.
1. For Q1, my point is that it it reasonable to assume $C(x_a, \bar{y}_a)>C(x_b, \bar{y}_b)$ rather than $C\(x_a, \bar{y}_a)>>C(x_b, \bar{y}_b)$. I think this would not influence further analysis as $>>$ is generally a stronger condition compared to $>$.
2.1 Assumption 1: $r_{\theta^{\prime}}(x_b, y_{a b}) \approx r_{\theta^{\prime}}(x_b, y_b)$ is not a claimed assumption for me and the reason is explained above.
Without Assumption 2: $C(x_b, \bar{y}_a) \approx C(x_a, \bar{y}_a)$, why there is a chance for:
$$
C\left(x_a, \bar{y}_a\right) \gg>C\left(x_b, \bar{y}_b\right) \text { leads to } C\left(x_b, y_a b\right)>C\left(x_b, y_b\right)
$$
2.2 I think the response should be highly related to prompt $x_a$ or $x_b$. OOD scenarios should be considered in Eq (7) but still, Eq (7) is not convincing for me for the above reasons.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer
Sorry to bother you, as the discussion period is nearing its end, we hope that our response has adequately addressed your concerns regarding the theoretical analysis section of our paper.
We greatly appreciate your efforts on reviewing our paper and also responsible comments. Since your concerns about our paper mainly stem from your belief that the theory is wrong, we believe there is a misunderstanding, which we feel we have already clarified. Therefore, we kindly ask if you could reconsider your evaluation of our paper.
If not, we kindly ask you to list the issues in our theoretical proof in mathematical form, so that we can improve the quality of our paper in the next round.
Your comments on our paper will be extremely important to us. Thanks.
Best wishes
---
Rebuttal 2:
Comment: Thank you for your responsible comments and your willingness to discuss with us.
First, please allow us to emphasize the consensus we have reached:
1. There is indeed hacking of the current RM through specific response formats, which causes the LLM after RLHF finetuning to tend to generate responses in a specific format (we have indeed encountered this issue in product implementation).
2. The theoretical analysis in this paper is only intended to confirm that prompt-template bias in some cases of preference pairs may also lead to reward hacking, but not all preference pairs will result in this.
3. As long as the prompt-template bias in certain preference pairs leads to reward hacking, the LLM aligned with this reward model may overlook the template requirements in prompts and tend to generate responses in a few specific formats.
### **Q1**
Thanks for your patience in carefully reading our response. We are pleased to see that we have reached a consensus that there is a possibility of achieving $C(x_a, \overline{y}_a) > C(x_b, \overline{y}_b)$ in certain prompt-response pairs.
The reason why we strictly assume $C(x_a, \overline{y}_a) >> C(x_b, \overline{y}_b)$ is because we wish the prompt-template bias term to play a dominant role in the comparison of prompt-response pairs from different sets, leading to
$ r_\theta(x_a, y_a) >> r_\theta(x_b, y_b) $
where
$ r_\theta(x_a, y_a) = r_{\theta'}(x_a, y_a) + C(x_a, \overline{y}_a) $,
$ r_\theta(x_b, y_b) = r_{\theta'}(x_b, y_b) + C(x_b, \overline{y}_b) $
We agree with your statement that $C(x_a, \overline{y}_a) >> C(x_b, \overline{y}_b)$ is a stronger condition, but you cannot deny that there are prompt-response pairs that satisfy this condition, which could lead to reward hacking.
### **Assumption 1**
Sorry for not explaining it clearly in the first response. This time, we will provide an explanation based on the examples you provided.
Given a response $y_{ab}$ where $a$ indicates the template of code language and $b$ denots the theme on neural networks, the trained reward model will assign a reward score to the concatenation of the prompt $x_b$ and the response $y_{ab}$ as
$r_\theta(x_b, y_{ab}) = r_{\theta'}(x_b, y_{ab}) + C(x_b, \overline{y}_a)$.
We fully agree with your comment that the reward function $r_\theta(x_b, y_{ab})$ should depend on the entire response $y_{ab}$, not just on elements $a$ or $b$ individually.
However, please note that we have split $r_\theta(x_b, y_{ab})$ into two components:
1) $C(x_b, \overline{y}_a)$,
which models the RM's scoring of a prompt $x_b$ when the response follows format $a$,
2) $r_{\theta’}(x_b, y_{ab})$,
which is intended to model the remaining part of the reward score, such as whether the response’s theme satisfies with the prompt's request.
Thus, we can assume $r_{\theta'}(x_b, y_{ab}) \approx r_{\theta'}(x_b, y_b)$ because both $y_{ab}$ and $y_b$ follow the same theme $b$ after discarding the impact of response template.
For a response $y_{ab}$ that heavily depends on $a$, its reward score will be dominated by the term $C(x_b, \overline{y}_a)$.
Moreover, as stated in our reply to Q.1, we assume the prompt-template bias term to play a dominant role in the comparsion, and thus the gap between $r_{\theta}'(x_b, y_{ab})$ and $r_{\theta}'(x_b, y_b)$ will be relative small when compared to the gap between $C(x_b, \overline{y}_a)$ and $C(x_b, \overline{y}_b)$.
### **Assumption 2**
Actually, there is no need to assume $C(x_b, \overline{y}_a) \approx C(x_a, \overline{y}_a)$.
Given $C(x_a, \overline{y}_a) >> C(x_b, \overline{y}_b)$, we may have $C(x_a, \overline{y}_a) >>/> C(x_b, \overline{y}_a) > C(x_b, \overline{y}_b)$.
This can be achieved when there is a tendency that RM perfers format $\overline{y}_a$ over $\overline{y}_b$,
we can have $C(x_b, \overline{y}_a) > C(x_b, \overline{y}_b)$ to lead to
$r_\theta(x_b, y_{ab}) > r_\theta(x_b, y_b)$ in Eq.(7)
### **Q2.2**
The reason for the prompt-template bias lies in what you mentioned: the responses collected for each prompt are too closely aligned with the prompt's requirements.
As a result, the RM trained these preference pairs has not encountered responses with different templates and cannot distinguish whether a response's template meets the requirement in the prompt.
Yeah, we are trying to explain why an OOD sample, such as $(x_b, y_{ab})$, might receive an overly high bias estimate, e.g. Eq.(7).
### **Additional Comments**
We still wish our responsible reviewer to prioritize RLHF techniques that are truly practical and implementable, rather than those that may appear fancy but remain confined to papers.
Our work is driven by real-world problems, identified the underlying causes through theoretical analysis, and effectively addressed the issue in practice.
Moreover, this technique has been proven effective in our industry and hope you can provide a fair judgment. | Summary: The paper addresses the issue of reward hacking in RLHF training, superficially, identifying prompt-template bias, defined as when a reward model (RM) develops a preference for responses that adhere to specific formats or templates, even when these formats are not explicitly specified or desired in the prompt and proposes Prompt Bias Calibration (PBC) method that successfully tackles this issue. PBC can also be combined with existing length debiasing methods like ODIN to mitigate both hacks in the reward signal.
Strengths: * The paper identifies and analyzes "prompt-template bias" in RLHF, a potentially impactful issue.
* PBC is easy to implement and as shown can be combined with existing approaches.
* Strong empirical validation with good coverage in the experiments and ablation.
Weaknesses: * Choosing one specific bias - while the title claims that removing the length bias is not enough, it seems to change the scene to potentially removing length and prompt-template bias not being enough. Leading to concerns about needing to combine many methods, one for each mitigation.
Technical Quality: 3
Clarity: 3
Questions for Authors: Any insights on whether the approach would work for the larger models beyond 7B?
Could the approach be generalized to address other instances of reward hacking?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The limitations are covered.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing our work. We believe that you are a reviewer with genuine experience in implementing RLHF and are well aware of the current shortcomings of RLHF.
### **W1**
Thanks for your suggestion. The orginal title aims to emphasize that existing RLHF research mainly focuses only on length bias, while overlooking other potential biases, e.g. prompt-template bias in our paper.
We also argee with your point and will try to come up with a new title for our article.
### **Q1**
Thanks for your insightful questions.
Actually, we have evaluated our method on an LLM with 13B parameters, which we pretrained ourselves and intend to deploy in text creation products, and it consistently outperforms the original RLHF pipeline.
Notably, the evaluation metrics on the industrial side are more stringent, but our developed method still performs well.
For generalization, the developed PBC is not limited to removing template bias in responses, but can address any characteristics shared among responses to a specific prompt, e.g. the language of the response (english or chinese).
We constructed the concept of prompt-template bias in the text creation scenario purely for the ease of understanding.
In real-world applications, the situation is more complex, but this does not undermine the effectiveness of our method, as it remains challenging to achieve diversity in responses to each prompt.
Overall, we have great respect for you because you are a researcher who genuinely focuses on the practical implementation of RLHF, regardless of whether this paper is accepted or not.
---
Rebuttal 2:
Comment: Dear reviewer
Sorry to bother you, as the discussion period is nearing its end, we hope that our response has adequately addressed your concerns regarding our paper.
If not, we kindly ask you to list your remaining concerns, so that we can improve the quality of our paper in the next round.
Your comments on our paper will be extremely important to us. Thanks.
Best wishes
---
Rebuttal Comment 2.1:
Title: Thank you
Comment: Thank you for providing further explanations on the generalizability of the method. I have read all responses and maintain my rating. | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewers' efforts and valuable feedback in helping to enhance the quality of our paper.
Here, we would like to highlight the motivation and key contributions of our work in the following:
### **Motivation**
The motivation for our work stems from the process of deploying RLHF to enhance the performance of LLM-based content creation products.
We find that these LLMs after RLHF fine-tuning prefer to generate responses in a specifc format, even regardless of the format requested in the prompt.
This observation motivates us to uncover the source of this phenomenon.
Through therotical analysis, we attribute it to the issue of prompt-template bias and propose a novel method that imposes almost no additional computational burden to address this issue in RLHF.
### **Contribution**
1. We reveal that reward models with prompt-template bias tend to assign higher reward scores to responses in specific formats, such as the responses in technical articles shown in Table 1.
2. Through theoretical analysis, we reveal that the reward model learns prompt-template bias because the dataset typically only includes responses that adhere to the format specified by the prompt.
3. Without introducing too much additional computational burden to RLHF pipeline, we develop PCB method to estimate prompt-template bias during RM training, so that we can remove the bias term in the following PPO process
4. We demonstrate that the developed PCB method can be integrated with existing algorithms to simultaneously eliminate prompt-template and length biases during RM training, further improving the quality of responses generated by LLMs.
### **Additional Comments**
1. We admit the most effective solution to address the issue of prompt-template bias is to contruct responses in various formats for each prompt.
However, manually constructing responses in specific formats and annotating their preference order is extremely time-consuming and costly in practice.
Most publicly available datasets for RM training even provide only a single preference pair per prompt.
2. We acknowledge that the developed method is somewhat straightforward, but it has proven to be an effective solution for alleviating prompt-template bias in our product without introducing additional annotated preference pairs.
We wish reviewers and ACs to prioritize RLHF techniques that are truly practical and implementable, rather than those that may appear fancy but remain confined to papers.
Our work is driven by real-world problems, identified the underlying causes through theoretical analysis, and effectively addressed the issue in practice.
We also hope that the issues we’ve uncovered and the methods we’ve proposed will assist the community in deploying RLHF more effectively.
Pdf: /pdf/bc5760cd62a799b4c4a7253c5fd90bae93558e7a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Unlearnable 3D Point Clouds: Class-wise Transformation Is All You Need | Accept (poster) | Summary: The paper introduces UMT (Unlearnable Multi-Transformations), the first approach designed to render 3D point cloud data unlearnable for unauthorized deep learning models, by applying class-wise transformations. It also presents a data restoration scheme to enable authorized users to effectively train on the unlearnable data. Theoretical analysis and extensive experiments on various datasets and models demonstrate the effectiveness of UMT in safeguarding sensitive 3D data while allowing authorized access for legitimate training purposes. The main contributions include the proposal of the first 3D unlearnable scheme, a novel data restoration approach, theoretical insights into the unlearnability mechanism, and empirical validation of UMT's superiority.
Strengths: 1. The paper is clearly written, making complex concepts easy to understand.
2. Simple and Effective Unlearnable Scheme: The paper proposes a straightforward unlearnable scheme that leverages the characteristics of 3D data. By combining four types of 3D transformations, it effectively protects 3D point clouds data. Additionally, it allows authorized users to restore the transformed dataset using inverse transformations.
3. The extensive experiments conducted in the paper demonstrate the effectiveness of the proposed method. The UMT dataset successfully prevents unauthorized users from achieving high performance on the target test set, thereby protecting the data.
4. The method provides theoretical proof, which enhances the credibility and robustness of the approach.
Weaknesses: 1. The experiments do not investigate the impact on UMT dataset performance when mixed with other datasets. Currently, transformations are applied to the entire training set, altering the distribution between the UMT training and test sets. If only part of the training set is UMT data, can the performance on the UMT test set still be maintained?
2. The experiments rely on randomness. Are the results averaged over multiple runs to ensure reliability? While the appendix provides experiments with different random seeds, it does not show the variance of results in other experiments.
3. The proposed transformation patterns, such as rotation and scaling, are easily detectable by comparing the UMT training set and test set. This predictability might allow unauthorized models to adapt to these transformations, potentially compromising the method's effectiveness.
Technical Quality: 3
Clarity: 3
Questions for Authors: The questions have been listed in the Weaknesses. Please check the weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have discussed the limitations and Broader Impacts in the last section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1:** If only part of the training set is UMT data, can the performance still be maintained?
**Re:** We are thankful for your feedback on this matter. To provide a response to this concern, we performed experiments to evaluate the impact of two UMT schemes on final test accuracy using five different UMT proportions (20%, 40%, 60%, 80%, 100%). The results are presented in Table 1 of the attached PDF.
From the results, we conclude that if only part of the training set is UMT data, the test accuracy will improve, which weakens the data protector's unlearnable performance. This is because the model can only learn incorrect shortcut information from the portion of data with UMT applied, while it can learn the true knowledge from the clean data without UMT. This allows the model to achieve a certain degree of generalization on the test set. Therefore, the higher the proportion of UMT data, the better the protection effect.
We appreciate the reviewer’s concern regarding the significance of different UMT data proportions. Therefore, in the revised version, we will add a section in the experimental part to discuss the effects of varying UMT data proportions.
> **Q2:** Are the results averaged over multiple runs to ensure reliability?
**Re:** We are deeply grateful for this valuable comment you have shared. We fully agree with the reviewer’s suggestion to perform multiple runs and average the results to improve reliability. In the main experiments presented in the paper, each result was obtained from a single run with the random seed set to 2023. To address your concerns, we conduct additional experiments on the KITTI and ScanObjectNN datasets using two more random seeds (23 and 1023). The average results with standard deviations from three runs are provided in the Table 2 and Table 3 of the attached PDF. It can be seen that UMT continues to demonstrate excellent performance. Due to space limitation, we will include the average results on remaining benchmark datasets in the revised paper.
> **Q3:** Transformation patterns are easily detectable by comparing the UMT training set and test set. This predictability might allow unauthorized models to adapt to these transformations.
**Re:** We greatly appreciate this valuable insight you have provided. We agree with the reviewer’s perspective that obvious transformation patterns, such as rotation and scaling, could allow unauthorized users to detect anomalies. To address this issue, our proposed *Category-Adaptive Allocation Strategy* restricts the extent of transformations to avoid excessive changes. For rotation, we limit $\alpha$ and $\beta$ to a small range. Similarly, for scaling, shear, and twisting, we restrict the parameters controlling the transformation extent within a lower and upper bound that can be predefined by the data protector (In the revised version, we will provide a more detailed explanation of these settings and their significance). Furthermore, even if unauthorized users detect that the training data have been transformed, these UMT data may be perceived as regular augmented data because standard training procedures often use common transformations (e.g., rotation, scaling) to serve as data augmentation techniques to enhance model generalization.
Moreover, let us assume that unauthorized users are aware of the UMT transformations and have designed adaptive attacks (as the reviewer has expressed concern). This situation is considered in Lines 245-249, where we discuss using random transformations as an adaptive attack against UMT, and the experimental results are presented in Table 2 (main text). The results in the last row of Table 2 demonstrate that although the effectiveness of the UMT is somewhat diminished, it still maintains a certain degree of robustness, with the test accuracy remaining 28.67% lower than the clean baseline.
As for unauthorized models using networks that are invariant to these transformations for adaptive attacks, we conducted experiments with RIConv++ (a rotation-invariant network) and 3DGCN (a scale-invariant network) in Table 1, showing that these networks can successfully neutralize the effects of the transformations. We totally agree with the reviewer and acknowledge that future networks, designed to be invariant to all these types of transformations, might overcome our UMT. However, as we discuss in Section 6 (Lines 322-327), networks invariant to non-rigid transformations like twisting and shear have not yet been developed. Thus, our research calls for the creation of more transformation-invariant networks, which is also a key contribution of our work.
---
Rebuttal 2:
Comment: For Q1, the experiment has a few unclear aspects. First, the dataset used in this experiment is not specified. Second, the experiment does not provide a comparison using 0% UMT data.
For Q2, the experiments have addressed my concern.
For Q3, I still have a concern. According to Table 3, although RIConv++ is a rotation-invariant network, the combination of several transformations is useless for RIConv++.
---
Rebuttal 3:
Title: Response to Reviewer nf2f [1/2]
Comment: > Q1: Specify the dataset and provide a comparison using 0% UMT data.
**Re:** Thank you very much for your reply. The dataset we used here is the ModelNet10 dataset. We have provided the results for 0% UMT data in the table below, and it can be seen that the trend remains consistent with our previous conclusion: the higher the proportion of UMT data, the better the protection effect.
**Table X: Test accuracy (%) results when using different ratios of UMT(k=1, $\mathcal{R}$) ModelNet10 training data.**
| UMT ratio | PointNet++ | PointNet | PointCNN | DGCNN | AVG |
| :-------: | :--------: | :------: | :------: | :---: | :---: |
| **100%** | 30.51 | 29.85 | 29.46 | 35.13 | 31.24 |
| **80%** | 47.03 | 47.14 | 46.92 | 48.57 | 47.42 |
| **60%** | 60.35 | 62.44 | 63.55 | 62.89 | 62.31 |
| **40%** | 73.79 | 66.63 | 67.40 | 69.49 | 69.33 |
| **20%** | 84.03 | 84.14 | 83.26 | 85.68 | 84.28 |
| **0%** | 92.95 | 89.32 | 89.54 | 92.73 | 91.14 |
**Table Y: Test accuracy (%) results when using different ratios of UMT(k=2, $\mathcal{R}\mathcal{S}$) ModelNet10 training data.**
| UMT ratio | PointNet++ | PointNet | PointCNN | DGCNN | AVG |
| :-------: | :--------: | :------: | :------: | :---: | :---: |
| **100%** | 28.08 | 22.25 | 21.48 | 14.10 | 21.48 |
| **80%** | 24.44 | 20.98 | 21.65 | 25.11 | 23.05 |
| **60%** | 41.74 | 40.51 | 40.74 | 40.62 | 40.90 |
| **40%** | 58.48 | 59.04 | 58.04 | 59.04 | 58.65 |
| **20%** | 76.00 | 75.00 | 75.78 | 76.00 | 75.70 |
| **0%** | 92.95 | 89.32 | 89.54 | 92.73 | 91.14 |
---
Rebuttal 4:
Title: Response to Reviewer nf2f [2/2]
Comment: > Q3: The combination of several transformations is useless for RIConv++ on KITTI dataset.
**Re:** Thank you for your thorough review. This is a really great and insightful question! After our in-depth consideration, we believe that this counter-intuitive phenomenon is caused by RIConv++'s local feature extraction process overlooking the shortcut points of UMT KITTI samples that are distant from the original sample.
The following is our reasoning process: Because this phenomenon only occurs when training RIConv++ on KITTI (our KITTI dataset was created following the settings in [1,2]). Therefore, our intuition is that the unique input point cloud samples in RIConv++ requiring additional normal vectors caused this phenomenon, because RIConv++[3] claims the LRA-based normal vector is the key factor in its rotation invariance. To further investigate this, we replaced RIConv++ with PointNet (with normals) and conducted experiments on KITTI, as shown in the table below (counterintuitive results are highlighted in bold, with "√" indicating the use of normals calculated by the LRA method in RIConv++, and "×" indicating the absence of normals).
| Datasets | ModelNet10 | ModelNet40 | ShapeNetPart | ScanObjectNN | KITTI | KITTI | KITTI |
| :--------: | :----------: | :----------: | :----------: | :-----------: | :----------: | :------------: | :------------: |
| Models | RIConv++ (√) | RIConv++ (√) | RIConv++ (√) | RIConv++ (√) | RIConv++ (√) | PointNet (×) | PointNet (√) |
| Clean | 86.01 | 85.82 | 97.39 | 66.34 ± 1.21 | 99.64 ± 0.09 | 98.04 ± 2.23 | 99.39 ± 0.40 |
| UMT (R) | 85.16 | 81.90 | 96.96 | 62.33 ± 10.68 | 98.53 ± 1.19 | 36.23 ± 30.18 | 27.79 ± 5.81 |
| UMT (RS) | 18.64 | 11.32 | 2.62 | 10.76 ± 1.76 | **99.80 ± 0.09** | 31.24 ± 7.11 | 23.83 ± 0.72 |
| UMT (RSW) | 13.77 | 8.63 | 3.51 | 11.69 ± 1.41 | **98.48 ± 1.61** | 19.13 ± 6.93 | 26.17 ± 6.32 |
| UMT (RSWH) | 29.24 | 13.68 | 32.68 | 31.83 ± 2.20 | **99.34 ± 0.23** | 26.84 ± 16.26 | 29.52 ± 4.09 |
We can see that: (1) Using normal vectors did not influence the final outcome of PointNet; (2) The unexpected results occurred only on the KITTI dataset. Thus, we exclude the possibility of influence from the normal vectors and shift our focus to considering the potential impacts between the KITTI samples themselves and other processes involved in the RIConv++ implementation.
After carefully reading the original RIConv++ paper [3] and thoroughly inspecting the KITTI samples, we infer that this is because following the experimental procedures outlined in [1-2], the only two categories in the KITTI samples, "pedestrian" and "vehicle"(extracted from autonomous driving dataset), tend to be relatively flat in 3D space and consist of only 256 points. For example, pedestrian samples typically stand upright, with most data points concentrated around this plane-like surface, while vehicle samples are concentrated near the horizontal plane, with limited height range. Other datasets like ModelNet10 avoid this issue due to their diverse categories and 1024 points, with objects like chair and bathtub showing distinct 3D characteristics. After applying transformations to KITTI with UMT, the limited 3D sample features resulted in restricted transformation influence: the plane-like surface's features constrained the range of UMT transformations, and the number of points limited the points UMT could change, leading to local anomalies (e.g., when shear operations are applied to pedestrian samples, some points may stretch far from the plane-like surface, creating anomalies). While these anomalies would still be considered class-wise shortcuts in other models, resulting in defense effects, they are overlooked by RIConv++ since RIConv++ is designed with convolutional operators that extract local rotation-invariant features, ignoring points distant from local features.
We are happy to add a discussion of this counterintuitive phenomenon in the revised manuscript and plan to delve deeper into the differing effects of UMT on the KITTI dataset and various models in future work, as this presents an intriguing area for more detailed and complete investigation. Thank you once again for your valuable feedback, we hope our response resolves your concerns. If you have any further questions, feel free to let us know. Wishing you a wonderful day :)
[1] A backdoor attack against 3D point cloud classifiers. ICCV'21
[2] PointCRT: Detecting backdoor in 3D point cloud via corruption robustness. MM'23
[3] RIConv++: Effective rotation invariant convolutions for 3D point clouds deep learning. IJCV'22 | Summary: This paper studies the protection scheme against unauthorized learning on 3D point cloud data. A simple class-wise transformation method is designed to mislead the model to learn the transformation patterns of points instead of categorical knowledge. The method is evaluated on popular used 3D datasets and models.
Strengths: - This paper studies a new problem and presents a reasonable and simple method to tackle the problem.
- Extensive experiments are conducted. I appreciate the comprehensive ablation studies and analyses provided in the paper.
Weaknesses: - I am concerned about the value of the settings studied in the paper. Most classification datasets considered in the paper are used to develop 3D modeling algorithms and are usually far from real-world 3D/point cloud applications that require high safety standards like 3D face recognition and scene-level point clouds for autonomous driving. These scenarios usually have much more categories (e.g., facial ids) or cannot be modeled as a classification problem (e.g., scene understanding for autonomous driving). Can the proposed method be easy to transfer to these settings?
- How about the results if we randomly apply R, S, W, H when we train the model? Such strong data augmentation may lead to lower classification accuracy but the model may achieve >30% test accuracy on the UMT datasets.
- Rotation/scale/SE(3) equivariant 3D modeling has been extensively studied in previous work [r1-r4]. Will the proposed method fail if we use these models for 3D modeling?
[r1] SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks, NeurIPS 2020
[r2] Vector neurons: A general framework for so (3)-equivariant networks, ICCV 2021
[r3] E2PN: Efficient SE(3)-Equivariant Point Network, CVPR 2023
[r4] Scale-Equivariant Deep Learning for 3D Data
Technical Quality: 3
Clarity: 3
Questions for Authors: - The title "Class-wise Transformation Is All You Need" is quite general, which may not be helpful for readers to understand the topic studied in the paper. I was a bit confused when I first read the title along with the abstract.
Overall, this paper studies a new problem and proposes a simple approach against unauthorized learning on 3D point cloud data. Extensive experiments and analyses are provided to evaluate the method. However, I still have concerns about the value of the settings considered in the paper and the robustness of the proposed method. I am not an expert on the topic (safety in machine learning), thus I would like to rate this paper as borderline and wait for further discussions.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations and broader impacts of the method have been discussed in Section 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Reviewer Yqki [1/3]**
Due to space limitation, we have divided our response into three parts. Thank you for your understanding.
> **Q1:** Scene understanding for autonomous driving need to be considered.
**Re:** We appreciate your thoughtful and constructive feedback and strongly agree with your point about conducting experiments in more practical scenarios. We present the UMT performance on semantic segmentation task using semantic scene understanding dataset SemanticKITTI [1] in the Table R3 below. It can be observed that UMT is still effective.
**Table R3:** Evaluation of UMT on semantic segmentation task using semantic scene understanding dataset SemanticKITTI [1].
| Metrics | Eval accuracy (%) | Eval accuracy (%) | mIoU (%) | mIoU (%) |
| :----------------: | :---------------: | :------------------: | :--------: | :------------------: |
| **Semantic KITTI** | PointNet++ | Point Transformer V2 | PointNet++ | Point Transformer V2 |
| Clean baseline | 29.89 | 72.92 | 14.16 | 54.78 |
| **UMT (k=2)** | **4.69** | **19.40** | **0.80** | **13.39** |
Furthermore, in our original paper, we performed classification tasks on real-world datasets, such as the autonomous driving dataset KITTI and the indoor dataset ScanObjectNN (Table 1). We also performed semantic segmentation tasks of scene understanding on the indoor dataset S3DIS (Table 3). These experimental results demonstrate the effectiveness of UMT and reflect its efficacy across different tasks in real-world datasets. The key reason UMT is effective for compromising deep learning tasks across different scenarios is that it creates an erroneous mapping between the class-wise transformation shortcuts and the ground-truth labels after applying UMT to the training data. This results in a significant drop in the model's generalization performance on clean test data without any transformation shortcut (a more detailed explanation with data support can be found in Lines 125-134 of the original paper). We will add the results of SemanticKITTI in our revised paper based on your insightful suggestions.
[1] *SemanticKITTI: A dataset for semantic scene understanding of LiDAR sequences.* ICCV'19
> **Q2:** How about the results if we randomly apply R, S, W, H when we train the model?
**Re:** We appreciate your pointing out this concern. We fully agree with your point that random data augmentation might degrade the performance of UMT. In fact, we have already reported the experimental results of using random $\mathcal{R} \mathcal{S}$ for UMT(\mathcal{R}\mathcal{S}) in Table 2 of the original manuscript. It can be seen that after data augmentation, the test accuracy indeed increased, indicating a decrease in UMT performance. However, we also observed that the final average accuracy is still 28.67% lower than the clean baseline, demonstrating that UMT still exhibits certain robustness (as discussed in Lines 238-249). To further explore this valuable insight you provided, we supplement the experimental results using four types of random augmentations according to your suggestion in the table below, and find that the conclusion is consistent with our manuscript.
**Table R4:** Test accuracy (%) results using UMT training data and UMT+random augmentation training data.
| ModelNet10 | PointNet | PointNet++ | DGCNN | PointCNN | AVG |
| :----------------------------------------------------------: | :------: | :--------: | :---: | :------: | :---: |
| Clean baseline | 89.32 | 92.95 | 92.73 | 89.54 | 91.14 |
| UMT(k=4) | 16.19 | 36.56 | 17.62 | 27.42 | 24.45 |
| UMT(k=4) + random $\mathcal{R}\mathcal{S}\mathcal{H}\mathcal{W}$ | 25.99 | 61.78 | 61.89 | 44.16 | 48.46 |
While we acknowledge that these random augmentations offer some defense against UMT, and even achieve an accuracy exceeding 30%, we still believe that this cannot be considered a qualified defense. According to the prevailing view in the current literature on defenses against unlearnable attacks [1-4], a defense is only considered successful if the accuracy after the defense reaches or exceeds the level of the clean baseline. As stated earlier, random augmentations have not yet achieved this, so UMT remains robust at present.
[1] *What can we learn from unlearnable datasets?* NeurIPS'23
[2] *Image shortcut squeezing: Countering perturbative availability poisons with compression.* ICML'23
[3] *ECLIPSE: Expunging clean-label indiscriminate poisons via sparse diffusion purification.* ESORICS'24
[4] *Purify unlearnable examples via rate-constrained variational auto-encoders.* ICML'24
---
Rebuttal 2:
Title: Response to Reviewer Yqki [2/3]
Comment: > **Q3:** 3D face recognition also need to be considered.
**Re:** We value your insightful and constructive feedback and fully concur with your suggestion to conduct experiments in more practical scenarios. We present the UMT performance on face recognition in the Table R5 below using Basel Face Model 2017 [1] to generate 3D point cloud face scans. Our experimental setup for 3D face BFM 2017 is consistent with [2], which involves generating 100 classes of face scans (each of them contains 50 point clouds), dividing the whole 5000 face data into training and test part, containing 4000 and 1000 data respectively, and then randomly choosing 1024 points for each point clouds. It can be seen that UMT is still effective in this scenario, which is also because the class-wise transformation causes the model to overfit the transformation-based shortcut of the training data, making it difficult to generalize to clean test data.
**Table R5:** Evaluation of UMT on face recognition using BFM2017-generated 3D point cloud face dataset.
| PointNet | Clean baseline | UMT(k=1, $\mathcal{R}$) | UMT(k=2, $\mathcal{R}\mathcal{S}$) | UMT(k=3, $\mathcal{R}\mathcal{S}\mathcal{W}$) | UMT(k=4, $\mathcal{R}\mathcal{S}\mathcal{W}\mathcal{H}$) |
| :---------------: | -------------- | ----------------------- | ---------------------------------- | --------------------------------------------- | -------------------------------------------------------- |
| Test accuracy (%) | 98.10 | 0.81 | 1.11 | 0.91 | 1.01 |
[1] *Morphable face models-an open framework.* International Conference on Automatic Face & Gesture Recognition'18
[2] *Toward availability attacks in 3D point clouds.* ICML'24
> **Q4:** The performance of proposed method against rotation/scale/SE(3) equivariant models.
**Re:** We appreciate your insight on this matter. In our original paper, we have indeed discussed the robustness of UMT against rotation/scaling invariant models. The results for RIConv++ (rotation invariant) and 3DGCN (scaling invariant) networks in Table 1, as well as RIConv, LGR-Net (rotation invariant), and 3DGCN in Table 4, demonstrating that these invariant networks can indeed defend against class-wise rotation and scaling. It is worth noting that these networks, which are invariant to a single transformation, cannot defend against UMT formed by a combination of multiple transformations. Thus, it appears that networks like SE(3), which are invariant to multiple transformations in space, can potentially overcome UMT. Therefore, following your insightful advice, we included experimental results for the SE(3) network SE (3)-Transformer [1] in the following Table R6.
**Table R6:** Test accuracy results (%) of SE(3) equivariant model SE (3)-Transformer using UMT and clean training set.
| Clean baseline | 49.07 |
| :----------------------------------------------------------: | :-------: |
| **UMT(k=3, $\mathcal{R}\mathcal{S}\mathcal{W}$)** | **17.51** |
| **UMT(k=4, $\mathcal{R}\mathcal{S}\mathcal{W}\mathcal{H}$)** | **13.55** |
However, it can be seen that SE (3)-Transformer cannot defend against UMT (k=3, $\mathcal{R}\mathcal{S}\mathcal{W}$) and UMT(k=4, $\mathcal{R}\mathcal{S}\mathcal{W}\mathcal{H}$). This is because existing transformation-invariant networks, even including SE(3) invariant networks [1], are designed only for rigid transformations (rotation, scaling, reflection, and translation, as shown in Fig. 7 in the original paper). There are currently no invariant networks proposed for non-rigid transformations like shear and twisting. Therefore, if a data protector wants UMT to be more robust, they can including non-rigid class-wise transformations to defeat existing rigid transformation-invariant networks. Of course, we also acknowledge that more robust invariant networks may be developed in the future, but it is exactly for this reason that the introduction of UMT will advocate for the design of more robust 3D point cloud networks, which is also where the value of our work lies.
[1] *SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks*. NeurIPS'20
---
Rebuttal 3:
Title: Response to Reviewer Yqki [3/3]
Comment: > **Q5:** The title is quite general, which may not be helpful for readers to understand the topic studied in the paper.
**Re:** Thank you for bringing this matter to our attention. We apologize for any confusion caused by the title. To better convey the focus of the paper, we will revise the title to: "Unlearnable 3D Point Clouds: Class-wise Transformation Is All You Need".
---
Rebuttal Comment 3.1:
Comment: Thanks for your detailed response. My concerns about data augmentation and existing SE(3)-equivariant methods have been addressed. After reading other reviews, I would like to upgrade my score to Borderline Accept.
---
Reply to Comment 3.1.1:
Title: Response to Reviewer Yqki
Comment: Dear Reviewer Yqki,
We want to extend our sincere thanks for your detailed review and increasing the score during this phase. Your feedback was instrumental in helping us improve the quality of the paper, contributed significantly to the development of the community, and we are truly grateful for your support.
Thank you for your time and understanding.
Sincerely,
Submission19662 Authors | Summary: After reading the author’s response, I would like to increase my evaluation to borderline accept.
—
This paper addresses a critical issue by extending unlearnable strategies to 3D point cloud data, introducing the Unlearnable Multi-Transformations (UMT) approach. The use of a category-adaptive allocation strategy and multiple transformations is innovative and well-conceived. Notably, the paper highlights a gap in existing literature by acknowledging the challenges even authorized users face in learning from unlearnable data, proposing a data restoration scheme to address this. The theoretical and empirical validation across six datasets, sixteen models, and two tasks convincingly demonstrates the framework's effectiveness.
Strengths: 1. This paper studies an interesting and important problem. The problem is realistic but not widely explored.
2. The experiments of this work are comprehensive, which is admirable.
Weaknesses: 1. The presentation and writing need to be polished to reach the acceptance bar. For the current form, there are a series of unclear descriptions and explanations.
2. The theoretical contributions are overall weak, which provides limited insights to the research community. Besides, some theoretical analysis needs to be justified.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. The title of this paper is somewhat overclaimed or exaggerated. It is not related to the research topic of this work, which is also less informative.
2. In the introduction, the paper analyzes three issues and challenges of the research problem. However, there is no intuition about why the proposed method can handle the issues and challenges. There are just some technical details. Could the paper supplement more intuitions for a better understanding of reviewers?
3. In the introduction, there is an understanding gap between the method descriptions and theoretical analysis. Some definitions such as the decision boundary of the Bayes classifier in the Gaussian Mixture Model, are very strange, and cannot describe the work principle of the proposed method.
4. For the data protector and authorized user, do we need to add some noise to implement them with the minimax optimization? More details are needed. Besides, are they mutually reversible in practice?
5. For Eq. (3), is there some evidence about the choice of $\mathcal{A}_N$?
6. As for Property 2, why do we need all four transformation matrices we employ and the multiplicative combinations of any of these matrices are all invertible matrices? I can understand the theoretical analysis. However, it seems that this does not work for the main conclusion of this paper.
7. Similarly, the bounds in Lemma 5 and Theorem 6 are very loose. With them, it is hard to believe the proposed method can work well (although the experimental results are good). More discussions are needed.
8. The proof of Lemma 3 is simple since the space of $y$ is limited to 2. When the components of GMMs are more than 2, will the claim still hold? Could the paper add some discussions about this?
9. From Line 782 to Line 783, could the paper supplement some details to describe why the last line of inequalities holds?
10. For Line 811, the paper assumes that $\alpha_2=\frac{1}{3}$ and $d=3$. Will the assumption make the theory less general? More discussions are also needed.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Due to space limitation, we have divided our response into three parts (including one rebuttal part and two official comments). Thank you for your understanding.
#### **Response to Reviewer BhZK [1/3]**
> **Q1:** The title is exaggerated, less informative, and not related to the research topic.
**Re:** Thank you for pointing out that the original title may have appeared exaggerated and less informative. We agree with your assessment and have decided to revise the title to: "Unlearnable 3D Point Clouds: Class-wise Transformation Is All You Need". This new title is intended to better convey the specific nature of our research.
> **Q2:** There is no intuition about why the proposed method can handle the challenges.
**Re:** Thank you for pointing out the need for a clearer intuition behind our proposed method. We appreciate your feedback and would like to provide additional context to address this concern. In the introduction, we identified three main challenges: (i) Incompatibility with 3D data; (ii) Poor visual quality; (iii) Expensive time expenditure. Our intuition is that 3D transformations are custom-designed for handling 3D point cloud data (solving Challenge 1). Many of these transformations, like rotation and scaling, only alter the geometric shape without impacting visual presentation (solving Challenge 2). They are achieved through matrix operations, which are computationally linear and thus less costly than methods involving complex model optimization (solving Challenge 3). In the revised version, we will include the intuition between Line 43 and Line 44 to facilitate better understanding for the readers.
> **Q3:** Some definitions such as the decision boundary of the Bayes classifier in the Gaussian Mixture Model cannot describe the work principle of the proposed method.
**Re:** Thank you for your insightful review. We acknowledge that our Introduction section lacks sufficient connection and explanation between the theoretical analysis and the proposed approach, and we apologize for any understanding gap this may have caused. Here is our explanation:
> To theoretically analyze UMT, we define a binary classification setup similar to that used in [1-3]. Meanwhile, we employ a Gaussian Mixture Model (GMM) to model the clean training set and use the Bayesian optimal decision boundary to model the point cloud classifier. Theoretically, we prove that the UMT training set also follows a GMM distribution and indicate the existence of that the classification accuracy of UMT dataset to be lower than that of the clean dataset in a Bayesian classifier, verifying that the proposed UMT scheme can be effective.
We will insert this explanation between Lines 50 and 51 to bridge the understanding gap. We sincerely thank you once again for your valuable suggestions, which will significantly improve the quality of our paper!
[1] *CUDA: Convolution-based unlearnable datasets*. CVPR'23
[2] *Precise statistical analysis of classification accuracies for adversarial training.* The Annals of Statistics'22
[3] *The curious case of adversarially robust models: More data can help, double descend, or hurt generalization.* Uncertainty in Artificial Intelligence'21
> **Q4:** Do we need to add some noise to implement them with the minimax optimization?
**Re:** Thank you for bringing this matter to our attention. We do not need to add noise for the minimax optimization. Here are our explanations: Solving the optimization problem in Eq. (1) directly is infeasible for neural networks because it necessitates unrolling the entire training procedure within the inner objective and performing backpropagation through it to execute a single step of gradient descent on the outer objective [1]. Therefore, existing unlearnable schemes to address this optimization process are generally divided into two strategies: *model-dependent noise-based optimization* [1, 2] and *model-agnostic shortcut-based operations* [3, 4]. Due to the increased computational complexity of noise-based optimization schemes for more complex point cloud data and the impact of irregular noise on sample quality, we opted for the model-agnostic shortcut-based scheme. This kind of approach usually does not involve the optimization process; however, it is necessary to activate DNN shortcuts to achieve the outcome specified in Eq. (1), like our proposed class-wise 3D transformations. The reason why our proposed model-agnostic operation can ultimately satisfy Eq. (1) is explained in Sec. 3.2 (iii). We will include a more detailed explanation of how we implement Eq. (1) at Line 89 in the revised version.
[1] *Adversarial examples make strong poisons*. NeurIPS'21
[2] *Unlearnable examples: Making personal data unexploitable.* ICLR'21
[3] *Availability attacks create shortcuts.* KDD'22
[4] *CUDA: Convolution-based unlearnable datasets*. CVPR'23
> **Q5:** Are the data protector and authorized user mutually reversible in practice?
**Re:** Thank you for bringing this matter to light. In practice, the optimization process between data protectors and authorized users does not necessarily need to be mutually reversible. It only requires that after the data protector releases the unlearnable data, the authorized user can use a certain method to train normally on the unlearnable data. Our proposed reversible class-wise transformation approach is just one way to achieve this goal. There are also studies that use irreversible methods to achieve this goal [1]. This paper is currently the only other work besides ours that considers authorized user access in the context of unlearnable examples. It proposes a tailored network for authorized users to learn from unlearnable image data, but this process does not involve reversible unlearnable noise. We will add more explanation about authorized users after Line 93 in the revised version.
[1] *Ungeneralizable examples* CVPR'24
---
Rebuttal 2:
Title: Response to Reviewer BhZK [2/3]
Comment: > **Q6:** Why is the choice of $\mathcal{A}_N$ for Eq. (3)?
**Re:** Thank you for highlighting this issue. We set $\mathcal{A}\_N$ this way to ensure that the number of classes in the final rotation matrix is greater than or equal to N (the number of classes in the training set), thereby ensuring that the UMT scheme satisfies the class-wise setup. Concretely, in the rotation operation, each of the three directions has $\mathcal{A}\_N$ distinct angles, which means that the final rotation matrix has $\mathcal{A}\_N^3$ possible combinations. To satisfy the class-wise setup, $\mathcal{A}\_N^3$ must be at least $N$, requiring $\mathcal{A}\_N$ to be no less than $\lceil \sqrt[3]{N} \rceil$. Therefore, we configure $\mathcal{A}\_N$ in this way in Eq. (3). This point will be explained in more detail in Lines 144,145 in the revised version.
> **Q7:** Why transformation matrices are all invertible matrices in Property 2?
**Re:** Thank you for your careful review and for identifying this problem. The transformation matrices need to be invertible for the purpose of designing our proposed data restoration scheme (Sec. 3.4). Specifically, authorized users can build inverse matrices for the transformations after receiving the class-wise parameters from the data protector. This allows the authorized users to normally train on the protected data by leveraging the property that multiplying a matrix by its inverse results in the identity matrix (Lines 196-197). In the revised version, we will provide a detailed explanation of the necessity of Property 2 when it is mentioned for the first time in Line 159.
> **Q8:** The bounds in Lemma 5 and Theorem 6 are loose.
**Re:** Your feedback on this issue is greatly appreciated. We acknowledge that these bounds are quite loose. Our theorem aims to demonstrate the existence of an unlearnable situation that UMT satisfies (as stated in Line 67 and Line 189), rather than emphasizing the performance of UMT. Our theory is intended to prove that the equation $\tau\_{\mathcal{D}\_c} (P_u) < \tau\_{\mathcal{D}\_c} (P)$ has a solution, not to solve this equation. This theoretical analysis follows a previously well accepted theoretical analysis proposed by 2D unlearnable literature [1, 2]. We apologize that our theorem has given you the impression of a weak contribution, but within the current literature on unlearnable examples, this series of proofs is accepted in the theoretical analysis of the unlearnable effectiveness [1]. In the revised version, we will add a statement about the limitations of our theoretical analysis in Sec. 6.
[1] *CUDA: Convolution-based unlearnable datasets*. CVPR'23
[2] *Corrupting Convolution-based Unlearnable Datasets with Pixel-based Image Transformations.* arXiv:2311.18403
> **Q9:** Will the claim still hold when the components of GMMs are more than 2?
**Re:** We are thankful for your feedback on this matter. The claim of Lemma 3 will still hold. Here are our proofs:
> **Proof (GMM with *n* components):** Assuming the dataset $\mathcal{D}\_c$ is represented by a GMM $\mathcal{N}(y\mu, \boldsymbol{I})$, where $y \in \\{y_i\\}\_{i=1}^n$. Let us take $y = y_i$, then $\mathcal{D}\_{cy_i} \sim
> \mathcal{N}(y_i\mu, \boldsymbol{I})$, UMT data point is $(\mathbf{T}\_\{y\} x, y)$, then we have:
>
> **Mean:** $\mathbb{E}_ \{ (x,y) \sim \mathcal{D}\_{cy_i} } [\mathbf{T}\_\{y_i\} x] = \mathbf{T}\_\{y_i\} \mathbb{E}\_\{(x,y) \sim \mathcal{D}_{cy_i}\} x = \mathbf{T}\_\{y_i\} y_i\mu$
>
> **Variance:** $\mathbb{E}_ \{ (x,y) \sim \mathcal{D}\_{cy_i} } [(\mathbf{T}\_\{y_i\} x - \mathbf{T}\_\{y_i\} y_i\mu) (\mathbf{T}\_\{y_i\} x - \mathbf{T}\_\{y_i\} y_i\mu)^{\top}] = \mathbf{T}\_\{y_i\} \mathbb{E}_ \{ (x,y) \sim \mathcal{D}\_{cy_i} } [(x - y_i\mu) (x - y_i\mu)^{\top}] \mathbf{T}\_\{y_i\}^{\top} = \mathbf{T}\_\{y_i\} \boldsymbol{I} \mathbf{T}\_\{y_i\}^{\top} = {\lambda^2\_\{y_i\} } \boldsymbol{I} $
>
> Thus we have: $\mathcal{D}\_u \sim \mathcal{N}(y \mathbf{T}_y \mu, \lambda\_{y}^{2} \boldsymbol{I})$.
We model a binary classification problem for theoretical analysis as it allows for a simpler expression of the Bayesian decision boundary, and this setting is widely used and accepted by the community [1,2,3].
[1] *CUDA: Convolution-based unlearnable datasets*. CVPR'23
[2] *Precise statistical analysis of classification accuracies for adversarial training.* The Annals of Statistics'22
[3] *The curious case of adversarially robust models: More data can help, double descend, or hurt generalization.* Uncertainty in Artificial Intelligence'21
---
Rebuttal 3:
Title: Response to Reviewer BhZK [3/3]
Comment: > **Q10:** From Line 782-783, could you describe why the last line of inequalities holds?
**Re:** Thank you for noting this point. Firstly, we define $\alpha_1$ as a variable $\geq 0$ on line 779. Therefore, assuming we define $s(\alpha_1) = \frac{d^2}{2\alpha_1} + \frac{\alpha_1}{2}$. The minimum value of this function can be obtained using the Arithmetic Mean-Geometric Mean Inequality (AM-GM Inequality), which states that $a + b \ge 2\sqrt{ab}$ for $a, b \ge 0$. Thus, $s(\alpha_1) \ge 2\sqrt{\frac{d^2}{2\alpha_1} \cdot \frac{\alpha_1}{2}} = d$. Since $\beta_1 \le s(\alpha_1)$, we can infer that $\beta_1 \le s(\alpha_1)_{\text{min}}$, which means $\beta_1 \le d$. In the revised version, we will include these explanations between Line 782 and Line 783 to make the reason of this inequality more clearly.
> **Q11:** Line 811 assumes $\alpha_2 = \frac{1}{3}, d=3$. Will this make the theory less general?
**Re:** Thank you for your thorough review and for highlighting this issue. We recognize that setting specific values for these parameters may make the proof of the theorem less general, but in the special case of our existence theorem proof, this will not make Theorem 6 less general. First, since the dimensionality of the 3D point cloud data we study is 3, it is appropriate to use $d=3$ to represent the data dimension in this theoretical analysis. Secondly, assigning an appropriate value to $\alpha_2$ aims to prove the existence of a solution (as mentioned in Line 768) rather than finding the solution of the inequality $\tau\_{\mathcal{D}\_c} (P_u) < \tau_{\mathcal{D}\_c} (P)$. This similar existence theorem proof to demonstrate the effectiveness of an unlearnable scheme is also adopted by a previous unlearnable work [1]. We are very grateful for the valuable advice you have given, in the revised version, we will add the above explanation in Line 811 to make our theory more clearly.
[1] *CUDA: Convolution-based unlearnable datasets.* CVPR'23 | Summary: The paper introduces a novel approach called Unlearnable Multi-Transformations (UMT) to make 3D point cloud data unlearnable by unauthorized users. This method employs a category-adaptive allocation strategy to apply class-wise transformations, thereby preventing unauthorized training on the data. Additionally, the authors propose a data restoration scheme that allows authorized users to learn from the unlearnable data effectively. The effectiveness of UMT is validated through theoretical analysis and extensive experiments on multiple datasets and models, demonstrating its potential in protecting sensitive 3D point cloud data from unauthorized exploitation while enabling authorized access.
Strengths: + A novel unlearnable scheme specifically designed for 3D point cloud data is proposed, and it addresses the issue of enabling authorized users to utilize the protected data effectively.
+ The proposed method seems reasonable, and it has been theoretically demonstrated that the classification accuracy is lower than that of the clean dataset under the Bayes classifier's decision boundary in the Gaussian Mixture Model.
+ Extensive experiments on three synthetic and three real-world datasets, using 16 widely adopted point cloud model architectures for classification and semantic segmentation tasks, verify the superiority of the proposed method.
Weaknesses: - Are object categories sensitive to the combination of transformations? For instance, are there categories where using two different sets of transformations results in significant differences in outcomes? I'm not sure if I've missed something or not, but conducting multiple experiments to observe performance and its variance might be a more reasonable approach.
- It's still not very clear how the point cloud segmentation experiments were conducted. Each object category would be in different regions of the point cloud, so would transformations be applied to the corresponding regions based on the GT? If so, would these transformations and the subsequent inverse transformations affect the segmentation results for authorized users?
Technical Quality: 3
Clarity: 3
Questions for Authors: See the weakness
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the authors mention some limitations of the proposed method, such as its vulnerability to rotation and scale transformations. However, they also note that other types of transformations are not easily compromised in the current context.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1:** Are object categories sensitive to the combination of transformations?
**Re:** Thank you for your thoughtful and valuable comment. In the original paper's Table 7, we have examined the effect of various combinations of transformations on the final performance. It can be seen that employing only rigid transformations like $\mathcal{R}$, $\mathcal{S}$, and $\mathcal{R}\mathcal{S}$ yields better results compared to using solely non-rigid transformations such as $\mathcal{H}$, $\mathcal{W}$, and $\mathcal{H}\mathcal{W}$. Nevertheless, regardless of the combination used, the final accuracy is significantly lower than the clean baseline, demonstrating excellent UMT effectiveness.
Following your advice, we carried out further experiments with different random seeds (seed=2023,1023,23), and the outcomes are displayed in the Table R1 below. From the results, it can be observed that when k=2, the average performance across different combinations are not significantly different, all exhibiting good unlearnable effects of UMT. Additionally, the combination of only rigid transformations $\mathcal{R}\mathcal{S}$ outperforms that using only non-rigid transformations $\mathcal{H}\mathcal{W}$, with mixed combinations of both types of transformations yielding intermediate results, which aligns with the findings previously discussed and presented in Table 7. Thank you once again for your valuable advice. We will include a discussion section in our experiments to explore this intriguing phenomenon.
**Table R1:** Average test accuracy (%) results (from three runs with random seeds 23, 1023, 2023) using diverse combinations of transformation for UMT.
| ModelNet10 | PointNet | PointNet++ | DGCNN | PointCNN | AVG |
| :-----------------------: | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: |
| $\mathcal{R} \mathcal{S}$ | 15.12 ± 6.20 | 26.62 ± 5.11 | 25.22 ± 10.20 | 17.26 ± 3.68 | 21.05 ± 0.73 |
| $\mathcal{R} \mathcal{H}$ | 33.70 ± 11.09 | 30.65 ± 17.12 | 36.67 ± 15.64 | 34.99 ± 13.48 | 34.00 ± 13.66 |
| $\mathcal{R} \mathcal{W}$ | 39.83 ± 6.71 | 30.18 ± 12.22 | 36.71 ± 8.10 | 43.47 ± 9.20 | 37.55 ± 5.07 |
| $\mathcal{S} \mathcal{H}$ | 21.22 ± 0.84 | 46.15 ± 8.82 | 31.87 ± 6.52 | 30.87 ± 10.94 | 32.53 ± 4.81 |
| $\mathcal{S} \mathcal{W}$ | 23.50 ± 6.08 | 51.28 ± 6.69 | 38.33 ± 6.10 | 28.27 ± 5.25 | 35.34 ± 2.88 |
| $\mathcal{H} \mathcal{W}$ | 54.41 ± 7.99 | 54.22 ± 14.34 | 55.14 ± 9.42 | 57.75 ± 6.09 | 55.38 ± 6.98 |
> **Q2:** How the point cloud segmentation experiments were conducted?
**Re:** We appreciate your valuable comments regarding this issue. Yes, you are correct. In the semantic segmentation scenario, we use class-wise transformations based on the ground-truth labels of the corresponding regions. We will clarify this process in Sec. 3.3.1 (methodology part) and Sec. 4.2 (experimental part) in the updated version.
> **Q3:** Would the transformations and the subsequent inverse transformations affect the segmentation results for authorized users?
**Re:** We appreciate your pointing out this concern. The transformations and the subsequent inverse transformations do not negatively affect the final segmentation results for authorized users. After applying UMT and then adding the inverse transformation of the restoration scheme, the experimental results are shown in the Table R2 below (each result is the average of three runs to ensure reliability). It can be seen that the final results after applying UMT + Restoration scheme are marginally above the clean baseline results. The UMT+data restoration scheme did not affect the final segmentation results because the data restoration scheme is designed to break UMT's class-wise transformation patterns through class-wise inverse transformations of reversible matrices. The original matrix's effect is neutralized when the class-wise reversible matrix is multiplied by the corresponding class-wise inverse matrix, thus eliminating the unlearnable effects of UMT, allowing authorized users' point cloud DNNs to learn the features of samples, thereby achieving standard segmentation performance. This principle is consistent with its effectiveness in classification tasks.
**Table R2: Semantic segmentation.** Test accuracy (%) and mIoU (%) results using standard training and UMT+data restoration scheme.
| Test accuracy (%) | PointNet++ | Point Transformer v3 | SegNN | AVG |
| :------------------------: | :--------------: | :----------------------: | :--------------: | :--------------: |
| Clean baseline | 74.76 | 74.72 | 79.00 | 76.16 |
| **UMT+Restoration scheme** | **80.23 ± 2.42** | **76.86 ± 1.29** | **80.14 ± 0.31** | **79.08 ± 1.23** |
| **mIoU (%)** | **PointNet++** | **Point Transformer v3** | **SegNN** | **AVG** |
| Clean baseline | 40.06 | 40.57 | 50.27 | 43.63 |
| **UMT+Restoration scheme** | **48.61 ± 3.24** | **43.28 ± 1.35** | **50.37 ± 0.06** | **47.42 ± 1.43** | | Rebuttal 1:
Rebuttal: ## **Global Response**
We express our heartfelt thanks to all the reviewers for their valuable time and are encouraged that they found the paper to be:
1. **Clearly written, making complex concepts easy to understand** *(nf2f)*.
2. The studied problem is **interesting** *(BhZK)*, **important** *(BhZK)*, **realistic** *(BhZK)*, and **new** *(Yqki)*.
3. The proposed scheme is **novel** *(upBr)*, **reasonable** *(upBr, Yqki)*, **simple** *(Yqki, nf2f)*, and **effective** *(nf2f)*.
4. **Theoretical proofs** enhance the **reasonableness** *(upBr)*, **credibility** *(nf2f)*, and **robustness** *(nf2f)* of the scheme.
5. **Extensive experiments** verify the effectiveness of the proposed method *(upBr, BhZK, Yqki, nf2f)*.
We have answered the reviewers' concerns and questions in response to their official reviews and are open to discussing any additional issues that may arise. We would greatly appreciate any further feedback on our detailed rebuttal. Due to the space limitation of each rebuttal, we include tables of some experimental results addressing your concerns in the **attached PDF file**.
Pdf: /pdf/8fce5c5363959a12af49c5ae9a9cf9d53edb4668.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
FUGAL: Feature-fortified Unrestricted Graph Alignment | Accept (poster) | Summary: In this paper, the authors propose an algorithm for graph matching. The method is based on the classical relaxation to the set of doubly stochastic matrices, where the authors add two variants to the optimization: (1) an additional term to match node features (computed from the graph as degree, clustering coefficient, and others), and (2) an extra term to push the matrix to be closer to a permutation matrix. The paper presents some promising experimental evaluations.
Strengths: The paper is well written in general. The strength of the paper is the combination of a classical formulation (the relaxation to doubly stochastic matrices, and I would add the features-term as classical as well) with a new term in the optimization, pushing the solution closer to a permutation matrix. Although the paper has no theorethical guarantees at all, it provides some experimental evaluation.
Weaknesses: One huge weakness is the absolute lack of references to related work.
In the paper (line 57), the authors say that FAQ is the only algorithm addressing the graph matching (they say the QAP, actually) problem directly through the adjacency matrices. This is profoundly inaccurate.
Here are some references, some of them with formulations extremely related to the one presented in this paper. Most of them pose the problem by relaxing to the set of doubly stochastic matrices. Some of them add a term for "feature matching" (like [1] and [3], equation (21) in [1] adds exactly the same term as in eq (7) in this paper). And some of them add an extra term pushing the doubly stochastic matrix closer to a permutation matrix (as presented in this paper), for instance [1] and [6].
[1] Zaslavskiy, M., Bach, F., & Vert, J. P. (2008). A path following algorithm for the graph matching problem. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(12), 2227-2242.
[2] Fiori, M., Sprechmann, P., Vogelstein, J., Musé, P., & Sapiro, G. (2013). Robust multimodal graph matching: Sparse coding meets graph matching. Advances in neural information processing systems, 26.
[3] Zhou, F., & De la Torre, F. (2015). Factorized graph matching. IEEE transactions on pattern analysis and machine intelligence, 38(9), 1774-1789.
[4] Aflalo, Y., Bronstein, A., & Kimmel, R. (2015). On convex relaxation of graph isomorphism. Proceedings of the National Academy of Sciences, 112(10), 2942-2947.
[5] Fiori, M., & Sapiro, G. (2015). On spectral properties for graph matching and graph isomorphism problems. Information and Inference: A Journal of the IMA, 4(1), 63-76.
[6] Nadav Dym, Haggai Maron, Yaron Lipman. (2017) Ds++: A flexible, scalable and provably tight relaxation for matching problems
This makes the experimental section kind of weak also, since at least I would have compared with [1] and eventually [6].
Besides that, to me the experimental section lacks an experiment showing the gain from each addition to the vanilla formulation. This is, what is the difference in performance with and without the features, and lambda=0 vs other lambda.
Another weakness is the lack of theoretical results in terms of guarantees of the method, or at least a comment on which part has or has not guarantees. For instance, the formulation is guaranteed to obtain the solution for some graphs? (in the spirit of Aflalo et. al [4]). The optimization algorithm is guaranteed to obtain a local minima?
I understand the properties of the FW method, but I'm not sure that the presented algorithm solves exactly every sub-process of the FW method.
Another comment is the way of measuring the error. The authors measure the coincidence between the obtained permutation matrix and "the" true isomorphism. However, there may be many solutions, if the graph has nontrivial automorphism group (hence the "the" in the previous sentence). This is the case for instance of the graph presented in the paper in Figure 13.
A more suitable measure could be $||AP-PB||_F^2$
Technical Quality: 2
Clarity: 1
Questions for Authors: In Algorithm 1, the number of iterations T is the value for lambda??
Why does lambda vary in the integers? And how is the value of lambda chosen?
When you change the problem of finding the q minimizing the inner product with grad, with the formulation in eq (16), the solution is not guaranteed to be the same.
How do you choose the lambda in this case? (also, not a good idea to use the same name for different parameters)
In line 179, the rounding process is the same as projecting the solution to the set of permutation matrices? Because this projection is solved exactly as an LAP with the Hungarian algorithm for example.
Also, why use the Hungarian algorithm and not the Sinkhorn formulation here? It is the same problem.
Minor comment:
In line 33, there's a word missing (method?) "In this work, we propose an unrestricted graph alignment m that avoids restricting ..."
Confidence: 5
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: There's no explicit mention of any limitation of the algorithm.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1(a). Lack of references: the authors say that FAQ is the only algorithm addressing the graph matching (they say the QAP, actually) problem directly through the adjacency matrices. This is profoundly inaccurate. [1]-[6]**
**Answer:** We apologize for the lack of precision in our original statement. Our intent was to convey that FAQ is one of the few algorithms that solves QAP for network alignment **without** introducing additional regularizers, which we term the *unmediated* approach. As defined in footnote 1, when QAP is augmented with mediated features, such as in our regularizer and several other algorithms (e.g., PATH [1] using feature matching, [6] employing all pairs shortest paths, [3] with feature matching, and [4] utilizing vector attributes), they fall into *unrestricted* category.
We acknowledge that our initial claim about FAQ being the only unmediated approach was incorrect. GLAG [2] is also an unmediated approach and we will revise our statement accordingly. Besides, [7] shows that GLAG's relaxation leads to worse alignments than FAQ.
[7] Lyzinski, V. et al. Graph matching: Relax at your own risk. PAMI, 2015.
**W1(b) Compare with [1] and [6].**
**Answer:** We have incorporated PATH[1] and DDSP[6]. `Fig. 1 in the pdf attached to the global rebuttal` presents the results. Both baselines fall short compared to FUGAL. Furthermore, as the graph sizes grow, the performance gap widens.
Additionally, both algorithms are prohibitively slow and fail to return an alignment within 5 hours even on the smallest real-world dataset in our experiments, ca-netscience. Hence, we report results on synthetic datasets. [7] (cited above) have shown FAQ to outperform Path.
**W2. What is the difference in performance with and without the features, and lambda=0 vs other lambda.**
**Answer:** We already include experiments on feature ablation study and the impact of setting both $\lambda=0$ and $\mu=0$. We have now also added the third ablation for $\lambda=0$.
* **Features:** We have a detailed ablation study in Appendix A.3 (referred from Section 5.2 in main manuscript), where we systematically turn on each feature and study its impact (Fig 8). The results reveal that degree is the most important feature, followed by the mean degree of neighbors.
* **$\lambda=0$**: `Fig. 3 in the pdf attached to the global rebuttal` presents the results. If $\lambda$ is not iteratively increased (recall Alg. 1), the performance suffers.
**W3(a). Is the formulation guaranteed to obtain the solution for some graphs? (like [4]).**
**Answer:** We do not have guarantees of an optimal solution for any specific class of graphs.
**W3(b). Is the optimization algorithm guaranteed to obtain a local minima?**
**Answer:** The problem is not convex due to the regularizer that guides the solution towards a permutation matrix. In this scenario, we do not guarantee obtaining local minima. However, as discussed in W2, the proposed strategy of iteratively increasing $\lambda$ yields better accuracy.
**W3(c). I'm not sure that the presented algorithm solves exactly every sub-process of FW.**
**Answer:** It does. Each iteration of the algorithm uses FW within which we employ optimal transport to determine the doubly-stochastic matrix $q$ that minimizes the inner product with $grad$. However, across iterations, we increment $\lambda$ to guide the solution towards a permutation matrix.
**W4. ...A more suitable measure could be $||AP-PB||_F^2$.**
**Answer:** We have added *Frobenius Norm*, as well as, *matched neighborhood consistency (MNC)* as additional metrics. The results are presented in `Fig. 2 of the PDF attached to the global response`. Consistent with the accuracy results, FUGAL maintains its competitive edge.
**Q1. In Alg. 1, the number of iterations T is the value for lambda?? Why does lambda vary in the integers? And how is the value of lambda chosen?**
**Answer:** Yes, $\lambda$ varies from 0 to $T-1$ in integers. We set $T$ to $15$ across all datasets. Varying $\lambda$ in the integers is just an empirical decision; it's not a constraint. Both $T$ and the granularity of increasing $\lambda$ are hyper-parameters. We choose $T=15$ since at this value FUGAL demonstrates robust accuracy across all datasets.
**Q2. When you change the problem of finding the $q$ minimizing the inner product with grad, with the formulation in eq (16), the solution is not guaranteed to be the same. How do you choose the lambda? (also, not a good idea to use the same name for different parameters)**
**Answer:** We set $\lambda=1$ across all datasets. We will update the variable name and mention this hyperparameter setting in App. A.3.
**Q3(a). In line 179, the rounding process is the same as projecting the solution to the set of permutation matrices? Because this projection is solved exactly as an LAP with the Hungarian algorithm.**
**Answer:** Our rounding algorithm employs maximum weight matching using the Hungarian method. We construct a complete bipartite graph, with nodes from graphs A and B forming the two partite sets. Edge weights correspond to the values in the quasi-permutation matrix produced by Alg. 1. The Hungarian algorithm determines the optimal one-to-one mapping between the node sets, maximizing the cumulative weight of the selected edges.
**Q3(b). why use the Hungarian and not Sinkhorn here?**
**Answer:** Sinkhorn projects the matrix into the space of doubly stochastic matrices. But we need a permutation matrix as output, hence the Hungarian.
**Q4. In line 33, there's a word missing (method?)**
**Answer.** We'll correct this.
----------
# Appeal to the reviewer
Thank you for helping us with actionable comments on our work. We have comprehensively incorporated _all_ suggestions by adding new baselines, ablation studies, metrics, and clarifications. We would be grateful if the reviewer could reassess our paper in light of these improvements and consider adjusting the rating accordingly.
---
Rebuttal Comment 1.1:
Title: Eagerly awaiting feedback from Reviewer ts95
Comment: Dear Reveiwer ts95,
First of all, thank you for taking your time out in reviewing our work and providing constructive feedback. Based on your suggestions, we have incorporated new baselines, ablation studies, metrics. We are eagerly awaiting your feedback on the rebuttal. We humbly appeal to you to please review the revisions made since the discussion phase closes in less than 2 days from now.
regards,
Authors.
---
Reply to Comment 1.1.1:
Title: Keenly awaiting feedback from Reviewer ts95
Comment: Dear Reviewer ts95,
We are less than a day away from the closure of the author-reviewer discussion phase. We are keenly awaiting your feedback on our detailed rebuttal. We thank you for your constructive feedback and hope all your concerns have now been addressed.
regards,
Authors
---
Rebuttal Comment 1.2:
Comment: I want to thank the authors for the detailed response.
Some of my concerns were taken, as well as suggestions. I'll raise my score accordingly.
I still think that the heuristic optimization needs a more precise framework, and a stronger theoretical study. | Summary: The paper proposes FUGAL, a method for graph alignment by using additional features of nodes to guide optimization of a relaxed problem. It combines strengths of 2 lines of methods, using full graph information and structural features enrichment.
---
score raised after rebuttal.
Strengths: I think the paper blends the approach in a reasonable way to take the advantages of mediated and unmediated approaches. The idea seems right and its formulation seems sound. It has very good performance to back up the choices in the method.
Weaknesses: This is not a totally novel approaches but it is understandable given the nature of the combinatorial optimization problem.
There should be a lot of room for improvement in terms of the structural features used in the paper. There might be many more to choose from. Depending on the nature of the application domain, some feature sets may make sense more than other. Some additional analysis on this part might be interesting.
Technical Quality: 3
Clarity: 3
Questions for Authors: N/A
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive comments on our work. Please find below some clarifications on the queries posed.
**W1. Clarification on novelty**
*Answer.* The novelty of our work lies in:
1. crafting a regularizer using network features that makes the QAP potent for network alignment. State of the art algorithms for network alignment had generally moved away from a QAP based formulation due to its non-competitive performance
2. Devising a novel optimization strategy wherein we guide the Frank-Wolfe algorithm through a Sinkhorn distance objective, and gradually steer the resulting doubly stochastic solution towards a quasi-permutation matrix.
With these innovations, we design an algorithm that outperforms 13 different baselines in a comprehensive empirical benchmarking exercise and achieves state-of the-art-performance.
**W2. There should be a lot of room for improvement in terms of the structural features used in the paper. There might be many more to choose from. Depending on the nature of the application domain, some feature sets may make sense more than other. Some additional analysis on this part might be interesting.**
*Answer.* We'd like to draw your attention to Appendix A.3 (referenced in Section 5.2 of the main manuscript), where we've already included a comprehensive ablation study examining the importance of each feature (Fig. 8). Our findings indicate that degree is the most crucial feature, closely followed by the mean degree of neighbors.
We concur with your insight that identifying other beneficial structural features remains an open research question. This point is explicitly addressed in lines 128-131 of our manuscript. To further supplement this analysis with empirical data, we further expanded our feature ablation study by covering more node features spanning PageRank (PR), Eigen Centrality (EC) and Closeness Centrality (CC).
The table below compares the efficacy of using only one of these features as opposed to the 4 vanilla features used in Fugal (degree, mean degree of neighborhood, clustering coefficient, mean clustering coefficient in neighborhood). The experiment is performed on the ca-netscience dataset at various noise levels. Among individual features, we note Degree to have the maximal impact and closely followed by PageRank. This result is not surprising given that PageRank is correlated to degree. Overall, Fugal, which uses 4 features, continues to be superior to relying on any single feature.
#### **Caption**: Accuracy in the ca-netscience dataset at various noise levels. Fugal represents the default version that uses the four features mentioned above (and in Section 3.1). The other columns represent the accuracy achieved when only the corresponding feature is used in the LAP regularizer.
Noise | CS | Degree| EC| PR | Fugal
---|---|---|---|---|--
0| 0.70 |0.71| 0.70 |0.70 |0.70
5| 0.67 |0.70| 0.67 |0.69 |0.68
10| 0.57 |0.67| 0.61 |0.64 |0.67
15| 0.52 |0.63| 0.51 |0.55 |0.61
20| 0.34 |0.56| 0.39 |0.53 |0.57
25| 0.35| 0.48| 0.27| 0.46| 0.53
The effectiveness of features can vary across datasets due to differences in network properties. Automatically selecting optimal features would require shifting to a supervised pipeline. We must also consider feature interactions, as individually informative features may not provide any marginal improvement when combined. The optimal feature set needs to account for potential signal overlap. However, this approach would demand training on each specific dataset, substantially increasing our framework's computational demands. This dataset-specific optimization, while potentially more accurate, would significantly complicate the overall process.
We will include the above discussion in our revision.
-------------------
# Appeal to the reviewer
If the reviewer feels satisfied with the clarifications provided, we would appreciate support for our work by adjusting the rating accordingly.
---
Rebuttal Comment 1.1:
Comment: I appreciate the rebuttal. I can see that the paper's contribution is pretty solid. I raise my score to reflect my current assessment of the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for your constructive feedback, which has helped us improve our work. We appreciate your support and the Accept rating.
Sincerely,
The Authors | Summary: The paper presents FUGAL, a method for aligning graphs by finding a permutation matrix. FUGAL is an unrestricted method as it (also) operates on adjacency matrices, unlike most methods that rely only on intermediary graph representations.
FUGAL combines a Quadratic Assignment Problem (QAP) with a Linear Assignment Problem (LAP). The QAP focuses on finding the permutation matrix directly on the adjacency matrices, while the LAP works on node feature vectors built using structural features. Essentially, FUGAL augments the QAP with a LAP regularizing term. An initial algorithm is defined to find a quasi-permutation matrix Q . Then, the authors propose refining Q using the Hungarian algorithm. Interestingly, FUGAL relaxes the solution space to doubly stochastic matrices.
Strengths: The paper is well-written and the method is well-presented, especially in Section 3. The idea of combining QAP and LAP leads to an elegant solution, and the fact that it does not use intermediary graph representations distinguishes it significantly from other methods.
Weaknesses: The authors do not clearly define the limitations of their method, and there is no clear sensitivity analysis or ablation study of certain aspects.
Besides, there is a complete lack of statistical significance analysis.
Technical Quality: 3
Clarity: 2
Questions for Authors: Here are some suggestions and comments:
- Probably, a paragraph summarizing those cases in which the method works well and when it might perform poorly is missing. Maybe I missed something, but sections 5 and 6 should include the limitations as per your “NeurIPS Paper Checklist,” though they might not be well emphasized.
- In Chen, X., Heimann, M., Vahedian, F., & Koutra, D. (2020, October). “Cone-align: Consistent network alignment with proximity-preserving node embedding.” In Proceedings of the 29th ACM International Conference on Information & Knowledge Management (pp. 1985-1988) [CONE], there is an experiment on MNC and matched neighborhood consistency. Specifically, I refer to Figure 4. I suggest conducting a similar experiment, even if just to add in the appendix for further comparison.
- In Figure 1, using the variants of MultiMagna instead of $q$ may confuse the reader slightly.
- I wonder if the multimagna dataset you used corresponds to the PPI dataset used, for example, in [CONE] and also used in Xu, H., Luo, D., & Carin, L. (2019). “Scalable Gromov-Wasserstein learning for graph partitioning and matching.” Advances in neural information processing systems, 32.
If PPI is multimagna, in other works, such as in [CONE], I see that they used the average accuracy with standard deviation in error bars. You mentioned that “The error bars are compromising the visual interpretability of the plots due to the large number of baselines compared against,” but these values are significant, so at least in the appendix, I would suggest to visualize some by rearranging the plots.
- Regarding sensitivity analysis, I refer, for example, to the structural features. It is possible that only one of these features is actually useful or very influencing. I understand that this is not the focus of your method, but it would be interesting to see the influence of these node features for completeness.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Including some sentences that clearly stree the method's limitations would improve the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Probably, a paragraph summarizing those cases in which the method works well and when it might perform poorly is missing. Maybe I missed something, but sections 5 and 6 should include the limitations as per your “NeurIPS Paper Checklist,” though they might not be well emphasized.**
*Answer:* Our discussion on the limitations of FUGAL is interspersed in the discussion in Sections 5 and 6. These include:
- In _line 325-326_, we point out that S-GWL is more efficient that FUGAL on small graphs.
- In _line 566_, we complement that CONE, S-GWL, and PARROT exhibit superior time complexity compared to FUGAL.
- In _Section 6_, we highlight the ethical aspect that advances in graph alignment also enhance the abilities of attackers attempting to de-anonymize sensitive network data.
As suggested, we will collect them into a single paragraph to emphasize them more prominently.
**Q2. In Chen, X., Heimann, M., Vahedian, F., & Koutra, D. (2020, October). “Cone-align: Consistent network alignment with proximity-preserving node embedding.” In Proceedings of the 29th ACM International Conference on Information & Knowledge Management (pp. 1985-1988) [CONE], there is an experiment on MNC and matched neighborhood consistency. Specifically, I refer to Figure 4. I suggest conducting a similar experiment, even if just to add in the appendix for further comparison.**
*Answer:* As suggested, we have added *matched neighborhood consistent (MNC)* and *Frobenius Norm* as additional metrics. The results are presented in `Fig. 2 of the PDF attached to the global response`. These new metrics align with our previous findings using the accuracy metric, further vindicating FUGAL's superior performance across multiple evaluation criteria.
**Q3. In Figure 1, using the variants of MultiMagna instead of $q$ may confuse the reader slightly.**
*Answer:* We will change the $x$-axis to $q$ as suggested.
**Q4. I wonder if the multimagna dataset you used corresponds to the PPI dataset used, for example, in [CONE] and also used in Xu, H., Luo, D., & Carin, L. (2019). “Scalable Gromov-Wasserstein learning for graph partitioning and matching.” Advances in neural information processing systems, 32.**
*Answer:* Yes, these two refers to the same dataset.
**Q5. If PPI is multimagna, in other works, such as in [CONE], I see that they used the average accuracy with standard deviation in error bars. You mentioned that “The error bars are compromising the visual interpretability of the plots due to the large number of baselines compared against,” but these values are significant, so at least in the appendix, I would suggest to visualize some by rearranging the plots.**
*Answer:* Thanks for the suggestion. We will add standard deviation in the appendix.
**Q6. Regarding sensitivity analysis, I refer, for example, to the structural features. It is possible that only one of these features is actually useful or very influencing. I understand that this is not the focus of your method, but it would be interesting to see the influence of these node features for completeness.**
*Answer:* We already include experiments on feature ablation study and the impact of setting both $\lambda=0$ and $\mu=0$ in Eq. 13. We have now also added the third ablation of only $\lambda=0$. Specifically:
* **Features:** We have a detailed ablation study in Appendix A.3 (referred from Section 5.2 in main manuscript), where we systematically turn on each feature and study its impact (Fig 8). The results reveal that degree is the most important feature, followed by the mean degree of neighbors.
* **$\lambda=0$ and $\mu=0$:** This setting degenerates to the FAQ method, already present in our experimental evaluation. As evident from our experiments, the deterioration in efficacy is significant.
* **$\lambda=0$:** We have added this experiment now. `Fig. 3 in the pdf attached to the global rebuttal` presents the results. We see clear evidence that if $\lambda$ is not iteratively increased (recall Alg. 1), the performance is compromised.
------------
# Appeal to the reviewer
With the inclusion of additional metrics, ablation studies and clarifications, we hope the reviewer finds our manuscript improved. If the reviewer agrees, we would appreciate support for our work by increasing the rating accordingly.
---
Rebuttal Comment 1.1:
Comment: I appreciate your detailed rebuttal. It has helped clarify several points. | Summary: The current work tackles the problem of graph alignment, where the objective is to find an optimal alignment between two graphs. The current work attempts an unrestricted approach by solving a QAP and augmenting it with a LAP regularizer for tractability. This is in contrast to past work where the matching happens in an embedding space or an intermediate representation of the original graph, which incurs loss of information and consequently reduces the performance. The QAP finds a permutation matrix directly on the adjacency matrices of both the graphs and the augmented LAP uses structural features of the nodes to enhance node similarity of the matchings.
Strengths: - The problem is aimed at solving an important problem of graph alignment
- The paper is clearly written
- I checked out the code, it is quite clean
Weaknesses: There are several weaknesses of the paper:
1. I think the QAP problem has been well studied long before since 1990. See for example: https://www.math.cmu.edu/users/af1p/Texfiles/QAP.pdf.
https://link.springer.com/chapter/10.1007/978-1-4757-3155-2_6
https://link.springer.com/article/10.1007/s12652-018-0917-x
The authors need to perform comprehensive comparison against such approximation algorithm literature, since this paper does fall in the domain of improving the optimization of QAP. Otherwise, this paper does not add significant value.
2. One of the big advantage of embedding based graph alignment is during *test time* one does not have to re-optimize for alignment.
While yes, data driven methods will never be as strong as procedural algorithms, but on the flip side, the procedural algorithm have to re-run for unseen graphs as well. I was expecting a comprehensive analysis to investigate the trade off. Just reporting time and accuracy won't help. One needs to see the curve between accuracy and time.
Technical Quality: 2
Clarity: 2
Questions for Authors: See above.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 1
Limitations: I stated the key limitations in the weakenesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1. I think the QAP problem has been well studied long before since 1990. See for example: https://www.math.cmu.edu/users/af1p/Texfiles/QAP.pdf.
https://link.springer.com/chapter/10.1007/978-1-4757-3155-2_6 https://link.springer.com/article/10.1007/s12652-018-0917-x The authors need to perform comprehensive comparison against such approximation algorithm literature, since this paper does fall in the domain of improving the optimization of QAP. Otherwise, this paper does not add significant value.**
*Answer:* The comments appears to step from a misunderstanding on the objective . **The objective of our work is not approximating QAP**. The objective **is network alignment**, which, among other formulations, can be formulated as an instance of QAP. Hence, we are not aiming to provide the best approximation for any given QAP, but for the specific network alignment instances. Our comprehensive empirical comparison already involves comparison with _11 state-of-the-art baselines_ for the network alignment problem, as evident from this survey [3]. On the other hand, due to the QAP formulation our study already included the FAQ baseline, that adopts the QAP approach for network alignment. Yet, FAQ fails to achieve competitive performance. To further shed light between graph alignment and QAP, we have added two more QAP-based baselines for network alignment [1, 2]. The experiments reveal the same pattern (See Fig. 1 in the pdf attached to global response), i.e., vanilla QAP formulations attain significantly worse results in graph alignment.
The *value* of our work lies designing effective regularizers, that leverage and integrate various network-based features such as degree, clustering coefficient, etc., to the vanilla QAP, and a customized optimization strategy that empowers it to **achieve state of the art performance for network alignment.**
We do not claim to have designed a state of the art QAP approximation algorithm. Rather, we assert that our method, which combines QAP with carefully selected network-based regularizers and optimization strategies, achieves state-of-the-art performance specifically for network alignment problems.
[1] Zaslavskiy, M., Bach, F., & Vert, J. P. (2008). A path following algorithm for the graph matching problem. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(12), 2227-2242.
[2] Nadav Dym, Haggai Maron, and Yaron Lipman. 2017. DS++: a flexible, scalable and provably tight relaxation for matching problems. ACM Trans. Graph. 36, 6, Article 184 (December 2017).
[3] Konstantinos Skitsas, Karol Orlowski, Judith Hermanns, Davide Mottin, and Panagiotis Karras., Comprehensive evaluation of algorithms for unrestricted graph alignment. EDBT 2023.
**W2. One of the big advantages of embedding based graph alignment is during test time one does not have to re-optimize for alignment. While yes, data driven methods will never be as strong as procedural algorithms, but on the flip side, the procedural algorithm have to re-run for unseen graphs as well. I was expecting a comprehensive analysis to investigate the trade off. Just reporting time and accuracy won't help. One needs to see the curve between accuracy and time.**
*Answer:* To our knowledge, no embedding-based method has demonstrated competitive performance in network alignment. Moreover, supervised embedding approaches typically require dataset-specific training. This raises questions about the feasibility of performing inference on unseen test graphs without retraining. We would welcome information about any published, peer-reviewed algorithm that contradicts these observations. If such work exists, we would be happy to compare.
**Accuracy-time trade-off:** The accuracy-time tradeoff against S-GWL, which is well established as the state-of-the-art, is already included in Fig. 11 along with a detailed discussion in Appendix A.7. At best, S-GWL achieves 20% lower accuracy when restricted to the running time of FUGAL. In addition, it never approaches the same accuracy as FUGAL.
-------
# Appeal to the reviewer
We hope that our clarification on the scope and objectives of our work has provided a better context for our research. In response to the comments received, we have made several significant enhancements to our manuscript:
1. Added two more QAP-based baselines [1,2]
2. Introduced new metrics to provide a more robust assessment of performance
3. Included additional ablation studies to deepen the understanding of our method's components
These empirical enhancements further substantiate our claim of advancing the state of the art in network alignment. In light of these improvements, we kindly request you to reassess our work and consider adjusting the rating accordingly. We would be glad to engage in further discussion during the author-reviewer period if you have any additional concerns.
---
Rebuttal Comment 1.1:
Title: Eagerly awaiting feedback on our rebuttal
Comment: Dear Reviewer T3ao,
We are less than a day away from the closure of the discussion phase. We would greatly appreciate you feedback on the clarifications offered in our rebuttal. We are also happy to inform that two reviewers have already rated our work as "Accept (7)".
regards,
Authors
---
Rebuttal 2:
Title: Response
Comment: Thanks to the authors for the rebuttal. My responses are as follows:
> The objective of our work is not approximating QAP. The objective is network alignment, which, among other formulations, can be formulated as an instance of QAP. Hence, we are not aiming to provide the best approximation for any given QAP, but for the specific network alignment instances.
A key application of QAP is network alignment.
See for example:
(1) https://arxiv.org/abs/1908.00265
(2) https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0121002
I do not understand why a generic QAP solver will not work on the current problem.
QAP is often given in the form of:
$$ \max _{P} \text{trace}(MPQPR) $$ where $M,Q,R$ are known matrix and $P$ is a permutation matrix. Conceptually, the above problem can be easily cast into the above term (plus the linear term Tr(P^T D)). Except the linear term, there is no difference from any general QAP problem. In fact, the current algorithm is not treating A, B etc., beyond ordinary matrices--- so there is nothing specific in the context of graphs in the underlying solution.
In fact, the formulation is also almost same here: Eugene: explainable unsupervised approximation of graph edit distance https://arxiv.org/abs/2402.05885
The novelty factor of this paper is too limited. In my view, the work is a QAP problem (with network alignment application), solved using a numerical optimization method (no learning involved) with some modification of very known algorithm (Sinkhorn/knopp, already used in GWL paper).
> To our knowledge, no embedding-based method has demonstrated competitive performance in network alignment.
There are many papers, other than GWL or S-GWL. For example:
ERIC: https://openreview.net/forum?id=lblv6NGI7un
Deep Graph Matching Consensus: https://arxiv.org/abs/2001.09621
Also, the complexity of S-GWL is O((V+E)K) and the current paper is O(V^3), right? In this light, I could not parse the results in Appendix A.7
In a nutshell, I think this is an interesting work, but it does not add too much value to the literature--- as it is, it is a numerical optimization to QAP, which is quite rich in literature. As presented in this paper, the contribution with respect to such broad literature is quite limited. In fact, I think it may be of interest to data mining community e.g., SDM or ICDM, with significant modifications. But, I don't think this has crossed the bar of Neurips *as of now*.
---
Rebuttal Comment 2.1:
Comment: Dear Reviewer T3ao,
**QAP Vs. Network Alignment:** QAP solvers approximate the optimal solution. This approximation is of low quality unless aided with regularizers and features specific to network alignment, which we do. We have compared with generic QAP solvers and outperform them comprehensively. Our novelty lies in designing the regularizer and optimization strategies that work in the context of network alignment. Herein lies our core novelty. Furthermore, **we have outperformed well-established state-of-the-art network alignment algorithms such as S-GWL, both in accuracy and efficiency.** This further substantiates the impact of our work.
**Supervised Network Alignment:** The referred papers for supervised learning are **inaccurate.** As we mentioned in our rebuttal, we would appreciate examples of peer-reviewed works that perform network alignment on unseen networks without retraining. In this context, we point out why the referred papers do not serve this role.
1. ERIC is tailored for computing graph edit distance and can't scale for large graphs. The average size of the graphs ERIC aligned is less than 30 whereas we are aligning graphs with more than 1000 nodes. Moreover, ERIC needs to be trained on every dataset separately as GNN is a function of number of features and these vary across datasets.
2. Same is the case with Deep graph matching consensus as it needs to be trained for every network separately.
3. SGWL, GWL are unsupervised and need to learn embeddings for every graph pair separately. We have already compared with these methods and comprehensively outperform them.
**Efficiency of S-GWL:** Our empirical study includes comprehensive efficiency comparison with S-GWL and reveals its inability to scale to large datasets, which is consistent with several other works in the literature [1,2,3,4]. One contributing factor to this limitation is that S-GWL performs recursive -partitioning, a process that is not conducive to parallelization on modern CPUs with high levels of hyper-threading [4].
[1] Zeng, Z., Zhang, S., Xia, Y., and Tong, H. Parrot: Position-aware regularized optimal transport for network alignment. WWW ’23, pp. 372–382.
[2] Hermanns, J., Skitsas, K., Tsitsulin, A., Munkhoeva, M., Kyster, A., Nielsen, S., Bronstein, A. M., Mottin, D., and Karras, P. Grasp: Scalable graph alignment by spectral corresponding functions. ACM Trans. Knowl. Discov. Data, 17(4), Feb 2023.
[3] Skitsas, K., Orlowski, K., Hermanns, J., Mottin, D., and Karras, P. Comprehensive evaluation of algorithms for unrestricted graph alignment. In EDBT 2023, Ioannina, Greece, March 28-31, 2023, pp. 260–272.
[4] Li, J., Tang, J., Kong, L., Liu, H., Li, J., So, A. M.-C., and Blanchet, J. A convergent single-loop algorithm for relaxation of gromov-wasserstein in graph data. In ICLR, 2023.
---
Reply to Comment 2.1.1:
Title: Further evidence on QAP efficacy for network alignment.
Comment: Dear Reviewer T3ao,
We wish to further highlight that the paper (2) you refer to in your last comment, that is the quoted line below, is the FAQ paper. FAQ is one of the primary baselines we compare to and show that it is significantly inferior to FUGAL (Figs. 1-4, Table 2).
> A key application of QAP is network alignment. See for example: (1) https://arxiv.org/abs/1908.00265 (2) https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0121002
This serves as further evidence of:
1. Why the stated claim of generic solvers being effective on network alignment **stands on unsubstantiated grounds.**
2. The impact and novelty of the regularizers, features and optimization strategies we design for the network alignment problem that enables us to **outperform FAQ and 12 other baselines** in a comprehensive benchmarking exercise.
Our extensive benchmarking exercise provides compelling evidence of the inefficacy of generic QAP solvers for network alignment, thereby underscoring the significance and contributions of our work.
We hope this clarification helps to highlight the merits of our approach in the context of existing solutions for network alignment.
Thank you for your consideration.
Best regards,
The Authors | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for their insightful and constructive feedback. Below, we provide a comprehensive point-by-point response to their comments. Additionally, we attach a PDF document containing plots of several new empirical analyses as suggested by the reviewers. The key revisions and insights include:
1. **Enhanced Empirical Benchmarking:**
* Integration of two new QAP-based baselines: Path [1] and DSPP [2]. (`Fig. 1 in PDF`)
* Addition of Frobenius Norm and Matched Neighborhood Consistency performance measures. (`Fig. 2 in PDF`)
* Enhanced ablation study highlighting the importance of each feature in the LAP regularizer and the impact of $\lambda$ in guiding the doubly-stochastic matrix towards a quasi-permutation matrix. (`Fig. 3 in PDF`)
2. **Expanded Related Works:**
* We will cite and discuss various related works pointed out by Reviewer `ts95`. Two of these works [1,2] have been incorporated as new baselines.
3. **Clarifications and Improvements:**
* We provided will better highlight the theoretical characterization of our work on algorithm convergence and termination.
* Better consolidation of limitations and future scope for improvement.
* Incorporation of standard deviation and other plot enhancements as suggested by Reviewer `b2PL`.
We believe these revisions significantly strengthen our manuscript. We are open to further engagement with the reviewers for any additional queries or suggestions. In light of these improvements, we kindly request the reviewers to reassess their ratings of our work.
[1] Zaslavskiy, M., Bach, F., & Vert, J. P. (2008). A path following algorithm for the graph matching problem. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(12), 2227-2242.
[2] Nadav Dym, Haggai Maron, and Yaron Lipman. 2017. DS++: a flexible, scalable and provably tight relaxation for matching problems. ACM Trans. Graph. 36, 6, Article 184 (December 2017).
Pdf: /pdf/d0ed8a1d072069676c183e5fbd19a6b618db8100.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Information Re-Organization Improves Reasoning in Large Language Models | Accept (poster) | Summary: The paper proposes a simple yet effective method to work with most of the current reasoning strategies. It automatically organizes the content into structural form, excluding noises and unused information during this process. The experiments are done using three models across ten datasets. The method consistently improves a vanilla setting and also a CoT setting. Ablation study shows the effectiveness of both the extraction part and the pruning part.
Strengths: 1. The method is compatible with other reasoning strategies such as CoT and can further improve them. The method is easy to follow implement.
2. The paper includes extensive experiments. The proposed method is verified on all the models (3 LLMs) across all selected tasks (10 datasets) and shows consistent improvement in all scenarios.
3. The ablation study shows that each of the two components in the proposed method can improve the overall performance independently.
Weaknesses: 1. Do you consider trying gpt-3.5-0613/1106/0125 instead of text-davinci-003? The text-davinci-003 model is no longer usable in OpenAI APIs.
2. Some rationales need further explanation in the design of the proposed method. Do you consider using LLM for the information pruning part instead of a pre-trained BERT? Will there be any improvement if using LLM? Do you consider using an end-to-end model to do extraction and pruning at the same time?
3. It is great to see those analysis in section 5.2. It will greatly improve the insight of your paper if you can include:
a. Will the extraction process introduce some hallucination?
b. The paper focuses on zero-shot setting currently. How can few-shot setting help? It may help from two perspectives: (1) Better information extraction; (2) Better downstream task performance.
c. The paper uses a very simple reasoning technique, the CoT prompting. Will the proposed method help in more advanced on such as ToT?
4. I am particularly interested in the aspect of minimizing noise. However, currently the experiments are done on dataset with normal data without many noises. Could you please try on something like web-crawled data? Another choice can be manually injecting some noises in the datasets you use, i.e., adding some meaningless tags (<a></a>), or some irrelevant sentences. I am interested to see how the proposed method can automatically exclude these noises.
5. Also, since the method can formulate the content in a structural data. Could you please try something like changing the order of sentences in the content? Will it affect how the method extract data?
Minor suggestions to presentation:
1. A space is missing between “2” and “(” at line 125.
2. The “Large language models (LLMs)” appears at line 22, line 75, and line 353. Please do the abbreviation once at the beginning and use the abbreviation directly afterwards. Same suggestion applies to others like “Chain of Thought (CoT)” at line 190.
3. There seems to be an extra period in the “Prompt Content for Logical Relationship Extraction” in appendix A.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why do you name the method RE-organization? I don’t see any other organization process before you’re the process of your method in the pipeline. Will it be more proper to name it “Information Organization?”
2. What version of LLaMA-2 are you using? The temperature of the hugging face version needs to be a positive value so it cannot be zero.
3. Line 317-319: Is it also because GPT-4 has a better ability to do the re-organization job than GPT-3?
4. Are the 100 wrong predictions you annotated randomly sampled?
5. Could you please give some mind map examples that LLM output in JSON format? Probably you can append them in the Appendix for a clear illustration.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks to the reviewer for giving these valuable feedback and comments.
**W1**: We supplement the results across all datasets using GPT-35-Turbo-0613, including the main experimental results and ablation results. The results are shown in Tables 1, 2, and 3 in the PDF file under the Author Rebuttal section.
**W2**: We chose pre-trained BERT for information pruning due to its high generalizability. In our experiments using LLAMA-2-70B, GPT-3.5, and GPT-4 for the pruning on 2WikiMultiHopQA, only GPT-3.5 and GPT-4 surpass the current method. However, we aim to develop a foundational and general method, not one limited to specific conditions. Additionally, we are concerned about self-preference bias in LLMs. As Arjun Panickssery et al. noted in "LLM Evaluators Recognize and Favor Their Own Generations," LLMs tend to favor their own outputs. Using LLMs for both extraction and pruning could intensify this bias, making it hard to determine if improvements are due to methodological enhancements or bias.
An end-to-end approach combining extraction and pruning was tested in our method. As shown in Table 6 of Appendix A, our prompt for context extraction included the specific question to filter unrelated content. However, the ablation study in Table 3 shows that additional pruning improves performance, indicating that simultaneous extraction and pruning is less effective than our current separate method.
We will supplement the rationale analysis in the revised version of the paper to enhance the rationale of the approach.
**W3**: a. LLM-generated content often contains hallucinations. In our experiments, we used GPT-4 to verify the factual consistency of the extracted information. If inconsistencies were found, we re-extracted the information until it was consistent. According to our statistics, 98% of the extraction results are consistent with the original text after the first extraction.
b. The context of the dataset used in our experiments is relatively long. To prevent the text length from exceeding the maximum limit of LLMs, we adopt a 1-shot setting during the information extraction on the 2WikiMultiHopQA dataset, followed by direct reasoning with GPT-4. This approach resulted in an F1 score of 74.67%, which is lower than the existing zero-shot information extraction results of 76.52%. Considering the diversity of the samples in the dataset, using the 1-shot setting limited the effectiveness of the information extraction.
c. To explore whether our method contributes to the ToT method, we conduct experiments with GPT-4 on all datasets. Specifically, we use GPT-4 to decompose a multi-hop question into a tree in BFS order, where the root node represents the original question and the other nodes represent sub-questions. The results are shown in Tables 1 and 2 in the PDF file under the Author Rebuttal section. The results show that our method, combined with ToT, can further enhance reasoning capabilities.
**W4**: Following the reviewer's suggestion, we test our method on a noisier dataset constructed using retrieval tools. Due to time constraints, this is validated only on the 2WikiMultiHopQA dataset. Specifically, we use BM25, implemented with the Pyserini toolkit, to retrieve the top-3 paragraphs from the October 2017 Wikipedia corpus as context documents. We then apply our method and conduct the ablation study as described in section 5.2 using GPT-4. The results indicate that when the context contains more noise, the pruning operation brings a significant 2.55% improvement. The specific ablation results are as follows:
| Methods | 2WikiMultiHopQA |
| --- | --- |
| Standard | 43.70 |
| Full model (InfoRE) | 48.82 |
| w/o extraction | 46.14($↓ 2.68$ ) |
| w/o pruning | 46.27($↓ 2.55$ ) |
**W5**: If there are no dependencies between sentences, changing their order will not affect the extraction results. Conversely, if there are dependencies, such as reference relationships, changing the sentence order will affect the extraction results.
**presentation suggestions**:
Thanks to the reviewer for the presentation suggestions, we will address them in the revised version.
**Q1**: We chose the term "Re-Organization" to emphasize the characteristics of reintegration and optimization of the information process of our method. Although no prior organization process is explicitly mentioned before the pipeline, we consider that the raw data processing and analysis are usually organized and processed in some initial form. Therefore, our method further "re-organizes" this to enhance the relational richness and usability of information.
However, we greatly appreciate your suggestion to use the name "Information Organization" to more intuitively reflect the function of the method. We will consider this change to more accurately describe our method.
**Q2**: We use the official version of the model. Specifically, we register on the Meta website (https://llama.meta.com/llama-downloads/) to download the model. Following the usage example in example_chat_completion.py on the official Llama GitHub (https://github.com/meta-llama/llama), we set the temperature to 0.
**Q3**: More specifically, it is because GPT-4 inherently has stronger capabilities than GPT-3, whether in terms of organizing information or reasoning.
**Q4**: Yes, these 100 samples are randomly selected from our results of prediction errors. This random sampling ensures an unbiased analysis of the error types our model is making, providing us with a comprehensive view of the potential areas for improvement.
**Q5**: Due to the rebuttal length limit of 6000 tokens, we do not have enough space to display the JSON format mind map requested by the reviewer. However, the JSON format mind map we used is very similar to the example shown in Figure 2 of the paper, except that the example in Figure 2 omits the curly braces "{}".
More samples will be included in the appendix of the revised version for a clear illustration.
---
Rebuttal Comment 1.1:
Comment: Thanks very much. Most of my concerns have been adressed. I raise my score by one point.
---
Rebuttal 2:
Title: Appreciation for Positive Feedback
Comment: Dear Reviewer x5hg,
Thank you for your positive reassessment. We greatly appreciate your recognition and insightful suggestions. We will incorporate suggestions and additional analysis results into the revised paper to enhance its completeness and clarity.
Once again, thank you for your time and effort in reviewing our paper. If you have any further questions, please feel free to reach out at any time.
Best regards,
The authors of InfoRE | Summary: This paper proposes a method called information re-organization (InfoRE) to enhance the performance of large language models on some reasoning tasks. Unlike existing approaches that primarily focus on refining the reasoning process, InfoRE emphasizes re-organizing contextual information to identify logical relationships before reasoning. This method involves extracting logical relationships from the context and pruning redundant content to minimize noise. The re-organized information is then used in the reasoning process. The authors demonstrate the effectiveness of InfoRE through experiments on several contextually aware multi-hop reasoning tasks, achieving an average improvement of 4% in a zero-shot setting.
Strengths: - The idea of information re-organization is intuitive.
- The demonstrated improvement in reasoning performance by an average of 4% across tasks, as well as the ablation study, verifies the functionality of the proposed method.
- This paper is easy to follow.
Weaknesses: - The idea of this paper can be connected to existing prompt engineering works, e.g., performing retrieval from context and query/context rewriting. I feel this paper lacks technical depth (though this may be the fashion of a prompt engineering work, i.e., insightful but lacking technical depth). And the insight is not significant enough to compensate for the limited technical depth.
- Though a 4% performance improvement may be considered significant, the baseline in this paper is quite weak (e.g., standard prompting or vanilla CoT). More baselines may need to be included and compared.
- From the limited ablation study on 2WikiMultiHopQA, we can observe that the improvement from pruning is not significant. This makes me doubt whether such a complex/expensive design is really necessary.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: I don't see potential negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for reviewing our work and providing these valuable comments.
**W1**: Our paper introduces a strategy specifically designed to enhance multi-hop reasoning capabilities by effectively organizing contextual information before reasoning processes begin. This approach addresses a noticeable gap in the literature, which traditionally emphasizes intermediate reasoning steps without sufficient focus on the initial organization of relevant contexts. Our method is simple yet effective and reflects a deliberate design choice aimed at maximizing efficiency and applicability across various tasks and datasets. We have rigorously tested our approach on multiple datasets, demonstrating its effectiveness in improving reasoning outcomes, with substantial improvements over baseline methods.
Contrary to existing works that focus primarily on query rewriting (Query Rewriting for Retrieval-Augmented Large Language Models, EMNLP 2023) to better retrieve context that is more relevant to the query, and context rewriting (Learning to Compress Prompts with Gist Tokens, NeurIPS 2023) aims to compress the length of the context, our approach innovates by post-processing context to extract and prune information precisely. This fine-grained manipulation of context is both technically challenging and critical for the nuanced understanding required in multi-hop reasoning tasks.
Furthermore, we focus on developing a foundational precedent for future developments in large language model reasoning processes. The practical implications of our research offer a scalable solution that can be integrated with existing and future LLM frameworks to facilitate more robust reasoning capabilities.
We hope that this clarification addresses your concerns and further illustrates the technical rigor and innovative insights our work contributes to the field of prompt engineering.
**W2**: Firstly, while the Chain of Thought (CoT) method is indeed simpler than some other methods, it has consistently demonstrated its effectiveness across a variety of reasoning tasks. Its widespread use and acknowledged efficacy in the research community make it an appropriate baseline for evaluating novel methodologies like ours. We aim to build upon well-established foundations to ensure that our contributions are both measurable and meaningful.
Secondly, following the reviewer's suggestion, we add a stronger baseline method Tree-of-Thoughts (ToT). According to the original Tree-of-Thoughts paper, we adopt BFS search to solve the complex question. Specifically, we use GPT-4 to decompose a multi-hop question into a tree in BFS order, where the root node is the original question, and the other nodes are sub-questions. The specific results are as follows:
| Model | Methods | HOVER-2 | HOVER-3 | HOVER-4 | FEVEROUS | SCIFACT |
| --- | --- | --- | --- | --- | --- | --- |
| GPT-4 | ToT | 74.36 | 72.43 | 70.64 | 92.78 | 92.33 |
| GPT-4 | ToT+InfoRE | 76.96 | 74.18 | 72.30 | 95.82 | 94.42 |
| Model | Methods | 2WMHQA | MuSiQue | SQA | HQA | WIKIHOP |
| --- | --- | --- | --- | --- | --- | --- |
| GPT-4 | ToT | 75.32 | 65.12 | 68.96 | 81.73 | 57.46 |
| GPT-4 | ToT+InfoRE | 79.45 | 69.57 | 72.28 | 85.26 | 60.24 |
The inclusion of the Tree-of-Thought baseline has provided a more challenging comparison, and our method still demonstrates nearly a 4% improvement in performance on the same datasets, reinforcing the effectiveness and robustness of our approach.
**W3**: Our current experiments are done on normal data without much noise. This is why the impact of the pruning operation is not very pronounced, although it still results in a 1.53% improvement on the 2WikiMultiHopQA dataset. To further demonstrate the significance of the pruning effect, we use a retrieval tool to construct contexts with more noise for the 2WikiMultiHopQA dataset, replacing the currently used golden contexts. Specifically, we use BM25, implemented with the Pyserini toolkit, to retrieve the top-3 paragraphs from the October 2017 Wikipedia corpus as context documents. We then apply our method on this dataset and conduct the ablation study as described in section 5.2 using GPT-4. From the ablation study results, we observe a notable increase in performance. Specifically, in these noisier conditions, the pruning operation resulted in a significant improvement of 2.55%, which is comparable with the ability of extraction.
The ablation study results are as follows:
| Methods | 2WikiMultiHopQA |
| --- | --- |
| Standard | 43.70 |
| Full model (InfoRE) | 48.82 |
| w/o extraction | 46.14($↓ 2.68$ ) |
| w/o pruning | 46.27($↓ 2.55$) |
---
Rebuttal Comment 1.1:
Title: A Gentle Reminder
Comment: Dear Reviewer cK3D,
Thank you for your efforts in reviewing our paper. We greatly value the feedback from each reviewer and have provided detailed responses to the concerns raised. Up to this point, we have received feedback from reviewers wQkW and x5hg, and are pleased to have their positive recognition. However, we have not yet received any feedback from you.
As the discussion period is nearing its end, we would greatly appreciate it if you could let us know whether we have adequately addressed your concerns. Your insights are important to us, and we believe they will help us further improve the quality of our paper.
Thank you once again for your time and consideration.
Best regards,
The authors of InfoRE | Summary: The paper introduces an "Information Re-organization" (InfoRE) method aimed at improving the reasoning abilities of large language models (LLMs). It highlights the deficiencies of existing approaches that focus on intermediate reasoning steps without adequately addressing the preliminary organization of contextual information. The proposed InfoRE method involves extracting logical relationships and pruning redundant content from contextual data, guiding the reasoning process. This restructuring allows LLMs to recognize and use these logical relationships effectively, potentially enhancing reasoning quality. The method's efficacy is demonstrated using various LLMs like GPT-3.5 and GPT-4 across multiple reasoning tasks, showing an average improvement of 4% in performance.
Strengths: 1. The paper introduces a novel method of restructuring information before reasoning, which could be foundational for future developments in LLM reasoning processes.
2. Results indicate that InfoRE significantly improves reasoning tasks across multiple datasets and model architectures.
3. The paper provides a comprehensive breakdown of the method's components and their contributions to the overall performance.
Weaknesses: 1. Information extraction and pruning might require intricate tuning and be computationally expensive, making it less practical for real-time applications.
2. The paper is well structured. However, authors should provide more details about their experiments.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What are the differences between your extraction method and extracting to knowledge graph triples? Leveraging Structured Information ... (Li et al., EMNLP 2023) proposed a method to extract knowledge graph triples from documents. Moreover, you should add this paper to your related work since it also extracts knowledge from the document.
2. Since generations from LLMs always change, can you provide a standard variance or significant score of your results to show your improvement is substantial?
3. Do you limit the type of entities and relations during extraction? Do you apply open relations or closed relations during extraction? I only find the prompt in Appendix A. Is this all about your extraction part?
4. How do you check the completeness and quality of your extraction results? Can you provide more details about this?
5. LLM answers are always long, even with the format instruction. How do you compare your generated answers with golden answers? Do you apply exact match or other methods?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper adequately describes its limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for reviewing our work and providing these valuable feedback and comments.
**W1**: In our method, information extraction is implemented using closed-source or publicly available LLMs with frozen parameters, significantly reducing computational expense.
The primary resource-intensive component of our method is the BERT-based pruning model. However, it's important to highlight that this model contains only 110M parameters. This makes our approach less demanding in terms of computational resources. The frozen parameters LLMs-based information extraction and BERT-based pruning allow our method for real-time applications.
**W2**: Due to the paper space limitation, we describe the primary details of the experiments in Section 4 (Experimental), including the statistical information of the datasets, the methodologies of the baseline approaches, the evaluation metrics, the detailed parameters, and the environment setting of the model training process. To enhance the clarity and understanding of our experiments, we also provide the code in the supplementary materials. In addition to these main experimental details, the other details of our experiments include: we use the Adam optimizer, and set the hidden size of the linear layer to 2048. In the revised version, we will include these experimental details in the appendix.
**Q1**: The primary difference between our extraction and extraction into knowledge graph triples is that our extraction results are more centralized, whereas the knowledge graph triples are more scattered. The centralized extraction results of our method have two advantages. Firstly, our results aggregate content related to the same topic together. For example, as shown in Figure 2, all information related to the movie "Julius Caesar" is concentrated under the "Julius Caesar" node. Secondly, our extraction results include multi-hop connections, such as the path Julius Caesar → Producer→ Education in Figure 2, which is very helpful for answering complex the multi-hop question: Where did the producer of Julius Caesar study or work?. In the revised version, we will include a description of the differences between our method and the knowledge graph triples extraction method (Leveraging Structured Information ... (Li et al., EMNLP 2023)) in the related work.
**Q2**: The reported results in the paper are the means of the three times of each experiment. Based on the three times results of each experiment, the specific standard deviations for all datasets and LLMs involved in the paper are as follows:
| LLMs | Methods | HOVER-2 | HOVER-3 | HOVER-4 | FEVEROUS | SCIFACT |
|----------|----------|----------|----------|----------|----------|----------|
| LLAMA2-70B | InfoRE | 52.83 ± 0.6 | 51.42 ± 1.64 | 50.04 ± 0.37 | 67.84 ± 0.16 | 63.81 ± 0.27 |
| GPT-3.5 | InfoRE | 68.21 ± 1.89 | 66.45 ± 1.53 | 64.91 ± 1.82 | 91.31 ± 0.57 | 81.54 ± 0.4 |
| GPT-4 | InfoRE | 75.87 ± 0.66 | 74.06 ± 1.62 | 73.08 ± 1.71 | 95.62 ± 0.25 | 93.67 ± 1.99 |
| LLMs | Methods | 2WMHQA | MuSiQue | SQA | HQA | WIKIHOP |
|----------|----------|----------|----------|----------|----------|----------|
| LLAMA2-70B | InfoRE | 57.62 ± 0.52 | 52.78 ± 1.11 | 55.32 ± 1.77 | 69.98 ± 0.55 | 42.90 ± 0.12 |
| GPT-3.5 | InfoRE | 64.58 ± 1.62 | 58.03 ± 1.91 | 63.16 ± 1.31 | 77.12 ± 0.89 | 51.87 ± 0.32 |
| GPT-4 | InfoRE | 76.52 ± 1.1 | 66.36 ± 1.34 | 71.20 ± 1.44 | 83.22 ± 0.64 | 58.01 ± 0.93 |
**Q3**: We don't limit the types of entities and relationships during extraction and apply open relationships. This is because closed relations restrict the relationships between entities to specific categories, making it challenging to express complex textual content. Limiting entity types further exacerbates this issue. We use a large language model based on the prompts in Appendix A for extraction. This extraction method was adopted after extensive analysis and experimentation. The extraction results aggregate content related to the same topic together and include multi-hop connections.
**Q4**: In our experiments, we don’t separately assess completeness but instead focus on consistency. Specifically, we use GPT-4-32K to determine whether the extraction results are consistent with the original documents. If the results are found to be inconsistent, we re-execute the extraction process until consistent results are obtained. According to our statistics, we found that over 98% of the extraction results are consistent with the original text after the first extraction. This indicates that inconsistencies are rare. Due to the high consistency rate, we initially did not include this content in the paper. To improve the clarity of our method, we will include these details in the revised version.
**Q5**: In our experiments, we use special tags <answer>{final_answer}</answer> to distinguish the final reasoning answer from intermediate results, as shown in Table 7. After using regular expressions to extract the final answer between the <answer></answer> tags, we combine it with the gold labels provided in the dataset and input them into the evaluation script to obtain the evaluation results. For the datasets 2WikiMultiHopQA, StrategyQA, HotpotQA, MuSiQue, and WIKIHOP, we run the official evaluation scripts provided by these datasets to get the evaluation results, including EM (exact match), F1, Precision, and Recall. For the HOVER, FEVEROUS, and SCIFACT datasets, we extract True or False predictions from final answer and use run classification evaluation scripts to get the evaluation results, including F1, Precision, and Recall.
---
Rebuttal Comment 1.1:
Title: Reply to Rebuttal
Comment: Thank you for the response. It has solved my primary concern. I have updated my rating score.
---
Reply to Comment 1.1.1:
Title: Appreciation for Positive Feedback
Comment: Dear Reviewer wQkW,
Thank you for your positive reassessment, and we appreciate your acknowledgment of our work. We are truly grateful for your insightful feedback throughout the entire review process, which has helped us further improve our work.
Best regards,
The authors of InfoRE | null | null | Rebuttal 1:
Rebuttal: Thank you very much to the reviewers for reviewing our work and providing valuable feedback and suggestions.
In the PDF file, we supplement the results of our method on the GPT-35-Turbo-0613 version.
Pdf: /pdf/97994460dedb042bc5ed8e0ac11a202e3d72ee73.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
DisC-GS: Discontinuity-aware Gaussian Splatting | Accept (poster) | Summary: This paper proposes a novel kernel function to model static scenes, addressing the difficulty of modeling high-frequency boundaries caused by the $r^2$ decay from center to edge of the Gaussian kernel. To tackle the smoothing decay of the Gaussian kernel, the authors first divide the Gaussian in screen space using M Bézier curves, introducing discontinuities in the kernel function. To address the gradient discontinuity issue caused by the segmented Gaussian, the authors propose the Bézier-boundary Gradient Approximation Strategy to approximate the gradients during backpropagation, ensuring stable optimization. Quantitative and qualitative experiments on real-world scenes demonstrate that DisC-GS can model high-frequency boundaries and achieve the best rendering quality.
Strengths: I think the ideas proposed in this paper are sound and easy to follow. Using two Bézier curves to segment the Gaussian, making it better at modeling high-frequency boundaries, is very reasonable and much needed by the community. The comparison in Tab. 1 with numerous compelling baselines and the excellent rendering metrics are greatly appreciated. While achieving the best rendering quality, a balance with FPS was also maintained, as shown in Tab. 11, where the rendering speed did not significantly decrease. The ablations are also appreciated.
Weaknesses: 1. Too few qualitative comparisons and mismatched quantitative metrics. In Figs. 3 and 4, only two scenes from deep blending, two scenes from tanks&temples, and Mip360's room are shown. After carefully comparing the rendered images from corresponding views of Scaffold-GS, I found that although DisC-GS shows slightly better visual effects, the differences in rendering metrics, especially SSIM and LPIPS, should not be so significant. Hence, I have a few questions: Did DisC-GS use the same data as vanilla GS (without rerunning COLMAP)? What was the image resolution used for calculating the rendering metrics, and were the SSIM and LPIPS (VGG) calculations done in the same manner as for vanilla GS?
2. Lack of comparisons on synthetic scenes. In real-world scenes, there can be inaccuracies in camera poses, which may lead to improvements in metrics that are unrelated to the method itself. Therefore, the rendering quality on synthetic scenes would be more convincing. I am very much looking forward to seeing the rendering metrics and image quality of DisC-GS on the NeRF and NSVF datasets.
3. Lack of discussion on the number of Gaussians, training time and storage. I am very curious about the approximate number of Gaussians used in DisC-GS, as the number of Gaussians can greatly affect rendering quality.
4. I have some doubts about the optimization of DisC-GS. It seems that this optimization method could easily get stuck in local optima, and introducing polynomial computations in CUDA kernels might significantly reduce FPS. Did the authors use any special optimization strategies?
Technical Quality: 4
Clarity: 2
Questions for Authors: I have some doubts about evaluation in this paper. However, due to the strong results and sound methodology, if the authors can address my concerns, I am very open to raising my score to `accept`.
Confidence: 4
Soundness: 4
Presentation: 2
Contribution: 2
Limitations: Please refer to the weakness part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >*Q1: Qualitative comparisons and quantitative metrics.*
**A1:** **(1) Qualitative comparisons.** In the PDF uploaded during rebuttal (at the bottom of the "Author Rebuttal by Authors" comment), besides in the 3D scenes in Figs 3 and 4 in paper, we also provide qualitative comparisons in more 3D scenes. Moreover, Scaffold-GS is also added in the comparisons. As shown, across different 3D scenes, our framework consistently achieves better visual effects than existing methods such as Scaffold-GS, further showing the efficacy of our framework. Due to the space limitation of the PDF, we'll also add qualitative comparisons in more 3D scenes to paper.
**(2) Quantitative metrics.** Moreover, for the mentioned mismatch between the difference in rendering metrics and the difference in visual effects, this can be because, different testing views are of different complexities (i.e., containing different numbers and scales of discontinuities). During measuring the rendering metrics, all different testing views (including those very complex ones) are utilized and they all contribute to the measurement. Yet, in Figs 3 and 4 in paper which serve as the qualitative results of our method in general, we did not dedicatedly show all those very complex testing views that suffer from very severe boundary modeling issue.
As shown in the PDF uploaded during rebuttal, while good visual effects can be consistently achieved by our framework across different viewpoints and different 3D scenes, especially over those testing views that are more complex (richer in boundaries), the difference in visual effects between our method and existing methods such as Scaffold-GS tends to be more significant. These complex testing views, during quantitative evaluation, also tends to more significantly contribute to the superior performance of our method in metrics such as SSIM and LPIPS. We will also add more qualitative comparisons over those complex testing views across different 3D scenes to paper.
**(3) Questions w.r.t. evaluation details.** (3.1) Yes. We use the same data as vanilla GS without rerunning COLMAP. (3.2) Following the vanilla GS, during calculating the rendering metrics, for images with width exceeding 1600, we would rescale them for their width to be 1600. For rest images we would use their original resolution. (3.3) Yes, SSIM and LPIPS calculations are both done in the same manner as vanilla GS.
>*Q2: Synthetic scenes.*
**A2:** As suggested, we also evaluate our method on the synthetic NSVF and NeRF datasets, on an RTX 3090 GPU.
For rendering metrics, below, we report the SSIM, PSNR, and LPIPS metrics. Moreover, we also show images rendered over these two datasets in the PDF uploaded during rebuttal.
|Method|NSVF-PSNR↑|NSVF-SSIM↑|NSVF-LPIPS↓|NeRF-PSNR↑|NeRF-SSIM↑|NeRF-LPIPS↓|
|-|-|-|-|-|-|-|
|2D Gaussian Splatting|37.59|0.984|0.014|33.96|0.969|0.032|
|Ours (on 2D Gaussian Splatting)|**38.37**|**0.988**|**0.010**|**34.18**|**0.973**|**0.025**|
|3D Gaussian Splatting|37.07|0.987|0.015|33.32|0.970|0.031|
|Ours (on 3D Gaussian Splatting)|**38.32**|**0.988**|**0.012**|**34.01**|**0.972**|**0.029**|
As shown above, on synthetic scenes, our framework, when applied on both 2D and 3D Gaussian Splattings, can consistently achieve performance improvements. Moreover, as shown in the PDF, on synthetic scenes, our framework can also render images with high quality. This further shows our framework's efficacy.
>*Q3: Discussion on the number of Gaussians, training time and storage.*
**A3:** Below, we show the number of Gaussians, storage, training time, and inference time of our framework, on the Tanks&Temples dataset on an RTX 3090 GPU.
|Method|PSNR↑|Number of Gaussians|Storage|Training time|Inference time (per image)|
|-|-|-|-|-|-|
|2D Gaussian Splatting|23.30|~1585K|376MB|0.27 hour|0.007s|
|Ours (on 2D Gaussian Splatting)|24.96|~909K|299MB|0.32 hour|0.008s|
|3D Gaussian Splatting|23.14|~1784K|423MB|0.27 hour|0.007s|
|Ours (on 3D Gaussian Splatting)|24.67|~1094K|410MB|0.33 hour|0.008s|
As shown, though our framework achieves obviously better performance, it uses less number of Gaussians, does not increase the memory storage, only brings a relatively small increase of training time, and can perform rendering efficiently during inference. We'll discuss the above in paper.
>*Q4: Optimization of DisC-GS.*
**A4:** **(1) Computations.** No, we do not use special optimization strategies. Instead, we would like to highlight that computations introduced by DisC-GS are not mathematically complex. Specifically, in the forward rendering process, as mentioned in lines 261-265 in paper, for each Bézier curve, the binary indicator function that DisC-GS builds for it is with $O(1)$ time complexity. Meanwhile, in the gradient backpropagation process, the cubic functions that DisC-GS introduces (Eq. 11 in paper) also have closed-form solutions, and can thus be very efficiently solved also in $O(1)$ time complexity. Thus, overall, as shown above in **A3**, our framework would only bring a small increase of training time. Meanwhile, during inference, our framework can also perform rendering efficiently.
**(2) Local optima.** As mentioned above, the computations our framework introduce are not complex. It thus would not add significant complexity to the typical Gaussian Splatting technique. Specifically, across different 3D scenes among different datasets, our framework consistently outperforms both 2D and 3D Gaussian Splattings leveraging the typical Gaussian Splatting technique. This also implies that our framework would not easily get stuck in local optima. Meanwhile, in the PDF uploaded during rebuttal, we also provide loss curves over the 3D scene Train in the Tanks&Temples dataset. As shown, no matter when applied to 2D or 3D Gaussian Splattings, our framework consistently achieves a similar loss reduction trend. This further implies that our framework would not raise the risk of easily getting stuck in local optima.
---
Rebuttal Comment 1.1:
Title: Feedback from reviewer ncnd
Comment: Thanks for authors' response and the additional experimental results based on the review. However, I still do not understand why there is such a significant improvement in the quantitative metrics (tanks from 0.178 to 0.120, db from 0.240 to 0.199) based on the authors' statement that they used the same resolution and rendering metrics as vanilla 3D-GS. I have the following questions:
- Fig. 3(b)(c) shows a significant reduction in floaters, but I have no idea about the relationship between this and DisC-GS itself. The essence of DisC-GS is to address the need for extensive boundary modeling in 3D-GS, which is unrelated to floater removal. The paper should clearly state what causes the removal of floaters instead of obscuring the components that actually contribute to this effect.
- Fig. 3(a) presents slightly better visual results compared to Scaffold-GS, but it does not sufficiently support the improvement of LPIPS (vgg) from 0.177 to 0.120 in the tanks scene.
- The paper does not show changes in the distribution of GS. DisC-GS theoretically reduces the number of GS required for modeling high-frequency boundaries, so the reduction in quantity and memory usage makes sense. However, the lack of visualization of the point cloud distribution is quite confusing.
- While DisC-GS theoretically addresses the ability of 3D-GS to model high-frequency boundaries, is it really always better than 3D-GS? In many scenes (e.g., diffuse scenes), we actually need the smooth kernel provided by 3D-GS.
Overall, this paper still lacks sufficient evaluation to demonstrate the advantages of DisC-GS relative to vanilla 3D-GS (excluding improvements beyond changes in the kernel function). Therefore, I still maintain a negative evaluation of this work.
---
Reply to Comment 1.1.1:
Title: Response to reviewer ncnd [1/2]
Comment: Thank you for your time and effort. Below, we would like to answer your follow-up questions.
>*Q5: Fig. 3(b)(c) shows a significant reduction in floaters, but I have no idea about the relationship between this and DisC-GS itself. The essence of DisC-GS is to address the need for extensive boundary modeling in 3D-GS, which is unrelated to floater removal. The paper should clearly state what causes the removal of floaters instead of obscuring the components that actually contribute to this effect.*
**A5:** We carefully investigate Fig. 3(b)(c) and their corresponding 3D scene representations. We observe that, in the rendered images of existing methods in Fig. 3(b)(c), a large number of continuous Gaussian kernels are messily stacked in areas with rich boundaries and discontinuities. This stacking leads to floaters and blurriness, particularly in the red-boxed areas of these figures. We emphasize that this issue arises from a fundamental limitation of Gaussian Splatting, which our DisC-GS framework aims to address. Specifically, due to the continuous nature of Gaussian distributions, Gaussian Splatting struggles to accurately render discontinuities and boundaries in images (see Lines 3-5 in the paper). As a result, existing Gaussian Splatting methods often produce low-quality renderings in boundary-rich areas, with noticeable blurriness and floaters. The above points out that, **our floater reduction in Fig. 3(b)(c) is closely related to the essence of our DisC-GS framework**. We will also discuss this in paper.
>*Q6: Fig. 3(a) presents slightly better visual results compared to Scaffold-GS, but it does not sufficiently support the improvement of LPIPS (vgg) from 0.177 to 0.120 in the tanks scene.*
**A6:** (1) In **Fig. 3(a)**, our DisC-GS framework only slightly improves LPIPS over Scaffold-GS, i.e., from 0.188 to 0.179 by 0.009. This is consistent with the slight visual result improvement of DisC-GS over Scaffold-GS in Fig. 3(a).
(2) **Yet, we highlight that, the improvement of LPIPS from 0.177 to 0.120 in the tanks dataset is still sufficiently supported by its testing images**. This is because, in many other testing images in the tanks dataset, DisC-GS achieves much more significant LPIPS improvements over Scaffold-GS. For example, in the 10th testing image in the *Train* scene in this dataset, DisC-GS improves LPIPS over Scaffold-GS from 0.211 to 0.110 by 0.101; in the 13th testing image, DisC-GS improves LPIPS over Scaffold-GS from 0.203 to 0.101 by 0.102. Note that, the *Train* scene in the above sentence refers to the Train (railway) scene, but not the scene used for training. Meanwhile, we also observe that, in these testing images that DisC-GS achieves much more significant LPIPS improvements, DisC-GS can also achieve much more significant visual result improvements. So overall the improvement of LPIPS on the whole dataset is obvious. Thanks for this comment. Besides Fig. 3(a), we will also add these testing images in paper. | Summary: This paper introduces DisC-GS, a method that utilizes Bezier curves for discontinuity-aware image rendering on 3DGS. By employing Bezier curves, this approach significantly enhances the rendering results of scene boundaries. A set of experiments and ablation studies substantiate the effectiveness of this proposed method.
Strengths: 1. The author thoroughly discusses the discontinuous characteristics of GS, which lead to inaccurate drawing of boundary curves in images. Innovatively, they propose using Bezier curves to draw continuous boundaries.
2. The experimental results are highly convincing, demonstrating the superiority of the proposed method over state-of-the-art algorithms, and are supported by abundant ablation experiments.
3. The authors’ clear writing, well-structured discussion of the problem, detailed methods, and comprehensive presentation of experimental results make the paper easy to understand.
Weaknesses: 1. Introducing Bezier curves to refine Gaussian image rendering is indeed very clever. However, this paper only uses 2D parameters to control the position of the control points. Could this cause rendering problems in areas significantly affected by large viewpoint changes?
2. When additional properties such as control points and curve rendering are introduced, what impact does this have on training efficiency? While there is a time comparison for real-time rendering in the appendix, it would be interesting to know whether adding these parameters has any additional impact on training time and convergence (number of iterations).
3. Given that the authors changed the properties of each Gaussian, it would be interesting to see how these modifications affect the distribution of Gaussians in edge and surface regions. One aspect that could be explored is whether the number of Gaussians in the scene will differ significantly from the original 3DGS.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the weakness part.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This paper lacks a discussion of limitations. It is recommended to provide a detailed description of the challenges associated with large scenes, the handling of huge parameters, posed images, and observation views that require improvement in reconstruction to ensure completeness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >*Q1: 2D parameters to control the position of the control points.*
**A1:** As mentioned in lines 121-125 in the paper, in existing works, both 2D and 3D Gaussian Splattings have been utilized to represent the 3D scene. In this work, we proposed a framework that can be applied to both 2D and 3D Gaussian Splattings. Specifically, as mentioned in lines 218-220 and lines 388-392 in the paper, **when applying our framework on 2D Gaussian Splatting, we use 2D parameters to control the positions of the control points, while when applying our framework on 3D Gaussian Splatting, we instead use 3D parameters**. As shown in Tab. 2 in the paper, no matter when applied to 2D or 3D Gaussian Splattings, our framework can consistently achieve performance improvements. Meanwhile, in both cases, we do not observe rendering problems in areas significantly affected by large viewpoint changes. We will further clarify the above in the paper to avoid confusion.
>*Q2: When additional properties such as control points and curve rendering are introduced, what impact does this have on training efficiency? [...] impact on training time and convergence(number of iterations).*
**A2:** **(1) Training time.** Below, similarly to the rendering time shown in the Appendix, we also show the training time of our framework, on the Tanks&Temples dataset on an RTX 3090 GPU.
| Method | PSNR↑ | Training time |
|-|-|-|
| Mip-NeRF360 | 22.22 | 48 hours |
| 2D Gaussian Splatting | 23.30 | 0.27 hour |
| 3D Gaussian Splatting | 23.14 | 0.27 hour |
| Ours (on 2D Gaussian Splatting) | 24.96 | 0.32 hour |
| Ours (on 3D Gaussian Splatting) | 24.67 | 0.33 hour |
As shown, though our framework achieves obviously better performance, no matter when applied to 2D or 3D Gaussian Splattings, our framework only brings a relatively small increase of training time.
**(2) Convergence (Number of iterations).** Meanwhile, in the PDF uploaded during rebuttal (at the bottom of the "Author Rebuttal by Authors" comment), we also provide loss curves over the 3D scene Train in the Tanks&Temples dataset. As shown, our framework consistently achieves a similar convergence rate compared to the baseline without parameters such as the control points added. Moreover, for a fair comparison, in the experiments in the paper, we train our framework for the same number of iterations (i.e., 30k iterations) as the existing Gaussian Splatting works.
We will add the above discussion on training time and convergence to the paper.
>*Q3: Affects on the distribution of Gaussians in edge and surface regions. [...] whether the number of Gaussians in the scene will differ significantly from the original 3DGS*
**A3:** Below, we aim to compare the total number of Gaussians in the whole 3D scene, as well as the number (distribution) of Gaussians in the edge and plane regions of the 3D scene, of our framework with the original 3DGS. Yet, both the edge regions and the plane regions of the 3D scene are not annotated in the dataset. Thus, to enable the latter of the above comparisons, below, we first describe how we estimate the edge regions and the plane regions of each 3D scene.
Specifically, given a 3D scene, to estimate its edge regions, we here conduct the following 4 steps: (E.1) for each testing image of the scene, we first pass the image over the Canny algorithm [9] for detecting the pixels that are on the edge regions of the image. (E.2) After that, each testing image is further passed over the Depth Anything model [a] to acquire the depth values of its edge-region pixels detected in (E.1). (E.3) Then for each edge-region pixel detected in (E.1), leveraging its 2D coordinate, its depth value, and the viewpoint of its corresponding testing image, we can map this pixel back to the 3D space of the 3D scene. (E.4) Finally, we can define the union of the 3D positions of all the edge-region pixels as the estimated edge regions of the given 3D scene.
Meanwhile, to estimate the plane regions of a 3D scene, we here conduct the following 3 steps: (P.1) for each testing image of the scene, we first pass it over the Depth Anything model [a] to acquire its depth map. (P.2) Given the depth map of a testing image, following [b], using Hough Transform, we can detect the pixels that are on the plane regions of the image. (P.3) Then similar to (E.3-E.4) above, we can transform all the plane-region pixels to the space of the 3D scene, and we can then finally define the union of the 3D positions of all the plane-region pixels as the estimated plane regions of the 3D scene.
With the edges and the plane regions of the 3D scenes estimated in the above way, we can then estimate the number of Gaussians that cover (overlap with) these regions and thus perform comparisons over the number of Gaussians in the edge and plane regions of the 3D scene. Below, we compare the total number of Gaussians in the whole 3D scene, as well as the number of Gaussians in the edge and plane regions of the 3D scene, between our framework and the original 3DGS, on the Tanks&Temples dataset.
|Method|Number of Gaussians in edge regions|Number of Gaussians in plane regions|Total number of Gaussians in the whole scene|
|-|-|-|-|
| Original 3DGS | ~605K | ~647K | ~1784K |
| Ours (on 3DGS) | ~196K | ~433K | ~1094K |
As shown, compared to the original 3DGS, our framework can represent the 3D scene with much fewer Gaussians, especially over its edge and plane regions. This further demonstrates our framework's ability in performing discontinuity-aware rendering and handling boundary issues. We will also discuss the above more in paper.
[a] Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data.
[b] Hough Transform for real-time plane detection in depth images.
>*Q4: Description of the challenges.*
**A4:** Thanks for your suggestion. We agree that the Gaussian Splatting technique still faces challenges. As suggested, we will provide a detailed description of the mentioned challenges in the paper.
---
Rebuttal Comment 1.1:
Comment: I have carefully reviewed the authors' responses and the supplementary material provided in the rebuttal.
**Concerning A1**: The authors have not fully addressed my concern regarding the utilization of Bezier curves to shape the Gaussian distributions in 3D space. While Figure 2 in the manuscript illustrates the use of two Bezier curves to maintain the sharpness of the Gaussian on a plane, the methodology for employing Bezier curves to confine a 3D Gaussian within three-dimensional space remains inadequately described, both in the original manuscript and in the rebuttal.
**Regarding A2 and A3**: The authors' method does enhance image rendering quality and reduces the number of Gaussians required, without a significant increase in training time.
My previous Q3 was about whether your method shows a significant performance difference between edge and non-edge areas.
| | Edge | Non-edge | Total |
| --- | --- | --- | --- |
| Original 3DGS | ~605K | ~1179K | ~1784K |
| Proposed Method on 3DGS | ~196K | ~898K | ~1094K |
The significant reduction in the number of Gaussians used in edge regions (approximately 1/3 of the original count) suggests a potential enhancement in edge regions' representation. I recommend that the authors conduct additional experiments to compare the number of Gaussians and the resulting image render quality, specifically in edge regions. This could further substantiate the benefits of your approach and strengthen its appeal to others.
---
Reply to Comment 1.1.1:
Comment: Thank you for your time and effort in reviewing our work. Below, we further answer your following questions.
>*Concerning A1: [...] the methodology for employing Bézier curves to confine a 3D Gaussian within three-dimensional space remains inadequately described.*
**Answer:** To clarify, in our framework, we do not scissor (confine) 3D Gaussians within 3D space. Instead, even for 3D Gaussian Splatting, as mentioned in Lines 240-242 in paper, our framework always scissors (confines) 2D Gaussians that have been projected on the image plane. This is because, in our framework, the final goal is to modify the $\alpha$-blending function and thus enable the function to perform discontinuity-aware image rendering (see Lines 85-87 in paper). Note that, even for 3D Gaussian Splatting, its $\alpha$-blending function is also calculated based on the 2D Gaussians that have been projected on the image plane (see Eq. 2 in paper). Thus, in our framework, to effectively and conveniently modify the $\alpha$-blending function, we directly scissor 2D Gaussians on the image plane. We will make the above clearer in paper to avoid confusion.
>*Regarding A2 and A3: My previous Q3 was about whether your method shows a significant performance difference between edge and non-edge areas. [...] The significant reduction in the number of Gaussians used in edge regions (approximately 1/3 of the original count) suggests a potential enhancement in edge regions' representation. I recommend that the authors conduct additional experiments to compare the number of Gaussians and the resulting image render quality, specifically in edge regions. This could further substantiate the benefits of your approach and strengthen its appeal to others.*
**Answer:** (1) Yes, our method shows a significant performance difference in the edge and non-edge areas of the testing images. As shown below on the Tanks&Temples dataset, in the edge areas of its testing images, our method can achieve a much more significant performance improvement compared to 3DGS. This further shows the effectiveness of our method over edge areas of the image. Note that here, to perform evaluation effectively over image sub-areas, following [40], we use the MaskedSSIM metric (the larger the better).
|Method|Number of Gaussians in edge areas of the scene|MaskedSSIM in edge areas of testing images|Number of Gaussians in non-edge areas of the scene|MaskedSSIM in non-edge areas of testing images|
|---|---|---|---|---|
| Original 3DGS | ~605K | 0.802 | ~1179K | 0.922 |
| Ours (on 3DGS) | ~196K | 0.865 | ~898K | 0.928 |
|Performance gain||+0.063||+0.006|
(2) As suggested, below, we also compare the number of Gaussians and the resulting image render quality, in edge areas on the Tanks&Temples dataset. As shown, in our framework, with the increase of the number of Bézier curves we equip for each Gaussian, the number of Gaussians in edge areas of the scene consistently decreases. At the same time, in the edge areas of the testing images, our framework's image render quality consistently improves. In paper, taking our framework's efficiency into consideration, we equip 3 Bézier curves for each Gaussian (see Line 371 in paper).
|Method|Number of Gaussians in edge areas of the scene|MaskedSSIM in edge areas of testing images|Performance gain over 3DGS|
|---|---|---|---|
| 0 Bézier curve for each Gaussian (Original 3DGS) | ~605K | 0.802 ||
| 1 Bézier curve for each Gaussian | ~324K | 0.838 |+0.036|
| 2 Bézier curves for each Gaussian | ~230K | 0.852 |+0.050|
| 3 Bézier curves for each Gaussian | ~196K | 0.865 |+0.063|
| 4 Bézier curves for each Gaussian | ~191K | 0.866 |+0.064|
We will also add the above experiments to paper.
---
Rebuttal 2:
Comment: Thank you to the author for providing additional explanation.
So, if we assume the presence of 4M control points, are these points situated within a 3D space and projected onto a 2D image plane?
If this is the case, it appears that the impact on 2DGS might be minimal. However, for 3DGS, it seems that the method might inadvertently reduce 3DGS to a quasi-2D state.
Additionally, the MaskedSSIM comparison offers a clearer measure of improvement in the edge areas.
---
Rebuttal Comment 2.1:
Comment: Thanks for your timely response.
**(1)** Yes, in our framework, the 4M control points are first situated in a 3D space and then projected onto a 2D image plane.
**(2)** We're glad you think 2DGS appears to be compatible to this design. Below, we highlight that, with this design, when applying our framework on 3DGS, we can also achieve a good novel-view image rendering quality. This is because, as mentioned in existing works like [29], for Gaussian Splatting to perform good and detailed novel-view synthesis, in its (experimental) setting, Gaussian Splatting naturally expects the pose (angle) differences between the novel testing views and their surrounding training views to be not large. In this case with non-large view differences, even though we handle 3DGS in a quasi-2D manner (i.e., contouring scene boundaries on the image planes of the training views instead of directly in the 3D space), over the novel testing views, our framework can still acquire a good representation of scene boundaries.
Meanwhile, we highlight that, rather than handling 3DGS in a quasi-2D manner, directly scissoring Gaussian kernels into non-Gaussian kernels in the 3D space and then projecting (splatting) these kernels onto the image plane can be a design that is not compatible to Gaussian Splatting. This is because, as mentioned in [52], while the splatting (projection) of Gaussian kernels approximately holds the closed-form solution (see Eq. 1 in paper) and can thus be easily performed, the splatting (projection) of non-Gaussian kernels can be a very difficult problem.
In summary, in our framework, from an innovative perspective, we propose to perform post-scissoring after Gaussian kernels have been projected onto the image plane. This can bypass the difficulty of splatting (projecting) non-Gaussian kernels, while at the same time can also enable scene boundaries to be effectively represented and rendered. As shown in Table 2 in paper, this also leads the applications of our framework on both 2DGS and 3DGS to consistently achieve good performance.
We will also discuss the above in paper. | Summary: This paper proposes DisC-GS, a technique that enhances the boundary rendering quality for Gaussian splatting. DisC-GS takes into account the discontinuity of shapes and uses Bézier curves to model the boundaries. To enable differentiable rendering, the authors propose a novel discontinuity-aware rendering pipeline paired with a gradient approximation strategy. Experiments show that DisC-GS surpasses existing methods in rendering quality.
Strengths: This paper addresses a very important problem regarding the representability of 3DGS. It introduces a reasonable pipeline to tackle this issue. The idea of using Bézier curves to define the shape is novel and sound.
DisC-GS encodes Bézier curves as an additional attribute of Gaussians, and the rendering and training schemes are compatible with the original 3DGS. This compatibility means that this method can be easily adopted by most 3DGS-based methods.
Furthermore, this paper proposes an effective gradient approximation strategy, making the training of this representation feasible.
Weaknesses: DisC-GS will increase the storage of the 3D scene representaion. The authors did not evaluate this in experiments.
The authors do not provide additional qualitative results, such as a video with continuous camera movements. It would be interesting to see if DisC-GS can preserve multi-view consistency. DisC-GS produces hard boundaries for Gaussians, which may cause artifacts (e.g., flickering) in such videos.
The methods compared in the paper are not very comprehensive; for example, there are no results compared with mip-splatting. The aliasing artifacts demonstrated in Figure 3 may not be entirely due to boundary issues, and more comparisons are needed to prove this.
Technical Quality: 4
Clarity: 3
Questions for Authors: If I understand correctly, the Bézier curves for each Gaussian are only defined in a 2D subspace. Is there a specific reason for this design? Can they be directly defined in a three-dimensional space? Is this design similar to 2D Gaussian splatting?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The authors do not provide any limitations. There may be a limitation about the storage size.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >*Q1: Evaluation of the storage.*
**A1:** Below, we show the storage of our framework on the Tanks&Temples dataset on an RTX 3090 GPU. As shown, either applied on 2D or 3D Gaussian Splattings, our framework **does not increase the storage size**.
This can be because, admittedly, for each Gaussian representing the 3D scene, our framework equips it with a fifth attribute to represent the Bézier curves and thus makes it have slightly more parameters. Yet, on the other hand, with this additional attribute incorporated, our framework also enables each Gaussian representing the 3D scene to be aware of the discontinuity of shapes and thus enhance its representation ability. As shown below, this allows our framework to ultimately represent the 3D scene with much fewer Gaussians, thus not incurring the increase of memory storage. We'll discuss this more in paper.
|Method|Storage|Total number of Gaussians|
|-|-|-|
|2D Gaussian Splatting|376MB|~1585K|
|Ours (on 2D Gaussian Splatting)|299MB|~909K|
|3D Gaussian Splatting|423MB|~1784K|
|Ours (on 3D Gaussian Splatting)|410MB|~1094K|
>*Q2: Additional qualitative results.*
**A2:** (1) As suggested, we have rendered additional qualitative results (i.e., videos over different 3D scenes each with continuous camera movements). Among these videos, we observe that DisC-GS can consistently preserve multi-view consistency. Meanwhile, in these videos, artifacts such as flickering are also not observed. This can be because, while DisC-GS produces hard boundaries, the boundaries produced over different viewpoints are all based on the projection of the same sets of control points stored in the Gaussians representing the 3D scene. Thus, (multi-view) consistency of boundaries across different views can be kept. Meanwhile, with boundaries rendered in a multi-view consistent manner, artifacts such as flickering are also not observed. (2) During rebuttal, due to its format and space constraint, we provide a sample of such video qualitative results in GIF format in the PDF at the bottom of the "Author Rebuttal by Authors" comment. To view those results in animation mode, please use a computer and Adobe Acrobat Reader Version 2024.002 (downloadable from Adobe Reader's official website). We'll also include more results in paper.
>*Q3: Comparison with mip-splatting.*
**A3:** Below, we also compare our method with mip-splatting. As shown, on all three metrics and across various datasets, our framework consistently outperforms mip-splatting. This further shows the efficacy of our framework. We'll include mip-splatting in Tab. 1 in paper.
|Method|Tanks&Temples-PSNR↑|Tanks&Temples-SSIM↑|Tanks&Temples-LPIPS↓|Mip-NeRF360-PSNR↑|Mip-NeRF360-SSIM↑|Mip-NeRF360-LPIPS↓|Deep Blending-PSNR↑|Deep Blending-SSIM↑|Deep Blending-LPIPS↓|
|-|-|-|-|-|-|-|-|-|-|
| Mip-splatting | 23.78 | 0.851 | 0.178 | 27.79 | 0.827 | 0.203 | 29.69 | 0.904 | 0.248 |
| Ours | **24.96** | **0.866** | **0.120** | **28.01** | **0.833** | **0.189** | **30.42** | **0.907** | **0.199** |
>*Q4: More comparisons w.r.t. boundary issues.*
**A4:** In Figs 3 and 4 in paper, we demonstrate that, compared to both 2D and 3D Gaussian Splattings, our method can mitigate aliasing artifacts and achieve better rendering quality, especially in image regions containing numerous boundaries. To more clearly show that such advantages of our framework are due to its ability in handling boundary issues, we also perform quantitative comparisons.
In Tab. 4 in paper (re-shown below), we quantitatively test our framework, particularly over the boundary-rich areas of the testing images. Specifically, for each testing image, to acquire its boundary-rich areas, we first use the Canny algorithm [9] to detect its boundaries. We then define its areas that involve or surround its Canny-detected boundaries as its boundary-rich areas, and define its rest areas as its boundary-sparse areas. To enable evaluation over image sub-areas, following [40], we use the MaskedSSIM metric (the larger the better). As shown, especially in the boundary-rich areas of the testing images, our framework can achieve a significant performance improvement. This shows our framework's ability to handle boundary issues from one perspective.
|Method|Boundary-rich areas|Boundary-sparse areas|
|-|-|-|
|Baseline(2D Gaussian Splatting)|0.819|0.922|
|Ours|0.855|0.934|
Meanwhile, in Tab. 6 in paper (re-shown below), to evaluate if our framework can render sharp boundaries accurately instead of rendering them with blurriness, we also test our framework from the image sharpness perspective. Following [15], we measure image sharpness using the energy gradient function. As shown, our framework can increase the sharpness of its rendered images. This also from another perspective implies our framework's ability in accurately rendering the sharp boundaries in the image.
|Method|Image sharpness|
|-|-|
|Baseline(2D Gaussian Splatting)|51.50%|
|Ours|57.72%|
We will further clarify the above in paper.
>*Q5: Bézier curves in a 2D subspace.*
**A5:** (1) In DisC-GS, we can define the Bézier curves in a 2D subspace, as well as directly in a three-dimensional space (3D space). (2) In the method section, we define the Bézier curves in a 2D subspace since we there take 2D Gaussian Splatting as an example and explain how we apply our framework on 2D Gaussian Splatting (as mentioned in lines 168-169 in paper). (3) We can also similarly define the Bézier curves in a 3D space for 3D Gaussian Splatting. As mentioned in lines 388-392 in paper, to achieve this, we just need to modify a single place of our framework (i.e., defining the control points of the Bézier curves for each Gaussian in the 3D space coordinate system instead). (4) As shown in Tab. 2 in paper, no matter when defining the Bézier curves in the 2D subspace or the 3D space, our framework can consistently achieve performance improvements. We will further clarify this in paper.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their response and the additional experimental results. Through their response, I realized that there are both 2D and 3D versions of Bézier curves. Even with this information, I believe that the description of the 3D version of Bézier curves in the paper is insufficient. It might be necessary to reorganize the methodology structure and provide some 3D illustrations to better explain it. Additionally, visualizing the learned Bézier curves could potentially offer a clearer explanation of why storage hasn't changed much, rather than vaguely attributing it to 'being aware of the discontinuity' and 'enhancing its representation ability.'
---
Reply to Comment 1.1.1:
Comment: Thank you for your time and effort and also thanks for your suggestions.
**(1)** In our original paper, to ease readers' understanding, we first described the 2D version of Bézier curves as an example. After that, we described how the 3D version of Bézier curves can be similarly used in our framework (in Lines 388-392 in paper). Thank you for the suggestion, **to more sufficiently describe the 3D version of Bézier curves in our paper**, we will reorganize the methodology structure by adding a subsection at the end of the method section, as follows:
1. Specifically, in this subsection at the end of the method section, we will state that:
Above, we focus on describing how we use 2D Bézier curves in our DisC-GS framework and correspondingly apply DisC-GS on 2D Gaussian Splatting. Here in this subsection, we further describe how we use 3D Bézier curves in our DisC-GS framework. Specifically, the transition from 2D to 3D Bézier curves in DisC-GS requires only two minimal modifications. (1) Firstly, for each Gaussian representing the 3D scene, the control points of its Bézier curves are stored directly in the 3D spatial coordinate system rather than in a 2D subspace. Note that, this modification can be very simply made. Specifically, for each Gaussian in the 3D space in our DisC-GS framework, we only need to use $c_{curve}^{3D} \in \mathbb{R}^{4M\times3}$ instead of $c_{curve} \in \mathbb{R}^{4M\times2}$ to represent the control point coordinates of its 3D Bézier curves. In other words, for each 3D Gaussian, we only need to introduce it with $c_{curve}^{3D}$ instead of $c_{curve}$ as its new attribute. (2) Moreover, since we already directly introduce $c_{curve}^{3D}$ as the new attribute for each 3D Gaussian in our framework, during rendering, we omit Eq. 4 above in Sec. 4.1, which originally is used to acquire $c_{curve}^{3D}$ from $c_{curve}$. Overall, the above two modifications are sufficient to incorporate DisC-GS with 3D instead of 2D Bézier curves.
2. Moreover, in this subsection, we will also draw a figure to better explain the usage of 3D Bézier curves in our framework.
Specifically, we will draw this new figure in a similar way as the current Figure 2 in paper. That is, in this new figure, we will include three sub-figures. Among these three sub-figures, in sub-figure (a), we will draw a 3D coordinate system and demonstrate the control points of the 3D Bézier curves in 3D space. In sub-figure (b), we will draw an image plane, on which we will draw the control points of the Bezier curves that have been projected on that image plane. Finally, in sub-figure (c), similar to the sub-figure (c) in the current Figure 2, we will demonstrate how Bézier curves are used in our framework to scissor the Gaussian distribution.
We hope that this new subsection would facilitate the 3D version of Bézier curves to be more sufficiently described in our paper.
**(2)** Moreover, we agree that the visualization of the learned Bézier curves can help to better explain why our framework does not increase the storage size. We will follow your suggestion to add this to our paper. | Summary: The authors proposed an innovative framework DisC-GS, which enables Gaussian Splatting to represent and render boundaries and discontinuities in an image, they also introduce several designs to make the pipeline is discontinuity-aware and differentiability, and their method achieves superior performance on the evaluated benchmarks
Strengths: 1. The authors propose a robust and sound pipeline that effectively emphasizes discontinuities and boundary situations while maintaining differentiability.
2. The proposed method demonstrates superior performance on the evaluated benchmarks
3. The paper presents a well-reasoned argumentation and reasoning process
Weaknesses: The paper defines numerous labels, but they are presented in a noisy and unclear manner. It would be better to include a list of labels and their explanations. Additionally, I have to refer back to the main manuscript frequently as the lack of label explanations under each equation.
The paper does not include an analysis of the computational complexity of the proposed method.
There is no explanation or discussion on how the curves affect the densification procedure. For example, will the new cloned/splitted points are assigned the same curve attribute?
Technical Quality: 4
Clarity: 3
Questions for Authors: Is there any paneltation to optimize/adjust the positions of four contral points in a curve?
As it will “scissored out” some part of a Gaussian point, will this affect the opacity of the whole point, and whether it will helps the point escape pruning.(pruning occurs when opacity < threshold)
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The paper lacks of computational complexity of the proposed model. There is no explanation or discussion on the curves effect the densification procedure. Too many labels.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >*Q1: Defines numerous labels. [...] It would be better to include a list of labels and their explanations. [...] label explanations under each equation.*
**A1:** Thanks for your suggestion. Following it, (1) below, we formulate a list of labels (symbols) and their explanations. (2) Meanwhile, under each equation in our paper, we will fully explain all the labels used in that equation. For example, we will re-explain $\mu$, $r_1$, and $r_2$ under Eq. 4, and re-explain $P$ and $W$ under Eq. 5. We will also include a list of all labels and their explanations in the Appendix of our paper.
|Label|Explanation|
|---|---|
|$\mu$|Center of the Gaussian|
|$\Sigma$|Covariance matrix of the Gaussian|
|$c_{SH}$|Spherical harmonic coefficients of the Gaussian|
|$\alpha$|Opacity of the Gaussian|
|$R$|Rotation matrix of the Gaussian|
|$r_1$, $r_2$|First column and second column of the rotation matrix $R$|
|$S$|Scale matrix of the Gaussian|
|$\mu^{2D}$|Center of the projected Gaussian|
|$\Sigma^{2D}$|Covariance matrix of the projected Gaussian|
|$W$|Viewing transformation matrix|
|$P$|Projective transformation matrix|
|$J$|Jacobian of the affine approximation of the projective transformation|
|$p$|Pixel|
|$C(p)$|Color at pixel $p$|
|$\omega_0$, $\omega_1$, $\omega_2$, $\omega_3$|Four control points of the cubic Bézier curve|
|$B(t)$|Cubic Bézier curve function where $t$ is a random real number|
|$B_{imp}(x, y)$|Implicit representation form of $B(t)$|
|$M$|Hyperparameter representing the number of Bézier curve|
|$c_{curve}$|Fifth attribute of the Gaussian|
|$c_{curve}^{3D}$|$c_{curve}$ in 3D space coordinate system|
|$c_{curve}^{2D}$|$c_{curve}$ in image plane coordinate system|
|$g(p)$|Indicator function w.r.t. all the $M$ Bézier curves|
|$g_{sc}(\omega_0, \omega_1, \omega_2, \omega_3; p)$|Single-curve indicator function|
|$g_{sc}^0(p)$|Single-curve indicator function w.r.t. the first Bézier curve|
|$L$|Loss function|
|$\phi$|Desired value of $c_{curve}^{2D}[0, 0]$|
|$S_{\phi}$|Set of all possible real number solutions for $\phi$|
|$\widetilde{\phi}$|Solution in $S_{\phi}$ that is nearest to $c_{curve}^{2D}[0, 0]$|
|$\widetilde{\phi_1}$|Solution in $S_{\phi}$ that is nearest to $c_{curve}^{2D}[0, 0]$ from its left side|
|$\widetilde{\phi_2}$|Solution in $S_{\phi}$ that is nearest to $c_{curve}^{2D}[0, 0]$ from its right side|
|$\epsilon$, $\epsilon_1$, $\epsilon_2$|Small numbers used to avoid the gradient exploding problem|
>*Q2: Analysis of the computational complexity of the proposed method.*
**A2:** Below, we show both the training time and the rendering time (inference time) of our proposed method, on the Tanks&Temples dataset on an RTX 3090 GPU.
| Method | PSNR↑ | Training time | Rendering time (per image) |
|---|---|---|---|
| Mip-NeRF360 | 22.22 | 48 hours | 7.143s |
| 2D Gaussian Splatting | 23.30 | 0.27 hour | 0.007s |
| 3D Gaussian Splatting | 23.14 | 0.27 hour | 0.007s |
| Ours (on 2D Gaussian Splatting) | 24.96 | 0.32 hour | 0.008s |
| Ours (on 3D Gaussian Splatting) | 24.67 | 0.33 hour | 0.008s |
As shown, though our framework achieves obviously better performance, it only brings a relatively small increase of training time. Besides, during inference, our framework can also achieve a competitive rendering time (speed) compared to the existing methods leveraging the conventional Gaussian Splatting technique and satisfy most real-time requirements. We will extend Tab. 11 in the paper for the table to also include the analysis of training time (besides the analysis of rendering time).
>*Q3: How the curves affect the densification procedure. For example, will the new cloned/splitted points be assigned the same curve attribute?*
**A3:** Yes. In the densification procedure of our framework, when a point (Gaussian) is cloned/splitted into two new points (Gaussians), similar to how other attributes such as the spherical harmonic coefficients and the opacity are assigned to the new points, we simply assign both the new points with the same curve attribute as the original point. We will discuss this in the paper.
>*Q4: Is there any penalty to optimize/adjust the positions of four control points in a curve?*
**A4:** Yes. In our framework for which we have maintained its differentiability, similar to the other attributes of the Gaussian, the positions of the four control points stored in the curve attribute are also learnable parameters. This means that, during training, based on the loss function as the penalty, the gradients would first backpropagate from the loss function to the positions of control points stored in the curve attributes. Leveraging such backpropagated gradients, the positions of control points would then be correspondingly optimized/adjusted to facilitate the accurate representation of the 3D scene.
>*Q5: Will this affect the opacity of the whole point, and whether it will help the point escape pruning.*
**A5:** (1) In Gaussian Splatting, the opacity of the whole point (Gaussian) that would be used during point (Gaussian) pruning is the opacity attribute $\alpha$. In our framework, to perform "scissoring out", we add a binary indicator function $g$ to the color blending function and this does not edit the opacity attribute $\alpha$ (as can be seen in Eq. 6 in the paper). (2) In other words, as long as "opacity $\alpha$ < threshold", the pruning process of a point (Gaussian) would always still occur in our framework, and our framework would not interfere this process. Thus, our "scissoring out" process would not help the point escape pruning. | Rebuttal 1:
Rebuttal: We thank all reviewers for recognition of our contributions (Reviewer TofM: "an innovative framework", "a robust and sound pipeline"; Reviewer F99C: "a novel discontinuity-aware rendering pipeline", "addresses a very important problem", "the idea of using Bézier curves to define the shape is novel and sound"; Reviewer Kjj1: "significantly enhances the rendering results of scene boundaries", "innovatively, they propose using Bézier curves to draw continuous boundaries"; Reviewer ncnd: "a novel kernel function", "very reasonable and much needed by the community").
Pdf: /pdf/afb8c228537f0e002c79a02e852d84b1ba11bbfc.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
GITA: Graph to Visual and Textual Integration for Vision-Language Graph Reasoning | Accept (poster) | Summary: The paper introduces the GITA framework, which innovatively integrates visual graphs into general graph reasoning tasks for LLMs. By combining visual and textual information of a graph, GITA improves the comprehensibility and flexibility of graph reasoning, outperforming other LLM approaches. The authors also develop the Graph-based Vision-Language Question Answering (GVLQA) dataset, the first vision-language dataset for graph reasoning. The study also highlights the benefits of layout augmentation in visual graphs and pretraining on the GVLQA dataset.
Strengths: 1. Integrating visual graphs into graph reasoning tasks for LLMs, is a novel approach. This represents a creative combination of visual and textual modalities to improve graph comprehension and reasoning. The creation of the GVLQA dataset is original and fills a gap in the current datasets available for graph reasoning by incorporating both visual and textual elements.
2. Extensive experiments on both the GVLQA and five real-world datasets validate the effectiveness of the proposed framework, providing strong empirical evidence for the paper's claims.
3. The paper is clearly written, with well-defined sections and logical flow. The detailed explanations of the GITA framework's components and the dataset construction process enhance understanding.
4. By successfully integrating visual information into graph reasoning tasks, the paper addresses a significant limitation of existing LLMs and VLMs. This has the potential to substantially advance the field of graph reasoning.
Weaknesses: 1. The motivation of the framework is challenged. As for the commonly used graph reasoning methods such as GNNs, the authors state that "These methods often lack generalizability, flexibility, and user-friendliness." However, the advantages of GITA in these areas are not directly demonstrated. The zero-shot performance of LLM-based methods is poor for many questions (such as MaxFlow, SP, TS). Considering this, users need to fine-tune when addressing questions on a new dataset. Additionally, GITA requires data augmentation and manual template-based construction for task-specific queries. Therefore, it is not clear that LLM-based methods like GITA are better than GNNs in terms of generalizability, flexibility, and user-friendliness, and the motivation for using LLM for graph reasoning is not justified.
2. While the paper compares GITA to various language models, it does not provide a comparison to dedicated GNN models that are designed specifically for graph reasoning tasks. A comparison to GNNs would help demonstate the performance gains of GITA and its advantages or disadvantages.
3. The paper mentions k-hop subgraph sampling for handling large graphs but does not provide a detailed analysis of GITA's scalability or performance as graph sizes increase. A more comprehensive evaluation of scalability, potentially including comparisons to dedicated graph methods, would be valuable for assessing the practical applicability of GITA in real-world scenarios.
4. The alignment between visual and textual modalities could be explored. The paper notes performance degradation in some tasks for larger models due to potential alignment issues, but it does not provide a detailed analysis or solution. Further investigation into improving modality alignment is needed.
5. Some places need double-check. For example, the meaning of GITA needs to be unified: the authors mention GITA as "Graph to vIsual and Textual Integration" in line 44, while they refer to it as "Graph to Image-Txt Assistant" in line 117. Another example is in line 120, which needs to be changed to "Firstly, V and D are designed to produce visual depictions and textual descriptions."
Technical Quality: 3
Clarity: 4
Questions for Authors: Please see the above weaknesses.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors have acknowledged several key limitations of their work but could benefit from providing more comprehensive solutions and experimental evidence. As for my suggestion for improvement, please see the “weaknessess” part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your thorough comments and insightful suggestions, we here address your concerns and adopt your suggestions as follows:
> **W1**: It is not clear that LLM-based methods like GITA are better than GNNs in terms of generalizability, flexibility, and user-friendliness, and the motivation for using LLM for graph reasoning is not justified.
In Appendix A, we have detailed GITA's flexibility, user-friendliness, and generalizability. Unlike GNNs, whose task-specific feature engineering and architecture adjustments require professional techniques on model architectures and coding, GITA employs a consistent architecture that simplifies adaptation to new tasks using language-based templates. This makes it accessible even to non-experts, significantly enhancing **flexibility**. Moreover, GITA utilizes natural language for input and output, enabling intuitive graph reasoning through simple queries like 'Is there a cycle in this graph?' and providing straightforward 'Yes' or 'No' answers, thereby improving **user-friendliness** compared to the human-unreadable vector representations/embeddings in GNNs.
Besides, we further demonstrate that GITA-ZS (zero-shot) also has promising performance. According to the results shown in the **Table 3 in the rebuttal supplement PDF**, where 'PT' represents advanced prompting techniques including in-context-learning (ICL), chain-of-thought (CoT), and self-consistency (SC), its performance can be further improved with both the evolution of the VLM reasoner (i.e., from GPT-4v to GPT-4o) and advanced prompting techniques including ICL, CoT, and SC. Based on this result and the results reported in the submission, our GITA method exhibits promising zero-shot capabilities (**generalizability on tasks**).
> **W2**: While the paper compares GITA to various language models, it does not provide a comparison to dedicated GNN models that are designed specifically for graph reasoning tasks. A comparison to GNNs would help demonstate the performance gains of GITA and its advantages or disadvantages.
According to your suggestion, we compare with dedicated GNNs, including GCN and SAGE, and present the results in **Table 4 in rebuttal supplement PDF**. Compared to dedicated GNNs, zero-shot GITA (with prompt techniques) and fine-tuned GITA-7B model have similar average graph reasoning performance. The larger GATA-13B model performs slightly better.
In particular, compared to GNNs, the GITA model shows a stronger ability to recognize local structures in graphs (Connect and Cycle) and to accomplish tasks with obvious layout heuristics (BGM). We believe that this advantage comes from GITA's visual perception. For SP and MaxFlow, GITA performs inferior to GNNs. This may be because GNNs process edge weight more effectively through its message passing mechanism. For HP and TS, GITA-ZS performs best. Those results will be put in the revision.
> **W3**: The paper mentions k-hop subgraph sampling for handling large graphs but does not provide a detailed analysis of GITA's scalability or performance as graph sizes increase. A more comprehensive evaluation of scalability, potentially including comparisons to dedicated graph methods, would be valuable for assessing the practical applicability of GITA in real-world scenarios.
According to your suggestion, we conducted experiment to evaluate the scalability and performance for GITA and dedicated GNNs while increasing the number of hops $k$ in the ca-HepTh dataset.
The results shown in **Table 5 in supplement PDF** demonstrate the scalability of GITA is kept stable with the sampled graph size (i.e., $k$) increasing.
According to the accuracy reported in **Table 6 in supplement PDF**, GITA, GCN, and SAGE achieve their respective peak performance when $k$ equals 2, demonstrating that a small sampled graph size is enough for the performance. Dedicated GNNs show higher peak performance than GITA, but also show worse performance when $k$ becomes larger (e.g., 3 or 4), which demonstrates the performance of GITA is more stable than dedicated GNNs w.r.t. $k$. We will include those results in the revision.
> **W4**: The alignment between visual and textual modalities could be explored. The paper notes performance degradation in some tasks for larger models due to potential alignment issues, but it does not provide a detailed analysis or solution. Further investigation into improving modality alignment is needed.
Because existing Multilingual Large Language Models (MLLMs) are not inherently attuned to graph data, they require fine-tuning to effectively align vision and text inputs in a graph context. The effectiveness of this alignment during fine-tuning is influenced by the proportion of tunable parameters. **Following LLaVA**, for the larger GITA-13B, the trainable parameter ratio is only 0.78\%, which is much smaller than the 1.46\% for GITA-7B. This **limitation in tunable parameters** may result in **less effective alignment** for GITA-13B compared to GITA-7B, potentially leading to poorer performance of GITA-13B on some tasks.
To address this issue, two solutions are proposed: one is to employ **full-finetuning**, which directly tunes all trainable parameters to align the modalities. Another approach involves applying the proposed GITA to **more diverse graph data**, thereby obtaining a richer source of vision-language data for alignment ([r1] illustrates diverse and rich data can hugely improve alignment). These additional vision-language data can be fine-tuned together with the current task data, or used as pretraining (as demonstrated in Section 5.3, using GVLQA checkpoints for real-world datasets).
[r1] Visual Instruction Tuning. NeurIPS, 2023
> **W5**: Some places need double-check, including the inconsistent full name of GITA and a typo in line 120.
Thank you for pointing out the inconsistent terms and notations. We will fix them in the revision.
---
Rebuttal Comment 1.1:
Comment: I thank the author for the rebuttal. It addressed most of my questions and I am increasing the score to 6. | Summary: To fill the gap that LLM overlook the rich vision modality with graph structure, this paper proposes GITA to incorporates visual graphs into general graph reasoning. A large graph vision-language dataset called GVLQA is designed to boost the general graph reasoning capabilities.
Strengths: 1. This paper may involve a relatively large workload, proposing a large vision-language dataset, which can greatly promote the development of VLM.
2. Integrating visual graphs into VLM holds significant value.
Weaknesses: 1. This paper appears to be quite technical or engineering-oriented, with the models largely based on existing tools, and lacks strong novelty in terms of methodology.
2. This paper primarily focuses on integrating visual graphs into large models, however, it fails to provide any specific examples of visual graphs throughout the paper, making it difficult for readers to comprehend how the visual image improve the performance.
3. In page3 line 120, the 'G' in the end actually refers to 'D'? Some explaination should be provided.
Technical Quality: 3
Clarity: 2
Questions for Authors: The dataset will be released in the future? If so, I appreciate it.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See the weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We acknowledge and appreciate your insightful review. Below, you can find our responses addressing your concerns point by point. If you have any additional questions or require further clarification, please feel free to let us know.
> **W1**: This paper appears to be quite technical or engineering-oriented, with the models largely based on existing tools, and lacks strong novelty in terms of methodology.
We acknowledge that our work builds upon existing backbone models and tools. Incorporating visual information into general graph reasoning, however, is an interesting idea that has not been explored before. As a result, we have encountered **several challenging issues** during our work. For instance,
1) how to maintain the usability of vision graphs while managing context length in large-scale graphs;
2. how to balance consistency and variability in vision graphs, and the impact of specialized augmentations for vision graphs, etc.
**To handle those issues that have never been explored before**, we proposed the GITA framework and provided valuable empirical findings.
> **W2**: This paper primarily focuses on integrating visual graphs into large models, however, it fails to provide any specific examples of visual graphs throughout the paper, making it difficult for readers to comprehend how the visual image improve the performance.
The illustrations of the graphs have been provided in **Figure 2** as part of the case study for readers to understand how vision plays a role in graph reasoning. We have also included examples of visual graph generated by different graph visualizer tools implemented by us in **Appendix C**, as well as illustrations of these visual graphs for each subset of GVLQA in Figures 6-9 in **Appendix G**. We will highlight them in the revision.
>**W3**: In page3 line 120, the 'G' in the end actually refers to 'D'? Some explaination should be provided.
Thank you for pointing out this typo. The 'G' in the end should be 'D'. We will correct it in the revision.
>**Q1**: Will the dataset be released in the future? If so, I appreciate it.
Of course! The dataset will been released soon. Due to the submission policy, we do not provide the link in the submission.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. My concerns are addressed, therefore I maintain my score. | Summary: The paper introduces an end-to-end framework called Graph to Visual and Textual Integration (GITA) to visualize graphs in order to improve LLMs’ reasoning capabilities on graph tasks. GITA consists of three main components: a Graph Describer to translate a graph into a natural language description, a Graph Visualizer to visualize the graph as an image, and a Questioner to generate task-specific queries conditioned on the task information. GITA shows improvements compared to vanilla LLMs and VLMs on graph reasoning tasks and some real-world graph datasets.
Strengths: 1. The motivation is convincing. Just as humans reason over structured data, it is natural to visualize the structure first. The additional visual modality can provide rich information that can assist in reasoning over structured data. Therefore, generalizing this human behavior to LLMs/VLMs makes a lot of sense.
2. The method is simple yet effective for some graph reasoning tasks on small graphs. The idea of visualization is a form of data augmentation.
Weaknesses: 1. The applications are limited. The proposed method is only applicable to tiny graphs or large graphs with k-hop sampling, which hurts its practical application value.
2. The design of key components, including the Graph Describer, Graph Visualizer, and Task-specific Query, requires human efforts to adjust to fit different datasets.
3. Lack of comparison with SOTA methods. The experiments only compared with LLMs and VLMs; some graph LLMs should also be compared, such as Graph Chain-of-Thought, GraphLLM, and GraphToken [1-3], etc.
[1] Graph Chain-of-Thought, https://arxiv.org/abs/2404.07103
[2] GraphLLM, https://arxiv.org/abs/2310.05845
[3] GraphToken, https://arxiv.org/abs/2402.05862
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Can you provide some statistics about the graph size (average number of nodes and edges) used in your dataset?
2. In Table 1, is GITA-7B (VO) equivalent to Llava-7B?
3. In Table 1, in the fine-tuning setting, how are the LLMs fine-tuned? Is GITA-7B fine-tuned on only the alignment projector or the whole model?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: See weekness above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We acknowledge and appreciate your insightful review. Below, you can find our responses addressing your concerns point by point. If you have any additional question or require further clarification, please feel free to let us know.
> **W1**: The applications are limited. The proposed method is only applicable to tiny graphs or large graphs with k-hop sampling, which hurts its practical application value.
In fact, k-hop subgraph sampling is a common practice in graph learning. Typically, a k-hop sampling may not lead to performance degradation. On the contrary, k-hop subgraphs show more dedicated awareness of the local structure details. For example, ShaDow-GNN [1] indicates from both theoretical and empirical analysis, that in graph data, one can ignore distant neighbors ($k >= 4$), and the most effective $k$ is 2 or 3.
To further illustrate, We conducted experimented to show the performance of both GITA and Vicuna by varying $k$ on large graph datasets ca-GrQc and ca-HepTh and show the results in the following table. It is evident that both GITA and Vicuna reach their good performance at $k = 2$. Thus, increasing $k$ does not necessarily enhance the performance, and k-hop sampling does not lead to a decline in performance.
**Based on this observation, we think that the proposed GITA method is applicable to general graph reasoning tasks.**
| Model | **ca-GrQc** | **ca-HepTh** |
| ------------ | ----------- | ------------ |
| Vicuna (k=2) | **78.95** | **89.85** |
| Vicuna (k=3) | 78.95 | 89.66 |
| Vicuna (k=4) | 76.53 | 85.24 |
| GITA (k=2) | **79.70** | **91.13** |
| GITA (k=3) | 79.67 | 90.31 |
| GITA (k=4) | 75.47 | 86.10 |
[1] Decoupling the Depth and Scope of Graph Neural Networks. NeurIPS, 2021.
> **W2**: The design of key components, including the Graph Describer, Graph Visualizer, and Task-specific Query, requires human efforts to adjust to fit different datasets.
We think that the requirements of human efforts in GITA are negligible based on the following aspects.
1. For a new dataset, the Graph Visualizer and Graph Describer in GITA can be directly used in a **function-invoking** manner. We provide the default setting. As a result, users do not need to design them.
2. The task-specific template inside the **Questioner** is **the only component requiring human efforts**. It only requires users to describe the task definition and the concrete meanings of the graph elements in user-friendly natural language.
3. We also offer **an automated approach** that allows users to generate task-specific queries by prompting an agent like ChatGPT. An example of the query generation for a custom gaming scenario has been provided in **Appendix E**. Though this automated approach still necessitates human input to describe the task initially, it is a minimum and unavoidable requirement for any language-based method.
> **W3**: Lack of comparison with SOTA methods. The experiments only compared with LLMs and VLMs; some graph LLMs should also be compared, such as Graph Chain-of-Thought, GraphLLM, and GraphToken.
Graph Chain-of-Thought is not applicable to the general graph reasoning tasks, because it is built on knowledge graph (KG), but general graph reasoning tasks usually do not have the corresponding KG. GraphToken does not provide its data and code. Hence, in the following table, we here provide the performance comparison with GraphLLM on a randomly chosen half-size GVLQA-Base subset.
| Model | Connect | Cycle | TS | SP | MaxFlow | BGM | HP | Avg |
| ----------- | --------- | --------- | --------- | --------- | -------- | --------- | --------- | --------- |
| GraphLLM-7B | 94.74 | 92.36 | 42.17 | **56.72** | **52.0** | 58.24 | 26.3 | **60.36** |
| GITA-7B | **99.05** | **97.48** | **44.11** | 33.05 | 24.89 | **93.37** | **28.15** | 60.01 |
Based on the results, for **substructure-awareness tasks** such as **Connect** and **Cycle** and tasks with beneficial visual heuristics such as **BGM**, the visual modality introduced by GITA is more advantageous than the graph modality introduced by GraphLLM. GraphLLM shows its superiority in MaxFlow and SP, which may be due to their graph transformer encoder being more effective in processing edge weights. Finally, GITA and GraphLLM show similar abilities on sequential ordering tasks TS and HP, and **comparable average performance**. We will include this comparison in the revision.
> **Q1**: Can you provide some statistics about the graph size (average number of nodes and edges) used in your dataset?
The average numbers of nodes and edges for each task in GVLQA are shown in the following table.
| Average / Task | Connect | Cycle | TS | SP | MaxFlow | BGM | HP |
| -------------- | ------- | ----- | ------ | ----- | ------- | ----- | ----- |
| #nodes | 25.01 | 23.42 | 21.86 | 13.65 | 13.90 | 21.13 | 13.24 |
| #edges | 95.46 | 23.66 | 114.10 | 23.99 | 49.16 | 51.03 | 45.05 |
>**Q2**: In Table 1, is GITA-7B (VO) equivalent to LLaVa-7B?
GITA-7B (VO) represents **a variant of GITA** using LLaVa-7B reasoning on the vision graph generated by GITA Graph Visualizer and a direct question like "Is there a cycle in this undirected graph", but **without the textual descriptions** of the graph generated by GITA Graph Describer.
>**Q3**: In Table 1, in the fine-tuning setting, how are the LLMs fine-tuned? Is GITA-7B fine-tuned on only the alignment projector or the whole model?
We have introduced the detailed fine-tuning settings in both Section 5 and Appendix F in our submission. For the fine-tuning setting in **Table 1**, we **fine-tune the LoRA adapters** for all weight matrices within the text decoder and **the alignment projector**, while keeping the vision encoder in the VLM reasoner frozen.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you to the authors for the rebuttal. I have carefully reviewed it, as well as the responses to the other reviewers.
Regarding W1, my concern about the limited applications remains. I acknowledge that k-hop sampling is a widely adopted approach and may not lead to performance degradation in some cases. However, for datasets like Cora and CiteSeer, the reported accuracy is much lower compared to pure GNN methods[1] (Table 3). I am uncertain whether this is due to the k-hop sampling or because GITA only provides graph structures without node/edge attributes. If it is the latter, then GITA may not be effective enough for handling graphs with attributes, whether in numerical or text format.
For W3, the average performance of GraphLLM and GITA is similar. I am conservative about the contribution of GITA over existing methods.
For the new experiments during rebuttal, for table 5-7, the comparision with GNNs is an important baseline and should be displayed in the main text. Besides, for small datasets (Table 5), simple GNN method has very close performance with GITA. For large datasets (Table 6,7), pure GNN method show better performance and significantly better efficiency. I am skeptical about the necessity of using LLM with visual information to address these tasks given simple GNNs have already perform good with great efficiency.
For W2 and Q1-Q3, my questions are well addressed.
[1] cora leaderboad, https://paperswithcode.com/sota/node-classification-on-cora
---
Reply to Comment 1.1.1:
Comment: Thanks for your detailed, patient and insightful discussion. We respond to these questions as follows.
> **Q1**. Regarding W1, my concern about the limited applications remains. I acknowledge that K-hop sampling is a widely adopted approach and may not lead to performance degradation in some cases. However, for datasets like Cora and CiteSeer, the reported accuracy is much lower compared to pure GNN methods (Table 3). I am uncertain whether this is due to the k-hop sampling or because GITA only provides graph structures without node/edge attributes. If it is the latter, GITA may not be effective enough for handling graphs with attributes, whether in numerical or text format.
We think the reported inferior performance in Cora and Citeseer to the leading models is mostly because we do not use node attributes, but not due to the k-hop sampling. That is because many methods (e.g., Graph Transformer like UGT, and most GNN variants where $k$ is equivalent to the number of GNN layers) in your listed leaderboard also use k-hop sampling to achieve leading performance.
However, we want to point out that though using extra information beyond graph structures like node attributes is helpful (in Cora and Citeseer, they provide extra 0-1 binary word occurrences info), it requires specific models to handle them, thus hurts the generalizability, conflicting with our motivation, i.e., a "general-purpose" graph reasoning framework. Note that "generality" is a significant reason for why research works like GraphLLM and our GITA are interested in using LLMs for graph reasoning, where GraphLLM also does not consider the node attributes.
The hurt for the node attributes to generalizability can be reflected in several aspects: First, handling node attributes typically requires designing specific model tailored to their shapes (e.g., vector dimensions, length, matrix size), which hinders a general and consistent solution for handling diverse tasks. Second, the meanings (e.g., word vectors of titles, embeddings of degree or index, one-hot representation of node classes) of node attributes vary in data, thus the model can overfit in a specific task and hurt the zero-shot abilities. Finally, these diverse models to handle the node attributes also increase the complexity of the model.
As the major concern of GITA is how well vision+LLM performs on general graph scenarios, where node attributes are not necessary provided by default, we only concentrate on pure graph structures and do not incorporate task-specific models for handling extra node attributes.
Moreover, we can provide several potential solutions for combining GITA with node attributes, and leave them for future work.
1. We can include the text node attributes with the explanation of their concrete meanings inside the text prompt. However, such an approach may not perform well for some types of attributes such as occurrence data, because they are too abstract.
2. Similar to many existing works, we can use a specific module to encode the attributes and another fusion module to combine them with the backbone model (i.e., MLLMs used in GITA). However, such an approach needs extra designs for these additional modules and needs task-specific adjustments.
In the following table, We also provide an additional comparison between GITA and dedicated GNNs under non-attribute setting on Cora and Citeseer, where non-attribute setting is commonly used to evaluate how well models can understand graph structure. The experimental results show that GITA is much more effective in non-attributed Cora and Citeseer than dedicated GNNs, showcasing a more powerful awareness of their pure graph structure.
| Model | Cora | Citeseer |
|-------|-------|----------|
| GITA | 85.24 | 75.07 |
| GCN | 73.35 | 68.71 |
| SAGE | 69.19 | 64.69 |
---
Reply to Comment 1.1.2:
Comment: > **Q2**. For W3, the average performance of GraphLLM and GITA is similar. I am conservative about the contribution of GITA over existing methods.
We would like to highlight that the contributions of GITA are unique compared with existing methods such as GraphLLM.
**1. Pioneering Use of Vision in Language-Based Graph Reasoning**: GITA is the first to explore the effectiveness of vision in language-based graph reasoning. We believe that this vision benefit (our contribution) is not covered by existing methods like GraphLLM. As illustrated, our experiments show that GITA and GraphLLM excel in different types of graph understanding tasks. Therefore, the methods proposed by us (vision modality) and by them (graph modality) are expected to be combined and complement each other in our future work.
**2. Superior Performance**: Though GraphLLM and GITA-7B have similar average performance, when apply our proposed vision augmentation, GITA-VO(AUGLY) can be more powerful than GraphLLM in overall performance (i.e., 63.36\% in Table 2 vs. 60.36\%). Besides, zero-shot GITA can also achieve comparable average performance (59.73\% in rebuttal supplement Table 3) with GraphLLM, where the former is training-free.
------------------------------
> **Q3.** For the new experiments during rebuttal, for table 5-7, the comparison with GNNs is an important baseline and should be displayed in the main text.
According to your suggestion, we will add them accordingly in the main text of the revision.
> **Q4.** I am skeptical about the necessity of using LLM with visual information to address these tasks given simple GNNs have already perform good with great efficiency.
GITA and GNNs highlight the distinction solutions between general-purpose solutions and specialized solutions. As discussed in our motivation and in our response to W1 of Reviewer oE8d, GNNs are not sufficiently flexible, general, or user-friendly for addressing general graph reasoning tasks, with the following points:
**1. Ease of Adaptation (Flexibility)**: GNNs often need to be combined with other models to meet specific task requirements. For instance, sequence output tasks (such as SP, HP, TS) require integration with LSTM, transformers, etc. In contrast, GITA can achieve this with a unified model architecture. Besides, adapting GNNs to specific tasks requires modifications within the model structure, necessitating a background in deep learning and coding skills. GITA, on the other hand, only requires language skills that all humans possess to fit task variations.
**2. Zero-Shot Capability (Generalizability)**: GNNs lack zero-shot capabilities, whereas GITA has promising zero-shot capabilities (as demonstrated in rebuttal Table 3).
**3. Human-Readable Operations and serve as agent (User-friendliness)**: GNNs operate on unreadable vectors, while GITA operates on human-understandable language and images. Therefore, though it is not as efficient as GNNs, it can serve as an agent for answering graph questions with natural language in acceptable time, whereas GNNs show inabilities because they can not handle language.
Therefore, GITA is a more flexible, general and user-friendly framework for general-purpose graph reasoning than GNNs.
------------------------------------
**To sum up**, this paper is the first work to conduct general-purpose vision-language graph reasoning, and has illustrated that vision could bring overall advances on LLM-based graph reasoning. As an initial direction, it is not as mature as GNNs with years explored. However, it has shown comparable overall performance and special superiorities (flexibility, generalizability, user-friendliness, and promising zero-shot capabilities), demonstrating it is a promising direction to explore as a general solution for graph reasoning. | Summary: This paper introduces Graph to Visual and Textual Integration (GITA), a novel framework that enhances graph reasoning by integrating visual representations with traditional text-based processing. GITA's innovation lies in using both visual and textual inputs to address graph-structured data, a significant deviation from the typical text-only approaches in Large Language Models (LLMs). Key contributions include the development of GITA, the creation of the Graph-based Vision-Language Question Answering (GVLQA) dataset—the first for visual-textual graph reasoning—and extensive evaluations showing GITA's superior performance over existing LLMs across various datasets, highlighting the advantage of incorporating visual information into graph reasoning tasks.
Strengths: 1. The paper introduces the novel idea of integrating visual graphs with textual descriptions for enhancing graph reasoning tasks.
2. The creation of the GVLQA dataset is a significant contribution, as it is the first vision-language dataset designed specifically for general graph reasoning.
Weaknesses: 1. While I think it is an interesting idea to introduce vision into graph reasoning tasks, I am skeptical about its value. This is because vision is a perceptual ability, which is a fast intelligence (i.e., system one). While graph reasoning is a task that requires rigorous inference modeling and step-by-step execution to get the final precise answer, e.g., the graph reasoning tasks covered in this paper have their corresponding graph algorithms to get the precise answer. In my opinion, compared with the visual ability, the ability to rigorously execute traditional graph data structures and algorithms is the key for LLM/MLLM to have the ability to solve graph reasoning tasks. The experiments in Table 1 also show that, except for Connect and Cycle, which can be answered quickly and visually (provided the graph layout is concise and clear), the other graph reasoning tasks, GITA (VO) do not perform well.
2. I think this paper is like a data track paper, i.e., proposing novel datasets, testing the capabilities of existing LLM/MLLM, and finally proposing viable solutions for testing. Instead, this paper is counterproductive in wrapping it up as a methodological framework paper. The biggest problem is that the graph visualizer, the graph describer, and the questioner are both part of the methodology and the means of constructing the dataset, which is very confusing. I don't think they fit as part of the methodology because they are just tools for engineering the dataset, and there is no innovation at the level of ideas, modeling, or anything else.
3. Table 1 compares only text-based LLMs and lacks a comparison with visually aware MLLMs such as GPT-4V/GPT-4O/LLaVa/Gemini.
Technical Quality: 3
Clarity: 2
Questions for Authors: please refer to the weaknesses.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We acknowledge and appreciate your insightful review. Below, you can find our responses addressing your concerns point by point. If you have any additional questions or require further clarification, please feel free to let us know.
**Note: Some tables mentioned in this rebuttal are contained in our rebuttal supplement PDF.**
> **W1**: While I think it is an interesting idea to introduce vision into graph reasoning tasks, I am skeptical about its value. This is because vision is a perceptual ability, which is a fast intelligence (i.e., system one). While graph reasoning is a task that requires rigorous inference modeling and step-by-step execution to get the final precise answer, e.g., the graph reasoning tasks covered in this paper have their corresponding graph algorithms to get the precise answer. In my opinion, compared with visual ability, the ability to rigorously execute traditional graph data structures and algorithms is the key for LLM/MLLM to have the ability to solve graph reasoning tasks. The experiments in Table 1 also show that, except for Connect and Cycle, which can be answered quickly and visually (provided the graph layout is concise and clear), the other graph reasoning tasks, GITA (VO) do not perform well.
We agree with you that only the visual modality may be not so good for graph reasoning tasks. Please note that our work is to show that **integrating the visual modality with the textual modality** could achieve better performance than a single modality, as evidenced by our submitted paper Tables 1 and 3.
While "VO" demonstrates limited performance in Table 1 for most tasks, we have **successfully enhanced its capabilities** through **layout augmentation** in GITA, as evidenced in Table 2.
The visual modality could **complement** the textual modality in graph reasoning tasks. Here we provide **a case study**. Typically, the visual modality is more excellent in recognizing beneficial substructures/local patterns compared to the textual modality, and some of them are crucial in graph reasoning. For instance, "hop number" serves as a heuristic in shortest path calculations, "leaf nodes" are critical in topological sorting, and "cycles" must be prevented in Hamiltonian path construction. We extracted these substructures inside the GVLQA-Base and manually labeled them. Employing the frozen ViT in LLaVA with a trainable MLP decoder, we achieved identification accuracies of 89.92\%, 95.16\%, and 92.39\%, respectively for hop number counting, leaf nodes identification, and cycle identification. In contrast, using a pre-trained Bert with the same trainable MLP decoder, the accuracies are significantly lower (i.e., 55.47\%, 26.33\%, and 60.32\%). Therefore, the effectiveness of the integration of the visual and textual modalities may be due to that the visual modality provides extra **beneficial structural information** such as those substructures.
Besides, The benefits of vision do not conflict with the benefits that come from the ability to step-by-step reasoning (e.g. Chain-of-thought, CoT). **Table 1 in our rebuttal supplement PDF file** shows the model benefits from them both simultaneously.
> **W2**: I think this paper is like a data track paper, i.e., proposing novel datasets, testing the capabilities of existing LLM/MLLM, and finally proposing viable solutions for testing. Instead, this paper is counterproductive in wrapping it up as a methodological framework paper. The biggest problem is that the graph visualizer, the graph describer, and the questioner are both part of the methodology and the means of constructing the dataset, which is very confusing. I don't think they fit as part of the methodology because they are just tools for engineering the dataset, and there is no innovation at the level of ideas, modeling, or anything else.
We would like to point out
1. GITA is the **first** to enable the use of MLLMs in graph reasoning. This is a significant advance because nearly all existing graph reasoning works do not utilize the visual modality, which could help improve the performance.
2. GITA is general for **almost all existing graph reasoning data** because it only requires graph information in bare structures $G=\{V, E\}$.
3. GITA **solves technical problems** including: how to use MLLM for graph reasoning based on bare graph structure, how to maintain vision graph usability and keep context length in large-scale graphs, how to trade-off the consistency and variability of vision graphs, special augmentations for vision graphs and their impacts, etc.
4. GVLQA is just a **by-product** of our core contribution, GITA.
Therefore, the methodologies inside GITA are not limited to any individual benchmark or scenario but are significant and valuable to graph reasoning. As a result, we think our work fits the main track instead of the dataset track.
> **W3**. Table 1 compares only text-based LLMs and lacks a comparison with visually aware MLLMs such as GPT-4V/GPT-4O/LLaVa/Gemini.
We would like to point out in our experiments, LLaVa has been utilized as the VLM Reasoner for both the GITA and GITA (VO) configurations. As one of the components of GITA, LLaVa receives the visual and textual inputs from the other components of GITA (though we store them as GVLQA) and the results are shown in the Table as GITA and GITA (VO) terms. Therefore, LLaVa is not necessary to be compared again as an individual baseline because it is **a special case of GITA**. Similarly, GPT-4V serves as the VLM Reasoner for the GITA-ZS and GITA-ZS (VO) configurations in Table 1 and is not considered a baseline.
According to your suggestion, we utilized **GPT-4o** (not released since the submission of our work) and **Gemini**, as the MLLM reasoner and recorded the results in **Table 2 in our rebuttal supplement PDF file** , which shows that the **benefits** of incorporating vision with GITA are **consistent across** various VLM reasoner choices.
---
Rebuttal Comment 1.1:
Comment: My concerns are addressed and I hope that the final version of the paper will be updated. | Rebuttal 1:
Rebuttal: Dear Reviewers,
Thank you for your valuable feedback and thoughtful comments on our manuscript. We have carefully reviewed each of your concerns and have addressed them individually in the responses below. We believe that these revisions and clarifications have strengthened our manuscript, and we hope that our responses meet your satisfaction.
Additionally, due to space constraints, we have included some of the larger tables in an attached **PDF** document for your convenience.
Please find our detailed replies to your specific points below.
Best regards,
Authors
Pdf: /pdf/7d618e928acee92463cce2a77fad59aab55652ef.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Bayesian Strategic Classification | Accept (poster) | Summary: The paper studies partial information release in strategic classification. Roughly speaking, the learner publishes a set of classifiers containing the actual one being used, and the agents update their beliefs accordingly and best respond. The authors show that the agents' problem of best responding is generally hard, and give oracle-efficient (approximate) algorithms in certain cases. They then consider the essentially one-dimensional case where quality is measured by a single number, and show (1) the problem of releasing the optimal set is generally hard, and (2) an efficient algorithm for uniform priors.
Strengths: Conceptually, the problem of "ambiguous" classifiers is natural and interesting. The paper makes an attempt at modeling the problem and presents a variety of results, which might trigger further progress. The paper (especially the introductory parts) is also quite well written and polished.
Weaknesses: My two major concerns are regarding the model and the significance of section 4. I feel certain parts of the model can be better justified. I also feel that in some sense section 4 is trying to solve a problem that might not exist in the first place. See detailed comments below.
Technical Quality: 4
Clarity: 3
Questions for Authors: The model: while the model in the paper makes sense, I'm also curious about other natural ways to model the problem, e.g., the learner could commit to a (possibly restricted) distribution over classifiers and hide the realization from the agent(s). Is there anything interesting we can say about such alternative models? Is there a reason to focus on the particular model in this paper? In particular, I guess one potential criticism of the current model is that, given enough time, agents will eventually figure out the actual classifier being used (e.g., in the same way that they come up with the prior distribution).
Line 168, "truthfully": I'm a bit unsure about the wording here. In particular, the posterior of the agents is generally misleading. More generally, there seems to be a mismatch between what the agents know about the learner and how they behave -- the agents have no reason to believe that the classifier being used is distributed according to the posterior. I wonder what the authors think about this.
Def 2.2: this will probably become clear soon, but is h part of the learner's action? It sounds like it's not, but then why do you say this is a generalization of the full information release game?
Line 314, h \ge f: notation abused here.
Sec 4: in this setup, why shouldn't the learner simply publish the exact optimal classifier (h = f + 1, which is perfectly accurate after manipulation)? I guess this is also where fixing h as an input parameter makes less sense.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: I don't have concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for these questions, we will now respond in detail.
**The model (while the model in the paper makes sense, I'm also curious about other natural ways to model the problem, e.g., the learner could commit to a (possibly restricted) distribution over classifiers and hide the realization from the agent(s). Is there anything interesting we can say about such alternative models? Is there a reason to focus on the particular model in this paper? In particular, I guess one potential criticism of the current model is that, given enough time, agents will eventually figure out the actual classifier being used (e.g., in the same way that they come up with the prior distribution).):**
While working with randomized classifiers is an interesting direction for future work, it is still susceptible to the same criticism of the reviewer: over time the agents will be able to learn what distribution the learner is using and best respond according to that. Also, in practice, it may be undesirable for a learner to constantly change their deployed classifier, both because it is costly but also because it may be perceived as unreliable or unfair.
We believe our model is a natural and simple first attempt towards understanding incomplete information in strategic classification. Having said that, we completely agree with the reviewer that the use of randomized models is an interesting direction for future work, as we have mentioned it in the limitations section: “... However, we note that changing the screening algorithm requires significant resources, and the rate at which the classifier is updated is generally slower than the rate at which decisions are made. In practice, this means that strategic agents effectively face a fixed model in each “batch” between updates.”
**Line 168, “truthfully” (I'm a bit unsure about the wording here. In particular, the posterior of the agents is generally misleading. More generally, there seems to be a mismatch between what the agents know about the learner and how they behave -- the agents have no reason to believe that the classifier being used is distributed according to the posterior. I wonder what the authors think about this.):**
We call it truthful because we require that the learner includes its deployed classifier $h$ in the shortlist $H$. In other words, we don’t allow the learner to `lie’ to the agents. Moreover, after releasing $H$, regardless of what the common prior distribution $\pi$ is, the probability of $h$ in the posterior $\pi \vert_H$ can only increase.
We note that the behavior of our agents follow a traditional Bayes update, the natural model used for updating informational beliefs in the vast majority of related game theory and in particular Bayesian Persuasion [3] literature. Truthfulness and believing that the learner is truthful are also the standard assumption in settings involving information revelation [3], with the motivation that these games are played in a repeated context where lying is observable and punishable in the long-term.
**Def 2.2 (this will probably become clear soon, but is h part of the learner's action? It sounds like it's not, but then why do you say this is a generalization of the full information release game?):**
The reviewer is correct that if $h$ is not the strategic optimal classifier (i.e., one that maximizes the strategic accuracy in the full information release game), this will not be a generalization of the full information release game. We will clarify it in the final version of the paper.
We assume that $h$ is fixed and the learner is not willing to change it due to cost issues: as mentioned in the paper, “... However, we note that changing the screening algorithm requires significant resources, and the rate at which the classifier is updated is generally slower than the rate at which decisions are made”.
We further note that in the case where the fixed $h$ is the strategic optimal classifier in the full information release game (i.e. “standard” strategic classification), our game essentially becomes a generalization of the full information release game because the learner can always choose $H=\{h\}$.
**Line 314, $h \ge f$: notation abused here.** Remark 4.2 explains the notational abuse that we adopt for simplicity. We simply view $f$ (or $h$) both as a function and their corresponding real-valued threshold, so $h \ge f$ simply means that the threshold for $h$ is larger than or equal to the threshold for $f$.
**Sec 4: why not publish $h = f + 1$ (in this setup, why shouldn't the learner simply publish the exact optimal classifier ($h = f + 1$, which is perfectly accurate after manipulation)? I guess this is also where fixing h as an input parameter makes less sense.):**
Releasing $h=f+1$ is not necessarily an optimal or even a feasible solution because in general the space of thresholds is bounded (which is natural because test scores are normally bounded, and so must be the thresholds we work with on those test scores). Observe that this is shown in Example 2.4.
[3] Kamenica, Emir, and Matthew Gentzkow. "Bayesian persuasion." American Economic Review 101, no. 6 (2011): 2590-2615.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response! It answers most of my questions (which are somewhat open-ended in the first place). I agree that the model is reasonable given that the paper is an early attempt to study the phenomenon. Accordingly, I will increase my score to 5. | Summary: The paper studies strategic classification problems where agents with partial information about the classifier can strategically report themselves at a cost. The problem is modeled as a Stackelberg game: The principal can first reveal partial information of the classifier, then the agents choose their strategy to report. The goal of the principal is to maximize the accuracy of the classification as if they know the true value of all the agents. They show that this partial information release from the principal, compared to releasing no information or all information, may improve the accuracy. The theoretical results include NP-hardness and efficient algorithms.
Strengths: + The problem of strategic classifying is well-motivated. The characterization of the Bayesian setting and partial information release is very reasonable.
+ The Stackelberg model of the problem is reasonable.
+ The technical results are non-trivial.
Weaknesses: - All the positive results seem to be under strong constraints. This may imply that the problem itself is very difficult.
- I feel that the title does not reflect the main idea of this paper. What I get from the introduction is that the key point is "partial information release" rather than "Bayesian", but partial information release is not mentioned at all in the title. I would prefer something like "Partial Information Release in Bayesian Strategic Classification".
- It's kind of unclear what there is an oracle in the agent's best response problem and what does the oracle do.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could you describe more clearly and intuitively what the oracle does? Also why it requires an input as $R^+ \cap R^-$?
2. Why do you take an oracle into consideration in the agent best-response problem?
3. Remark 4.3: I don't quite get this part. Why the learner what to make the classification "harder"? Can you propose an example?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for this review, we will now respond in detail.
**Positive results seem to be under strong constraints. This may imply that the problem itself is very difficult:**
A challenging aspect of work in strategic classification theory, along with other fields in ML is that with minimal assumptions, many problems are intractable.
For instance, we show that the learner’s optimal information release problem is NP-hard when the agents’ prior can be arbitrary, and a similar result is exhibited in the first strategic classification paper [Hardt et al. 2016].
That said, despite such hardness barriers, adding some assumptions to relax the problem is important so we can investigate specific challenges, craft initial algorithms that then can be used as a foundation for later empirically performant algorithms.
**Role of the oracle:**
Description of the oracle: the oracle solves the following “projection” problem: given an agent $x$ with cost function $c$, a collection of classifiers $\{h_1, \ldots, h_n\}$, and a binary (0/1) vector $b=(b_1, \ldots, b_n)$, the oracle returns a point $z$ that minimizes the cost $c(x,z)$ for the agent such that for all $i$, $h_i (z) = b_i$. Instead of using the binary vector $b$, in our paper we adopt $R^+$ (classifiers whose corresponding $b_i$ is $1$) and $R^-$ (classifiers whose corresponding $b_i$ is $0$). Note that when $n=1$, this is simply projecting the point $x$ onto either $\{h_1(z) = 1\}$ or $\{h_1(z) = 0\}$, according to the cost function $c$. When $n>1$, this is the projection of $x$, according to the cost function $c$, onto one of the $2^n$ regions created by the $n$ classifiers (each classifier has a positive and a negative region) which we denote by $R^+ \cap R^-$. Informally, the oracle abstracts out the task of finding the closest point, w.r.t. a given cost function, to a given point (an agent), that passes only the classifiers in $R^+$ without passing the classifiers in $R^-$ (or equivalently a point in $R^+ \cap R^-$ because we require to pass exactly ones corresponding to $R^+$ and nothing more)..
**Why we use an oracle:** we need an oracle model because solving the best response problem in Equation 1 may span a large class of potential deployed classifiers, prior distribution $\pi$, and cost function $c$, and specialized optimization techniques are required for each problem class. In our setting, we want to focus on the details of information release so by assuming the oracle model we can isolate the impact of the information release, while abstracting away the agent’s ability to solve the best response problem which is tangential to our point. This allows our assumptions to model a range of agents using different optimization methods to solve Equation 1; further, we can plug in a large class of optimization methods in place of the oracle as needed for specific applications.
**Re Question about Remark 4.3:**
By “harder”, we mean that the classifier $h$ is more stringent and harder to pass, in that it uses a higher threshold than the true classifier $f$. The reason we need to do so is because the agents’ ability to manipulate their features allows them to obtain a *higher* perceived score than their true score. The learner needs to make the classifier more difficult to pass to compensate for this strategic behavior and inflated scores. This is relatively standard in strategic classification, where the classifier is “pushed to the right” to be robust to manipulation.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response! This answers my questions about the oracle and Remark 4.3. I have an additional question about the oracle. How should I expect the complexity of such an oracle in practical applications? That is if a strategic agent cannot optimize even on class efficiently, then calculating the overall best response would be even harder.
---
Reply to Comment 1.1.1:
Title: Response to Question
Comment: Thank you for your continued engagement in the discussion!
We note that in the case where the cost function $c(x,.)$ is convex in its second argument and the agent $x$ is facing linear threshold classifiers, the optimization problem of the oracle becomes a convex program (finding the closest point to intersection of halfspaces w.r.t. a convex cost $c$) and can be solved with convex optimization techniques.
In other cases, however, this problem is non-convex and can be hard in general. Having said that, the literature on non-convex optimization includes a plethora of practical optimization strategies that can handle non-convex problems. We emphasize that our major contribution is introducing and analyzing partial information release in strategic classification, and therefore, assuming such an oracle allows us to abstract away the challenges faced in non-convex optimization which is not the focus of our paper. | Summary: * This paper investigates strategic classification in a partial-information setting using a Bayesian common-prior framework.
* The setting extends standard strategic classification (Hardt et al. 2016) to a partial-information setting by assuming that the deployed classifier $h\\in\\mathcal{H}$ is not fully known to the agents, but rather assumed to be sampled from a common prior $\\pi$ over the hypothesis class. The learner has the ability to influence agent decisions by disclosing a subset of possible classifiers $H\\in\\mathcal{H}$ such that $h\\in H$. The agent moves their feature vector to maximize their expected utility (expected prediction according to posterior, minus cost). The goal of the learner is to deploy a classifier which maximizes accuracy under the agent’s best response.
* In Section 3, the authors consider the agents’ best-response problem. The first result shows that computing the agent’s best response requires (in the general case) exponentially-many calls to a projection oracle (where the hypothesis class is assumed to be finite and complexity is measured in terms of $n=|\\mathcal{H}|$). The second result shows an $O(n^d)$ upper bound for computing the best response when $\\mathcal{H}$ contains $n$ linear classifiers over $\\mathbb{R}^d$. In Section 4.1, an efficient algorithm is presented for optimizing the disclosure set of hypotheses $H$ over one-dimensional realizable threshold classifiers. Finally, Section 4.2 shows that no disclosure is optimal when optimizing false-positive rate, and that full disclosure is optimal when optimizing false-negative rate.
Strengths: * Paper is very well-written, and easy to follow.
* Model is clean and concise, and captures an interesting aspect of information disclosure.
* Theoretical analysis presents both lower and upper bounds, helping establish a hardness spectrum that can motivate future work.
Weaknesses: * The interaction model presented in the paper makes strong assumptions which may not be applicable in practical scenarios. In particular:
* Assuming that the true relation between features and labels is known to the learner. In particular, the paper does not seem to discuss implications of learning from samples.
* Assuming that the agents’ prior distribution $\\pi$ is known to the learner.
* Assuming that $\\pi$ has finite support, and implying that all classifiers in the support can be enumerated in reasonable time. Common hypothesis classes (such as linear models and neural networks) are defined by a continuous parameter space, and have infinite cardinality.
* Interaction model seems to be vulnerable to learner dishonesty.
* The paper does not contain empirical evaluation, and it's not clear whether the proposed scheme is feasible in practical scenarios.
* Positive results on the learner’s optimal information disclosure are limited to one-dimensional classifiers.
Technical Quality: 3
Clarity: 3
Questions for Authors: * Common prior in practical settings:
* What could be reasonable assumptions for the functional form of the common prior $\\pi$ in practical settings?
* How can $\\pi$ be estimated from data?
* In which plausible practical scenarios does the size of $\pi$'s support induce a practically-significant discrepancy between efficient and inefficient algorithms?
* Dishonest information disclosure:
* How much power can the learner gain from acting in a non-truthful way?
* Can truthful information sharing (by the learner) can be enforced or incentivized?
* What are the consequences of the learner not knowing the feature-label distribution $D$, and being required to estimate it from data?
* Can the analysis framework be used to estimate the typical cardinality/"complexity" of the optimal hypothesis sets $H$ reported by the learner? Is there intuition for practical scenarios where a non-trivial $H$ is expected to be "simple to describe" or particularly complex in some sense?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are discussed in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for these detailed comments which we shall now do our best to answer!
Part 1
**Interaction model, true relation between features and labels is known to the learner:**
This is an assumption made in prior work on strategic classification (e.g. see [2,4]). That being said, extending our results to a setting in which the learner only has access to labeled examples is an interesting future work.
**Agents’ prior distribution is known to the learner:**
This is a common assumption in Bayesian Persuasion and in game-theoretic Bayesian settings, e.g., [1].
From a motivational point of view, for example, employers also have access to job interview questions dataset such as Glassdoor when hiring, and may have a good understanding of the information available to the agents. This prior can also encode agents having a very simple prior showing understanding top few features that are commonly known, which is reasonable in settings such as credit scoring (where everyone knows most agents understand that paying on time has a positive impact, for example). There are also natural cases where we can assume that the prior is uniform, corresponding to the agents having no prior information revelation in other Bayesian settings (e.g., active learning [5], the last section on pg. 2.).
**Assuming that $\pi$ has finite support:**
We will include this extension in the final version of the paper: note that one can often discretize the space of possible classifiers to reduce to a finite setting. We do note that only our results in Section 3 rely on finiteness, but small cardinality is not required, so discretization is sufficient for our purposes. We have complemented our results in Section 4 which primarily focuses on discrete uniform priors by considering the case of continuous uniform priors (which have infinitely many elements in their support); see Section D.2 in the Appendix for details.
Regarding the discretization approach for Section 3, note that if $\pi$ has infinite support size, we can ignore classifiers with sufficiently small probabilities (i.e., $poly(\epsilon)$), as they do not affect the manipulation strategy when searching for a $(1+\epsilon)$-approximate solution. The number of classifiers in the support with probability at least $poly(\epsilon)$ for a fixed $\epsilon>0$ is at most $1/poly(\epsilon)$ which is a finite number. Therefore, to obtain a nearly optimal solution, it suffices to only consider probability distributions $\pi$ with finite support size.
Also, observe that finite $\pi$ captures broad phenomena already, like job interview questions on sites like Glassdoor/Leetcode.
**Common prior in practical setting (1. What could be reasonable assumptions for the functional form of the common prior $\pi$ in practical settings? 2. How can $\pi$ be estimated from data? 3. In which plausible practical scenarios does the size of $\pi$'s support induce a practically-significant discrepancy between efficient and inefficient algorithms?):**
1. In Section 4.1, we think of the prior as uniform over a set; this can be seen as initially having no information about the classifier, thinking all classifiers are equally likely on the support (the support itself could encode information about what reasonable classifiers are). This can be easily estimated from data. If there is any prior knowledge (e.g., 0.1 fraction of technical interview questions involve dynamic programming questions), we assume the prior incorporates this knowledge.
2. Estimating $\pi$ from data is a good question. In some of the more specific settings that exist in practice, like humans adapting to high stakes test/interview questions, humans do maintain tables of counts over the discrete range of possible questions, and then create the prior by dividing the count for each question by the normalization term, e.g. summing over all counts. E.g. when people take a test they then share this information with their peers who also have access to the database.
3. It is an interesting question thinking about the impact of $\pi$ on efficient/inefficient algorithms in practice. This question will require further work.
**Dishonest information disclosure How much power can the learner gain from acting in a non-truthful way?**
We restrict to truthful information disclosure as in Bayesian Persuasion [3]. Relaxing this requirement obviously extends the action space of the learner and hence can only increase the learner’s utility. Here is an example in which lying to the agents can achieve optimal utility for the learner: consider a case in which $X = [0,1]$, $D$ is uniform over $X$, and $f(x)=1[x \geq 0.5]$ is the ground truth and $h=f$ is the deployed classifier. If the learner lies to the agents and tells them that the deployed classifier is $h’(x)=1$ (i.e. everyone is qualified), regardless of what the prior distribution of the agents is, no one will manipulate. The learner can then deploy $f$ for optimal accuracy.
---
Rebuttal 2:
Title: Part 2 of Rebuttal
Comment: Part 2:
**Can truthful information sharing (by the learner) can be enforced or incentivized?:**
Here, we highlight the typical argument in Bayesian Persuasion: if the learner is not truthful, over time, the learner will lose credibility and will not be trusted by the agents anymore. As a result the agents will simply ignore the information coming from the subset release and will base their response according to the prior, which could then become sub-optimal for the learner. We also emphasize that in the applications that we consider in our paper (such as hiring, loans, etc.), while the learner does not need to reveal their model fully, the learner could face legal and ethical challenges if they choose to misrepresent the model for their own favor.
**Estimating the feature-label distribution from samples (instead of knowing the distribution) (What are the consequences of the learner not knowing the feature-label distribution $D$, and being required to estimate it from data?):**
Given a data set sampled $i.i.d.$ from the distribution, the learner can find the optimal subset $H$ with respect to the data set and then appeal to standard generalization guarantees to argue that the same subset is (approximately) optimal with respect to the distribution, provided that the sample size is large enough. We note that while this gives only a sketch of the extension from knowing $D$ to having only a sample from $D$, the exact characterization of the sample complexity requires careful analysis and considerations.
**What is the complexity of H, i.e., cardinality of H? (Can the analysis framework be used to estimate the typical cardinality/"complexity" of the optimal hypothesis sets $H$ reported by the learner? Is there intuition for practical scenarios where a non-trivial $H$ is expected to be "simple to describe" or particularly complex in some sense?):**
Our analysis and examples show that the optimal partial information $H$ could in fact be any subset of the hypothesis class. It is not clear to us whether the size of the optimal $H$ can be estimated. Example 2.1 in the paper shows a set of examples in which a subset $H$ is “simple to describe” (like releasing the “relevant features” of the deployed classifier), but we note that such examples for partial information release are not necessarily optimal for the learner.
[1] Kremer, Ilan, Yishay Mansour, and Motty Perry. “Implementing the ‘Wisdom of the Crowd.’” Journal of Political Economy 122, no. 5 (2014): 988–1012. https://doi.org/10.1086/676597.
[2] Yahav Bechavod, Chara Podimata, Steven Wu, Juba Ziani. “Information Discrepancy in Strategic Learning”. Proceedings of the 39th International Conference on Machine Learning, PMLR 162:1691-1715, 2022.
[3] Kamenica, Emir, and Matthew Gentzkow. "Bayesian persuasion." American Economic Review 101, no. 6 (2011): 2590-2615.
[4] Mark Braverman and Sumegha Garg. 2020. The Role of Randomness and Noise in Strategic Classification. In Foundations of Responsible Computing (FORC) (LIPIcs, Vol. 156).
[5] Dasgupta, S. (2004). Analysis of a greedy active learning strategy. Advances in neural information processing systems, 17.
---
Rebuttal Comment 2.1:
Comment: Thank you for the detailed response! The answers are very helpful, and I maintain my current rating. | Summary: The paper introduces a novel framework for strategic classification using a Bayesian setting for agents' beliefs about the classifiers. This framework departs from the traditional assumption that agents have complete knowledge of the deployed classifier and instead assumes that agents have a prior distribution over the possible classifiers. The main components of the model are a population of agents and a learner. Each agent is represented by a pair $(x, y)$, where $x \in X$ is a feature vector and $y \in \{0, 1\}$ is a binary label. An agent with $y = 0$ is called a “negative,” and an agent with $y = 1$ is called a “positive.” There exists a mapping $f: X \to \{0, 1\}$ that governs the relationship between $x$ and $y$, meaning $y = f(x)$ for every agent $(x, y)$.
The model includes the following key components:
1. **Agent Manipulations**: Each agent can manipulate their features, paying a cost function $c: X \times X \to [0, \infty)$, incurred when changing their features from $x$ to $x'$.
2. **Partial Knowledge of Agents**: Agents have a common prior distribution $\pi$ over the hypothesis class $H \subseteq \{0, 1\}^X$, representing their belief about the deployed classifier $h$. Formally, for every $h' \in H$, $\pi(h')$ is the probability that the learner is deploying $h'$ from the agents’ perspective.
3. **Partial Information Release by the Learner**: The learner can release partial information about the classifier to influence agents' beliefs. This is modeled by releasing a subset $H' \subseteq H$ such that $h \in H'$.
4. **Strategic Game with Partial Information Release**: After the partial information is released, each agent computes their posterior belief and then moves to a new point that maximizes their utility.
The paper provides the following theoretical contributions:
1. **Oracle-Efficient Algorithms**: For low-dimensional linear classifiers, the authors present an oracle-efficient algorithm for computing the best response of agents. When $X = \mathbb{R}^d$ and $H$ contains only linear classifiers of the form $h(x) = 1[w^\top x + b \geq 0]$, the best response of the agents can be computed with $O(n^d)$ oracle calls, where $n$ is the number of linear classifiers and $d \ll n$.
2. **Theoretical Analysis of Learner’s Optimization**: The learner’s optimization problem is analyzed to maximize expected classification accuracy under the agents' strategic manipulations. The analysis considers various information release strategies and their impact on agents’ beliefs and manipulations.
3. **Examples and Scenarios**: The paper provides practical examples, such as job seekers using platforms like Glassdoor to form beliefs about hiring algorithms, which ground the theoretical concepts in real-world scenarios.
#### Mathematical Formulation
1. **Agents and Learner**:
- Agents: $(x, y)$ where $x \in X$, $y \in \{0, 1\}$.
- Mapping: $f: X \to \{0, 1\}$ such that $y = f(x)$.
- Hypothesis class: $H \subseteq \{0, 1\}^X$, with the learner using a fixed classifier $h \in H$.
2. **Cost Function**:
$$
c: X \times X \to [0, \infty) \quad \text{where} \quad c(x, x') \text{ denotes the cost of changing } x \text{ to } x'.
$$
3. **Prior Distribution**:
$$
\pi: H \to [0, 1] .
$$
5. **Oracle-Efficient Algorithm**:
- For linear classifiers $h(x) = 1[w^\top x + b \geq 0]$, with $d$ much smaller than $n$ , the algorithm partitions $X$ given by $n$ linear classifiers into $O(n^d)$ elements and computes the best response within each partition element.
- The best response algorithm runs in $O(n^{d+1})$ time, making $O(n^d)$ oracle calls.
In summary, this paper presents a comprehensive framework for Bayesian strategic classification, providing both theoretical insights and practical algorithms. The introduction of a probabilistic aspect to agents’ beliefs and the exploration of partial information release by the learner are key contributions that enhance the understanding and modeling of strategic behavior in classification settings.
Strengths: - Bayesian modeling: shifting to a Bayesian framework where agents have a distributional prior on the classifier is a novel and realistic approach, providing a more flexible model than the standard full knowledge assumption.
- Principled algorithms: the paper introduces oracle-efficient algorithms for low-dimensional linear classifiers and sub-modular cost functions, which are valuable contributions to strategic classification.
- Mathematical and algorithmic analysis: the comprehensive analysis of the learner’s optimization problem, including conditions for the optimality of partial information release, adds depth to the study.
- Real-world examples: practical examples, such as job seekers using Glassdoor, help ground the theoretical concepts in real-world scenarios.
Weaknesses: - Realizability Assumption: The assumption that agents’ priors are realizable (i.e., the classifier deployed by the learner is in the support of the agents' prior) may not hold in many practical situations.
- Fixed Classifier Assumption: The learner’s commitment to a fixed classifier limits the model's applicability in dynamic environments where classifiers are frequently updated.
- Empirical analysis: the paper could be enriched with an empirical study of the methods discussed theoretically.
Technical Quality: 4
Clarity: 4
Questions for Authors: - The focus on maximizing accuracy is essential, but how do the proposed methods perform with other utility functions, such as fairness, robustness, or cost-efficiency?
- How do the proposed algorithms and strategies perform in empirical settings? Are there practical examples or case studies that demonstrate their applicability and effectiveness?
- How can the model be extended to dynamic environments where classifiers are updated over time?
- Is there specific conditions where partial information release outperforms full information release? Are there scenarios where partial information could be detrimental?
- How does the Oracle complexity algorithm scale with higher dimensions or larger hypothesis classes when $d \approx n$? Are there potential computational bottlenecks?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors discuss the limitations of the methods adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for suggesting interesting questions that can be natural future directions of this work such as other utility functions and dynamic environments.
- Regarding the performance of partial information release versus full information release, while we have examples where one outperforms the other, the exact characterization of instances in which one outperforms the other is not known. This is indeed an interesting question.
- For any $n$ and $d$, the best response algorithm for linear classifiers (Algorithm 2) has oracle complexity of $\sum_{j=0}^d {n \choose j}$ which is $O(n^d)$ for $d \ll n$ and $O(2^n)$ for $d \approx n$. This is why we need that the dimension is relatively small compared to $n$. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
LFME: A Simple Framework for Learning from Multiple Experts in Domain Generalization | Accept (poster) | Summary: This paper proposes a simple framework called LFME for learning from multiple experts in domain generalization. LFME introduces a logarithmic regularization term to enforce similarity between the target model and expert models, allowing the target model to acquire expertise from all source domains and perform well in any testing domain. Experimental results demonstrate that LFME outperforms existing techniques not only in classification and segmentation tasks but also reveals through in-depth analysis its ability to implicitly use more information for predictions and to discover difficult samples from experts, thereby enhancing generalization capability.
Strengths: This paper proposes a novel idea on the basis of a new knowledge distillation paradigm for DG. The motivation of using multiple experts for improving generalization is clear and reasonable. Meanwhile, I also found the paper well-presented and easy to understand.
After checking the pseudocode and the provided code, I found the implementation extremely simple. Given the impressive results computed with rigorous evaluation protocols, I believe this paper can offer valuable contributions to the literature by providing an accessible and effective solution.
The provided deep analyses that explains why their method works are appreciated and, in my opinion, crucial for a paper, especially in the era of deep learning. Coupled with the empirical evidence in their supplementary material, I think most of claims made in their paper can be well supported.
Weaknesses: Regarding the proposed method, although large improvements over the baseline ERM are observed, their method involves higher training costs. I noticed the authors provide the training time comparisons with different methods in their supplementary material, which shows that their LFME requires 1.5 times more training time than ERM, and it is also one of the most time-consuming methods compared. While I agree with the authors that requiring more training time is inevitable for methods designed based on KD, the authors should at least list this as a major drawback in their limitation section.
As in recent times, Vision Transformers (ViTs) have demonstrated substantial improvements in the classification and semantic segmentation tasks. Although the authors conducted experiments using different ResNet models, they should also consider adopting a ViT backbone for their experiments. This could provide valuable insights into the performance of their method when applied to more advanced architectures, potentially highlighting further benefits or limitations. Including such experiments would enhance the comprehensiveness of the study and align it with current trends in the field.
Technical Quality: 3
Clarity: 3
Questions for Authors: In Section 4.2, the authors claim that their method improves DG by explicitly mining hard samples from the experts. However, Tables 3 and 6 show that specifically mining hard samples from the experts improves performance of ERM only in the TerraInc dataset, but not in the PACS dataset (with an average improvement of only 0.1 pp). Could the authors provide an explanation for this phenomenon?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > W1: List more computational cost as a limitation.
A1: We thank the reviewer for the suggestion, we will include this limitation in our revised paper.
> W2: Experiments with ViTs.
A2: We conducted further experiments by evaluating our method, the baseline ERM, and some leading methods in Tab. 1 with a ViT-Base model using the same setting detailed in the manuscript. Results in Tab. 7 in the general response show that our method can still obtain favorable performance against existing arts despite using different networks.
> Q1: Why hard sample mining can improve performance in TerraInc dataset but not the PACS dataset.
A1: We infer this is mainly because compared to the PACS dataset, TerraInc is more difficult for the corresponding expert, which can provide more hard samples to boost generalization (briefly mentioned in Line 623 in our paper). As shown in Tab. 5 in the manuscript, the experts can achieve an average accuracy of 96.7% when tested with the source data in the PACS dataset, indicating that there are very few samples in PACS that can be regarded as hard for the experts. On the other hand, the in-domain results for the experts in TerraInc are 91.1% (Tab. 1 in the general response), suggesting that the dataset can provide more hard samples to guide the target model than PACS, thus improving the performance. We will include this analysis in our revised paper.
---
Rebuttal 2:
Comment: The author's rebuttal allayed some of my concerns, and I chose to raise my rating, considering that the paper still had some sparkle to it. | Summary: This paper focuses on improving domain generalization by utilizing multiple experts. Particularly, a simple yet effective framework is proposed whereby a target (student) model is learned from multiple expert (teacher) models through logit regularization. After learning, the target model can grasp knowledge from multiple source domains and shows advantages in handling hard samples. The proposed approach demonstrates consistent improvements across different evaluation tasks, outperforming existing SOTAs.
Strengths: This paper overall is well written. It provides in-depth theoretical insights on its proposed logit regularization and surrogate ablation studies.
Weaknesses: 1: The contribution of the proposed approach is marginal. The performance of the proposed approach does not show much improvement compared to ERM or expert models (Table 5).
2: The baseline models for comparison are not clearly introduced, which creates difficulties in understanding the Tables.
3: The comparison to the approaches that directly learn a foundation model using data from all source domains is not discussed. Hence, the benefits of the proposed approach, which have to use a set of expert models plus a target model, compared to the foundation model are not clear.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1: What is the difference between the proposed approach and foundation models? Why not directly learn a model using all the source domains? How does the performance of the proposed approach compare to Segment Everything?
2: What’s the difference between the proposed approach and the baseline ERM? In lines 77-78, the authors mention ‘only one hyper-parameter’ without further illustration. The difference is clarified only until line 138. Statements near lines 77-78 need to be improved to avoid confusion. More importantly, it can be misleading to say “only one additional parameter upon ERM” since a target model is also involved.
3: How does the value q_*^E relate to the hardness of samples? What causes the sample to be ‘hard’? Low data frequence or image quality? Without these clarified, the provided theoretical insights are not convincing.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: No discussions on limitations and potential negative societal impact of the work are provided.
One limitation is that the performance of the proposed approach can be upper bounded by the performance of the expert models. This limitation again raises the question of why not directly learn from all source domains?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > W1: The performance does not show much improvements in Tab. 5
A1: We want to clarify that the in-domain comparisons in Tab. 5 are only used to verify our claim that the target model has evolved to be an expert in all source domains. In the DG setting, out-of-domain performance is often regarded as the metric for evaluating different methods, and our method is shown to outperform many sophisticated designs. When compared to ERM and the different expert models, our method also shows large improvements (Tab. 4). We note that the out-of-domain performance of our method is regarded as impressive by other reviewers.
> W2: The compared baseline models are not clearly introduced.
A2: The primary baseline models (in both Tab. 1 and 2) are ERM that directly trains the model with data from all source domains. According to the existing benchmark [20], in the DG task, ERM is a strong baseline that can outperform many SOTAs when evaluated with strict evaluation protocols. As for other compared models, we use most of the implementations in the commonly-used DomainBed [20] for evaluation. It would be better to refer to the benchmark for details.
> W3: Comparisons with foundation model that uses data from all source domains is not discussed.
A3: We want to clarify that the referred foundation model is ERM, and we include comparisons with it in all our experiments (i.e. Baseline in Tab. 2 and ERM in all other tables), where our design can improve the performance in almost all evaluated datasets.
> Q1 (part 1): Difference between the proposed approach and foundation models, why not directly learn a model using all the source domains?
A1 (part1): Compared to the foundation model, the target model is trained with an additional logit regularization term $\mathcal{L}_{guid}$ (without it, the target model degrades to the foundation model), which is designed to help the target model to be expert in all source domains. As explained in Sec. 4, we reveal that the logit regularization term can help the target model to use more information and mine hard samples from the experts, which are both beneficial for improving generalization.
> Q1 (part 2): Why not compare with the Segment Anything Model (SAM).
A1 (part2): Our experiment settings in the generalizable semantic segmentation task are the same as previous mainstreams [32, 48, 12], which train the target model on synthetic data and test it in unseen real-world scenes. We do not compare with SAM mainly because (1) SAM and mainstreams use different network structures: SAM uses ViT and CLIP text encoder in their model, which has a total of 636M parameters. Differently, the mainstreams use DeepLabv3+ ResNet50 as the backbone, consisting of 39M parameters, which is only 6 percent the size of SAM; (2) SAM is trained over 1 billion data, while the synthetic data used for training in our setting is only 34,366. We thus do not compare with SAM.
Our work focus the generalization task, which can be applied to many downstream vision tasks, semantic segmentation included. Unlike SAM that aims to solve segmentation with large models and more data, this work can be applied to situations when there is only limited training data from certain domains, and the task is to generalize the model to new domains, for example in particular medical situations. We leave such exploration to potential future works.
> Q2 (part 1): What’s the difference between the proposed approach and the baseline ERM.
A2 (part 1): ERM is the referred foundation model that is trained with all data (by using only $\mathcal{L}_{cla}$ in Eq. (3)). Please refer to our response for Q1 (part 1) for their differences.
> Q2 (part 2): Improper to say “only one additional parameter upon ERM” since a target model is involved.
A2 (part2): We appreciate the suggestion. This sentence is to underline that except for the only one weight parameter, our model does not involve more heuristic hyper-parameter settings compared to ERM. We will clarify it in our revised version to avoid confusion.
> Q3: How does $q_{\ast}^E$ relates to hard samples and what are those hard samples?
A3: As the classification loss is computed as $-\sum y \log q$, smaller $q$ indicates larger loss, which corresponds to a more difficult sample. This is is consistent with the previous work [27]. In Sec. D.4 in the appendix, we find that hard samples from the domain experts contain more domain ambigous data than that from the model fed with all data, explaning why domain experts must be involved. Please also refer to our general response for details.
> L1: No discussions on limitations and potential negative societal impact of the work are provided.
A1: Please note that we discuss the limitation of this work in Sec. A in the appendix. For the potential negative societal impact, this work designs a new idea for the task of domain generalization that holds significant potential for a wide array of real-world applications. It may bear potential societal consequences similar to other machine learning systems. We will include this in our revised paper.
> L2: The performance of the proposed approach can be upper bounded by the performance of the expert models, why not directly learn from all source domains?
A2: Our method outperforms domain experts in both out-of-domain (i.e., Tab. 4) and in-domain (i.e., Tab. 5) settings, indicating that the performance is not bounded by domain experts. This is because the target model uses outputs from all experts and gt as training labels (see Eq. (3)), thus the experts can not bound the performance of the target model. When compared to the referred foundation model that directly trains with all data, our method also shows large improvement (i.e., Tab. 1 and 2). Please refer to our response for Q1 (part 1) for the explanations. | Summary: This paper addresses the problem of domain generalization from the perspective of ''learning multiple experts''. In particular, they propose to train multiple experts specialized in different domains, whose output probabilities provide professional guidance by simply regularizing the logit of the target model. The proposed logit regularization provides effects of enabling the target model to harness more information, and mining hard samples from the experts during training. Experiments on standard DG benchmarks demonstrate the effectiveness of the proposed framework.
Strengths: - The overall manuscript is well-organized and easy to follow.
- The method is simple and effective. Its impressive results in different tasks demonstrate the strong applicability of this idea to potential other tasks.
- The related work section is comprehensive, especially regarding comparisons with Meta-DMoE that also tries to distill knowledge from domain experts.
- The idea of using MSE loss between logit and probability for distillation is new and well-explained, supported by both theoretical and empirical evidence.
- The free lunch idea introduced in sec. 6.1 is interesting and seems to have the potential for more general applications.
Weaknesses: While adopting the idea of knowledge distillation (KD) and the concept of domain experts for improving DG is not new in the literature, it is appreciated that this method can be implemented in such simple form and be explained with extensive insights. Nevertheless, there are still a certain amount of claims in the paper that requires proper explanations or more justification:
- Performance reported in the original Meta-DMoE paper is higher than that in Table 1. For example, Meta-DMoE reports an average of 86.9 in PACS, surpassing the best result listed in the table. The authors should provide an explanation for this discrepancy. Moreover, I noticed that the training time comparisons between these two methods are provided, it would also be beneficial to highlight the test and training resource differences between these two methods in the related work section.
- The authors mention that their method introduces only one hyper-parameter, which is within a certain range. It would be beneficial to include an ablation study to explain why this specific range was chosen.
- It would be beneficial to evaluate this method with more real-world DG problem, such as using dataset from wilds benchmark.
- The method [1] with similar settings should be compared in Table 1. Additionally, some methods, such as MIRO, report improved performance when combined with SWAD. Including these results in Table 1 would provide a more comprehensive comparison.
- Minor issues. Formats of ”i.e.” in line 145, ”Tab.” in line 578, and ”Fig.” in line 581 are inconsistent with their usages in other places. The authors should standardize the format of similar abbreviations to maintain uniformity.
Reference:
[1] Domain Generalization via Rationale Invariance, in ICCV 2023.
Technical Quality: 4
Clarity: 4
Questions for Authors: - This method needs to train additional expert per domain, what if there are large amount of domains for the training data, any potential solution for this occasion?
- Since multiple expert models are required when training the target model, how would this method be applied to the training process of large foundation models?
- The free lunch idea appears to be applicable beyond just the DG setting. How does it perform on ImageNet? Including such experiments could enhance the significance of this concept.
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > W1: Performance in the Meta-DMoE paper is higher than that in Tab. 1, and highlight the resource usage differences.
A1: Note that we mainly use the ResNet18 model for evaluating Meta-DMoE in Tab. 1, and the experiments are conducted for a total of 3x20 trials using the default hyper-parameter settings that are within large ranges. These settings are different from those in the original paper (where ResNet50 results are reported), which can cause the differences in the reported numbers of Meta-DMoE. We will include this clarification in our revised paper.
We thank the reviewer for the suggestion. We list the efficiency comparisons in our response regarding w1 for Reviewer oULp, and we will include the differences in the related work section.
> W2: Conducting ablation study to analyse the selected value range for the weight parameter.
A2: We conduct this ablation study in Tab. 4 in our general response, which shows that our method can obtain relatively better performance when the weight parameter is selected wihtin [0.1, 10].
> W3: Experiments in datasets from the Wilds benchmark.
A3: We conduct experiments using three real-world datasets (i.e., iWildCam, RxRx1, and FMoW) using the default settings in Wilds. Results listed in Tab. 5 in the general response show that our method can improve the baseline in all three datasets and obtains favorable performance against existing arts.
> W4: Including RIDG [1] and MIRO+SWAD in Tab. 1.
A4: We include comparisons with the referred methods in Tab. 6 in our general response. The comparisons will be included in Tab. 1 in our revised paper.
> W5: Minor issues.
A5: We thank the reviewer for the notifications, these typos will be corrected in our revised paper.
> Q1: Solution for datasets with a large number of domains.
A1: When encountering datasets containing numerous domains, we can utilize the existing strategy [a] that clusters the training domains into fewer super-domains, thus reducing the expert number. We leave this exploration to potential future works.
[a] Orbit: A real-world few-shot dataset for teachable object recognition, in ICCV'21
> Q2: How to apply the method to the training process of large foundation models?
A2: When applied to the training process of large foundation models, a possible solution for easing the memory constraint problem is to use a sequential training strategy that first learns different experts, and then trains the target model without optimizing these experts. In this way, professional guidance from different experts can still be obtained, and we can avoid the memory constraint for training $M+1$ models at the same time (with $M$ the expert number).
> Q3: Including experiments of the free lunch idea in Imagenet setting.
A3: We conduct experiments with the free lunch idea in Imagenet as suggested. Compared to the baseline, the free lunch can obtain better performance by 0.4pp for the top1 acc (76.9 vs. 76.5) and 0.5pp for the top5 acc (93.7 vs. 93.2). These results show that the free lunch idea is beneficial for large datasets.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed responses. My previous concerns have been well addressed. After carefully reviewing the other reviewers' comments and the authors' replies, I believe the paper has no significant flaws, and therefore, I choose to maintain my score. | Summary: The computer-vision paper introduces a strategy of learning from multiple experts (LFME), which performs knowledge distillation from models specially trained on data from different domains. In particular, the experts are trained jointly with the target model, and a specific form of logit regularization is chosen for knowledge distillation due to empirical verification. The idea is relatively straightforward to understand with strong empirical results on image classification (by Domainbed) and semantic segmentation.
Strengths: 1. The experimental results come in strong when compared to many other baselines. Moreover, the study has extended results for semantic segmentation.
2. The visualizations are helpful to illustrate the ideas and evidence.
3. The empirical study on naive aggregation of domain experts shows that naive ensemble does not necessarily lead to better generalization. Then ablation study shows the chosen form of logit regularization outperforms other knowledge distillation alternatives.
Weaknesses: 1. The novelty is relatively limited. The proposed approach is very similar to Meta-DMoE with a few changes such as logit regularization and alternating updates, but these changes are not well justified (see below). The idea of utilizing multiple domains have also been explored by [1].
2. There is a lack of coherent argument to understand the proposed approach. In Section 4.1, the paper briefly discussed how the additional logit regularization enables learning more information. In addition, in Section 6.2, the proposed approach LFME is compared with Label Smoothing. However, the discussion is not very convincing. From the perspective of using information from other classes, I do not see the fundamental difference between logit regularization and Label Smoothing (LS). Both logit regularization and LS have hyperparameters to be tuned, so it might not be appropriate to claim its advantage as "not involve hand-crafted settings” (line 187) and criticize LS has “potential improper heuristic designs” (189).
3. Moreover, in Section 4.2, the paper justifies the advantage of the proposed approach LFME by “mining hard samples from the experts”. It is a general statement that is true for low-confidence predictions from any model, so it does not constitute a strong argument. Intuitively, the in-domain samples are easier than out-of-domain samples for each expert, and the hard samples aggregated from all experts are not specifically representative of any subpopulations. The argument will be more comprehensive if the paper can show what are these hard samples and why it matters to mine the hard samples from the domain experts (and not an ensemble of experts or some random experts).
4. In Section 6.5, the result for in-domain evaluation is only provided for one dataset. This is not sufficient to justify a strong claim that the target model is an expert on all source domains.
5. Code is submitted but it is unclear how the proposed approach can be combined with SWAD. Does it apply to just the target model or also the domain experts? The hyperparameter configurations to obtain the SoTA results are also unclear for reproducibility.
[1] Yao, Huaxiu, Xinyu Yang, Xinyi Pan, Shengchao Liu, Pang Wei Koh, and Chelsea Finn. "Leveraging domain relations for domain generalization." *arXiv preprint arXiv:2302.02609* (2023).
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. In Algorithm 1, the target model and the experts are trained simultaneously, which is a more parallel and iterative scheme rather than a sequential one with two stages, i.e., training the experts first, then distilling the knowledge to the target model using logit regularization. Why is such a design choice made? Are there better justifications than just “for simplicity”? Please explain with more details.
2. Each expert is generally weak as they are only trained using the data from their own domain. Is it necessary and how to ensure the quality of the experts?
3. In Appendix A, Limitations, the authors discussed that LFME cannot be applied to single-domain tasks. How does the number of domains affect the proposed approach? Will the algorithm benefit from having more domains with less data or more data in fewer domains?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The limitations has been discussed in Appendix A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > W1. Novelty
A1: Differences between Meta-DMoE and LFME are as follow.
1. ### Domain experts in these two works serve different purposes
Meta-DMoE aims to adapt the trained target model to a new domain in test. To facilitate adaptation, their target model should be capable of identifying domain-specific information (DSI). Therefore, the target model is enforced to extract DSI similar to those from domain experts. Notably, their trained experts are expected to thrive in all domains, which are used to extract DSI **not** in their trained domains but rather in an unseen one. In a word, domain experts in Meta-DMoE serve as supervision to ensure that the target model can adapt to a new domain. Differently, LFME expects its target model to be expert in all source domains. In our framework, domain experts provide professional guidance for the target model only in their corresponding domains.
2. ### Different implementations
Meta-DMoE involves meta-learning and test-time training (TTT), where domain experts are used for data from their unseen domains in both training and test. The overall process can be time-consuming due to TTT and the second-order optimization in meta-learning, which may pose efficiency problems compared to the straightforward training in LFME. Meanwhile, their idea is developed on the traditional KD idea that enforces feature similarity between teachers and student, which is shown suboptimal in our analysis.
3. ### Different effectiveness and efficiencies
When evaluated with the same strict setting in DomainBed, LFME leads Meta-DMoE in all datasets with an average improvement of 1.8pp. The required running time for LFME is also less than Meta-DMoE: 38' vs. 73' for one trial in PACS (even without the experts training time, Meta-DMoE still requires 45' for one trail).
We list comparisons with Meta-DMoE in Sec. 2, and are glad that Reviewer xxxE find it comprehensive. Regarding the idea in [1], it shares similar design with DAELDG that uses different classifiers as experts and averages the final outputs of all experts as the result. Similar intuition has been discussed in our analysis. We will discuss this work in the revised version.
> W2: More justifications for argument in Sec. 4.1 and comparisons against LS
A2: We provide more justifications (visual examples included) for further comprehending the analysis in Sec. D.1. More empirical evidence supporting this argument is provided in D.2. Please refer to them for a better understanding.
LS uses a heuristic $\epsilon$ to adjust output probability and prevent overconfidence in the gt label: $(1-\epsilon)H(q, y) + \epsilon H(q, U)$ (given U a uniform distribution). if $\epsilon$ is improperly set (e.g. close to 1) and the dominant label approximates $U$, LS may fail. Differently, LFME uses labels from both gt and experts' outputs to excel in all source domains: $H(q, y) + \frac{\alpha}{2}||z-q^E||^2$, where we do not need to deliberately calibrate the probability. Those two methods differ inherently. Resulsts in Sec. 6.2 also show LFME performs better than latest LS methods. Notably, experiments in Tab. 4 in the general response show that the model is insensitive w.r.t $\alpha$ as it is on par with ERM even with $\frac{\alpha}{2}=1000$, because $q_{\ast}^E$ is mostly aligned with $y$ (see Fig. 2 (a)), which is a much preciser label than $U$. We thus say that LS bears a con of potential improper heuristic designs against LFME. Meanwhile, LS treats all samples equally, while LFME can specifically focus more on hard samples. That is another pro for LFME.
> W3: What are those hard samples and why use domain experts
A3: The hard samples are domain ambigous data, aligned with your intuition. We find that domain experts can locate more of these data than random models, thus can better help DG. Please see our general response for details.
> W4: More in-domain evaluations
A4: We conduct experiments on other two datasets (i.e. TerraInc and VLCS) and list the results in Tab. 1 in the geneal response. Results are consistent with that in Tab. 5, where the target model can lead the baseline and experts in all cases, validating that it becomes expert on all source domains.
> W5: SWAD and hyper-parameter (HP) settings
A5: We apply SWAD only for the target model. We use the default settings in original implementations of Domainbed and SWAD. For your reference, we list all HP in Tab. 2 in the general response.
> Q1: Why use parallel training
A1: In the parallel scheme, each training sample will go through 2 forward passes (one for expert and another for the target), and we can use a single optimization to update all parameters and a total of 5000 updating steps for a trial in PACS (see Alg. 1). Instead, if using the sequential scheme, each sample will go through 3 forward passes (one for training expert, and two for training the target), and we will require 2x5000 updating steps for a trial. Training times for these two schemes are 38' (parallel) vs. 55' (sequential), we thus use parallel scheme for simiplicity.
> Q2: Is it necessary to ensure the experts' quality
A2: Different from Meta-DMoE, experts in LFME are expected to saturate only on their corresponding domains. Although less data is used, they can still outperform the baseline within their trained domains (see our in-domain experiments). Given that our interest lies primarily in the outputs of these experts within their designated training data, their general proficiency beyond these specific data is of secondary concern. Thus, it may be unnecessary to ensure the quality of the experts across broader domains.
> Q3: How does the number of domains affect the proposed approach, will the method benefit from more domains or more data
A3: We answer this question by merging the source domains to create varying domain numbers. As shown in Tab. 3 in the general response, the model performance is generally positively correlated w.r.t domain numbers.
---
Rebuttal Comment 1.1:
Title: Reviewer Response
Comment: Thank you for the detailed responses and comprehensive experiments, which have addressed most of my concerns.
However, I am still not convinced by the arguments on mining the hard samples and the analysis of why the proposed approach works. In Sec. 4.2, the rescaling factor $\mathcal{F}, \mathcal{F}’$ are shown to be negative in Figure 2(e). While it is true that the magnitude is larger for harder examples, if the gradient direction is reversed, then the model is no longer learning the sample as it's going the opposite of the original ERM gradient. I am not sure if we can still interpret this as "mining" the hard samples because they are somewhat being "forgotten". Please clarify.
---
Reply to Comment 1.1.1:
Title: Thank you for the feedback
Comment: We sincerely thank the reviewer for the prompt reply!
We would like to clalrify that if two networks have different gradient directions, it does not mean one network is "forgetting" the samples compared to the other. The negative values of $\mathcal{F}$ and $\mathcal{F}'$ apply for all training samples, but this does not imply that LFME is forgetting all samples in contrast to ERM. Since hard samples from experts correspond to larger gradient magtitude for the target model, they will have more significant impacts on the target model's updating, which can shift the decision boundaries not aligned with easy samples that with small gradient magtitude, potentially causing these easy samples to be overlooked by the target instead. We thus say the target model can mine hard samples from the experts.
The argument in Sec. 4.2 can be better interpreted from the following example: assuming two samples $x_1$ and $x_2$ from the same class and domain, with the target model producing identical output logits for both: $z_1 = z_2$. If the corresponding expert has different levels of confidence regarding these samples, such that $q_{\ast 1}^{E} > q_{\ast 2}^{E}$, the analysis in Sec. 4.2 suggests that $x_2$ will have larger impacts on the updating of the target model than $x_1$, and the target model will focus more on $x_2$.
We thank again for the reviewer's time, and wish our clarification is helpful.
---
Rebuttal 2:
Title: Response to further concerns
Comment: We appreciate the reviewer's feedback and thoughtful response. We answer the additional questions below.
> Q: This is because originally we are doing gradient descent using ERM. When we multiply a negative value in front of it, it becomes gradient ascent instead.
A: We want to clarify that our only objetive is to minimize the loss $\mathcal{L}_{all}$ in Eq. (3) throughout the training process. It's important to emphasize that there is no gradient reversal operation in our framework; specifically, we are not performing gradient descent using ERM followed by multiplying a negative value for gradient ascent.
> Q: How is $\mathcal{F}$ calculated.
A: we want to clarify that we do not need to compute $\mathcal{F}$ during updating, $\mathcal{F}$ is used as an analytical tool to show how much the gradient magtitudes is rescaled w.r.t to the baseline model that without using the logit regularization term, by comparing ratio of gradients from the two models [64]. In our case, gradients of the two models (LFME and ERM) are $q_c - y_c + \alpha (z_c - q_c^E)$ (e.g. $\frac{\partial \mathcal{L}\_{all}}{\partial z_c}$) and $q_c - y_c$ (e.g. $\frac{\partial \mathcal{L}\_{cla}}{ \partial z_c}$), there gradient ratio is thus $\frac{\partial \mathcal{L}\_{all}}{\partial z_c} / \frac{\partial \mathcal{L}\_{cla}}{ \partial z_c}$, which leads to the computing of $\mathcal{F}$ and $\mathcal{F}'$ in Eq. (6) and (8).
> Q: In the given example, how the mining the "hard" sample from an expert achieved and its relation to the reduction of the logit value.
A: For the given example, where $z_1 = z_2$, it indicates that the model is with equal confidence w.r.t the two samples. Without experts involvements, these two samples will contribute equally to the target model's updating. However, when the expert is involved and $q_{\ast 1}^{E} > q_{\ast 2}^{E}$,Eq. (6) and (8) indicate $|\mathcal{F}_1| < |\mathcal{F}_2|$ and $|\mathcal{F}'_1| < |\mathcal{F}'_2|$: the gradient magtitude from $x_2$ is changing more significantly than that from $x_1$ both on the basis ERM, allowing $x_2$ to have a greater impact on the target model's update. The derivations imply that the expert can expose its hard samples to the target model, which is not applicable in ERM.
From your point of view, the objetive in Eq. (3) implicitly encourages the target model to be less confident w.r.t $x_2$. We want to clarify that this confidence reduction does not conflict with mining $x_2$, because lower confidence results in a higher training loss, prompting the target model to explore the corresponding sample more thoroughly, where both the dominant features for classification and other information shared with other classes will be explored, with the dominant features being the primary focus (because both $\mathcal{L}\_{cla}$ and $\mathcal{L}\_{guid}$ in $\mathcal{L}\_{all}$ encourage the gt logit to be the largest, which also indicates that lower confidence in LFME does not equate to lower accuracy—a fact validated by our in-domain results). On the contrary, high confidence indicates that the model may overlook the sample after obtaining its dominant features, making it less important on the updating process. Moreover, penalizing high-confidence outputs has been shown to be effective in improving a variety of learning tasks [a], our objective can thus be viewed as specifically mining hard samples from the experts for penalization. From this perspective, in our case, penalizing confident can be integrated within our explanation of hard sample mining.
We sincerely thank you for your invaluable time and the fruitful discussions. We are more than willing to provide any additional justifications or clarifications you may need.
[a] Regularizing Neural Networks by Penalizing Confident Output Distributions, in ICLR'17
---
Rebuttal Comment 2.1:
Title: Reviewer Response
Comment: Thank you for the prompt clarifications.
Unfortunately, I am still not convinced and couldn't agree with the counterintuitive rationale.
> Specifically, we are not performing gradient descent using ERM followed by multiplying a negative value for gradient ascent.
I understand LFME is not doing this explicitly, but aren't they mathematically equivalent? If it doesn't align, then how does the analysis in Section 4.2 provide valid justifications? I think when we are trying to investigate the working mechanism of the proposed algorithm, the story needs to be coherent and not mislead the readers, especially when the analysis consists of a major section of the main paper.
> Penalizing high-confidence outputs has been shown to be effective in improving a variety of learning tasks
As far as I understand, arguing from the confidence perspective might be more reasonable than hard sample mining.
---
Rebuttal 3:
Title: Reply to your concerns
Comment: We thank the reviewer for the prompt reply.
> Q: Is updating LFME mathematically equivalent to doing gradient descent using ERM followed by multiplying a negative value for gradient ascent?
A: We want to clarify that the two gradient parts of $\mathcal{L}\_{all}$ w.r.t $z$ (e.g. $\partial \mathcal{L}\_{cla}$ and $\partial \mathcal{L}\_{guid}$) may better be interpreted as a whole in a updating step, as one updating step requires the combination of the two gradients for a joint classification task.
According to your suggestion, if we specifically interpret the updating process by separately considering $\partial \mathcal{L}\_{cla}$ and $\partial \mathcal{L}\_{guid}$, we can see that in both terms, $z\_{\ast}$ correspond to largest labels. That is, $z\_{\ast}$ is encouraged to be close to $y\_{\ast}$ by minimizing $\mathcal{L}\_{cla}$ and close to $q\_{\ast}^E$ by minimizing $\mathcal{L}\_{guid}$. This means the updating process will maintain its discriminative ability, and the model will not forget the sample. The different gradient directions can be interpreted as $\mathcal{L}\_{cla}$ encourages $z\_{\ast}$ to become infinitely large, focusing on dominant features, while $\mathcal{L}\_{guid}$ optimizes $z\_{\ast}$ to the opposite direction to be close to $q\_{\ast}^E$ so that other information can also be involved. Therefore, $\partial \mathcal{L}\_{guid}$ should not be sees as tries to "unlearn" what has been learned by$\partial \mathcal{L}\_{cla}$. Rather, on the basis of not compromizing the target model's discriminative ability, they serve different purposes to encourage the model to focus on different types of features.
> Q: Is mining hard samples from experts counterintuitive?
A: The primary effect of knowledge distillation (KD)-based methods can be attributed to sample reweighting [b], where the weight of a training sample in the student model is related to the teacher's confidence. Although a different KD scheme is used, we apply the same rescaling analytical tool to demonstrate that one of the working mechanisms of the proposed KD form can still be seen as weighting different samples based on the teacher's confidence. Since different KD forms are utilized, the weighting strategies differ between the proposed LFME and the traditional KD approach where easier samples from teachers correspond to larger weights (comparisons between LFME and traditional KD ideas are presented in Sec. 6.3, where LFME performs better for the task of DG). Meanwhile, since hard samples from domain experts are more likely to be domain ambiguous data, enforcing the target model to learn these data more throughly can help it to explore more domain-invariant features, which are crucial for generalization [17]. In this regard, mining hard samples from domain experts is an intuitive motivation for improving DG.
> Q: Arguing from the the confidence perspective.
A: We thank the reviewer for the thoughtful suggestion. We want to clarify that the hard sample mining is aligned with the proposed confidence reduction perspective: in the given example, the target model will reduce confidence of $x_2$ more than that of $x_1$ even if they are with the same output logits, consistent with our explanation that hard samples from the experts will affect more on the target model's updating. Therefore, there may be no intrinsic difference between the hard sample mining and the suggested confidence reduction idea. Meanwhile, we provide empirical evidence in Sec. D.3 to show that the classification results of hard samples from corresponding experts are indeed improved in LFME, we thus argue the working mechanism from the hard sample mining perspective.
On the other hand, merely analyzing the confidence is similar to our initial analysis in Section 4.1, where the idea of smoothing the output probability of the target model is equivalent to making it less overconfident.
We sincerely thank you for your invaluable time and all thoughtful suggestions, which definetly help improve our paper. We wish these clarifications are helpful for easing your concerns.
[b] Born-Again Neural Networks, in ICML'18. | Rebuttal 1:
Rebuttal: We thank all reviewers for their hard work and insightful suggestions. We are inspired that Reviewer xxxE and h8LM find our work simple and Reviewer xxxE, h8LM, and oULp think the performance is strong. We are also glad that our in-depth theoretical insights are appreciated by Reviewer xxxE, h8LM, and uZyo.
For the general question raised by Reviewer oULp and uZyo, we answer them below:
>Q: How does $q_{\ast}^E$ relates to hard samples, what are those hard samples, and why mine hard samples from the domain experts?
A: A smaller $q_{\ast}^E$ is associated with larger cross-entropy loss for the corresponding expert, indicating that the sample is more difficult for the expert than others with smaller loss (i.e. larger $q_{\ast}^E$). This is consistent with the previous work that specifically aims to mine hard samples [27].
We conduct analyses in D.4 to study what are those hard samples and the differences between hard samples from domain experts and other models. Specifically, compared to hard samples from the model that trains with all data, we find that hard samples from domain experts contain more ambiguous data that locates in the mixed region or the boundary of different domains (shown in Fig. 6), indicating that they may encode more out-of-domain information. This is consistent with the assumption from Reviewer oULp.
By conducting experiments (results are listed in Tab. 6), we find that hard samples from domain experts is more beneficial for DG than hard samples from the model fed with all data. This is because emphasizing these domain ambiguous data can assist the target model in exploring more domain-invariant features that is consistent across different domains, which is crucial for improving model robustness [17]. This experiment justifies our design by involving domain experts (instead of some random models) in our framework. Please see D.4 for more details.
We conduct the following experiments and list the results in the uploaded file:
1. More in-domain evaluations from different models (Tab. 1);
2. Hyper-parameters used for reproducing results in the DomainBed benchmark (Tab. 2);
3. Effect of the number of domains on model performance (Tab. 3);
4. Sensitivity analysis regarding the weight parameter $\alpha$ (Tab. 4);
5. Evaluation results in datasets from the Wilds benchmark (Tab. 5);
6. Comparisons with more methods in DomainBed (Tab. 6);
7. Evaluating our method with vision transformer (Tab. 7).
All experiments in the rebuttal are conducted with the same experimental settings introduced in our paper where the DomainBed benchmark is utilized. These evaluations will be included in our revised manuscript.
We answer questions raised by each reviewer in the following.
Pdf: /pdf/5738a4e561bb7f49605f3d96290db64224795f2e.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
On the Identifiability of Hybrid Deep Generative Models: Meta-Learning as a Solution | Accept (poster) | Summary: The paper thoroughly assesses the issue of identifiability of both the neural and physics components in hybrid deep generative models (hybrid-DGMs), and proposes a novel approach to formulate such models using meta-learning. The authors show the performance of the model in comparison to other hybrid-DGMs and the ground truth for three well-known physics examples.
Strengths: The paper is the first theoretical study of the identifiability of hybrid-DGMs.
Weaknesses: Possible limitations of applicability related to specifics of meta-learning.
Technical Quality: 3
Clarity: 3
Questions for Authors: How resource heavy this method is compared to the other methods used in the comparison?
How well does it perform for OOD testing in terms of robustness and generalisation?
How well it does predict the behaviour of dynamical systems over longer time intervals (is there a performance degradation over time)?
Would meta learning using the Wasserstein distance instead of the KL divergence improve the identification of both the physics and the neural components?
As suggested by the authors themselves this work would benefit from in-depth hyperparameters to performance study, particularly taking into account the relationship between data samples and model performance.
Comments:
- as a minor point - there are some references that didn't compile as well as typos in the manuscript.
- another minor point is that the appendix is missing with the default NeurIPS instructions left in place - this can be removed.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors didn't provide a separate limitations section but discussed the limitations of their work and future directions in the Discussion/Conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer 5WN4 for the supportive and constructive comments.
1. Resources: On the same device and dataset, Hybrid-VAE takes 250.24s for 50 epochs and Meta-Hybrid-VAE takes 291.68s. In all experiments, Meta-Hybrid-VAE requires approximately 0.5 times more epochs to converge.
2. Performance over longer time intervals: We thank the reviewer for this suggestion. Please see the overall response 3 and the added results in the attached pdf. The results showed that the presented identifiable hybrid-DGMs are significantly more robust in its performance of predicting over longer time intervals (while the non-identifiable baselines deteriorated rapidly over longer time intervals).
3. Performance in OoD data: We thank the reviewer for this suggestion. Please see the overall response 3 and the added results in the attached pdf. The results showed that the presented identifiable hybrid-DGMs are significantly more accurate than the non-identifiable baselines in data samples that are out of distribution either due to the physics-component or neural component.
4. Additional ablation studies: In the manuscript we showed that the meta-formulation is critical to the identification of the neural component. In the added results in the attached pdf (Table 1), we further ablated and showed that the meta-formulation can improve but is not critical to the identification of the physics component. In the added results in Table 3, we empirically ablated the effect of the number of distinct “tasks” (i.e., distinct parameter distributions) on the presented method, through which we empirically verified the theoretical condition for identifiability established in Theorem 1-iv). In the added results in Fig 4, we further added ablation studies to show the effect of the number of parameters to be identified on the presented method.
5. Wasserstein vs. KL distance: This is an excellent question which we will investigate in our future studies.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer 5WN4,
Thank you again for your constructive feedback. We hope that you have had time to go through our response in addressing your previous comments. As the author discussion period is closing soon, we would like to follow up to see if you have additional comments for further discussion.
Best regards,
Authors | Summary: This paper discusses the identifiability of Hybrid Deep Generative Models and proposes a Meta-Learning-based approach to address the (un)identifiability issue of current DGMs. Theoretical discussions are provided, and experiments are conducted to verify the proposed method.
Strengths: This paper adopts the result from [1] gives a discussion on the indetifiability issue of Hybrid Deep Generative Models's data driven part.
[1] Ilyes Khemakhem, Diederik Kingma, Ricardo Monti, and Aapo Hyvarinen. Variational toencoders and nonlinear ica: A unifying framework. In International Conference on Artificial Intelligence and Statistics, pages 2207–2217. PMLR, 2020.
Weaknesses: 1. In "Augmenting Physical Models with Deep Networks," identifying the physical part and the data-driven part is important, as mentioned in previous work such as [1]. As mentioned by the authors, this paper focuses on the identifiability issue of the latent variables and the model parameters of the data-driven model. However, the motivation for identifying this part is unclear since it does not have any physical interpretation. The main goal of the data-driven part is more about approximation accuracy rather than interpretability (identifiability). Therefore, the reviewer is not fully convinced by the motivation.
2. The authors also agree that there are two sides to the identifiability issue: (1) physical parameters and parameters from the data-driven part, and (2) the identifiability of the parameters in the data-driven part. This paper focuses on the latter. However, assuming there is no identifiability issue with (1), the problem degrades to the same setting as [2]. Therefore, the theoretical discussion seems borrowed from paper [2], without any revisions.
3. The writing needs to be further improved, including the definition of notations.
[1] Yuan Yin, Vincent Le Guen, Jérémie Dona, Emmanuel de Bézenac, Ibrahim Ayed, Nicolas Thome, and Patrick Gallinari. Augmenting physical models with deep networks for complex dynamics forecasting. Journal of Statistical Mechanics: Theory and Experiment, 2021(12):124012, 2021.
[2] Ilyes Khemakhem, Diederik Kingma, Ricardo Monti, and Aapo Hyvarinen. Variational autoencoders and nonlinear ICA: A unifying framework. In International Conference on Artificial Intelligence and Statistics, pages 2207–2217. PMLR, 2020.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. In Definition 2, what is $T(\cdot)$? Is that the sufficient statistic? Please define it explicitly.
2. In Theorem 1, in assumption (ii), what is $\mathcal{F}_\theta$? What is the difference between $\mathcal{F}_\theta$ and $\mathcal{F}(f_P, f_{N_\theta}; Z_P, Z_N)$?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The major limitation is the motivation of this work, as the reviewer mentioned in the weakness section. Another weakness is the lack of technical contribution.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer HAHc for the constructive comments.
1. Contribution: Please see the overall response 2 for the significance for the (un)identifiability of the neural component that we focused on, and response 3 for the theoretical contribution of our meta-formulation of identifiable-DGMs in general (applies to hybrid or non-hybrid DGMs). We have added new evidence in the attached pdf to support each of these responses.
2. Yes, T denotes the sufficient statistics.
3. Both $\mathcal{F_\theta}$ and $\mathcal{F}(f_P, f_{N_\theta}; z_P, z_N)$ denote the hybrid generative/mixing function.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer HAHc,
Thank you again for your constructive feedback. We hope that you have had time to go through our response in addressing your previous comments. As the author discussion period is closing soon, we would like to follow up to see if you have additional comments for further discussion.
Best regards,
Authors
---
Rebuttal Comment 1.2:
Title: Update
Comment: Dear Authors,
Many thanks for your reply. The authors' reply has addressed the concern of the reviewer on the motivation of this study (question (1)). I would like to increase my rating from 4 to 5. | Summary: A method and a theory of learning hybrid deep generative models (esp., VAEs) are proposed. The method is based on meta-learning, and the theory is about the identifiability of the neural component's latent variable. The method is also empirically compared to baseline methods of hybrid DGMs.
Strengths: - In the introduction, the motivation for analyzing the identifiability of hybrid DGMs is clearly stated.
- The potential connection to the study around nonlinear ICAs sounds reasonable and interesting.
- The method is technically reasonable. To my knowledge, meta-learning has not clearly been applied in the context of hybrid modeling.
- Experiments are done with reasonable, yet mostly synthetic, datasets and with adequate baseline methods.
Weaknesses: (1)
it is unclear what is the theoretical contribution particularly by this work. The theory in Section 5 looks like a mere rehash of what was presented in [6]. Also, the theory in Section 5 focuses only on the neural network part of hybrid DGMs, and the presence of the physics part of the model does not seem to affect the discussion. This limits the significance of the theoretical contribution.
Moreover, as the authors pointed out, the un-identifiability of hybrid DGMs stems both from 1) the un-identifiability of the neural component and 2) the over-powering effect of the neural component. The authors say that they focus on 1), but then the discussion becomes simply about non-hybrid, general DGMs, which makes the authors' claim that they theoretically analyzed the identifiability of *hybrid* DGMs questionable.
Minor points:
- In Eq. (13), the prior of $z_P$ is missing.
- What makes the synthetic and real double pendulum data different? E.g., is there supposed to be any source of non-negligible noises?
Technical Quality: 2
Clarity: 3
Questions for Authors: Please elaborate on the point (1) in the Weaknesses section above.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Limitations nicely discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer 1hrQ for the constructive comments.
1. Contribution: Please see the overall response 1 for our rationale for not focusing on the identifiability of the physics-component, response 2 for the significance for the (un)identifiability of the neural component that we focused on, and response 3 for the theoretical contribution of our meta-formulation of identifiable-DGMs in general (applies to hybrid or non-hybrid DGMs). We have added new results in the attached pdf to support each of these responses.
2. Difference between synthetic and real double pendulum data: In the synthetic pendulum data, we know that the source of error in the prior physics is due to missing the friction and external force terms. In the real double pendulum data, the true governing physics is unknown. The sources of unknown factors include the unknown masses of the two pendulums (assuming to be equal), presence of friction and its associated parameters, potential vibrations or errors in extracting the arms’ positions from the videos, as described in [5] as cited in the manuscript.
3. Typo in Equation (13): Thanks for pointing this out. The prior term of $p(z_p)$ is missing. We will add it in the final manuscript.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer 1hrQ,
Thank you again for your constructive feedback. We hope that you have had time to go through our response in addressing your previous comments. As the author discussion period is closing soon, we would like to follow up to see if you have additional comments for further discussion.
Best regards,
Authors
---
Rebuttal Comment 1.2:
Comment: Thanks for the clarification. I think it helps a lot to strengthen the discussion. Please make sure to fully reflect the additional explanation given in the rebuttal. Assuming the paper will be updated accordingly I raised my score. | Summary: The original contribution for his manuscript is a metalearning framework to identify the physical parameters in a hybrid deep generative models.
Strengths: This article starts with a good question about the identifiability of physical parameters in hybrid deep generative models (that include physics as an inductive bias). Solving this question could answer key questions in data-driven knowledge discovery. This article presents meta learning framework which could help identify the physical parameters statistically.
Weaknesses: This article seems to be a work in progress. The text alone is not an end product. Symptomatically of this work in progress, the central concept of this article of identifiability is misspelled more than 10 times throughout. The contributions of this paper are not clearly stated and hard to find. It seems that the theory stated in this article was already published elsewhere, as illustrated by the reference to another paper for the proof of the only theorem of this article.
Technical Quality: 3
Clarity: 1
Questions for Authors: The proof-of-concept results on pendulum models and the reaction-diffusion equation have very few parameters to identify and do not inform that the method would be useful in more realistic or more complex models. Could you provide examples with more parameters to identify? Could you give theoretical support to how fast your method would converged to the identified quantities of interest as the number of parameters to identify increases?
Confidence: 3
Soundness: 3
Presentation: 1
Contribution: 3
Limitations: Limitations are discussed in the text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer 1aic for the critical comments. Please see the overall response 4 (and the corresponding results in the attached pdf) for clarification regarding the effect of the number of parameters to be identified, and the overall response 1-3 (and the corresponding results in the attached pdf) for the clarifications on the contribution of the presented work.
---
Rebuttal Comment 1.1:
Comment: The authors made a convincing case to answer the concerns about the theoretical contribution of this work.
They also clearly addressed the question about the scaling of this method with the number of parameters to identify.
I updated the soundness and contribution scores to 3, and the rating to 5.
---
Reply to Comment 1.1.1:
Comment: We're glad that we were able to resolve your concerns. We are thankful for your constructive feedback which has helped significantly improve the quality of our work. | Rebuttal 1:
Rebuttal: We clarify the reviewers’ main questions about the theoretical contribution of this work as follow:
1. **Identifiability of the physics-based component in hybrid-DGMs**: The reviewers questioned our motivation for focusing on the un-identifiability of the neural (instead of the physics) component. We do not focus on the physics component because 1) existing works (e.g., [1-2,5] cited in the manuscript) have proposed promising solutions in accurate estimation of the physics parameters , and 2) the physics-component does not suffer from the type of theoretical un-identifiability as the neural-component does – as shown in **Table 1** of the attached pdf, with or without the meta-formulation, the estimation of physics parameters saw good accuracy. In comparison, the identifiability of the neural component has a much more significant effect on the performance of the hybrid-DGM (Table 1/Figs 1-2 in the manuscript). Regardless, as a side benefit of improved identification of the neural component, our approach did result in a higher accuracy of the physics parameter estimation as well (Table 1).
2. **Un-identifiability of the neural component in hybrid-DGMs**: We respectfully disagree with Reviewer HAHc on the (in)significance of the identifiability of the neural component. The goal of a hybrid-DGM is to use the data-driven part to learn the “missing physics” (those not explained by our prior knowledge) from data – to correctly identifying this missing component, therefore, is critical even if the model itself is a neural-net (for the same reason, there is an active body of research on the identifiability for non-hybrid DGMs even though the model is data-driven). Left unaddressed, many different solutions of a hybrid-DGM can fit the reconstruction accuracy equally well, but many – due to the non-identifiable neural component – **will have minimum to no predictive power** as a hybrid-DGM: in addition to our strong results in Table 1/Figs 1-2 in the manuscript, we add additional evidence (thanks to Reviewer 5WN4’s comments) to show that the un-identifiable neural component will significantly reduce the ability of the hybrid model to predict over longer time intervals (**Fig 1** in the attached pdf) or to perform in OoD situations (**Fig 2** in the attached pdf). We believe these results (with theoretical support) provide strong evidence about the significance of constructing identifiable neural components in a hybrid-DGM – a critical gap in the current literature.
3. **Meta-learning as a formulation to construct identifiable DGMs**: We acknowledge that the presented theory heavily draws on [6]. As a seminal work in nonlinear ICA and identifiable DGMs, however, [6] serves as the theoretical foundation for a line of follow-up identifiability theorems [a-c] where, similar to the presented work, the focus is on how to construct conditionally independent generative models based on the theory in [6]: in most existing works, this conditioning leverages **observed** auxiliary variables such as class labels. In this regard, our work is **the first** to theoretically show that the meta-formulation of DGMs enables the construction of conditionally independent models, with theoretically-proven conditions for their identifiability (see Theorem 1). This connection, though may now appear intuitive given the Theorem presented in this manuscript, is a novel contribution to general non-hybrid DGMs that has never been constructed in the existing literature. In **Table 2** of the attached pdf, we add identifiability results of the meta-formulation of a non-hybrid VAE compared to the identifiable-VAE in [6] that uses class label as the auxiliary variable.
[a] Klindt et al, Towards Nonlinear Disentanglement in Natural Data with Temporal Sparse Coding, ICLR 2021
[b] Halva et al, Disentangling Identifiable Features from Noisy Data with Structured Nonlinear ICA, NeurIPS 2021
[c] Yao et al, Temporally Disentangled Representation Learning, NeurIPS 2022
4. **The number of parameters to be identified**: Reviewer 1aic questioned about the identifiability of the presented hybrid-DGMs as the number of parameters increases. We would like to clarify that 1) The number of parameters considered in our work followed the standard practice used in published hybrid-DGMs (see [1-2, 5] cited in the manuscript: the maximum number of unknown parameters to be identified was 3 in [1],4 in [2], and 3 in [5]), and 2) Theorem 1-iv) in the manuscript did already specify the identifiability condition for the presented model dependent on the number of parameters to be identified: for a d-dimensional parameter vector following exponential families of distributions with e-dimensional sufficient statistic, its theoretical identifiability is guaranteed as long as we have observations generated from a minimum of de+1 unique parameter distributions (regardless how large the value of d may be) – in **Fig 3** of the attached pdf, we empirically verify this theoretical condition on the Pendulum system with 3 unknown parameters in the neural component. Additionally. In **Table 3**, we show that the performance of the presented hybrid-DGM is minimally affected by the increase in the number of parameters to be identified (on the Pendulum system). In **Fig 4**, we add results from a complex system of positron emission tomography (PET), where activity images x are generated from radiotracer kinetics governed by 2-tissue compartment model with pixel-wise kinetic parameters: the data-generating compartment model has 4 unknown parameters in each region of interest (ROI), with 5 ROIs in total. The meta-hybrid-DGM (using simplified 1-tissue compartment model as the prior physics) demonstrated strong identifiability results as measured by the MCC.
Pdf: /pdf/7829c7537bb5e5f6ff550f24ac703c9fa1aae89f.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
GrounDiT: Grounding Diffusion Transformers via Noisy Patch Transplantation | Accept (poster) | Summary: The paper introduces a zero-shot layout technique for spatially-grounded text-to-image generation with the power of transformer-based diffusion architecture. Prior methods tend to manipulate the latent image during reverse process for grounding, solely relying on cross-attention maps that show the alignment between specific text tokens and spatial regions of the target object to place in. This shows a drawback in generating a target object that fits within the region of bounding boxes. By utilizing transformer architecture, the method can divide the generation process into two branches: (1) first branch is to generate a target image patch of the bounding box with an arbitrary size by a transformer, (2) the second branch is the original generation process of the whole image. At every denoising step, the generated object is then copied into the generated image of the second branch.
Strengths: - Addressing the limitation of attention-based method in generating arbitrary object sizes within bounding boxes, the method proposes to utilize transformer architecture (namely DiT) which is free from fixed-resolution constraints and enables patch-level generation.
- The paper is well-written and easy to follow.
- Model efficacy is superior to other methods in Figure 4 and Table 1.
Weaknesses: - 158-161: It is obvious that different noise inputs certainly yield different outputs. So I do not understand the main point of this part.
- Semantic sharing part is an interesting observation. It is more helpful if the authors compare it with image interpolation. Does it work when two images have different semantic classes like dog and human?
Wang, Clinton, and Polina Golland. "Interpolating between images with diffusion models." (2023).
- Eq 5: What is formulation of $ \mathcal{L}(x_t, G_i, A_i)$?
- In reverse process: Have the authors applied semantic sharing on bounding boxes like concat tokens of multiple objects for simultaneous denoising? I assume that they share the same semantic class.
- In Fig 3, I saw the size of object image is much bigger than the size of bounding box. Is it intentional? If so, please explain. How to define noisy object image $z_{i, t}$? As far as I understand, the method need to reserve the aspect ratio of the bounding box by scaling it correspondingly to match the size of main image. Is it possible to disable this scaling step and denoise an object image with the same size as the bounding box?
- Does Noisy Patch transplantation guarantee the consistent content of the output image when pasting the generated image with a binary mask (Eq. 9)? Again in Eq 9, the size of $x_{t-1}$ and $B_{i, t-1}$ are not the same as illustrated in Figure 3 so it is not clear how Hadamard product is possible.
Technical Quality: 3
Clarity: 2
Questions for Authors: NA
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Limitations and societal impacts are included in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your review, acknowledging our work to be “well-written” and possess “superior efficacy to other methods”. Here, we address the concerns and questions that have been raised.
**(1) Clarification on the Main Method**
We would like to draw your attention to our general response above, where we provide detailed clarifications regarding our problem definition and method. **Please note that the notations in the below sections are also based on the clarified versions.**
**(2) Clarification on Lines 158-161 of Sec. 4.3**
It is indeed obvious that if two noisy images $\mathbf{x}\_t$ and $\mathbf{y}\_t$ are initialized with different noises, they will produce different outputs if denoised separately (lines 158-161). However, we reiterate this fact to emphasize that as we increase the number of shared sampling between these two noisy images, $\mathbf{x}\_t$ and $\mathbf{y}\_t$ increasingly produce semantically similar outputs, even when starting from two different initial noises (lines 170-171). This phenomenon, also visualized in Fig. 2-(A), (B) of the main paper, signifies the intriguing effect of semantic sharing when two noisy images pass through the DiT together.
**(3) Comparison between Semantic Sharing and Image Interpolation**
Thank you for your comment on comparing our proposed method with the task of image interpolation. After reviewing the suggested paper [1], we would like to clarify the main purpose of our proposed shared sampling.
Unlike image interpolation, which takes two (or more) input images and produces an output image with semantic features that lie between those of the input images, the goal of our shared sampling is to transfer the semantic features of one noisy image to the other. This process ensures that the desired concepts appear in the target noisy image.
**(4) Clarification on Grounding Loss $\mathcal{L}$**
Please note that we provide clarifications on the definition of the grounding loss in our general response. We will also include this clarification in the revised version.
**(5) Clarification on Handling Multiple Objects of the Same Class**
In Stage 2 of our GrounDiT, please note that a separate object branch and a corresponding noisy object image $\mathbf{z}\_{i, t}$ is defined and processed in parallel for each grounding condition $g_i$. This ensures that each object is individually handled, therefore providing precise local guidance for each bounding box region $b_i$.
**(6) Further Clarification on Shared Sampling Mechanism**
In Fig. 3 of the main paper, we intentionally depict the noisy object image $\mathbf{z}\_{i, t}$ as larger than the corresponding bounding box. This highlights that even when $\mathbf{z}\_{i, t}$ and the bounding box $b_i$ are not the same size, shared sampling between the two noisy images is still feasible. This is achieved by assigning positional embeddings so that each image is treated as a whole. This case is also visualized in Fig. 2-(B) of the main paper.
Moreover, as describe in our general response, the noisy object image $\mathbf{z}\_{i,t}$ is set to satisfy the following criteria:
* Size of $\mathbf{z}\_{i,t}\in\mathbb{R}^{H’_i \times W’_i \times D}$ is set within DiT’s preferred token sequence range.
* $\mathbf{z}\_{i,t}$ has a similar aspect ratio as its corresponding bounding box $b_i$.
Therefore, the noisy object image $\mathbf{z}\_{i, t}$ does not require a scaling process after it has been initialized. Simply passing $\mathbf{z}\_{i, t}$ and the nosy patch $\mathbf{b}\_{i,t}$ (originally $B_{i,t}$) together through the DiT for denoising is sufficient to perform shared sampling.
**(7) Consistency of Images after Noisy Patch Transplantation**
To prevent potential inconsistencies between the bounding box regions and the background, our GrounDiT guidance is applied during the initial timesteps of the reverse process, as detailed in Appendix A.1 of our main paper.
Specifically, our reverse process consists of 50 steps, with GrounDiT denoising (Stage 1 and Stage 2) applied for the first 25 steps. For the final 25 steps, the noisy image $x_t$ is denoised using the standard DiT denoising step. This strategy, also employed in previous works such as R&B, leverages the fact that the image structure is primarily determined in the early steps. Therefore, applying our guidance in the early steps is sufficient for accurate grounding, while preventing inconsistencies between the bounding box regions and the background regions.
**(8) Clarification on Hadamard Produce in Eq. (9)**
Thank you for the comment. We correct Eq. (9) below. We will incorporate the clarification in our revised version.
Let $\textbf{UNCROP}(\cdot)$ be a function that zero-pads a patch $\mathbf{b}_{i,t}$ at region $b_i$ to match the size of the original image. Then, the transplantation mechanism originally proposed in Eq. (9) can be clarified as follows:
\begin{align}
& \text{for}\\; i = 0, ..., N-1:\\\\
& \quad\\; \tilde{\mathbf{x}}\_{t-1}\leftarrow \tilde{\mathbf{x}}\_{t-1}\odot (1-M_i)+ \textbf{UNCROP}(\mathbf{b}\_{i,t-1},b_i)\odot M_i\\\\
& \mathbf{x}\_{t-1}\leftarrow \tilde{\mathbf{x}}\_{t-1}
\end{align}
Please note that the binary mask $M_i$ is passed through the Hadamard product with $\textbf{UNCROP}(\mathbf{b}\_{i,t-1},b_i)$, which has the same resolution as $M_i$.
[1] Interpolating between Images with Diffusion Models, Wang et al., ICML 2024 Workshop
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response, it helps address my concerns. Most of my concerns are about the exposition and description of the method which is the main issue of the paper as can be seen in other reviewers. Apart from the writing, the additional benchmarks clearly demonstrate the effectiveness of the method against other baselines as seen from other reviewers. Hence, I still keep my initial score and vote for acceptance. Meanwhile, I encourage the authors to include the ablation of direct pasting object image of the object branch into noisy image of the main branch. This is a good evidence to support the claim in global response.
---
Reply to Comment 1.1.1:
Comment: Thank you sincerely for taking the time to review our submission. We greatly appreciate your valuable feedback and will work to improve our work based on your suggestions. | Summary: The paper targets exploiting a pre-trained PixArt-alpha (a text-to-image multi-aspect diffusion transformer) to generate images conditioned on a set of text-labeled bounding boxes.
The authors start from the idea that, at inference time, a transformer can be given an arbitrary number of tokens, and that these tokens can have positional embedding corresponding to multiple images of multiple aspect ratios at once. Empirically, they observe that in this situation, the diffusion transformer is able to handle the generation process consistently and the outputs are multiple images with shared content but with different aspect ratios corresponding to the input PEs.
When generating the content of the bounding boxes, they take advange of this idea by simultaneously generating two version of the same bounding-box, one with positional embeddings corresponding to its location in the full image, and the other one with positional embeddings making the crop appear as a full image.
This ensures that the label is represented in the full-image version, and thanks to the shared content, also in the original box version, that can then be transplanted back into the image.
Strengths: - The paper confirms an interesting property of multi-aspect diffusion transformers, in that tokens are able to interact at inference time, across combinations of positions that are not seen during training.
- They propose a clever way to exploit this capability in the context of a multi-aspect diffusion transformer.
- Handling spatial relations correctly is one of the major challenges that state-of-the-art text-to-image models have yet to overcome. The targetted task of grounding is very relevant as it is one way to circumvent the issue.
- The proposed solution obtains very good performance on the considered benchmark.
Weaknesses: Mainly, I find that the exposition of the idea not very clear and could be improved:
- How the different branches, the two-stage process, and the diffusion scheduling interact is not clear:
- Appendix A.1 reveals that Stage 1 and 2 are merged in a single full 50 steps denoising schedule, with Stage 1 only being used in the 25 first steps, which is not exactly in line with what is presented in the main paper. This information needs to be in the main paper, and ideally the presentation should be made more consistent with the implementation.
- Figure 3 and Line 200 suggest that in the main branch x_t is denoised in parallel to the object branch, but it does not appear in Algorithm 1. This is inconsistent. And if the main brain does anything, it needs to be explained and detailed.
- Section 3 should present how multi-aspect/size is handled in diffusion transformers, as it is a key requirement for the proposed method. Also, Eq.3 introduces $w$ without definition.
- a smaller point that could be improved: in 4.2, while it can be derived from context and references that $\mathcal{L}$ needs to be a grounding loss, this is nowhere explicitly stated. If I strictly adhered to what is written in this paragraph, the authors could very well be arguing that any loss function that takes as input $(x_t, \mathcal(G)_i, \mathcal(A)_i)$ would be effective, which is obviously not what they are trying to say.
Technical Quality: 2
Clarity: 1
Questions for Authors: I find this submission to be very interesting and show a lot of promise, but the description of the method is very confusing. For the discussion period, I hope the authors can provide the necessary clarifications and leave enough time for us to have a discussion together around their updated description.
Indeed, until the main points are convincingly clarified, succintly reviewed again, and we are confident that changes will be made, I cannot recommend acceptance. My main question is then:
- Can the authors clarify if anything happens in the main branch? Perhaps a full description of how the method works, including the reverse diffusion iterations would help.
Also, as a comparatively minor point, diffusion models are known for the flexibility of their sampling process. Considering how each stage and branch have compatible objectives, have the authors considered doing every update simultaneously in a single reverse diffusion process instead of multi-stage?
Confidence: 3
Soundness: 2
Presentation: 1
Contribution: 3
Limitations: Limitations and impacts are discussed adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your review, recognizing our method to be “very interesting” and “show a lot of promise”. Here, we address the concerns and questions that have been raised.
**(1) Clarifications on the Main Method**
**Please find the detailed clarification of the method in our general response above.** We will thorougly improve our presentation in the revised version. Moreover, please note that the notations in the below sections are also based on the above clarification.
**(2) Clarification on Grounding Loss $\mathcal{L}$**
Please note that we provide clarifications on the definition of the grounding loss in our general response. We will also include this clarification in the revised version.
**(3) Parallel Denoising of the Main Branch and Object Branch**
While in Algorithm 1 we meant to convey the parallel process of main branch and object branches, we notice that the presentation can be improved for clarity. We would like to provide clarifications below and further improve in the revised version.
**Main Branch:** First, Line 13 of Algorithm 1 indicates the denoising step of the noisy image in the main branch. Importantly, the output from this step, $\tilde{\mathbf{x}}\_{t-1}$ (originally $\mathbf{x}\_{t-1}$), is not directly used as an input for the denoising steps in each object branch.
**Object Branch:** Lines 6-8 of Algorithm 1 represent the denoising steps of the object branches. Here, the output from the main branch, $\tilde{\mathbf{x}}\_{t-1}$, is not directly used as input for the denoising. Instead, the output noisy patch $\mathbf{b}\_{i, t}$ (originally $B_{i,t}$) in Line 8 is later pasted back into the output $\tilde{\mathbf{x}}\_{t-1}$ from the main branch, as shown in Line 9.
Therefore, since the output from either the main branch or the object branches is not used as an input for the other branch, these processes are indeed parallel.
**(4) Details on the Reverse Process of GrounDiT**
We clarify the reverse process of GrounDiT, supplementing Appendix A.1 of the main paper.
Our reverse process consists of 50 steps using the DPM-Solver scheduler, where the GrounDiT denoising (Stage 1 and Stage 2) is applied for the first 25 steps. For the final 25 steps, the noisy image $\mathbf{x}_t$ is denoised following the standard DiT denoising step. This strategy is also employed in previous works, including R&B, as the structure of the image is known to be mostly decided in the early steps. Therefore, applying our guidance in the early steps is sufficient for accurate grounding.
**(5) Handling Multiple Image Sizes in DiT / Clarification on $w_d$ in Eq. (3)**
We appreciate your feedback on enhancing the clarity of our descriptions in Sec. 3. To address this, we have clarified DiT's mechanism for handling multiple resolutions below and will incorporate this into our revised version.
DiT [1] handles multi-aspect/size noisy images by operating on a sequences of tokens. Given a noisy image $\mathbf{x}\_{t}\in \mathbb{R}^{H\times W\times C}$, DiT first divides it into patches of size $l \times l$, creating a total of $T = (H/l) \times (W/l)$ patches. Each patch is then transformed into a token with a hidden dimension of $D$ through linear embedding. A 2D sine-cosine positional embedding is applied to the sequence of tokens. The detailed formulation of positional embedding is given in Eq. (3) of our main paper, where $w_d$ follows original definition of positional embedding:
$w_d = 1/10000^{(4d/D)}$ with $d$ running from $0$ to $D/4$.
Consequently, when images with different aspect ratios or sizes are input into DiT, the primary difference is the token sequence length, which is managed by assigning appropriate positional embeddings to each sequence.
**(6) Simultaneous Update in a Single Reverse Diffusion Process**
Please note that a single denoising step of our GrounDiT consists of the two-stage sequence: "Stage 1 $\rightarrow$ Stage 2". During Stage 1, a global update is applied to the noisy image $\mathbf{x}\_t$. In Stage 2, precise local control is provided for each bounding box $b_i$ using our proposed shared sampling technique. We found it reasonable to perform the shared sampling based on the output $\hat{\mathbf{x}}\_t$ obtained from Stage 1, as this allows to leverage the improved alignment resulting from the global update.
[1] Scalable Diffusion Models with Transformers, Peebles et al., ICCV 2023
---
Rebuttal Comment 1.1:
Comment: I have read the other reviews and rebuttal, and I thank the authors for all the clarifications to the algorithm.
While the initial submission lacked in terms of preciseness and clarity, I believe the content of the rebuttal fixes the main issues and demonstrate the willingness of the authors to update the submission accordingly.
As such, standing by my initial assessment that the proposed method is a novel, clever, and inspiring solution to a relevant problem in text-to-image models, and now with no major unadressed concerns, I updated my evaluation and recommend acceptance.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your time and effort in reviewing our submission and rebuttal. Your valuable feedback is greatly appreciated, and we will work to improve the clarity of our work accordingly. | Summary: This paper explores the zero-shot layout-to-image generation problem using bounding boxes and texts as conditional inputs. The authors propose a method called GrounDiT, which builds upon the recent Diffusion Transformers (DiT) model. Leveraging DiT's emergent property of semantic sharing, where two noisy images of different sizes can progressively converge during sampling, the authors introduce a key innovation: a separate object denoising branch for each bounding box condition, running in parallel to the main denoising branch. In each object branch, a noisy object patch is generated and then implanted into the original image. The authors compare GrounDiT with SOTA zero-shot methods on the HRS and DrawBench datasets, demonstrating that GrounDiT can generate complete objects in the correct regions while maintaining high overall image fidelity.
Strengths: - The paper is well-written, with a clear and concise background and motivation.
- The idea of having a separate object branch for each bounding box is simple and intuitive, making the method easy to understand and implement.
- The authors conduct an extensive comparison with SOTA baselines, including R&B, BoxDiff, Attention-Refocusing, and Layout-Guidance, demonstrating the effectiveness of their approach.
Weaknesses: - The paper lacks a comparison of prompt fidelity on the HRS dataset. Except for PixArt-R&B( a variant of GrounDiT), it would also be useful to include results from other baselines like R&B and BoxDiff to ensure the completeness of the work, even if they may use different backbones.
- In Table 1, GrounDiT outperforms its variant PixArt-R&B in grounding accuracy, which is expected given the special design of noisy object patch cultivation and transplantation. However, in Table 2, GrounDiT continue to surpass PixArt-R&B in prompt fidelity scores like CLIP score and ImageReward. I am curious to know where these improvements come from, as object patch cultivation and transplantation should only benefit grounding ability. Did the authors perform multiple runs for the experiments?
- The paper does not evaluate the MS-COCO subset presented in the R&B paper, which would be a valuable addition to the experiments.
- The proposed method would increase the compuation cost for image generation. However, there is no quantitative evidence showing the exact cost.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weaknesses part.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: In the conclusion part, the authors admit that the proposed method would increase the compuation time for image generation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your review, acknowledging that our paper is “well-written” and the proposed method is “simple and intuitive”. Here, we address the concerns and questions that have been raised.
**(1) Additional Comparisons on Prompt Fidelity**
Here we provide further quantitative comparisons on the prompt fidelity between our GrounDiT and baseline methods. We will include the results in the revised version.
**[Prompt Fidelity]**
| |Layout-Guidance|Attention-Refocusing|BoxDiff|R&B|PixArt-R&B|GrounDiT (Ours)|
|:--|:---:|:---:|:---:|:---:|:---:|:---:|
|CLIP score ($\uparrow$) | 32.48 | 31.36 | 32.57 | 33.16 | 33.49 | **33.63** |
|ImageReward ($\uparrow$)| -0.401 | -0.508 | -0.199 | -0.021 | 0.280 | **0.444** |
|PickScore (Ours $-$ Baseline)| +0.30 | +0.22 | +0.30 | +0.26 | -0.04 | - |
The table above shows the comparison of CLIP score, ImageReward and PickScore with the baselines. Since PickScore measures the preference among two images, we present their difference in the above table. In CLIP score and ImageReward, our GrounDiT outperforms the baselines. In PickScore, GrounDiT outperforms all baseline except PixArt-R&B, while remaining comparable.
**(2) Relation between Grounding and Prompt Fidelity**
Additionally, we observed that precise local control provided by our GrounDiT method leads to the presence of target objects that are often missing in baseline methods, thereby increasing the prompt fidelity of the image. **Please refer to Fig. S1 in the attached PDF of our general response**, where we provide qualitative examples of these cases along with the corresponding CLIP scores for each image.
**(3) Additional Quantitative Comparisons including MS-COCO**
We further conducted quantitative comparisons on two additional benchmarks: a subset of the MS-COCO and a new custom benchmark.
**[MS-COCO Subset]**
Following R&B, we conducted an experiment on a subset of MS-COCO. Since we could not obtain the exact subset utilized in R&B from the authors, we similarly prepared a subset from MS-COCO.
For this, we first filtered out the image-caption pairs in the MS-COCO 2014 validation set where either the bounding box target objects were not specified in the image captions, or duplicate objects were present in the grounding conditions. Subsequently, we randomly selected 500 pairs to use for evaluation. Following R&B, we measured the mIoU (mean IoU) for each bounding box condition.
| |SD|PixArt-$\alpha$|Layout-Guidance|Attention-Refocusing|BoxDiff|R&B|PixArt-R&B|GrounDiT (Ours)|
|:--|:--:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|mIoU ($\uparrow$) | 0.176 | 0.233 | 0.307 | 0.254 | 0.324 | 0.411 | 0.418 | **0.432** |
As shown in the table above, our GrounDiT achieves the highest grounding accuracy on the MS-COCO subset. Specifically, GrounDiT outperforms R&B by 0.021, representing a 5.1% increase, and PixArt-R&B by 0.014, representing a 2.2% increase. Note that the average number of bounding boxes in the MS-COCO subset is **2.06**, making it a relatively easy task even for baseline methods. Since the main advantage of GrounDiT over R&B and PixArt-R&B is its robustness when there is a higher number of bounding boxes, we provide further comparisons with a higher average number of bounding boxes below.
**[HRS-Spatial Benchmark]**
| |SD|PixArt-$\alpha$|Layout-Guidance|Attention-Refocusing|BoxDiff|R&B|PixArt-R&B|GrounDiT (Ours)|
|:--|:--:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|mIoU ($\uparrow$) | 0.068 | 0.085 | 0.199 | 0.145 | 0.164 | 0.326 | 0.334 | **0.372** |
The above table provides the mIoU comparisons on the Spatial subset of the HRS benchmark, which was also used in Sec. 5.2. The average number of bounding boxes in this benchmark is **3.11**, which is larger than MS-COCO subset by 1.05. For mIoU, GrounDiT outperforms R&B by 0.046, indicating a 14.1% increase, and PixArt-R&B by 0.038, indicating a 11.4% increase. Compared to the 5.1% and 2.2% in the MS-COCO subset, respectively, the higher percentage of increase in mIoU demonstrates the robustness of GrounDiT on more complex grounding conditions.
**[Custom Benchmark]**
| |SD|PixArt-$\alpha$|Layout-Guidance|Attention-Refocusing|BoxDiff|R&B|PixArt-R&B|GrounDiT (Ours)|
|:--|:--:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|mIoU ($\uparrow$) | 0.030 | 0.036 | 0.122 | 0.078 | 0.106 | 0.198 | 0.206 | **0.250** |
Lastly, the above table shows the comparisons on a new benchmark consisting of 500 layout-text pairs, generated using the layout generation pipeline from LayoutGPT [1]. The average number of bounding boxes in this benchmark is **4.48**. Here, GrounDiT outperforms RnB by 0.052, representing a 26.3% increase and PixArt-R&B by 0.044 representing a 21.4% increase. This is in line with the trend in the above benchmarks, **further highlighting the robustness and efficacy of our approach in handling a higher number of grounding conditions.**
**(4) Details on Computation Time**
We further provide the exact computation time for each method.
The numbers in the table below are in **seconds**.
|Num. Boxes|3|4|5|6|
|:--|:--:|:---:|:---:|:---:|
|**R&B**| 37.52 | 38.96 | 39.03 | 39.15 |
|**PixArt-R&B**| 28.31 | 28.67 | 29.04 | 29.15 |
|**GrounDiT (Ours)**| 37.71 | 41.10 | 47.83 | 55.30 |
While our method results in an increase in inference time, the rate of increase is not significant. For three bounding boxes, the inference time is 1.03 times that of R&B and 1.33 times that of PixArt-R&B for three bounding boxes. Even with 6 bounding boxes, the inference time is 1.41 times that of R&B and 1.90 times that of PixArt-R&B.
[1] LayoutGPT: Compositional Visual Planning and Generation with Large Language Models, Feng et al., NeurIPS 2023
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications. After reading all the reponses and other reviewers' comments, I am inclined to keep my original score.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to review our submission and rebuttal. We greatly appreciate your valuable feedback and will work diligently to improve our work based on your insights. | Summary: This paper proposes a method to use a pre-trained text-to-image
diffusion model to guide the generation into placing objects at given
locations determined by bounding boxes. The challenge is to develop
this capability without requiring fine-tuning of the model.
Strengths: - The problem of guided generation without requiring fine-tuning of a
diffusion model is a relevant one, and the method seems effective.
- Experimental results show the proposed method outperforms compared
baselines.
Weaknesses: - The quality of the presentation can be improved. For instance, one
fundamental component of the method, $\math{L}_{AGG}$, is not
defined. The evaluation metrics (spatial, size, color) are not
properly defined, unless I missed it.
- Further analysis or discussion on why the proposed approach is
superior to simply directly transferring the patch would make the
paper more convincing.
- Experimental comparison is limited to two datasets. Qualitatively,
it's not entirely clear there is a significant difference between
the R&B baseline and the proposed approach.
Technical Quality: 2
Clarity: 1
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your review, acknowledging that our work is solving a “relevant” problem of zero-shot guided generation through an “effective” method. Here, we address the concerns and questions that have been raised.
**(1) Clarifications on the Main Method**
We would like to draw your attention to our general response above, where we provide detailed clarifications regarding our method.
**(2) Clarification on Grounding Loss $\mathcal{L}\_{AGG}$**
Please note that we provide clarifications on the definition of the grounding loss in our general response. We will include this in the revised version.
**(3) Details on Evaluation Metrics**
We provide further details on the evaluation metrics from HRS-Bench [2]. We will also include this in the revised version.
* **Spatial:** Object detection is applied to the generated image. Based on the detection results, the spatial relations specified in the text prompt are checked. If the detected objects maintain the correct spatial relations as described in the text prompt, the image is classified as correct; otherwise, it is classified as incorrect.
* **Size:** Similarly, based on the detected boxes, it is checked whether the size relations between the boxes follow the relations specified in the text prompt (e.g. “bigger than”).
* **Color:** After obtaining a semantic segmentation for each object, the average hue color value within each segment is computed. The derived color is then compared with the color of the object specified in the text prompt to determine accuracy.
**(4) Quantitative Comparisons on Additional Benchmarks**
We further conducted quantitative comparisons on two additional benchmarks: a subset of the MS-COCO and a new custom benchmark.
**[MS-COCO Subset]**
Following R&B, we conducted an experiment on a subset of MS-COCO. Since we could not obtain the exact subset utilized in R&B from the authors, we similarly prepared a subset of MS-COCO.
We first filtered out the image-caption pairs in the MS-COCO 2014 validation set where either the bounding box target objects were not specified in the caption, or duplicate objects were present in the grounding conditions. Then we randomly selected 500 pairs for evaluation. Following R&B, we measured the mIoU (mean IoU) for each bounding box condition.
| |SD|PixArt-$\alpha$|Layout-Guidance|Attention-Refocusing|BoxDiff|R&B|PixArt-R&B|GrounDiT (Ours)|
|:--|:--:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|mIoU ($\uparrow$) | 0.176 | 0.233 | 0.307 | 0.254 | 0.324 | 0.411 | 0.418 | **0.432** |
In the table above, our GrounDiT achieves the highest grounding accuracy on the MS-COCO subset. Specifically, GrounDiT outperforms R&B by 0.021, representing a 5.1% increase, and PixArt-R&B by 0.014, representing a 2.2% increase. Note that the average number of bounding boxes in the MS-COCO subset is **2.06**, making it a relatively easy task even for baseline methods. Since the main advantage of GrounDiT over R&B and PixArt-R&B is its robustness when there is a higher number of bounding boxes, we provide further comparisons with a higher average number of bounding boxes below.
**[HRS-Spatial Benchmark]**
| |SD|PixArt-$\alpha$|Layout-Guidance|Attention-Refocusing|BoxDiff|R&B|PixArt-R&B|GrounDiT (Ours)|
|:--|:--:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|mIoU ($\uparrow$) | 0.068 | 0.085 | 0.199 | 0.145 | 0.164 | 0.326 | 0.334 | **0.372** |
The above table provides the mIoU comparisons on the Spatial subset of the HRS benchmark, which was also used in Sec. 5.2. The average number of bounding boxes in this dataset is **3.11**, which is larger than MS-COCO subset by 1.05. For mIoU, GrounDiT outperforms R&B by 0.046, indicating a 14.1% increase, and PixArt-R&B by 0.038, indicating a 11.4% increase. Compared to the 5.1% and 2.2% in the MS-COCO subset, respectively, the higher percentage of increase in mIoU demonstrates the robustness of GrounDiT on more complex grounding conditions.
**[Custom Benchmark]**
| |SD|PixArt-$\alpha$|Layout-Guidance|Attention-Refocusing|BoxDiff|R&B|PixArt-R&B|GrounDiT (Ours)|
|:--|:--:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|mIoU ($\uparrow$) | 0.030 | 0.036 | 0.122 | 0.078 | 0.106 | 0.198 | 0.206 | **0.250** |
The above table shows the comparisons on a new benchmark consisting of 500 layout-text pairs, generated using the layout generation pipeline from LayoutGPT [1]. The average number of bounding boxes in this benchmark is **4.48**. Here, GrounDiT outperforms RnB by 0.052, representing a 26.3% increase and PixArt-R&B by 0.044 representing a 21.4% increase. This is in line with the trend in the above datasets, **further highlighting the robustness and efficacy of our approach in handling a higher number of grounding conditions.**
**(5) Additional Qualitative Comparisons with R&B**
Please refer to Fig. S1 of the attached PDF in the general response, where we provide additional qualitative comparisons between our method and the baselines, R&B and PixArt-R&B, with detailed analysis of each result.
**(6) Advantage of Shared Sampling over Directly Transferring the Patches**
Please refer to our general response for a clarification on the effect of shared sampling:
We can think of each patch as an image on its own and pass it through DiT with the corresponding text condition. **However, this is not feasible in practice with existing DiT, as it has a certain set of preferred image resolutions.** This is due to the fact that the training data does not perfectly cover all resolutions. Please see Fig. S2 of the attached PDF in our general response for an illustration. But since DiT's Transformer architecture allows flexible token sequence length, we can perform shared sampling, which transfers the desired semantic features into the patch.
[1] LayoutGPT: Compositional Visual Planning and Generation with Large Language Models, Feng et al., NeurIPS 2023
[2] HRS-Bench: Holistic, Reliable and Scalable Benchmark for Text-to-Image Models, Bakr et al., ICCV 2023 | Rebuttal 1:
Rebuttal: Here we clarify our problem definition and method. For clarity, we have slightly modified the notations to improve the description of the entire framework and readability. We will continue to improve the presentation of our paper in the revision.
**[Notations]**
- Let $P$ be the input global text prompt.
* Grounding conditions are given as $G=[g_0,...,g_{N-1}]$. Each $g_i$ consists of a bounding box $b_i\in\mathbb{R}^4$ and a word $p_i$ indicating the desired object within $b_i$. **Note that $p_i$ must be one of the words in $P$**, that is, $p_i\in P$, since our method uses the cross-attention maps of DiT to compute the grounding loss $\mathcal{L}$ in Stage 1.
* Let $\mathbf{x}_t$ be a noisy image at timestep $t$. **Please note that DiT treats $\mathbf{x}_t$ as a sequence of tokens**, and applies positional embeddings at each timestep, as $\textbf{PE}(\mathbf{x}\_t)$.
**[Method Overview]**
Please note that our GrounDiT performs the reverse diffusion process **only once**, while conducting Stage 1 and 2 sequentially at every denoising step.
* **Stage 1** takes $\mathbf{x}_t$ and performs gradient descent to update $\mathbf{x}_t$ to $\hat{\mathbf{x}}_t$, which has not yet undergone a denoising step. $\hat{\mathbf{x}}_t$ is passed to Stage 2.
* **Stage 2** then denoises $\hat{\mathbf{x}}_t$ to obtain $\mathbf{x}\_{t-1}$, which moves to the next timestep of the reverse process.
**[Stage 1]**
We clarify Stage 1 as below:
* $\textbf{PE}(\mathbf{x}\_t)$ is passed into DiT, and in this process the cross-attention map $A_i$ for each word $p_i$ is extracted.
* With $A_i$, the grounding loss $\mathcal{L}(\mathbf{x}\_t,g_i,A_i)$ is computed. We use the loss functions in R&B [1], $\mathcal{L}=\mathcal{L}_r + \mathcal{L}_b$, where $\mathcal{L}_r$ (Eq. 11 in R&B) is a region-aware loss and $\mathcal{L}_b$ (Eq. 13 in R&B) is a region-aware loss.
* $\mathcal{L}\_{AGG}$ is computed by aggregating $\mathcal{L}$ for all conditions in $G$.
* Finally, we compute $\hat{\mathbf{x}}\_t \leftarrow \mathbf{x}\_t - \omega_t \nabla_{\mathbf{x}\_t} \mathcal{L}_{AGG}$.
While $\hat{\mathbf{x}}\_t$ is more likely to position each object of $p_i$ within the bounding box $b_i$, **it still struggles to accurately position every object in $b_i$.** Therefore, we introduce Stage 2, which focuses on a precise local update.
**[Stage 2]**
In Stage 2, we perform the denoising of $\hat{\mathbf{x}}\_t$ while injecting semantic features of object $p_i$ into the corresponding bounding box $b_i$.
For this, we define a single main branch as well as multiple object branches, each of which corresponds to a grounding condition $g_i\in G$, detailed below:
* **Main branch** is responsible for the denoising of main noisy image $\hat{\mathbf{x}}\_t$, **via the standard DiT denoising step** using the global prompt $P$: $\tilde{\mathbf{x}}\_{t-1}\leftarrow \textbf{SampleDiT}(\textbf{PE}(\hat{\mathbf{x}}\_t), t, P)$. Note that $\tilde{\mathbf{x}}\_{t-1}$ is an intermediate denoised image, which is later combined with the outputs from object branches to finally obtain $\mathbf{x}\_{t-1}$.
* Each **object branch** is responsible for its corresponding grounding condition $g_i$.
* Consider a local patch $\mathbf{b}\_{i,t}=\textbf{CROP}(\hat{\mathbf{x}}\_t, b_i)$. We want to ensure that the object $p_i$ appears within the bounding box $\mathbf{b}\_{i,t}$.
* One can consider treating $\mathbf{b}\_{i,t}$ as an image on its own and passing it through DiT with $p_i$ as the text condition. **However, please note that this is not feasible with a pretrained DiT model since it cannot properly handle arbitrary resolutions of images.** This is NOT due to the architecture but to the fact that the training images typically have a certain resolution (which we call a **preferred resolution**). Please refer to Fig. S2 of the attached PDF.
* However, please note that **DiT’s Transformer is still flexible to the length of its token sequence and thus feasible to combine two sequences of tokens**, *provided* that the positional embedding for each token sequence is computed with one of the preferred resolutions. Please see Fig 2 (A) in our paper, where both image has a preferred resolution.
* This leads us to our main idea: **shared sampling**. The noisy image $\mathbf{z}\_{i,t}$ at each branch has a preferred resolution (with a similar ratio to the bounding box $b_i$), while the patch $\mathbf{b}\_{i,t}$ cropped from the main branch does not. While we cannot directly denoise the patch $\mathbf{b}\_{i,t}$, we instead combine the two set of tokens from $\mathbf{z}\_{i,t}$ and $\mathbf{b}\_{i,t}$ and denoise them together while using $p_i$ as the text prompt. Our discovery is that this results in properly denoising the patch $\mathbf{b}\_{i,t}$ thanks to the interaction with the other sequence $\mathbf{z}\_{i,t}$, which has a preferred resolution, in the self-attention modules. Please see Fig 2 (B) in our paper. **Please note that $\mathbf{z}\_{i,t}$ in each object branch is NOT directly pasted onto the $\hat{\mathbf{x}}\_t$ in the main branch** but only used as an auxiliary to help properly denoise each patch in $\hat{\mathbf{x}}\_t$ with the corresponding object label $p_i$.
* Specifically, we compute: $\\{ \mathbf{z}\_{i,t-1}, \mathbf{b}\_{i,t-1} \\}\leftarrow\textbf{SampleDiT}(\textbf{CONCAT}(\textbf{PE}(\mathbf{z}\_{i,t}),\textbf{PE}(\mathbf{b}\_{i,t})), t, p_i)$. Then, the denoised patch $\mathbf{b}\_{i,t-1}$ from $i$-th object branch is pasted back into its designated region in the main branch output $\tilde{\mathbf{x}}\_{t-1}$. The result of pasting back $\mathbf{b}\_{i,t-1}$ from all object branches into $\tilde{\mathbf{x}}\_{t-1}$ finally becomes $\mathbf{x}\_{t-1}$. The $\mathbf{x}\_{t-1}$ proceeds to the next timestep of the reverse process.
Pdf: /pdf/e1bffb7a7e820e11d8d4e3f93455d0cbb58c15e7.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A Textbook Remedy for Domain Shifts: Knowledge Priors for Medical Image Analysis | Accept (spotlight) | Summary: This paper addresses the issue of deep networks' sensitivity to domain shifts in medical image analysis, particularly for chest X-rays and skin lesion images. The authors propose Knowledge-enhanced Bottlenecks (KnoBo), a concept bottleneck model that incorporates explicit medical knowledge from resources like textbooks and PubMed to improve generalization. KnoBo leverages retrieval-augmented language models to create a clinically relevant concept space and demonstrates a significant performance improvement—32.4% on average—over fine-tuned models across 20 datasets, highlighting PubMed as a particularly effective knowledge resource for mitigating domain shifts.
Strengths: Innovative Integration of Medical Knowledge:
The paper introduces Knowledge-enhanced Bottlenecks (KnoBo), a novel approach that integrates explicit medical knowledge into deep networks. By using retrieval-augmented language models to incorporate clinically relevant factors from medical textbooks and PubMed, the model gains a robust prior that significantly enhances its generalization capabilities.
Comprehensive Evaluation Across Diverse Domain Shifts:
The authors conduct a thorough evaluation of KnoBo across 20 datasets and two imaging modalities (chest X-rays and skin lesion images). This extensive testing demonstrates the model's ability to handle various domain shifts, such as differences in data from different hospitals and demographic confounders, showcasing its practical applicability and robustness.
Significant Performance Improvement:
KnoBo shows substantial performance gains, outperforming fine-tuned models by an average of 32.4% on confounded datasets. This impressive improvement highlights the effectiveness of incorporating medical knowledge for reducing sensitivity to domain shifts, with evaluations indicating that PubMed is a particularly valuable resource for enhancing model performance and information diversity.
Weaknesses: Lack of Application to 3D Medical Imaging:
Issue: The paper primarily focuses on 2D medical imaging modalities, such as chest X-rays and skin lesion images, without addressing the broader and more widely used domain of 3D medical imaging.
Impact: This limitation raises concerns about the generalizability and applicability of the proposed KnoBo model to 3D imaging scenarios, such as CT scans or MRI, which are critical in many clinical contexts. Including evaluations on 3D medical images would significantly enhance the relevance and impact of the research.
Insufficient Comparative Analysis with 3D Domain Adaptation Methods:
Issue: There is a lack of comparative analysis with existing domain adaptation and generalization methods specifically applied to 3D medical imaging.
Impact: The absence of such comparisons makes it difficult to gauge the relative effectiveness of KnoBo in a comprehensive manner, particularly given the significant advancements in 3D domain adaptation techniques. Addressing this gap would provide a more complete evaluation of KnoBo's capabilities and limitations.
Technical Quality: 3
Clarity: 3
Questions for Authors: Application to 3D Medical Imaging:
Question: Have you considered extending the KnoBo framework to 3D medical imaging modalities such as CT scans or MRIs? If so, what challenges do you anticipate, and how might they be addressed?
Suggestion: Expanding your research to include 3D medical images would significantly increase the applicability and impact of your work. Discussing potential adaptations and preliminary results for 3D imaging could provide valuable insights and demonstrate the versatility of KnoBo.
Comparative Analysis with State-of-the-Art Methods:
Question: How does KnoBo compare with recent state-of-the-art domain adaptation and generalization methods, especially those tailored for 3D medical imaging?
Suggestion: Including a detailed comparative analysis with advanced domain adaptation techniques, particularly those used in 3D imaging, would strengthen your paper. This would provide a clearer benchmark and highlight the unique advantages and limitations of KnoBo.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback on our work! Here, we address your concern about applications in 3D images.
We agree with you that 3D modalities are important. However, we scoped our study to 2D modalities as these are cheaper in practice. Part of the motivation is to transfer to less-resourced hospitals where X-rays are generally available but more expensive imaging modalities are not (i.e., in developing countries). That being said, we are very concerned with establishing the broad applicability of our work, and so we evaluated 20 datasets within the modalities we considered.
The KnoBo framework is flexible enough to adapt to 3D modalities because the key contribution is the factorization. Many components of KnoBo can be readily reused, such as the concept generation from medical documents. We only need an appropriate 3D encoder (e.g., 3D CNN) and a pretraining dataset. Fortunately, large-scale public 3D medical datasets are available, e.g., DeepLesion [1] has 32,000 annotated CT scans, CT-RATE [2] has 50,188 CT volumes with associated clinical reports, and the recently released CLIP-CT [2] could serve as a backbone. We agree that 3D would enrich the work, but we feel we have provided significant evidence already that KnoBo is effective and that there is a clear path on how one could apply it to 3D domains in the future.
[1] Yan et al. DeepLesion: Automated Deep Mining, Categorization and Detection of Significant Radiology Image Findings using Large-Scale Clinical Lesion Annotations. 2017.
[2] Hamamci et al. A foundation model utilizing chest CT volumes and radiology reports for supervised-level zero-shot detection of abnormalities. 2024.
---
Rebuttal 2:
Title: Has the rebuttal addressed your concerns?
Comment: Hi,
Do you feel we have provided enough detail about possible extensions of our work to 3D? We will add some of this discussion to the limitations and future work section given extra space. Would you consider updating your rating or are there other weaknesses that we can discuss now during the discussion period?
Thanks!
---
Rebuttal Comment 2.1:
Comment: Thank you for your response. I would keep my score. | Summary: The authors noticed the domain-shift issue of current medical dataset, and after taking inspiration from medical training, they propose giving deep networks a prior grounded in explicit medical knowledge communicated in natural language. The proposed network can incorporate medical knowledge priors to help model make decisions. Basically, the authors assumed that “medical concepts” are robust enough to against the domain shift. Valid assumption, interesting work.
Strengths: The authors noticed the domain-shift issue of current medical dataset, and after taking inspiration from medical training, they propose giving deep networks a prior grounded in explicit medical knowledge communicated in natural language. The proposed network can incorporate medical knowledge priors to help model make decisions. Basically, the authors assumed that “medical concepts” are robust enough to against the domain shift. Valid assumption, interesting work.
Weaknesses: It is very HARD to read and interpret. Please raise examples for better reading experience. Maybe I have some questions after the clearance. Other points could be seeen under the Questions section.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. “In 5 such constructed confounds per modality, covering scenarios of race, sex, age, scan position, and hospital, we find models unable to generalize well, dropping over 63% on average over an in-distribution (ID) evaluation.” What are the tasks? Please specify.
2. “As a baseline, we extract a subset of d pixels directly from the image as the feature without any model-based priors, represented as xp ∈ Rd. We compare the classification performance using x versus xp to probe the efficacy of the vision backbone’s priors.” I didn’t get the point. Why did you want to compare the effectiveness of x and x_p?
3. It is very hard to read the 4.2 section. I cannot fully understand the algorithm 1. Please raise examples for better reading experience.
4. Table 1, why the KnoBo always has not the best, or sometimes the worst performance in in-domain dataset?
5. Is the Structure Prior learnt? Or the questions like “Is there ground-glass opacity?” were pre-defined? How ist he connection between “Is there ground-glass opacity?” and the top-right red box in Figure 1?
6. Is the training end-to-end? Can you please provide an example for 1) how to pretrain using medical books 2) and given an X-ray image, how the workflow is to get the last result? I hold positive thought towards the study, but it is really hard to understand the workflow.
7. Is the parameter prior a part of the bottleneck predictor? In figure 1 it seems like parallel, but in the main text it seems like existing sequentially. Hard to understand.
8. How to determine the best number of concepts?
9. How did you determine the preferred correlation between the label y and concept c?
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the feedback! We will try to include more examples in future versions. We hope the answers below clarify:
**Q1. What are the tasks?**
We studied the medical image classification tasks in a confounded setting (the medical class names are found in Tables 5 and 6). As explained in the second paragraph of the introduction (Lines 30-34), the samples are confounded with different factors during training/validation (ID) and testing (OOD) time. For example, in an X-ray classification task to classify COVID and normal patients, if we confound the data with sex, the dataset is constructed as follows:
Train/Validation (ID): (COVID, male) (Normal, female)
Test (OOD): (COVID, female) (Normal, male)
A robust model should predict based on COVID-related features rather than spurious correlations (sex in this example).
**Q2. Why compare the effectiveness of $x$ and $x_p$?**
This experiment aims to validate our motivation that deep neural networks lack priors for medical domains. $x$ is the feature extracted by a deep network at initialization (i.e., a ViT). $x_p$ is a feature vector constructed by extracting pixel values in the image. If a network has good priors for a domain, we expect the network output at initialization ($x$) to be a more useful representation of the image content than the pixels themselves ($x_p$). For natural images, this is known from prior work on Deep Image Priors, but, as we show in Figure 2, this is not true for medical images. Without good priors, models can easily adopt incorrect hypotheses and rely on spurious features. This leads to catastrophic failures when the spurious features are unavailable in out-of-domain tests and motivates the need to incorporate priors from elsewhere.
**Q3. explain algorithm 1**
Algorithm 1 aims to collect a set of concepts from medical documents. Given the class name (e.g., COVID), we want to extract useful knowledge from medical documents as concepts (e.g., ground glass opacity, a highly indicative feature of COVID). The whole process is a retrieval-augmented-generation task. We first use class names as queries (e.g., COVID) to retrieve relevant documents from a corpus (e.g., PubMed). The retrieved document may contain useful information, e.g., in the upper left of Figure 3, the document says, “the most frequent radiological patterns found were ground-glass opacities.” We feed these documents as contexts for an LLM, which extracts the information as a concept (ground glass opacity). We repeat this process using generated concepts as queries until a predefined number of concepts is reached.
**Q4. Why doesn’t KnoBo always have the best ID performance?**
This is because, in the confounded settings (explained in Q1), models can achieve high ID performance by taking shortcuts using spurious correlations, e.g., learning to classify males and females instead of focusing on diseases. Models that achieve high ID performance in this way will have catastrophic failures on the OOD test (see OOD numbers of ViT-L/14 and DenseNet in Table 1). In this case, a robust model must have a high average performance across ID and OOD. There is a natural tradeoff between ID and OOD, but our models strike the best balance.
**Q5. Is the Structure Prior learnt? The connection between “Is there ground-glass opacity?” and the top-right red box.**
As explained in Q3, the structure prior is not learned from training instances but instead constructed from medical documents.
The top-right red box illustrates the concept grounding module (Sec 4.3, Lines 180-188). Given a concept (e.g., is there ground-glass opacity?) and an X-ray image, we aim to estimate the probability of this concept existing on this X-ray.
To achieve this goal, We train a binary classifier for each concept. We obtain the positive and negative examples for each concept by leveraging a pretraining dataset with (X-ray, clinical report) pairs. We can estimate the presence of a concept in each example by prompting an LLM to generate a response indicating whether the clinical report implies the concept. For example, if the clinical report mentioned related terms about “ground-glass opacity,” it is a positive example of this concept. Otherwise, it will be a negative example. With positive and negative examples for a concept, we can train a binary classifier to predict the concept given an X-ray image.
**Q6. Is the training end-to-end?**
No. To be computationally feasible, the training has stages: (1) constructing the prior from documents (explained in Q3), (2) concept grounding (explained in Q5), and (3) linear layer for final label prediction. Given an X-ray image, the workflow is: (1) execute the concept grounding functions to get the probabilities on all concepts in the structure prior, (2) use the concept probabilities as the input to a linear layer to get the final label prediction, e.g., predict COVID or Normal. Please also refer to Lines 142-144 for Preliminary on Concept Bottleneck Model and Sec 4.3 for details about the workflow.
**Q7. Is the parameter prior a part of the bottleneck predictor?**
No. As explained in Sec 4.4, the parameter prior is used to regularize the learning of the linear layer. We will compute the L1 distance between the linear layer and the parameter prior as the loss to align the learned parameters with priors.
**Q8. How to determine the best number of concepts?**
We did not tune this hyperparameter much and used 150 concepts. In Figure 4, we examine the impact of the number of concepts on ID and OOD performance. More concepts lead to better ID but worse OOD so that an appropriate trade-off can be selected for a problem.
**Q9. How did you determine the preferred correlation between the label y and concept c?**
A language model (explained on Line 198) determines the preferred sign of the correlation between y and c.
Please do not hesitate to ask more questions, and we will be happy to answer them. We look forward to your response!
---
Rebuttal 2:
Title: Has the rebuttal clarified sufficiently?
Comment: Hi,
Has our rebuttal clarified sufficiently that you feel you can judge our work? Are there other aspects that we can clarify for you to make an assessment now that we have provided answers to your question?
Thanks!
---
Rebuttal Comment 2.1:
Comment: Thank you for resolving all my questions. Now I have no more questions. I have adjusted my ratings. Nice work! | Summary: The presented paper addresses the challenge of domain shifts in medical image classification, where conventional neural networks often lack effective priors for medical datasets. The authors introduce KnoBo, a novel class of concept bottleneck networks that integrate medical knowledge priors to enhance neural network performance in medical image classification tasks. KnoBo has three primary components: 1) Structure Prior: leverages medical documents to construct a knowledge bottleneck; 2) Bottleneck Predictor: maps images onto concepts; 3) Parameter Prior: imposes constraints on learning the linear layer's parameters, ensuring that the network's predictions remain consistent with the established medical knowledge. The proposed method of KnoBo is evaluated using two medical datasets, ISIC and X-ray, under both confounded and unconfounded settings.
Strengths: The paper is well-organized and easy to follow. The authors propose a novel approach to incorporating medical knowledge priors into existing neural networks by optimizing three distinct terms. The theory behind each term is solid and clearly explained, making the methodology easy to understand. One notable aspect is the parameter prior (Sec. 4.4), which adds interpretability by aligning the estimations with medical concepts. They also use human evaluation on the learned concepts.
The authors conduct a comprehensive set of experiments to evaluate their proposed method, covering several scenarios, including confounded and unconfounded data. This broad range of experiments supports the validity of their findings.
The ablation studies are thorough, encompassing five knowledge sources, appropriate baselines, and various bottleneck sizes. These ablations provide valuable insights into the proposed method's performance and sensitivity.
Weaknesses: The paper's primary weakness is its reliance on content generated by LLMs to create new medical concepts. Although GPT-4 is one of the most advanced models for generating concepts, it can still produce hallucinations, especially when the generated concepts are loosely aligned with common terms such as hair, skin color/tone, gel bubble, and rules (in the context of skin lesions).
Technical Quality: 3
Clarity: 4
Questions for Authors: - Whats is the main reason to define $ \Delta = | ID - OOD |$ as robustness measure (row 257)? In my opinion, this metric is more related to some "fair predictions" concept, since it measures how ID and OOD performances differ.
- Do the authors know why the diversity values for skin lesion datasets (Table 3, last column) are much lower than those for X-rays?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Not a limitation, but I recommend incorporating a filtering scheme into the concept generation process to enhance the paper. While the authors have conducted human evaluations, relying solely on such assessments may not be feasible due to the limited availability of human resources. Refining the concept generation could be achieved by improving the prompts or employing a medical foundation model to assess the quality of the generated concepts about the target image.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback on our work! Here is our reply to your questions:
**Q1. Hallucinations of GPT-4**
This is a valid concern, but the way we use GPT-4 highly encourages it to directly copy information from documents instead of inventing it. Our concept generation is conditioned on medical documents, which differs from previous work [1] that solely relies on LLM. We employ LLM as a text extraction tool, which basically copy-paste or paraphrases. In addition, each concept is attributed to a document that medical professionals can fact-check. Table 12 of the Appendix and supplementary materials (HTML files in the `example_concepts/` folder) show example concepts with corresponding reference documents.
We examined concepts from the X-ray bottleneck. 80% of the concepts were word-for-word copies of a feature in a sentence that the authors of a document said they examined. 14% included some paraphrases but were still grounded in the document and never completely invented (sometimes a combination of multiple factors in different places in the document). 6% were of confusing origin and potentially hallucinated but still mentioned relevant factors. This aligns with the analysis of the bottlenecks we did with medical students in section C.4 of the appendix.
**Q2. Robustness measure ($\Delta$)**
Our analysis is focused on multiple metrics at once. The delta measure $\Delta$ needs to be combined with the other metrics (ID, OOD) when measuring models’ robustness (it is trivially minimized by creating classifiers with chance performance). Our perspective is that a robust model should have high performance across ID and OOD while maintaining a small drop to demonstrate its robustness to domain shifts.
**Q3. Diversity values for skin lesion datasets are much lower than those for X-rays**
This is because X-ray is a more information-dense modality as it shows multiple organs (e.g., lung, heart, stomach). Skin lesion diagnosis requires fewer medical factors in practice. Moreover, X-ray is the most accessible medical imaging modality, resulting in more available medical documents.
**Q4. Filtering scheme of concept generation**
Thank you for the suggestion! To clarify a bit, we don’t have a phase in our method that requires medical professionals, although we used them later to verify the validity of our approach. We designed three filtering criteria to ensure the concepts are diverse and visually identifiable. (Appendix B.3, Line 712-717) As you suggest, more filtering of the concepts using auxiliary signals would likely substantially improve the results and is something we are exploring.
[1] Yang et al. Language in a bottle: Language model guided concept bottlenecks for interpretable image classification. CVPR 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
The authors have adequately addressed my primary concerns, and I have no further questions. I will maintain my previous rating.
---
Rebuttal 2:
Title: Regarding Hallucinations
Comment: I just want to add on here as I believe RAG is quite an effective and reasonable measure to prevent hallucinations by far. Plus, the number/stats they reported in the rebuttal seem pretty convincing. | Summary: This paper proposed Knowledge-enhanced Bottlenecks (KnoBo) to leverage the concept bottleneck model (CBM) to improve model robustness to various domain shifts and confounders. KnoBo explores using retrieval augmented generation to generate concept space with a large number of concepts using a large language model (i.e., GPT-4). KnoBo brings significant overall improvement when large-scale image-text pairs are available for concept grounding. A comprehensive evaluation across various scenarios and various baselines confirms the effectiveness of KnoBo in reducing the gap between in-distribution and out-of-distribution performance.
Strengths: 1. This paper explores exploiting LLM and retrieval augmentation generation, which enables building large concept space from the large-scale knowledge base.
2. Overall, the presentation is clear. Technical details are clearly presented in the supplemental material. However, some results are a bit confusing and lack discussion of limitation and failure cases.
3. Evaluation is comprehensive.
Weaknesses: 1. The concept grounding seems pretty data-hungry, requiring large-scale multi-modal datasets for pretraining (e.g., MIMIC-CXR with over 300k image-text pairs). For smaller datasets (ISIC with about 60k images with generated captions from LLM), the proposed KnoBo only brings marginal overall improvement. This requirement significantly limits the broad application of the proposed framework in the data-scarcity domain, such as cancer/tumor classification.
2. Lack of a detailed discussion of limitations (one one-line limitation in Section 7) and presentation of failure cases. No failure cases (wrong classification / wrong concept) were included in either the main text or supplemental material.
3. Some results are confusing to the reviewer (See No. 2 in the Questions section).
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In Table 1, the best performance of ISIC-age should be ViT-L/14 and LSL, not the proposed KnoBo (which is bolded).
2. The KnoBo with 150 concepts has an average OOD Acc of 58.8 across Chest X-ray Datasets, as stated in Tables 2 and 4. Why does Figure 4 indicate that KnoBo with 135 concepts has an accuracy below 50 for OOD X-ray datasets? As the performance of OOD datasets decreases with the increased number of concepts, these two results seem contradicted.
3. Typo in the title of Algorithm 1 in Section 4.2. Should be 'retrieval'
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: As explained in the Weaknesses and Questions sections.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback. Here we address your comments:
**Q1. The concept grounding seems pretty data-hungry.**
This is a valid concern, but while we sample from a large dataset like MIMIC, in reality, we only use 22k total examples. Training the concept grounding component is just learning a linear classifier, so we sampled **a small subset of the data**.
On average, each X-ray grounding function uses 1,523 examples, and each skin lesion grounding function uses 1,342 examples. This can be reduced significantly with little loss in performance.
The table below shows the performance of KnoBo on X-ray datasets using varied sizes of samples for training each grounding function. It highlights that KnoBo achieves much of its performance with a small number of examples, suggesting that it is not data-hungry.
| n. of examples | Confounded | Unconfounded | Overall |
|-------------------------|------------|--------------|---------|
| 250 | 73.7 | 72.1 | 72.9 |
| 500 | 73.1 | 73.0 | 73.1 |
| 1000 | 73.1 | 72.8 | 73.0 |
| 1500 | 73.5 | 73.2 | 73.4 |
| 2000 | 74.3 | 73.1 | 73.7 |
**Q2. For skin lesion datasets KnoBo only brings marginal overall improvement.**
Empirical challenges within the skin lesions datasets are not because of a lack of data but because the datasets have very strong non-generalizable cues (skin colors, hairs, etc). For this reason, it is easy to fine-tune models to high in-domain performance and learn non-transferable predictors.
Our overall metric averages both in-domain performance and out-of-distribution performance. KnoBo is much better out of distribution (see OOD and Avg in Table 2, shows differences of **+22.9** and **+6.7**, respectively), but when averaging in domain performance on datasets with no explicitly identified confounds (unconfd), helps more moderately (+.3). We believe that OOD measures more accurately capture if models are able to learn the correct hypothesis and therefore the more important measure of if a model is trustworthy in real applications.
**Q3. Confusing results in Tables 2 and 4.**
In Figure 4, we deactivated the prior loss for KnoBo, which resulted in lower OOD performance. In this ablation, we focus on comparing different inputs (black-box features vs. concept scores) and their sizes while removing prior loss forms a more comparable setting. We provide the **updated Figure 4 in the PDF of the global response**, which includes the curves of KnoBo with prior loss. We will clarify this experimental setup in the final draft of our paper.
**Q4. Failure cases and limitation section.**
Failure cases are difficult to examine and generally not very accessible to non-medical professionals (in the main text, we tried to avoid adding examples that might make some readers uncomfortable if they are squeamish about medical images). Sections C.4 (Table 11) and Section D (Figures 8 and 9) of the appendix contain some of this, where we recruited medical students to evaluate limitations.
The medical students annotated 300 concepts and often found the information in our bottlenecks highly relevant, but some were not groundable (**see Table A in the PDF for examples**). This did not vary much by information source, but skin lesion information was judged to be easier to ground than X-ray information. In a possible camera-ready version, given the extra page, we can try to integrate some more of this content in the main body of text, including example failures, and expand the limitations section to more explicitly discuss the data efficiency of our methods.
**Q5.Typos.**
We have carefully proofread the manuscript again to correct the issues.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: I thank the authors for their time and effort in preparing the rebuttal. The rebuttal resolved some of my questions, and I appreciate the explanation and revision for Figure 4 and the supplement of failure concepts in the attachment. I have some remaining questions:
1. About the examples per grounding functions, does this part correspond to lines **177-179**? Could the author point out or supply how many grounding functions are used for each dataset, how they are selected, and how is the # of grounding functions confirmed? I tried but failed to find these details in the manuscript. If you use 22k examples in x-ray, you should have (22k/2k=11) grounding functions? Is that different from the skin dataset? And why does each x-ray grounding function use 1,523 examples when you actually supply 2,000 examples per grounding function?
2. For the failure case, would it be possible to supply some failure examples of KnoBo in Fig. 7 (i.e., examples without the green checkmarks)?
Thank you very much, and I am looking forward to your reply. Great work!
---
Reply to Comment 1.1.1:
Comment: Thanks for the reply! Here, we reorganize your questions and answer them as follows:
**Q1. About the examples per grounding functions, does this part correspond to lines 177-179?**
Yes. Sec 4.3 explains the method.
**Q2. how many grounding functions are used for each dataset, how they are selected, and how is the # of grounding functions confirmed?**
150 grounding functions are used for each modality. As explained in lines 717-720, we initially use our retrieve-augmented generation algorithm to generate 200 concepts. However, some concepts may not have enough examples in the corpus to support training the grounding functions. Therefore, we only select the top 150 grounding functions ranked by their concept prediction accuracy on the validation sets.
**Q3. If you use 22k examples in x-ray, you should have (22k/2k=11) grounding functions? Is that different from the skin dataset?**
Each example can be reused to train multiple grounding functions, as each clinical report can contain information on different aspects. For instance, example A can be used as a positive example for concepts 1, 2, and 3 and serve as a negative example for concepts 7, 8, 9, etc. Therefore, considering the overlaps, we use 22k examples in total. The skin lesion dataset uses a similar number of examples.
**Q4. And why does each x-ray grounding function use 1,523 examples when you actually supply 2,000 examples per grounding function?**
We set 2000 as the upper bound for examples, but when training each binary grounding function, we want to balance positive and negative examples through subsampling. Therefore, the actual number of training samples is 2$\times$min(positive examples, negative examples), which is smaller than 2000.
**Q5. For the failure case, would it be possible to supply some failure examples of KnoBo in Fig. 7 (i.e., examples without the green checkmarks)?**
This is definitely possible, and we will include them in the next version.
Thanks again for raising those questions! We will clarify those details in the next version of the paper.
---
Reply to Comment 1.1.2:
Title: Grounding Function Data Requirements Clarification
Comment: Apologies, we wanted to clarify the question of data efficiency in the grounding function further (and there was a small typo in the images required before). The mechanism for constructing them is:
1. given a concept name (i.e. ground class opacities for chest x-rays), and a pretraining corpus of (image,report) pairs (i.e. MIMIC), reports are sorted by S-BERT embedding similarity to the concept. The list is truncated to a fixed number of the most similar ones (i.e 2k in the manuscript, but as few as 100 below).
2. An LLM is used to predict if the report positively or negatively affirms that the concept is present in the image, and the output is used as a label.
3. A linear classifier is trained given a balanced subsample of positive and negative instances for the concept
The data requirements are that of training a linear classifier on a fixed feature representation and so practically is very small. Furthermore, between the set of all classifiers that need to be trained, there may be correlations between which reports likely mention information relevant to their construction.
In the paper, we train 150 concept classifiers, so this means that there would be a maximum of 2000 * 150 = 300k image-report pairs required for training. Practically, the actual number of samples is much lower because of correlations, 50k. Below is an updated table containing the number of samples required to train concept classifiers. With as few as 100 reports /concept, requiring at most 15k samples (although practically 9k), performance decreases very moderately.
We acknowledge that this level of detail was not clear from the original manuscript and is an important part of how the method works and so will expand it in any future revisions. We will include a small "algorithm" describing the process.
|reports / concept | maximum samples | actual samples|Confounded|Unconfounded|Overall
| -------- | ------- | -------- | ------- | -------- | ------- |
|100 | 15,000 | 9,060 | 72.5 | 71.7 | 72.1 |
|250 | 37,500 | 10,603 | 73.7| 72.1 |72.9| | |
|500| 75,000 | 19,288 | 73.1| 73.0| 73.1| | |
|1000|150,000 | 33,113 | 73.1|72.8|73.0| | |
|1500|225,000 | 43,229 | 73.5|73.2|73.4| | |
|2000|300,000 | 50,071 | 74.3|73.1|73.7| | |
---
Rebuttal 2:
Title: Has the rebuttal addressed your concerns?
Comment: Hi,
Has our rebuttal addressed the weakness you felt the paper had? In particular, we strongly feel our method is not data-hungry and we have a fairly extensive discussion of problems (currently in the appendix) that could be moved up given extra space. Would you consider updating your rating or are there other weaknesses you feel we can address during this discussion period? | Rebuttal 1:
Rebuttal: We deeply appreciate the time and effort all reviewers have contributed. We feel very encouraged to see that all the reviewers have overall positive attitudes towards our work. Reviewers found our work interesting (f9Nf) and praised the **novelty of our method** (T8n7, tMbd) with **comprehensive evaluations** (z571, T8n7, tMbd) and **clear presentation** (z571, T8n7, tMbd).
We respond to each reviewer's comments individually and hope your concerns can be addressed. Again, we appreciate your time and expertise in the evaluation process.
Pdf: /pdf/918850c1ab50366df18d2bc287251abdf38a596f.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
GIC: Gaussian-Informed Continuum for Physical Property Identification and Simulation | Accept (oral) | Summary: The paper proposes an improvement of PAC-NeRF for the task of estimating material properties from multiview video using 3D Gaussian Splatting (3DGS). Instead of estimating geometry solely based on the first frame like PAC-NeRF, the proposed method uses 4D Gaussian Splatting (4DGS) with reduced order modeling to construct 4D geometry, enabling the use of 3D supervision. A coarse-to-fine internal filling strategy is introduced to ensure that the simulation operates on a solid volume. 2D mask loss is used for additional supervision.
Strengths: The reduced order modeling of 4DGS is a good fit for the reconstruction task with a limited number of fixed views. Directly applying full-order 4DGS seems hard to optimize due to the high number of DOFs.
The experiment results are promising.
A real-world application is provided.
Weaknesses: Some symbols are not defined clearly, making it hard to follow at times. For example, $Discretize$ operator in Line 214; $\tilde{P}$ and $F$ are not defined in the text. I need to guess from Alg 1.
Technical Quality: 3
Clarity: 2
Questions for Authors: If the 4D geometry is accurate enough, is it adequate to only use 3D supervision? Ablation studies are needed to validate the necessity of 2D mask supervision.
Coarse-to-fine density field creation: At the beginning, the reconstruction contour is much larger than the object. How does it shrink to the actual boundary? The $TrilinearInterpolation$ operator will not shrink the contour, and there is no operator to assign zeros.
In the simulation, how do the Gaussian kernel scales evolve? The paper seems to assume isotropic kernels, but physical deformation can transform the sphere into an ellipse. How is this addressed?
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Limitations are well discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed reading of our paper and constructive suggestions! We hope our responses adequately address the following questions about our work. Please let us know if there’s anything we can clarify further.
---
> 1. Some symbols are not defined clearly, making it hard to follow at times. For example, $Discretize$ operator in Line 214; $\tilde{P}$ and $F$ are not defined in the text. I need to guess from Alg 1.
**Reply:** Sorry for the lack of clarity. $Discretize$ denotes the operation mapping particle positions to voxel indices on the density field. \tilde{P}$ and $F$ are the sampled particles and the density field. We add a table to the attached PDF in the "Author Rebuttal" session to clarify the operators and symbols in detail. We will also add the table to the Appendix in the revised version.
---
> 2. If the 4D geometry is accurate enough, is it adequate to only use 3D supervision? Ablation studies are needed to validate the necessity of 2D mask supervision.
**Reply:** Thank you for this constructive advice.
(1) To answer the first question, we perform system identification on the torus object, which is the only instance that provides a ground truth mesh model in the PAC-NeRF dataset. Specifically, we use the ground truth point clouds as the continuum for simulation and utilize the mesh model to extract surface particles from the point cloud as 3D supervision. The experiment, performed with this configuration and other settings unchanged, achieves $E = 1,002,907.25$ and $\mu = 0.2991$, which are close to the ground truth ($E = 1,000,000$ and $\mu = 0.3$). Therefore, we believe that **it should be sufficient to use 3D object surfaces as supervision once the recovered geometry is accurate enough**.
(2) To evaluate the necessity of 2D mask supervision, we perform system identification on 45 cross-shaped object instances in the PAC-NeRF dataset by our method but with only object surface supervision. The results are reported in the table below. It is obvious to see that combining both 2D and 3D shapes as supervision can achieve more accurate performance compared to using 3D shapes only. Therefore, we believe that **utilizing 2D mask supervision to some extent makes up for the errors introduced by the 3D object surfaces extracted from dynamic 3D Gaussians**. We will add the analysis to the revised version.
| **Type** | **Parameters** | **w/o masks** | **w/ masks** |
|-------------------|------------------|--------------------------|--------------------------|
| **Newtonian** | $\log_{10}(\mu)$ | $2.19 \pm 2.90$ | $\mathbf{1.53 \pm 1.31}$ |
| | $\log_{10}(\kappa)$ | $24.2 \pm 22.2$ | $\mathbf{14.8 \pm 19.2}$ |
| | $v$ | $0.20 \pm 0.08$ | $\mathbf{0.20 \pm 0.07}$ |
| **Non-Newtonian** | $\log_{10}(\mu)$ | $19.4 \pm 27.7$ | $\mathbf{13.5 \pm 18.2}$ |
| | $\log_{10}(\kappa)$ | $24.0 \pm 24.8$ | $\mathbf{12.9 \pm 16.8}$ |
| | $\log_{10}(\tau_Y)$ | $\mathbf{4.58 \pm 9.11}$ | $4.80 \pm 3.92$ |
| | $\log_{10}(\eta)$ | $49.1 \pm 40.5$ | $\mathbf{40.7 \pm 24.6}$ |
| | $v$ | $1.33 \pm 0.54$ | $\mathbf{0.19 \pm 0.09}$ |
| **Elasticity** | $\log_{10}(E)$ | $2.85 \pm 1.94$ | $\mathbf{2.43 \pm 3.29}$ |
| | $\nu$ | $3.97 \pm 2.64$ | $\mathbf{2.52 \pm 2.03}$ |
| | $v$ | $\mathbf{0.22 \pm 0.10}$ | $0.82 \pm 0.32$ |
| **Plasticine** | $\log_{10}(E)$ | $\mathbf{25.6 \pm 27.4}$ | $25.6 \pm 29.4$ |
| | $\log_{10}(\tau_Y)$ | $9.04 \pm 2.37$ | $\mathbf{1.67 \pm 1.21}$ |
| | $v$ | $1.16 \pm 0.00$ | $\mathbf{0.22 \pm 0.10}$ |
| **Sand** | $\theta_{fric}$ | $\mathbf{2.55 \pm 2.03}$ | $4.18 \pm 0.52$ |
| | $v$ | $0.31 \pm 0.18$ | $\mathbf{0.17 \pm 0.05}$ |
---
> 3. Coarse-to-fine density field creation: At the beginning, the reconstruction contour is much larger than the object. How does it shrink to the actual boundary? The $TrilinearInterpolation$ operator will not shrink the contour, and there is no operator to assign zeros.
**Reply:** Sorry for the lack of clarity. Although all the operations cannot assign voxels outside an object to zeros, the trilinear interpolation and mean filter operations can reduce the density of the voxels outside the object boundary. By iteratively performing mean filtering and particle voxel reassigning, the densities outside the boundary will be sufficiently small while the object boundary and internal region keep high-density values, and we thus can extract the object particles by thresholding the density field.
---
> 4. In the simulation, how do the Gaussian kernel scales evolve? The paper seems to assume isotropic kernels, but physical deformation can transform the sphere into an ellipse. How is this addressed?
**Reply:** Sorry for the lack of clarity. In this work, we use the grid size of the density field as scale attributes of Gaussian kernels and fix them during the simulation. We admit that a physics-informed scale transformation such as PhysGaussian [1] allows a more realistic rendering. In future work, we will integrate this function into our method to enable kernel transformation during simulation.
---
[1] Xie, Tianyi, et al. "Physgaussian: Physics-integrated 3d gaussians for generative dynamics." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. I do not have further questions.
---
Reply to Comment 1.1.1:
Comment: Thanks for your feedback and we are pleased that our response has successfully addressed your concerns. | Summary: This paper introduces a novel hybrid method that leverages 3D Gaussian representation and continuum to estimate physical properties of deformable objects. From multi-view video, the Gaussian- informed continuum can be extracted and then combined with material point method (MPM) simulation to train the whole pipeline by using both 3D shape and 2D as supervision. The experiments show that the proposed method outperforms previous approaches based on continuum dynamics or 3D Gaussian representation for dynamic reconstruction, system identification or other real-world applications.
Strengths: 1. The paper introduces an efficient motion-factorized dynamic 3D Gaussian network to reconstruct the object states as a linear combination of basis motions, in which the estimated motions and coefficients share the same backbone.
2. The generated Gaussian-informed continuum consists of the density and grid size scale given by the proposed coarse-to-fine filling strategy, which is further used as supervision in training together with the MPM simulation. This tackles the issue of using quantised Gaussian particles for simulation of continuous structures.
3. The experiments show that the method can achieve SoTA performance compared to pre- vious works among various deformable objects, especially when large deformation occurs. The method is moreover applicable to real-word scenarios.
Weaknesses: 1. The authors stated that such a lightweight architecture of the motion-factorized dynamic 3D Gaussian network is sufficient for complex motions rather than modeling each basis with an independent network (line 168-171) while lacking an experimental proof.
2. The representation in Section 4.3 is lacking in elaboration and should accompany with more details in the supplementary. Please give some detailed elaboration for Section 4.3 about Gaussian-informed continuum, e.g., notations used in Algorithm 1, dimensions of the variables, etc.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. It would be nice to see a comparison between the choice of Motion-factorized dynamic 3D Gaussian network and previous architectures ?
2. It is important that the experimentation on complex motions must be performed in order to truly understand the strength of the proposed method.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: To some extent, yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed reading of our paper and constructive suggestions! We hope our responses adequately address the following questions about our work. Please let us know if there’s anything we can clarify further.
---
> 1. The authors stated that such a lightweight architecture of the motion-factorized dynamic 3D Gaussian network is sufficient for complex motions rather than modeling each basis with an independent network (line 168-171) while lacking an experimental proof.
**Reply:** Thank you for pointing out this question. We conducted two ablation analyses to empirically demonstrate the arguments mentioned in lines 168-171.
(1) We first use our method to perform dynamic Gaussian reconstructions on 45 cross-shaped objects in the PAC-NeRF dataset, except each motion basis is modeled with an independent network. Specifically, each basis takes the encoded time as input and contains 8 fully connected layers. The output is the residuals of position and scale of this basis. We use the setting mentioned in Sec. 5.1 to evaluate the CD and EMD on the above method variants and compare them with our method. The results are reported in the table below. The results show that our backbone-shared architecture (**Ours**) slightly outperforms the independent-basis networks (**Ind.**) in terms of dynamic reconstruction, which empirically demonstrates the strength that the reduced order modeling of dynamic Gaussians is sufficient for motion reconstruction tasks. Moreover, our method can achieve less training time compared with the independent design (around 15 minutes vs. 45 minutes for a dynamic scene on a single 3090 GPU).
| | CD $\downarrow$ (Ind.) | CD $\downarrow$ (**Ours**) | EMD $\downarrow$ (Ind.) | EMD $\downarrow$ (**Ours**) |
|----------------|---------------------------|----------------------------|----------------------------|-----------------------------|
| Newtonian | 0.250 | **0.243** | 0.026 | **0.025** |
| Non-Newtonian | 0.204 | **0.195** | 0.023 | **0.022** |
| Elasticity | 0.188 | **0.178** | 0.022 | **0.020** |
| Plasticine | 0.215 | **0.196** | 0.024 | **0.022** |
| Sand | 0.273 | **0.250** | 0.028 | **0.025** |
| Mean | 0.226 | **0.212** | 0.025 | **0.023** |
(2) To validate the second argument in lines 171-172, we compared our method with DynMF [1], which also uses neural networks as learnable bases while considering motion coefficients as time-invariant Gaussian attributes, by evaluating the PSNR on the D-NeRF dataset. The results are reported in the table below. The results show that modeling the motion coefficients as time-variant variables does increase the ability to fit the dynamic scenes.
| Method | Hell Warrior | Mutant | Hook | Bouncing Balls | T-Rex | Stand Up | Jumping Jacks | Mean |
|-----------|--------------|--------|-------|----------------|--------|----------|---------------|-------|
| DynMF [1] | 36.60 | 41.00 | 31.30 | 41.01 | 35.10 | 41.16 | 35.75 | 37.42 |
| Ours | **41.97** | **42.93** | **38.04** | **41.26** | **37.54** | **45.32** | **38.86** | **40.85** |
---
> 2. The representation in Section 4.3 is lacking in elaboration and should accompany with more details in the supplementary. Please give some detailed elaboration for Section 4.3 about Gaussian-informed continuum, e.g., notations used in Algorithm 1, dimensions of the variables, etc.
**Reply:** Thank you for this constructive advice. As suggested, we added a table to the attached PDF in the "Author Rebuttal" session to clarify the operators and symbols in detail. We will also add the table to the Appendix in the revised version.
---
> 3. It would be nice to see a comparison between the choice of Motion-factorized dynamic 3D Gaussian network and previous architectures ?
**Reply:** Please refer to the first reply.
---
> 4. It is important that the experimentation on complex motions must be performed in order to truly understand the strength of the proposed method.
**Reply:** Thank you for this suggestion. We conduct an additional experiment on a scenario with a more complex boundary condition and motion trajectory. Specifically, we use our method to perform system identification on an elastic rope falling onto two rigid cylinders. The data format is the same as PAC-NeRF. The estimated property is reported in the table below, and the simulated trajectory is visualized in the attached PDF. The results show that our method can also generalize to scenarios with more complex boundary conditions and motions.
| | Initial Guess | PacNeRF | Ours | Ground Truth |
|--------|---------------|-----------------|--------------------|--------------|
| $E$ | $10^3$ | $1.12 \times 10^5$ | $\mathbf{1.03 \times 10^5}$ | $10^5$ |
| $\nu$ | $0.4$ | $0.22$ | $\mathbf{0.23}$ | $0.3$ |
---
[1] Kratimenos, Agelos, Jiahui Lei, and Kostas Daniilidis. "Dynmf: Neural motion factorization for real-time dynamic view synthesis with 3d gaussian splatting." arXiv preprint arXiv:2312.00112 (2023).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the rebuttal.
I don't have any questions at the moment.
---
Reply to Comment 1.1.1:
Comment: Thanks for the reviewer's feedback. We are pleased that our response has addressed the concerns. We would appreciate that if the reviewer could re-evaluate the review score. (Note: since the openreview system had issue earlier that reviewers cannot receive the email after posting comments, we delete the old comment and resend it.) | Summary: The manuscript proposes a novel hybrid framework that leverages 3D Gaussian representations for system identification from visual observations. The framework captures both explicit and implicit shapes using dynamic 3D Gaussian reconstruction and a coarse-to-fine filling strategy to generate density fields. These fields are used to sample continuum particles for simulation and extract object surfaces, which can render object masks during simulations to guide physical property estimation.
Strengths: 1. The presentation is clear. The figures look high-quality.
2. I personally appreciate the real-world experiments. I am happy to see the proposed method also works in real life.
3. The proposed method solves one of the most interesting problem in the intersection of guassian splatting and physical simulation, where physical simulation requires volumetric representation but 3dgs outputs surfaces.
Weaknesses: 1. In the real-world experiment, I found the authors switched to FEM for deformable body simulation, which conflicts with the MPM simulator used in their pipeline. I think it needs justifications.
2. Some wordings are confusing: e.g., line 166, do you mean effective instead of efficient?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. I wonder what makes the difference between the proposed method and PAC-NeRF in the infilled particle generation.
2. If I understood correctly, the motion network only takes time as input. I wonder if it helps by combining the temporal encoding from DiffAqua [1] to further capture the low and high frequencies.
3. The infilling algorithm seems expensive since the complexity grows exponentially. Did you try using Octree [2] or similar algorithm to speed it up?
[1] Ma, Pingchuan, et al. "Diffaqua: A differentiable computational design pipeline for soft underwater swimmers with shape interpolation." ACM Transactions on Graphics (TOG) 40.4 (2021): 1-14.
[2] Meagher, Donald JR. Octree encoding: A new technique for the representation, manipulation and display of arbitrary 3-d objects by computer. Electrical and Systems Engineering Department Rensseiaer Polytechnic Institute Image Processing Laboratory, 1980.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed reading of our paper and constructive suggestions! We hope our responses adequately address the following questions about our work. Please let us know if there’s anything we can clarify further.
---
> 1. In the real-world experiment, I found the authors switched to FEM for deformable body simulation, which conflicts with the MPM simulator used in their pipeline. I think it needs justifications.
**Reply:** Sorry for the confusion caused. Switching to FEM is because most of the widely-used robotic simulators incorporate FEM for deformable object simulation, including Isaac Gym, the simulator we used in our experiments. Moreover, estimating physical parameters using MPM and then applying them to FEM should be a feasible way for this application, since
(1) both the Material Point Method (MPM) and the Finite Element Method (FEM) originate from Galerkin methods [3]. In theory, MLS-MPM can achieve first-order consistency in simulations [3], which is comparable to the consistency achieved by linear FEM simulations,
(2) the material properties (e.g., Young's modulus and Poisson's ratio) are independent of the numerical methods.
We will clarify this in the revised version.
---
> 2. Some wordings are confusing: e.g., line 166, do you mean effective instead of efficient?
**Reply:** Sorry for the confusion caused. Indeed, we should use "effective" in line 166. We will correct the expression in the revised version.
---
> 3. I wonder what makes the difference between the proposed method and PAC-NeRF in the infilled particle generation.
**Reply:** Sorry for the lack of clarity.
(1) PAC-NeRF samples particles by directly selecting NeRF field voxels whose alpha values are greater than the predefined threshold and then performing uniform sampling 4 times on each extracted voxel. Since PAC-NeRF defined the alpha threshold with **a small value** to make sure that a solid continuum can be obtained, they usually tend to recover over-large shapes.
(2) On the contrary, our method turns to generate a density field based on the Gaussian and initial particles {$\mu(t)$} $\cup P_{in}$. Among these particles, the Gaussian particles are prone to be located at the object surface, which can guarantee high density at the surface region, while the initial ones incorporated with the coarse-to-fine operations can ensure a solid internal region. With this module, we can extract the continuum from the density field, which serves only for **representing the object shape** instead of the NeRF field, which is utilized for **rendering**. Please refer to Figure 7 in Appendix A.2.2 to see the qualitative results of the proposed filling algorithm, along with ones from PAC-NeRF.
We will add more explanation in the revised version.
---
> 4. If I understood correctly, the motion network only takes time as input. I wonder if it helps by combining the temporal encoding from DiffAqua [1] to further capture the low and high frequencies.
**Reply:** Sorry for the lack of clarity. We do employ temporal and positional encoding to the time $t$ and position $\mu_0$, respectively, to introduce features with various frequencies. Specifically, the encoding module is denoted as $\gamma(x) = \left( \sin(2^k \pi x), \cos(2^k \pi x) \right)_{k=0}^{L-1}$, where $L=10$ for both $t$ and $\mu_0$, which is exactly the same as the setting in DiffAqua. We will add the notations to Figure 2 and the implementation details to Appendix A.1.1 in the revised version.
---
> 5. The infilling algorithm seems expensive since the complexity grows exponentially. Did you try using Octree [2] or similar algorithm to speed it up?
**Reply:** Although the memory requirements for processing the volumetric data scales cubically with the grid resolution, we still use the naive volumetric data structure for our algorithm because
(1) in practice, only **four** iterations are required to achieve sufficient accuracy (the implementation details are also available in Appendix A.2.1),
(2) such a data structure can be efficiently implemented with GPU acceleration based on PyTorch, where the trilinear interpolation and mean filter operations are implemented by "grid_sample" and "conv3d" functions, respectively.
Therefore, we can almost achieve real-time performance (more than 10 fps on a single Nvidia 3090 GPU) on our infilling algorithm.
---
[3] Hu, Yuanming, et al. "A moving least squares material point method with displacement discontinuity and two-way rigid body coupling." ACM Transactions on Graphics (TOG) 37.4 (2018): 1-14.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' responses, which addressed most of my concerns and questions. Regarding the first question, I suggest moderating the stance on the orthogonality between the numerical method and the corresponding physical parameters. While the claim is theoretically sound, practitioners often encounter significant misalignment between MPM and FEM, and I fear this statement might be misleading. It would be beneficial to see dedicated work on FEM due to its realism and applicability in robotics-related tasks. However, I understand this would require considerable separate effort and merits its own publication. Therefore, I will raise my score to 7 to advocate for acceptance.
---
Reply to Comment 1.1.1:
Comment: Thanks for the reviewer's feedback. We are pleased that our response has addressed the concerns. We admit that although the orthogonality assumption works on our application, there might be practical challenges when aligning MPM and FEM, especially on more complex tasks. We'll moderate our stance in the revised version. | Summary: This paper presents an approach for estimating the geometry and physical properties of objects through visual observations using 3D Gaussian representations. The method employs a dynamic 3D Gaussian framework to reconstruct objects as point sets over time and a coarse-to-fine filling strategy to generate density fields. This facilitates the extraction of object continuums and integrates Gaussian attributes, aiding in rendering object masks during simulations for implicit shape guidance.The extracted geometries are then used to guide physical property estimation through differentiable MPM simulation.
Strengths: The experiments in this paper demonstrate improvements over prior works such as PAC-NeRF and Spring-Gaus. The introduction of a novel hybrid framework leveraging 3D Gaussian representations for physical property estimation is straightforward and easy to understand, and experiments have confirmed their effectiveness. Overall, the paper makes meaningful contributions to the problem of geometry + physical property estimation from multi-view videos.
Weaknesses: Some of the technical terms are misused, making the paper confusing to read. For example, "implicit shape representation" is repeatedly used to refer to the rendered object image masks, but "implicit" generally refers to using functions (parametric or neural networks) to represent shapes, where shapes must be retrieved through function evaluations, hence "implicit". Please correct this terminology, as the geometries in this work are represented using GS, which are explicit representations. Additionally, some of the results presentations are confusing and could benefit from clearer explanations and more organized presentation (see below). Clarifying these aspects would greatly enhance the paper's readability and overall impact.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Table 3: Can you include the ground truth values in the table so that readers understand what to expect?
2. System Identification: How do you set the initial parameters for the system identification? Are the optimizations sensitive to initial conditions?
3. Figure 4: Would mask-based supervision fail when the estimated shapes are significantly different from the ground truth? Have you observed any cases where this occurs?
4. Figure 1a: There is a typo in the figure caption. Change "caption" to "capture."
5. References: It would be beneficial to include references to works on differentiable cloth simulation for system identification and inverse problems, such as "Differentiable Cloth Simulation for Inverse Problems" by Liang et al. and "DiffCloth: Differentiable Cloth Simulation with Dry Frictional Contact" by Li et al.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have discussed the limitations of the method regarding known camera parameters, assumption of known material models, and continuum mechanics. However, I do wonder about the failure cases of the method, if there are any. Understanding specific scenarios where the method does not perform well would provide valuable insights and help guide future improvements.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed reading of our paper and constructive suggestions! We hope our responses adequately address the following questions raised about our work. Please let us know if there’s anything we can clarify further.
---
> 0. Some of the technical terms are misused, making the paper confusing to read. For example, "implicit shape representation" is repeatedly used to refer to the rendered object image masks, but "implicit" generally refers to using functions (parametric or neural networks) to represent shapes, where shapes must be retrieved through function evaluations, hence "implicit". …
**Reply:** Sorry for the confusion caused. Since our method employs both 3D surface particles and 2D object masks for supervision, we originally intended to use "implicit shape representation" to describe 2D object masks in order to distinguish them from 3D surface particles, but we overlooked the ambiguity it introduced. We will correct the description by directly using 2D object masks in the revised version.
---
> 1. Table 3: Can you include the ground truth values in the table so that readers understand what to expect?
**Reply:** Thanks for this constructive advice. For more details about the estimated and ground truth values, please refer to the attached PDF in the "Author Rebuttal" session. We will also add the table to the Appendix in the revised version.
---
> 2. System Identification: How do you set the initial parameters for the system identification? Are the optimizations sensitive to initial conditions?
**Reply:** The table in the attached PDF also lists each instance's initial guess. To make a fair comparison, we followed the setting in PAC-NeRF [1] to assign the same initial values for the instances with the same material. The results show that our method is robust to initial conditions even when they are significantly different from the ground truth.
---
> 3. Figure 4: Would mask-based supervision fail when the estimated shapes are significantly different from the ground truth? Have you observed any cases where this occurs?
**Reply:** Under extreme conditions, the mask-based supervision might fail when the simulated trajectory is completely out of view for all viewpoints. However, we did not find any failure cases in our experiments, because we performed initial velocity estimation before system identification. With the initial velocity available and multiple viewpoints located at proper positions, we observed that it's unlikely that the simulated trajectory is outside the field of view, and the estimated shapes will always converge to the ground truth shapes even if they have significant discrepancies at the initial stage.
---
> 4. Figure 1a: There is a typo in the figure caption. Change "caption" to "capture.”
**Reply:** Thank you for pointing this out. We will correct the typo in the revised version.
---
> 5. References: It would be beneficial to include references to works on differentiable cloth simulation for system identification and inverse problems, such as "Differentiable Cloth Simulation for Inverse Problems" by Liang et al. and "DiffCloth: Differentiable Cloth Simulation with Dry Frictional Contact" by Li et al.
**Reply:** Thank you for this suggestion. These two works also tackle the inverse problems with differentiable simulators, particularly in cloth material. We will cite the related works you mentioned in our revised manuscript.
---
> 6. Limitations: I do wonder about the failure cases of the method if there are any.
**Reply:** In the synthetic experiments, we didn't encounter any failure cases on both the PAC-NeRF [1] and SpringGaus [2] synthetic datasets, since each scenario includes video sequences from 10 or 11 distinct viewpoints, which are sufficient for dynamic Gaussian reconstruction. However, the proposed dynamic Gaussian module would fail to reconstruct the trajectory on the SpringGaus real dataset because it only contains 3 viewpoints to capture the dynamic scene. That's why we only use object masks for the supervision of system identification (More details are illustrated in Section 5.3 and Appendix A.5.). Therefore, our method might not perform well when **fewer views** of the scenario are available. Using fewer views to recover both geometry and system identification is more practical and will be an interesting direction for future work.
---
[1] Xuan Li, Yi-Ling Qiao, Peter Yichen Chen, Krishna Murthy Jatavallabhula, Ming Lin, Chen-fanfu Jiang, and Chuang Gan. Pac-nerf: Physics augmented continuum neural radiance fields for geometry-agnostic system identification. In Proceedings of the International Conference on Learning Representations (ICLR), 2022.
[2] Licheng Zhong, Hong-Xing Yu, Jiajun Wu, and Yunzhu Li. Reconstruction and simulation of elastic objects with spring-mass 3d gaussians. arXiv preprint arXiv:2403.09434, 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for the reply. The replies addressed my concerns and there I'll raise my score.
---
Reply to Comment 1.1.1:
Comment: Thanks for your feedback and we are pleased that our response has successfully addressed your concerns. | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude to all the reviewers for their time and their valuable feedback. We deeply appreciate their recognition of our work, such as
"The experiments in this paper demonstrate improvements over prior works" (FGDU),
"I am happy to see the proposed method also works in real life" (a7go),
"The proposed method solves one of the most interesting problems in the intersection of Gaussian splatting and physical simulation" (a7go),
"The experiments show that the method can achieve SoTA performance" (HGBq), and
"The reduced order modeling of 4DGS is a good fit for the reconstruction task" (mZdJ).
We hope that our work indeed "makes meaningful contributions to the problem of geometry + physical property estimation" (FGDU).
Inspired by their thoughtful comments, we have incorporated the following changes in the revision of our paper:
- We conducted four additional experiments, including
- a comparison of our proposed network and independent basis baselines in terms of dynamic reconstruction,
- system identification on a rope instance with more complex motion and boundary conditions,
- system identification on the torus instance with ground truth point cloud and surface supervision, and
- system identification on 45 cross-shaped object instances with only 3D surface supervision,
to address the concerns of the reviewers.
- We updated our manuscript to fix typos and misused terminology to reduce the potential for misunderstandings.
- We added a table to the appendix to provide the estimated and ground truth values for Table 3 in the main manuscript (see Figure 1 in the attached PDF).
- We added a table to the appendix to clarify the operators and symbols in Algorithm 1 in detail (see Figure 2 in the attached PDF).
We hope our responses adequately address the questions raised about our work. Please let us know if there is anything else we can clarify further.
Pdf: /pdf/31e217cb19f3b62e25cb5c7c372ea4b700d9da3d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
U-DiTs: Downsample Tokens in U-Shaped Diffusion Transformers | Accept (poster) | Summary: The paper presents a novel approach to enhancing diffusion models for image generation using Transformers. It introduces U-shaped Diffusion Transformers (U-DiTs) that employ token downsampling within the self-attention mechanism of U-Net architectures, aiming to reduce computational cost while maintaining or improving image generation quality. Computation efficiency is eagerly needed for the visual foundation model. This paper demonstrates that U-DiTs outperform traditional isotropic diffusion transformer models with significantly lower computation requirements. The paper also provides detailed ablations and analyses.
Strengths: (1) The introduction of token downsampling in the self-attention process of U-Net style transformers is novel and addresses key efficiency issues in image generation tasks.
(2) The paper provides extensive experimental results showing U-DiTs achieving better performance metrics than larger, more computationally intensive models.
(3) The reduced computational demand of U-DiTs could make high-quality image generation more accessible and cost-effective.
Weaknesses: (1) This paper needs to include important baselines. It fails to compare with other efficient DiT models, such as Pixart-sigma or others. SiT seems to be the only baseline method that takes flow as the objective. However, it does not employ a different architecture other than DiT, which should not be a very fair baseline to compare with U-DiTs.
(2) This paper only used ImageNet as the training data. No large-scale experiments have been conducted on Laion, JourneyDB, or others to verify its capacity for text-to-image generation. This limits the potential impact of this work.
(3) The operation of token downsampling is not clear. I am very confused about the "After the attention operation, the downsampled tokens are spatially merged as a unity to recover the original number of tokens." Fig 3 is also not clear since the down-sampler seems to be a black box. This is one of the most crucial parts of this paper. I would highly recommend the authors add more details on token merging, downsampling, and upsampling.
Technical Quality: 3
Clarity: 3
Questions for Authors: (1) How do you get the GFLOPs as illustrated in the paper? How to estimate the GPU hours according to such GFLOPs?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer JEXU,
Thank you very much for your review. Here are our responses:
**W1: Fail to compare with other efficient DiT models, like PixArt-Alpha.**
Thanks for your advice. Here we provide a comparison with powerful baselines: PixArt-Alpha [1], as well as U-ViT [2] and DiffiT-XL [3]. We align all methods under the same setting as the standard setting (Table 2). The FLOPs statistics and metric results are shown in the Table below. Our method has a clear advantage over other methods.
| | GFLOPs | FID | sFID | IS | Precision | Recall |
| -------------- | -------- | --------- | -------- | ---------- | --------- | --------- |
| DiffiT-XL | 118.5 | 36.86 | 6.53 | 35.39 | 0.540 | 0.613 |
| PixArt-Alpha | 118.4 | 24.75 | 6.08 | 52.24 | 0.612 | 0.613 |
| U-ViT-Large | 76.4 | 21.22 | 6.10 | 67.64 | 0.615 | 0.633 |
| U-DiT-B (Ours) | **22.2** | 16.64 | 6.33 | 85.15 | 0.642 | **0.639** |
| U-DiT-L (Ours) | 85.0 | **10.08** | **5.21** | **112.44** | **0.702** | 0.631 |
**W2: No large-scale experiments for text-to-image generation.**
We are sorry that text-to-image generation training are too resource-intensive for us to conduct. We carefully consider this as a future work in order to verify its potentials on text-to-image generation.
**W3: The operation of token downsampling is not clear. Please add more details on token merging, downsampling, and upsampling.**
Thank you for your feedback. We appreciate your keen observations and will ensure to include these additional details in our revised manuscript. Actually, given a input tensor QKV_0 (shape=(b, 3c, h, w)), we firstly conduct PixelUnShuffle operation on QKV_0, and get four smaller feature QKV_1 (shape=(b$\times$s^2, 3c, h/s, w/s)). Then we perform vanilla multi-head self-attention, and get the output Y (shape=(b$\times$s^2, c, h/s, w/s)). Finally, we reshape the Y into the original shape (b, c, h, w). Throughout the process, we not only significantly reduced the computational overhead of self-attention, but also ensured that the entire upsampling and downsampling process was completely lossless. This is a key reason why our approach significantly outperforms the other methods listed in Table 1 of the paper.
**Q1: How do you get the GFLOPs as illustrated in the paper? How to estimate the GPU hours according to such GFLOPs?**
We got the GFLOPs via the Python library "torchprofile''[1], which is a "a general and accurate MACs / FLOPs profiler for PyTorch models". We are willing to provide training speed statistics in the table below. For fair comparison, we have removed all tricks to demonstrate the actual training time difference caused by the proposed architecture.
| | GPU hours (400K) | FID | sFID | IS | Precision | Recall |
| ----------------- | ---------------- | --------- | -------- | ---------- | --------- | --------- |
| DiT-XL/2 | 519 | 20.05 | 6.25 | 66.74 | 0.632 | 0.629 |
| U-DiT-L (Vanilla) | 573 | **12.04** | **5.37** | **102.63** | **0.684** | 0.628 |
| U-DiT-B (Vanilla) | **283** | 20.89 | 7.33 | 72.85 | 0.611 | **0.637** |
Sincerely,
Authors
[1] "PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis." ICLR 2024.
[2] "All are Worth Words: A ViT Backbone for Diffusion Models." CVPR 2023.
[3] "DiffiT: Diffusion Vision Transformers for Image Generation." ECCV 2024.
[4] https://github.com/zhijian-liu/torchprofile
---
Rebuttal Comment 1.1:
Title: Follow-up Question
Comment: Thank you for the detailed response. I have a follow-up question. What GPU that was used for your model training?
---
Rebuttal 2:
Title: Authors' Response to Follow-up Question
Comment: Thank you for your helpful suggestions, and sorry for not specifying the devices. We used 8 NVIDIA A100s for training. The GPU-hours statistics were all measured on NVIDIA A100. We promise to add the results of more competitive baselines, as well as the table of FLOPs/GPU-Hour statistics, to Section 4 in later revisions.
---
Rebuttal Comment 2.1:
Comment: 80G A100 or 40G A100?
---
Reply to Comment 2.1.1:
Title: Response to JEXU
Comment: Sorry for not making the config clear. We use 80G A100. We will add this detail to the manuscript in the next revision. | Summary: This paper introduces a U-shaped diffusion Transformer (U-DiT) model, inspired by the departure from U-Net in DiT. The authors aim to combine the strengths of U-Net and DiT to determine if the inductive bias of U-Net can enhance DiTs. Initially, a simple DiT-UNet model was developed, but it showed minimal improvement over DiTs. Consequently, the authors explored downsampling QKV values in self-attention, which not only enhanced the model's performance but also significantly reduced computational costs. Leveraging this downsampled self-attention mechanism, a series of U-DiT models were proposed, with experimental results demonstrating their effectiveness.
Strengths: The paper introduces an interesting idea that downsampled self-attention can reduce redundancy in U-Net while achieving performance improvements rather than losses. The authors support their claims with extensive experiments, demonstrating the convincing performance advantages of U-DiTs.
Weaknesses: 1. The statement in line 68, "the latent feature space is not downsampled," is incorrect for DiTs. DiTs reduce the spatial size of features by patchifying the latent features.
2. The paper presents several techniques that are claimed to be effective; however, their benefits decrease as the model scales up. For instance, in Table 8, the combined use of four techniques only results in a 2FID improvement for U-DiT-L.
3. While parameter count is a critical measure of model size, the paper does not provide a comparison of parameter counts with baseline models.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Why do the tricks fail when the model is scaled up?
2. Please provide a comparison of parameter counts.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer GsTx,
Thank you very much for your comments. Here are our responses:
**W1:** Statement "the latent feature space is not downsampled" is misleading.
**A1**: Thanks for your suggestions. We will add qualifiers to this statement and limit it to the intermediate features.
**W2&Q1:** Why do the tricks fail when the model scaled up?
**A2:** Thank you for your insightful comments. (1) According to OpenAI’s scaling laws, model performance tends to be log-linear as model size increases. This means that as models become larger, the performance gains from the proposed tricks in our study, may appear less pronounced in absolute terms. This phenomenon occurs because larger models quickly approach the performance ceiling for the given evaluation metrics. (2) However, it is important to note that the improvements introduced by our tricks remain significant even for the largest models. These models are already nearing the upper limits of performance, and the relative improvements achieved by our methods are still impactful. Thus, while the absolute gain may be reduced, the effectiveness of our tricks is evident and noteworthy.
**W3&Q2:** Please provide a comparison of parameter counts.
**A3:** Thank you for your suggestion. The comparison of parameters is shown in the following table. With less FLOPs and a little more parameters, our proposed U-DiT-L still supress DiT-XL/2 significantly.
| Model | FLOPs (G)| Params (M) | FID |
|:------------:|:----------------:|:--------:|:----------------:|
| **U-DiT-B (Ours)** | **22.22** | **204.42** | **16.64** |
| DiT-L/2 | 80.73 | 458.10 | 23.33 |
| DiT-XL/2 | 118.66 | 675.13 | 19.47 |
| **U-DiT-L (Ours)** | **85.00** | **810.19** | **10.08** |
Typically, U-shaped models tend to have a larger number of parameters but lower computational overhead, whereas isotropic architecture models generally have higher computational requirements but fewer parameters. It is quite challenging to simultaneously align the parameter and computational overhead between these two types of models, and addressing this will be a primary focus of our future work.
Sincerely,
Authors
---
Rebuttal Comment 1.1:
Comment: Thank you for the response and the additional experiments. In my view, the idea of combining U-Net with token downsampling for generation is very promising, as simple downsampling alone may not be sufficient for generative tasks. Besides, the FID@400K results look good. The authors also show the efficiency compared to other baselines.
After considering the comments from other reviewers, I decided to raise my score to 7 (Accept).
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: Thank you very much for your encouraging feedback, and thank you very much for your help in improving our paper as well! | Summary: This paper proposes a transformer architecture as backbone for
diffusion modeling that is based on the UNet. The paper shows that a
variation of the transformer with downsampling layers and skip
connections achieves better results than the DiT at a lower
compoutational cost.
Strengths: - Experimental results at multiple model scales show the proposed
architecture obtains better performance than the original DiT
architecture, at the same or lower computational cost
- The experiments suggest token downsampling for attention shows
promising results and could be a worthile avenue for improvement.
Weaknesses: - The novelty and technical contribution is limited. The proposed
method consists of a minor architectural modification to the U-ViT
architecture.
- Experiments are limited to ImageNet, and the FID scores achieved are
far from state-of-the-art (FID<2).
Technical Quality: 2
Clarity: 2
Questions for Authors: - There is a repeated use of the word isotropic, seemingly to refer to
the standard transformer. I wonder what is the justification for
this characterization.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer AGQh,
Thank you very much for your comments. Here are our responses:
W1.. **Novelty is limited.**
Our work is not an improvement of U-ViT. The architecture of our U-DiT model is **completely different from U-ViT**: we adopt a U-Net architecture, while U-ViT is an isotropic architecture with skipping shortcuts. The major architectural difference is that U-DiT (Ours) is a encoder-decoder model, where each part have several stages where downsampling or upsampling is used as stage transition; U-ViT does **not involve any downsampling or upsampling**. The feature size is not changed throughout the whole model.
We have also demonstrated via experiments that our model is much better than theirs.
| | GFLOPs | FID | sFID | IS | Precision | Recall |
| ------------------------- | ------ | --------- | --------- | --------- | --------- | -------- |
| U-ViT-Large | 76.4 | 21.218 | 6.100 | 67.644 | 0.615 | 0.633 |
| U-DiT-B (Ours) | **22.2** | 16.64 | 6.33 | 85.15 | 0.642 | **0.639** |
| U-DiT-L (Ours) | 85.0 | **10.08** | **5.21** | **112.44** | **0.702** | 0.631 |
W2. **About limited dataset and not reaching FID<2.**
The reason we use ImageNet is that latest latent-space conditional generation models, like DiT, SiT, DiT-LLaMA, DiffiT all use ImageNet as the only benchmark. There are two versions of the ImageNet dataset: ImageNet-256 and ImageNet-512. As experiments in the paper are performed on ImageNet-256, we further add the performance of our model on ImageNet-512 to prove its robustness:
| | GFLOPs | FID | sFID | IS | Precision | Recall |
| ------------------------- | ------ | --------- | --------- | --------- | --------- | -------- |
| DiT-XL | 524.7 | 20.94 | **6.78** | 66.30 | 0.645 | 0.581 |
| U-DiT-B (Ours) | **106.7** | **15.39** | 6.86 | **92.73** | **0.756** | **0.605** |
We are unable to pursue SOTA of FID<2 because it requires extremely large computation costs that we could not afford. For instance, DiT-XL needs 7M iterations to reach an FID of 2. We have compared with a SOTA model DiffiT (that reaches FID 1.73) under exactly the same 400K iteration setting. This indicates that our model is among the SOTA models in latent image generation.
| | GFLOPs | FID | sFID | IS | Precision | Recall |
| ------------------------- | ------ | --------- | --------- | --------- | --------- | -------- |
| DiffiT | 118.5 | 36.862 | 6.533 | 35.391 | 0.540 | 0.613 |
| U-DiT-B (Ours) | **22.2** | 16.64 | 6.33 | 85.15 | 0.642 | **0.639** |
| U-DiT-L (Ours) | 85.0 | **10.08** | **5.21** | **112.44** | **0.702** | 0.631 |
Q1. **The meaning of term "isotropic":** Yes, it refers to standard transformer where a stack of transformer blocks are concatenated in series. No downsampling or upsampling of the tokens is required within the model.
Sincerely,
Authors
---
Rebuttal Comment 1.1:
Title: Thank you for your response, and my remaining concerns.
Comment: I thank the authors for their response to my comments and to the other
reviewers.
I still have some concerns about the paper:
- Table 7 shows the additional modifications are providing an
improvement of 15 FID points, and that the downsampling alone is
actually not that significant (although such high FID scores provide a
weak signal). This raises the question of how much of the
improvement in Table 2 is actually due to the downsampling
introduced in Section 3, which is the main focus of the paper.
- I am not fully convinced that filtering out high-frequency noise is
always an advantage for diffusion, as claimed by the authors. In
many diffusion settings one seeks to predict the noise, in which
case this could be harmful.
- The writing and terminology could be improved to clearly state which
elements of the original U-Net are being referred to. When the
authors say "we adopt a U-Net architecture", it can be confusing
since the original U-Net is fully convolutional. As another example,
I remain doubtful about the use of the word "isotropic", which
should mean something that "is the same in every direction". However
in this case it doesn't seem any particular latent dimension is
treated differently.
- Note there is the UVit architecture introduced in [1] which also
includes downsampling/upsampling layers and signifcanlty outperforms
DiT-XL (which is not a strong ImageNet baseline).
While I maintain these concerns, after reading the other reviewers I
realize the practical value this paper can have to the community in
providing directions to improve the transformer architecture for diffusion, and therefore raise
my score by 1 point.
[1] Hoogeboom, E., Heek, J. and Salimans, T., 2023, July. simple diffusion: End-to-end diffusion for high resolution images. In International Conference on Machine Learning (pp. 13213-13232). PMLR.
---
Reply to Comment 1.1.1:
Title: Thank You and Further Responses
Comment: Thanks for your approval and thanks again for your helpful suggestions. Here are our responses:
### Q1: Limited improvement in Table 7.
The key is that we also need to take model **FLOPs** into consideration in comparison. In Table 7, though the proposed downsampling could improve a mere 4 FID, it reduces the FLOPs by **more than 1/3** of the DiT-UNet model. In order to evaluate the advantage of downsampling through FID improvement, we evaluated a smaller "DiT-UNet (Slim)" model that is comparable in FLOPs to U-DiT-T. The advantage measured under comparable FLOPs is around **28 FID**, which is nearly **twice** the improvement of all the tricks. Below we provide a copy of Table 7 (upper section) for easy reference:
| | GFLOPs | FID | sFID | IS | Precision | Recall |
| ----------------------------------- | ------ | --------- | --------- | --------- | --------- | --------- |
| **DiT-UNet (Slim)** | 0.92 | 107.00 | 24.66 | 11.95 | 0.230 | 0.315 |
| DiT-UNet | 1.40 | 93.48 | **20.41** | 14.20 | 0.274 | 0.415 |
| **U-DiT-T (DiT-UNet+Downsampling)** | 0.91 | **89.43** | 21.36 | **15.13** | **0.291** | **0.436** |
### Q2: Filtering out high-frequency noise is harmful.
We provide a further explanation as follows: estimating noise is equivalent to estimating clear image via simple transformation, which implies that perception of clear denoised signal is vital in the denoiser. Additionally, shortcuts passes most high-frequencies, while the backbone is low-frequency dominated [1]. Hence, filtering out high-frequencies in the backbone is not causing significant impacts.
### Q3: Terminology issues with "U-Net" and "Isotropic".
Thank you for suggestions on terminology. We agree that the term "U-Net" may cause misunderstanding. After referring to Ronneberger et al. [2], we hold that "U-Shaped Architecture/Network" is a better substitute. However, we found it hard to find a term substitute to "Isotropic". We will explain it as "a standard transformer architecture that does not involve any change in token size" in the next revision.
### Q4: Comparison to UViT (in Simple Diffusion [3]).
We hold that our method is different from UViT [3] as follows:
1. Task difference: UViT experiments are conducted on **pixel-space**; DiT and U-DiT experiments are conducted on **latent-space**.
2. Training setting difference: UViT is using **batch-size 2048** for 500K iterations on the ImageNet-256 benchmark; DiT and our U-DiT uses batch-size 256 (which is only **1/8** of the batch-size of UViT). The authors of [3] themselves claim that "the batch size is larger (2048) which does affect FID and IS performance considerably".
3. Model size difference: UViT has **2 Billion** parameters, which is more than **2 times** the size of the largest variant of U-DiT (U-DiT has only 810M).
4. Architectural difference: UViT **only** uses Transformer Block at the medium stage; it keeps using **conventional ResBlock** at the encoder-decoder stage (Fig. 7 in [3]). Our U-DiT model uses **Transformer Blocks across all stages**.
Above all, we hold that UViT is **not fairly comparable** to DiT and U-DiT, but we do thank the reviewer for providing a competitive diffusion architecture, and we will discuss it in Section 2: Preliminaries in the next revision. Additionally, we sincerely apologize for not being able to test the performance UViT on the same setting of DiT due to limited time left for discussion and closed-source UViT codes.
[1] Freeu: Free lunch in diffusion u-net. CVPR 2024.
[2] U-Net: Convolutional Networks for Biomedical Image Segmentation. MICCAI 2015.
[3] Simple diffusion: End-to-end diffusion for high resolution images. ICML 2023. | Summary: The authors conduct a simple toy experiment by comparing a U-Net architectured DiT with an isotropic one. They find that the U-Net architecture only gains a slight advantage, indicating potential redundancies within the U-Net-style DiT. Inspired by the discovery that U-Net backbone features are low-frequency-dominated, they perform token downsampling on the query-key-value tuple for self-attention and observed performance improvements.
Based on self-attention with downsampled tokens, the authors propose a series of U-shaped DiTs (U-DiTs) in the paper and conduct extensive experiments to demonstrate the good performance of U-DiT models.
Strengths: - The author verified that radical token downsampling method for DiT-UNets could save the overall computation cost compared to full-scale self-attention, while can still improve the performance.
- the authors also performed scaling experiments to compare with various scaled DiTs.
Weaknesses: - there is a lack of comparison with some noticeable existing works, e.g., the authors did not compare with U-ViT, which is pretty similar as the plain UNet based DiT. There is also no mentioning or comparison with HourglassDiT.
- there is no comparison with reducing tokens by simply increasing the patch size.
- the saved computation is not clear, line 141 says 1/3 is saved while 149 claims 3/4.
- there is no justification or explanation why radically downsampling of K, V, Q is better than only k-v downsampling.
- there is no explanation on why radically reduced tokens would gain better performance.
Technical Quality: 3
Clarity: 3
Questions for Authors: - if the downsampling factor is 2, how much layers will this downsampled design support?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors may not have to pursue even longer iterations and comparison with DiTs. More justification on why downsampling tokens would help is more important. Also extending the experiments other than ImageNet would be better.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer Df8i,
Thank you very much for your suggestions. Here are our responses:
**W1.** We omitted U-ViT and Hourglass DiT previously, because these models are mainly targeted at pixel-space generation, while our work is focused on latent space generation. To demonstrate the advantage of our model, we conducted experiments as follows:
| | GFLOPs | FID | sFID | IS | Precision | Recall |
| ------------------------- | ------ | --------- | --------- | --------- | --------- | -------- |
| DiffiT | 118.5 | 36.862 | 6.533 | 35.391 | 0.540 | 0.613 |
| PixArt-XL | 118.4 | 24.751 | 6.075 | 52.237 | 0.612 | 0.613 |
| U-ViT-Large | 76.4 | 21.218 | 6.100 | 67.644 | 0.615 | 0.633 |
| Hourglass-DiT * | 53.8 | 564.574 | 747.528 | 1.000 | 0.000 | 0.000 |
| U-DiT-B (Ours) | **22.2** | 16.64 | 6.33 | 85.15 | 0.642 | **0.639** |
| U-DiT-L (Ours) | 85.0 | **10.08** | **5.21** | **112.44** | **0.702** | 0.631 |
P.S. H-DiT is intended for pixel-space generation instead of latent space generation (which is the usage of our U-DiT model). The officially provided H-DiT models are very small (because they are intended for High-resolution pixel space generation), so we scale them up for 12x (which is the maximal size we could scale). The training of the enormously large H-DiT is not stable.
Plus, we want to stress that **U-ViT is not a U-Net architecture**: it is an isotropic architecture with shortcuts that does not involve feature downsampling. Our U-DiT model, on the other hand, is a U-Net architecture with feature downsampling. Besides feature downsampling, we are further adding attention downsampling to the self-attention module.
**W2.** Thank you very much for your suggestions. Following the setting of Table 1, we have conducted an experiment that uses patchsize 2 as follows:
| | GFLOPs | FID | sFID | IS | Precision | Recall |
| ------------------------- | ------ | --------- | --------- | --------- | --------- | -------- |
| Token Patchification | 0.88 | 129.12 | 34.02 | 9.30 | 0.17 | 0.21 |
| Token Downsampling (Ours) | 0.90 | **89.43** | **21.36** | **15.13** | **0.29** | **0.44** |
Results reveal that patchsize=2 performs worse than our proposed downsampling method.
**W3.** We apologize for the ambiguities. "1/3" in line 141 refers to FLOPs saved in total (on the entire model); "3/4" in line 149 refers to FLOPs saved within self-attention. Apart from self-attention, there are many other components in the model. We will clarify the meaning of these fractions in the next revision.
**W4.** We have provided a brief explanation in line 121-123, and here is some further clarification: KV Compression keeps the number of queries intact (which corresponse to the number of output tokens), which means downsampling is not performed completely on the feature map; the effect of noise filtering is thus reduced. We have downsampled queries for self-attention. Besides, KV Downsampling involves a lossy reduction of Key-Value pair tokens; our Token Downsampling measure does not involve lossy reduction of tensors.
**Q1.** All multi-head self-attention (MHSA) layers in U-DiT support downsampling tokens by 2, and we did apply downsampling in all MHSA layers in actual practice.
Sincerely,
Authors
---
Rebuttal Comment 1.1:
Title: Additional Comments on "why radically reduced tokens would gain better performance"
Comment: Sorry for not making why "reduced tokens would gain better performance" clear. On one hand, low-frequency dominates the backbone of U-Net, and thus downsampling would cause little information loss; on the other hand, downsampling could filter out high-frequency noises (according to line 115-117) and thus being beneficial for diffusion. That's why reduced tokens would gain better performance. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
State-free Reinforcement Learning | Accept (poster) | Summary: The paper proposes a black-box approach to make any no-regret algorithm to a state-free algorithm. The topic this paper works on is a very important topic. The theoretical results in this paper are significant, if correct. That said, the algorithm description is hard to follow, and hence, I could not verify if the results are correct or not.
Strengths: Introduction and Related Work are both well-written and useful to understand what the issue in current RL theory is. The topic this paper works on is a very important topic. The theoretical results in this paper are significant, if correct.
Weaknesses: That said, the algorithm proposed in this paper is not easy to understand. I tried to understand it by reading its description multiple times, but still I do not really understand it. It is because there are some undefined symbols and ambiguous definitions. For example
- What is $π^\perp$?
- What do "compatible with $\Pi$" and "it extends the pruned policy $\pi^\perp_t$ to $\pi_t$" in Line 185-187 mean?
- What is the initial set of $\mathcal{S}^\perp$?
- What are inputs and outputs of $\mathrm{ALG}$?
- Figure 1 is not really helpful to understand how the pruned space, dual trajectory, etc are obtained.
**Minor Comment**
- A paper by [Fruit et al. (2018)](https://arxiv.org/abs/1807.02373) seems related in that it also find unreachable states. I think it should be cited.
# After rebuttal review
I read the paper again and now understand the algorithm. The contribution of the paper is significant, and I highly recommend acceptance to the AC.
Technical Quality: 4
Clarity: 1
Questions for Authors: I listed my questions in Weakness section.
Confidence: 4
Soundness: 4
Presentation: 1
Contribution: 4
Limitations: The paper is mostly theoretical, and it has no potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comment! We believe there are some misunderstandings, and have clarified them below. We hope the reviewer could reevaluate our paper, and are very happy to respond more questions during the reviewer-author discussion period.
**Q1**: What is $\pi^\bot$?
**R1**: $\pi^\bot$ represents a fixed default policy defined on state space $\mathcal{S}$. It can be understood as ``select an action arbitrarily" (Line 4 of Algorithm 1).
**Q2**: What does ``by playing an arbitrary action on states not in $\mathcal{S}^\bot$ compatible with $\Pi$, it extends the pruned policy $\pi_t^\bot$ to $\pi_t$'' mean?
**R2**: Note that $\pi_t^\bot$ is a function with domain $\mathcal{S}^\bot$, where an available policy $\pi_t$ needs to a function with domain $\mathcal{S}$.
In this regard, given $\pi_t^\bot$, we need to ``lift'' the function domain from $\mathcal{S}^\bot$ to $\mathcal{S}$.
This is achieved by playing an arbitrary action on states not in $\mathcal{S}^\bot$ compatible with $\Pi$.
**Q3**: What is the initial set of $\mathcal{S}^\bot$?
**R3**: As we explained in Lines 182-184, $\mathcal{S}^\bot$ includes all the identified $\epsilon$-reachable states and $H$ additional auxiliary states.
Therefore, at the beginning of the game, since there is no identified $\epsilon$-reachable states, the initial set of $\mathcal{S}^\bot$ is $H$ additional auxiliary states.
**Q4**: What are inputs and outputs of ALG?
**R4**: As we explained in Lines 9-12 of Algorithm 1, the initial input of ALG is a state space, an action space, a policy set, and a confidence coefficient.
At each round, ALG outputs a policy that operates within the given state and action space.
Upon executing the policy, a trajectory is generated, which then serves as the new input for ALG.
Such an interaction process is generally applicable to all existing online MDP algorithms.
**Q5**: Figure 1 is not really helpful to understand how the pruned space, dual trajectory, etc are obtained.
**R5**:
We are disappointed to hear our figure wasn't helpful, we would love to hear the reviewer's ideas for improvement during the discussion period. We really want to understand what was confusing about this picture because we are committed to improve our paper's presentation to enhance clarity for the camera ready version.
**Q6**: Comparison of ``Near Optimal Exploration-Exploitation in Non-Communicating Markov Decision Processes''.
**R6**:
Thanks for the additional reference! We will add this in the camera-ready version.
The problem studied in [1] is similar to the one considered in our work, i.e., how to achieve regret adaptive to $|\mathcal{S}^\Pi|$ instead of $|\mathcal{S}|$.
However, [1] still requires knowing the possibly large set of available states $\mathcal{S}$.
Specifically, as in its Theorem 1, the regret bound still has a term polynomial to the size of the large set of available states $|\mathcal{S}|$.
In this regard, when $|\mathcal{S}|$ is large enough or infinite, its regret bound will become vacuous.
Such a result aligns perfectly with our Observation 4.3: if the information of $\mathcal{S}$ is used as the input for the algorithm, using existing analysis framework, it is difficult to prevent the regret guarantee from a polynomial-level dependence on $|\mathcal{S}|$.
**References**:
[1]: Near Optimal Exploration-Exploitation in Non-Communicating Markov Decision Processes https://arxiv.org/abs/1807.02373 | Summary: This paper proposes a kind of parameter-free reinforcement learning where the algorithm does not need to have the information about states before interacting with the environment. To achieve this, the authors design a black-box reduction framework which can transform any existing RL algorithm for stochastic or adversarial MDPs into a state-free RL algorithm. The paper focuses on establishing the theoretical regret bound of such a black-box approach.
Strengths: The paper proposes an interesting problem and a black-box framework for transforming any existing RL algorithm into a state-free algorithm. The theoretical analysis seems sound and the technique may be of interest for the theoretical community.
Weaknesses: While I understand that the focus of this paper is on theories, it could still be informative to include some toy experiments. For example, how would a non state-free algorithm behave if given "wrong" or insufficient knowledge/estimates of the state space, and how would the corresponding state-free algorithm (via the reduction) behaves.
Technical Quality: 2
Clarity: 3
Questions for Authors: See above
Confidence: 2
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comment! We would like to respectfully ask the reviewer to reassess our paper in light of the rebuttal. We believe the reviewer's concern was peripheral to our main contributions and would like to receive a fair assessment of our work.
**Q1**:
While I understand that the focus of this paper is on theories, it could still be informative to include some toy experiments. For example, how would a non state-free algorithm behave if given "wrong" or insufficient knowledge/estimates of the state space, and how would the corresponding state-free algorithm (via the reduction) behaves.
**R1**:
This is a theoretical work, so there are no simulation experiments. Specifically, we are uncertain how existing non-state-free algorithms would perform in our state-free setting, making it challenging to design experiments.
For example, consider a non-state-free algorithm running on $\mathcal{S}'$. During some rounds, it executes a policy $\pi$ and obtains a trajectory that includes a state $ s \notin \mathcal{S}' $. In this case, it is unclear how does the algorithm update its policy. | Summary: The paper studies the problem of online reinforcement learning in the tabular setting when no prior knowledge about the size of the state space is available. Unlike existing algorithms, that usually require the size S as input parameter, the proposed algorithm is fully adaptive and its final performance scales with the set of reachable states.
Strengths: * The algorithmic solution is quite elegant since it can be applied to any "basic" RL algorithm with regret guarantees.
* The final result achieves the desired removal of the dependency on S, which is replaced by the size of the reachable states.
* The result holds for both stochastic and adversarial settings and it can be extended to removing the dependency on the horizon H as well.
Weaknesses: * I would encourage the authors to provide a clean comparison of the final bounds in the stochastic setting with the best available bounds. In particular, I'm wondering whether the restart leads to extra log terms.
* Related the previous point, I suggest the authors to make explicit the bounds for simple doubling trick strategies, so as to have a point of comparison.
* What is exactly the role of epsilon? It looks like it can be directly set to 0 and everything works the same.
Additional references of interest
* “Layered State Discovery for Incremental Autonomous Exploration” https://arxiv.org/pdf/2302.03789 This paper extends the seminal work of Lim and Auer on “Autonomous exploration” where the state space is possibly infinite (countable). In this paper, the authors managed to resolve an issue in the original paper and removed any dependency on the total number of states, where making the bound completely adaptive to the set of reachable states. Given the similarity between finite horizon and bounded distance exploration, I wonder whether there is any connection to draw between these two works. My impression is that there is quite a strong resemblance between the concept of pruned states and L-reachable states. The main technical difference is that in autonomous exploration the agent needs to explicitly restart to avoid getting “lost” in long episodes, whereas in finite horizon, the reset naturally happens each H steps.
* “Near Optimal Exploration-Exploitation in Non-Communicating Markov Decision Processes” https://arxiv.org/abs/1807.02373 This paper considers the case where the state space is somewhat “misspecified” (i.e., the set of available states is actually larger than the set of reachable states). In this case, the authors still require knowing the possibly large set of available states, so I’m referring to this paper more for completeness than for strict comparison.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the instructive feedback! Below we address some of the questions raised by the reviewer.
**Q1**: I would encourage the authors to provide a clean comparison of the final bounds in the stochastic setting with the best available bounds. In particular, I'm wondering whether the restart leads to extra log terms.
**R1**:
For the stochastic setting, our results suggest that existing algorithm UCBVI actually achieves weakly state-free learning, where its regret is only dependent on $|\mathcal{S}|$ on the log terms (Proposition 4.1). We also propose a straightforward technique, which effectively eliminates the logarithmic dependence on $|\mathcal{S}|$ under UCBVI framework (Proposition 4.2). These two propositions imply that there is no need to use our designed algorithm SF-RL for the stochastic setting.
Furthermore, in inhomogeneous finite-horizon MDP, it seems that UCBVI has achieved asymptotically optimal regret. In this regard, Proposition 4.2 suggests that UCBVI (with small modifications) can also achieve state-free asymptotically optimal regret, which matches the best available bound. Besides, there is no restart in UCBVI, thus there is no extra log terms.
**Q2**: Related the previous point, I suggest the authors to make explicit the bounds for simple doubling trick strategies, so as to have a point of comparison.
**R2**:
If the reviewer are referring to apply doubling trick on the state space, we will encounter some issues in the analysis. In Theorem 6.2, when a state $s$ is placed into $\mathcal{S}$, we initialize its corresponding high probability event ($\delta(s)$). In this case, we need to know exactly which state s refers to (i.e., the index of $s$). However, under doubling trick strategies, the states generated each time we double are void, which makes it impossible to establish the corresponding high probability events.
**Q3**:
What is exactly the role of epsilon? It looks like it can be directly set to $0$ and everything works the same.
**R3**:
Good question! $\epsilon$ controls the tradeoff between the regret of ALG and the error incurred by the barely reachable states we discard.
Specifically, when $|\mathcal{S}^\Pi|< \infty$, our results suggest that the optimal choice of $\epsilon$ should be $o(1/\sqrt{T})$ instead of $0$.
For example, by setting $\epsilon=1/T$, Theorem 6.2 achieves regret bound $\tilde{\mathcal{O}}(H|\mathcal{S}^{\Pi, 1/T}|\sqrt{|\mathcal{A}|T}+H|\mathcal{S}^{\Pi}|)$.
When $|\mathcal{S}^{\Pi, 1/T}|\ll |\mathcal{S}^{\Pi}|$, this regret will be much smaller than the regret under $\epsilon = 0$.
**Q4**:
Comparison of ``Layered State Discovery for Incremental Autonomous Exploration''.
**R4**:
The objective of [1] is fundamentally different to the objective of our work (even if we transform the infinite horizon setting to our finite horizon setting).
In [1], the objective is to find **all** the incrementally L-controllable states.
If we transform this to our finite-horizon setting, the objective should be to find all the incrementally $\epsilon$-reachable states.
However, in our work, the objective is to minimize the regret.
In this regard, our algorithm does not require to find all $\epsilon$-reachable states.
Specifically, as described in Algorithm 1, if an $\epsilon$-reachable state has not been reached $\epsilon t$ times, it will not be included in $\mathcal{S}_t^\bot$.
This implies that in our work the pruned space $\mathcal{S}_t^\bot$ may **never** be the same to $\mathcal{S}^{\epsilon, \Pi}$, even if $t$ goes to infinite.
**Q5**: Comparison of ``Near Optimal Exploration-Exploitation in Non-Communicating Markov Decision Processes''.
**R5**: The problem studied in [2] is similar to the one considered in our work, i.e., how to achieve regret adaptive to $|\mathcal{S}^\Pi|$ instead of $|\mathcal{S}|$.
However, [2] still requires knowing the possibly large set of available states $\mathcal{S}$.
Specifically, as in its Theorem 1, the regret bound still has a term polynomial to the size of the large set of available states $|\mathcal{S}|$.
In this regard, when $|\mathcal{S}|$ is large enough or infinite, its regret bound will become vacuous.
Such a result aligns perfectly with our Observation 4.3: if the information of $\mathcal{S}$ is used as the input for the algorithm, using existing analysis framework, it is difficult to prevent the regret guarantee from a polynomial-level dependence on $|\mathcal{S}|$.
Thanks for these additional references! We will add these comparisons in the camera-ready version.
**References**:
[1]: Layered State Discovery for Incremental Autonomous Exploration https://arxiv.org/pdf/2302.03789.
[2]: Near Optimal Exploration-Exploitation in Non-Communicating Markov Decision Processes https://arxiv.org/abs/1807.02373 | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
TextCtrl: Diffusion-based Scene Text Editing with Prior Guidance Control | Accept (spotlight) | Summary: Considering the limitations of current GAN-based and Diffusion-based Scene Text Editing (STE) methods, this paper introduces TextCtrl, a diffusion-based method that edits text with prior guidance control. Specifically, TextCtrl incorporates text style disentanglement and text glyph structure representation to control the style and structure of generated scene text images, respectively. During inference, a Glyph-adaptive Mutual Self-attention mechanism is designed to further enhance style consistency and visual quality. Additionally, a new real-world image-pair dataset called ScenePair is crated for fair evaluation. Experiments demonstrate that TextCtrl outperforms previous methods in terms of style fidelity and text accuracy.
Strengths: 1. The paper is well written, and the presentation is clear
2. In the field of STE, the proposed method is reasonable in targeting structural integrity and stylistic consistency. By 3 prior guidance control of text style disentanglement, text glyph structure representation and source image reconstruction, TextCtrl can generate edited image with both high style fidelity and high recognition accuracy, which is novel.
3. The paper constructs the first real-world image-pair dataset for STE, which is contributed to the community.
4. The experimental results show that the proposed method surpasses previous methods with large margins in most cases.
Weaknesses: 1. Several hyperparameters are included in the TextCtrl. I am not sure is there any experience for users to set up these hyperparameters for different datasets.
2. Some figures could be improved. For example, Figure 3 is somewhat not easy to follow; highlighting the pretrained parts and the trainable parts would make it clearer. Figure 6 appears chaotic, it may be better to choose fewer texts.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Can this method be used on Chinese scenes?
2. The authors reference the way of structure condition insertion in ControlNet to feed style condition into TextCtrl, i.e., zero convolution and add, rather than structure condition. Could the authors explain this modification? BTW, could the authors introduce more experimental settings for ControlNet in Table 3, including inputs?
3. In Fig.3, why using "sifted" for c_texture but "Despairing" for c_spatial?
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Yes, the limitations are detailed discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review. We are encouraged by the positive comments on novelty of the method and the contribution of the proposed dataset. The concerns are taken care to address point by point in the following.
> **[W1]: Discussion about the hyperparameters in TextCtrl.**
[A1]: Thanks for the question. Briefly, we have made an effort to reduce the demand for manually setting hyperparameters and aligned the rest to regular settings. Specifically,
- In pretraining, we revise the max-length for text structure encoder to 24 since it focuses on character-level representation of words rather than long context.
- For training and sampling, we follow the regular settings according to [*1*].
- For integration in inference, as shown in ***Paper Fig.3(d)*** and ***Paper Appendix Alg.1***, we set the initial intensity parameter $\lambda=0$ and $\mu=1$ to avoid the distraction of self-attention in early sampling stages, facilitating text structure initialization. Subsequently, $\lambda$ is automatically calculated by the structural similarity of intermediate latent and target text with $\mu=1-\lambda$, which allows an adaptive integration and avoids manual settings.
Through aforementioned settings, the hyperparameters are determined and they are set consistently to perform stable and robust editing, irrespective of different datasets.
> **[W2]: Figure 3 and figure 6 could be improved.**
[A2]: Thanks for the suggestion on improving ***Paper Fig.3/6*** for a more intuitive presentation. Briefly, ***Paper Fig.3(a)(b)*** depicts the pretraining of text encoder *TE* and the style encoder *SE* respectively, while ***Paper Fig.3(c)(d)*** presents the training (with *TE* frozen) and inference of the whole framework. We will improve the figures in the revised paper.
> **[Q1]: Can this method be used on Chinese scenes?**
[A3]: Although the discussion in the paper mainly focuses on English scenes, our method is promising to realize editing on Chinese scenes by simply replacing the dataset. We highlight the effectiveness of the proposed text structure pretraining on aligning the character-level representation and the visual glyph structure, as well as the text style pretraining on disentangling text style. Since both modules are trained solely on synthetic data, they can be easily modified to fit in another language scenes. In addition, the proposed inference control effectively leverages the style information from real text image, which further enhances the model's adaptability to different scenes.
> **[Q2]:The authors reference the way of structure condition insertion in ControlNet to feed style condition into TextCtrl, i.e., zero convolution and add, rather than structure condition. Could the authors explain this modification? BTW, could the authors introduce more experimental settings for ControlNet in Table 3, including inputs?**
\[A4\]: Sure. We will first explain the reason for the modification and later detail the ControlNet in ***Paper Tab.3***.
As stated in ***Paper Sec.3***, TextCtrl is built based on a synthesis manner, wherein text is the essential product of synthesis/generation while style (e.g., font, background) serves as the additional reference. As a result, text structure is introduced through cross-attention to enable the basic generation of text image while style is inserted through ''zero convolution and add'' in decoder for reference. In fact, rather than modified, the design of TextCtrl coincides with ControlNet [2] which introduces additional reference to a foundational generative model.
In the ablation study of ControlNet, we follow the implementation settings and empirical suggestions in [*2*]. Concretely, the style encoder is replaced with a vanilla Stable Diffusion encoder, serving as the ControlNet module. The module is initialized with the pretrained weight of SD encoder and the input includes the style text image $I_{source}$, the noised latent $z^{t}$, timestep $t$ and text structure embedding $C_{struct}$. During training and sampling, $I_{source}$ is first encoded and integrated with $z^{t}$, which subsequently interacts with $t$ and $C_{struct}$ in the ControlNet module and finally inserts into diffusion generator through the ''zero convolution & add'' process.
> **[Q3]: In Fig.3, why use "sifted" for c_texture but "Despairing" for c_spatial?**
[A5]: Thanks for your detailed review of ***Paper Fig.3(b)***.
Briefly, both $c_{texture}$ and $c_{spatial}$ are encoded from $I_{source}$ *("Despairing")*. The image of *"Sifted"* is rather integrated with $c_{texture}$ than used for $c_{texture}$ in respective task.
Specifically, for an input $I_{source}$ *("Despairing")*,
- $c_{spatial}$ is leveraged in *Text Removal* and *Text Segmentation* for capturing the spatial style. As a result, a pure background and a segmentation mask of *"Despairing"* are expected as the direct output.
- $c_{texture}$ is leveraged in *Color Transfer* and *Font Transfer* for capturing the texture style. With $c_{texture}$ encoded from the stylized image of "Despairing", to avoid degeneration of the transfer model into an identity mapping network, we turn to a different text image *"Sifted"* (synthesized in the same color/font) as the sub-models input/output with $c_{texture}$ as conditional style guidance. The substitution enforces the disentanglement of color/font from the source image context and the implementation details are given in ***Paper Sec.3.2***.
The aforementioned processes ensure the encoded features of style encoder and the objectives of each task are closely associated with the text style disentanglement ability.
> **References**
[1] Rombach et al. High-Resolution Image Synthesis with Latent Diffusion Models. CVPR, 2022.
[2] Zhang et al. Adding Conditional Control to Text-to-Image Diffusion Models. ICCV, 2023.
---
Rebuttal Comment 1.1:
Comment: Thanks for the responses, which addressed my concerns. Hence, I increase my rating to "Accept". However, the authors should make hyperparameters setting clear, make Fig.3&6 easy to understand, and explain modification for ControlNet in the final version if it is accepted.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback. We will work on refining the mentioned issues and enhancing clarity in the revised paper. Thanks again for the thorough review and valuable suggestions! | Summary: This paper aims to enhance scene text editing performance using a conditional prior-guidance-control diffusion model. It decomposes text style into background, foreground, font glyph, and color features. A text glyph structure representation improves the correlation between the text prompt and glyph structure. For inference, a glyph-adaptive mutual self-attention mechanism with a parallel sampling process is proposed. To evaluate effectiveness, the ScenePair image-pair dataset is created. Experiments on ScenePair and TamperScene datasets, along with ablation studies, are conducted.
Strengths: 1) Text glyph structure representation addresses the weak correlation between text prompts and glyph structures in text-to-image diffusion models.
2) The proposed glyph-adaptive mutual self-attention ensures coherent control during inference, guided by source image reconstruction, which is novel.
3) The ScenePair dataset is valuable for research on STE tasks.
4) Both quantitative and qualitative experimental results demonstrate notable performance improvement across multiple datasets.
Weaknesses: 1) For the ablation study of text style disentanglement, why replace the style encoder with ControlNet?
2) Some sentences are too long which impacts the clarity. For example, line 235 to line 239, the sentence “GAN-based methods generally … due to the unstable restoration quality” expands 5 lines, and it will be more clear if it is split into two or more sentences.
3) Some figures and tables are not convenient for reading, such as Fig. 4 & 5, Tab. 4 & 5. They should be swapped so that the main parts are near the corresponding figure or table.
Technical Quality: 4
Clarity: 3
Questions for Authors: See the weaknesses
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The authors have pointed out the limitations in the Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the valuable comments. Your detailed review will certainly help improve the revised paper. The remaining concerns are taken care to address point by point in the following.
> **[W1]: For the ablation study of text style disentanglement, why replace the style encoder with ControlNet?**
[A1]: Thanks for bringing the question. We will first detail the experiment and later discuss the purpose.
In ***Paper Tab.3***, the ablation study of ControlNet [*1*] is conducted by replacing our style encoder with a vanilla Stable Diffusion encoder, serving as the ControlNet module. Following the implementation settings and empirical suggestion in [*1*], we initialize the parameters of ControlNet module with the copy of pretrained SD encoder weight and enable training on both ControlNet module and SD generator to fit in the specialized text data.
As a powerful technique in enhancing generation controllability, ControlNet is prevalently leveraged in image editing for style and structural control. The simple yet effective design enables a more meticulous reference from conditional input (e.g., Canny Edge, Depth map), which is also verified through the ablation study against our style encoder (w/o style pretraining) shown in ***Paper Tab.3***. After performing the style pretraining, however, the style encoder achieves a superiority performance against ControlNet module.
The ablation study is leveraged to demonstrate the fine-grained representation ability brought by the explicit style pretraining strategy, compared with implicit style learning by ControlNet. Notably, it also improves the parameter efficiency with 118M for style encoder and 332M for ControlNet module.
> **[W2]: Some sentences are too long which impacts the clarity. For example, line 235 to line 239, the sentence “GAN-based methods generally … due to the unstable restoration quality” expands 5 lines, and it will be more clear if it is split into two or more sentences.**
[A2]: We appreciate your detailed review. In our revised paper, we will rewrite this kind of sentences with a clearer presentation.
*"GAN-based methods generally achieve a higher score on pixel-level assessment (i.e., PSNR,MSE). The reason lies in that they adopt a divide-and-conquer method, which contains a background restoration process to restrain the background region unaltered. Nevertheless, this may result in generating unsatisfied fuzzy images as shown in Fig.5 column 3 and 4 due to the artifacts left by the unstable restoration process."*
> **[W3]: Some figures and tables are not convenient for reading, such as Fig. 4 & 5, Tab. 4 & 5. They should be swapped so that the main parts are near the corresponding figure or table.**
[A3]: Thanks for the suggestion. We will rearrange the order of the figures and tables in our revised paper for better readability. Please don't hesitate to let us know if you have further questions or suggestions.
> **References**
[1] Zhang et al. Adding Conditional Control to Text-to-Image Diffusion Models. ICCV, 2023.
---
Rebuttal Comment 1.1:
Title: post-rebuttal
Comment: After reviewing the feedback and rebuttal, I find that the concerns have been addressed. I will maintain my rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback! We truly appreciate your time and effort in reviewing our paper. We are committed to continuous improvement and value your insights in the revised paper. | Summary: This paper proposes TextCtrl, which is a new method for high-fidelity scene text editing. The authors identify the primary factor hindering previous methods from achieving accurate and faithful scene text editing to be the absence of prior guidance on stylistic elements and textual organization. By leveraging disentangled text style features and robust glyph structure guidance, TextCtrl achieves superior editing performance. A glyph-adaptive mechanism and a new ScenePair dataset for real-world evaluation further solidify TextCtrl's effectiveness.
Strengths: - The paper is well-written.
- The experiments are extensive and well-discussed.
- The dataset is valuable for the research community.
Weaknesses: - The paper misses details and discussions on key contributions (please see the Questions section below), making it hard to evaluate the significance of the contributions of the work.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Sec. 3.2 on text style disentanglement only provides implementation details but doesn’t give the big picture of why all these components working together would successfully learn to disentangle the text style. Since this seems to be one of the main contributions of the paper, I’d recommend to include more discussions on what each component achieves.
- Line 189 mentions integration of the K, V of the reconstruction branch and K, V of the main branch is preferred over the replacement that is often done in the literature. Could you provide some insights on why this might be more helpful, and in which scenarios?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The limitations have been thoroughly discussed in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive comments. We are encouraged by the positive response to the paper presentation and ScenePair dataset. The remaining concerns are taken care to address point by point in the following.
> **[Q1]: A big picture of why all the components working together in style pretraining would successfully learn to disentangle the text style.**
[A1]:
Text style comprises a variety of aspects (e.g., font, color, perspective), which visually mingle with each other. This characteristic poses great challenge to the direct apply of general representation learning methods (e.g., vqvae [*1*]) on stylized text images. While it is intuitive to decompose text style into isolated components for object-oriented learning, some components (e.g., size, spatial relation) are comparatively difficult to depict or synthesize.
Eventually, we turn to resolve the problem with task-oriented pretraining on the stylized text image, which effectively enables the leverage and representation of implicit text style during learning. Notably, we unify several tasks on the same model to promote mutual collaboration from joint training and enhance the style feature extraction, as suggested in [*2*].
Four tasks are included in style pretraining as shown in ***Paper Fig.3(b)***, wherein
- *(i) Text Color Transfer* extracts text color from the stylized text image. Since the text color is determined by both intrinsic style and lighting conditions, it is challenging to label or classify the holistic color (e.g., RGB, Captions). Instead, we refer to image style transfer and implicitly extract color through colorization training.
- *(ii) Text Font Transfer* shares a common intention with *Text Color Transfer* to capture stylized information but focuses on glyph boundary that shapes the font style, for which we construct a similar architecture and learn from the boundary reshaping.
- *(iii) Text Removal* has already been explored in considerable works [*3,4*], which aims at erasing the text pixels and reasoning the background pixels covered by text. We package it as one of the pretraining tasks to benefit the background preservation in editing.
- *(iv)* *Text Segmentation* not only facilitates the precise removal but also decouples the spatial relationships between background and text, as well as extracting an internal description of text (e.g., size, interval of characters).
The implementation details are provided in ***Paper Sec.3.2***. The task-oriented pretraining achieves fine-grained textural and spatial disentanglement of stylized text image and fertilizes the style representation for downstream generator.
> **[Q2]: Why prefer the integration of K-V of reconstruction branch and main branch.**
[A2]: This is indeed an interesting question concerning the semantic correspondence of diffusion latents.
To start off, leveraging the internal representations, numerous works [*5,6*] have revealed the generative diffusion model's ability on establishing reasonable semantic correspondence across different images, even when exhibiting significant differences in categories, shapes, and poses.
A step forward, [*7*] utilizes the aforementioned correspondence in self-attention module through a mask-guided replacement of K-V to perform non-rigid editing of foreground objects (e.g., adjust the posture of a dog with the appearance and background maintained). [*8*] further explores the roles played by the Q-K-V in encoding semantic information and observes that Q determines the semantic meaning of each spatial location while K-V offer the context of different parts of the image for weighted aggregation.
Scene Text Editing shares a common motivation with aforementioned non-rigid editing task [*7,8*] but preserves differences in its complicated non-convex structure, wherein a minor stroke discrepancy can significantly alter visual perception and lead to misinterpretation. The complicated text structure brings obstacles to the replacement since it's difficult to either obtain a text mask or mask the text region precisely on the intermediate feature. In addition, without the mask, performing replacement on the whole K-V often results in sparse attention maps during self-attention which may lead to inaccurate transfers and artifacts on final result.
To deal with the problem, we prefer a heuristic integration of K-V from two branches to avoid losing concentration and enable a flexible weighted aggregation for style consistency control. Experience in ***Paper Tab.4*** verifies that our integration strategy surpasses replacement strategy [*7*] on maintaining text style consistency. We believe the integration strategy would also be helpful when performing editing without a precise mask for the complex structural object.
> **References**
[1] Van et al. Neural discrete representation learning. NeurIPS, 2017.
[2] Peng et al. UPOCR: Towards Unified Pixel-Level OCR Interface. ICML, 2024.
[3] Wang et al. What is the Real Need for Scene Text Removal? Exploring the Background Integrity and Erasure Exhaustivity Properties. TIP, 2023.
[4] Peng et al. Viteraser: Harnessing the power of vision transformers for scene text removal with segmim pretraining. AAAI, 2024.
[5] Zhang et al. A Tale of Two Features: Stable Diffusion Complements DINO for Zero-Shot Semantic Correspondence. NeurIPS, 2023.
[6] Tang et al. Emergent Correspondence from Image Diffusion. NeurIPS, 2023.
[7] Cao et al. MasaCtrl: Tuning-Free Mutual Self-Attention Control for Consistent Image Synthesis and Editing. CVPR, 2023.
[8] Alaluf et al. Cross-Image Attention for Zero-Shot Appearance Transfer. ACM SIGGRAPH, 2024. | Summary: This manuscript proposes a diffusion-based scene text editing method with prior guidance control. It incorporates style-structure guidance into the model to enhance the text style consistency and rendering accuracy. A Glyph-adaptive mutual self-attention mechanism is designed to deconstruct the implicit fine-grained features to improve the generation quality. Besides, a new real-world image-pair dataset is proposed for fair comparisons. The experimental results show that the proposed method achieves the best results among existing methods.
Strengths: **Innovative Approach**: The paper introduces a novel diffusion-based method that leverages prior guidance control for text editing, addressing the limitations of previous GAN-based and diffusion-based methods.
**Fine-grained Style Disentanglement**: By constructing a fine-grained text style disentanglement, the method improves text style consistency, which is crucial for maintaining the original style and texture during editing.
**Robust Glyph Structure Representation**: The system incorporates a robust representation of text glyph structures, enhancing the accuracy of text rendering.
**Comprehensive Evaluation**: The authors have created the ScenePair dataset to evaluate both visual quality and rendering accuracy, providing a more holistic assessment of STE methods.
Weaknesses: **Limitation in Task Scope**: The proposed method, while highly specialized in the editing domain, is limited to the task of text editing and does not encompass text generation capabilities. This contrasts with previous methods that offer a dual functionality of both generation and editing. The inability to generate new text content as well as edit existing text may be considered a significant limitation, restricting the method's applicability in scenarios that require creative text synthesis in addition to editing.
**Insufficient Ablation Study**: Although the proposed method decomposes text style processing into four distinct tasks—text color transfer, text font transfer, text removal, and text segmentation—the paper does not present a comprehensive ablation study that systematically evaluates the contribution of each individual task. An ablation study is crucial for understanding the impact of each component on the overall performance. Without it, the discussion lacks depth regarding the significance and interplay of these tasks in achieving the method's objectives. Therefore, further research is needed to dissect the individual contributions and optimize the balance between these tasks for improved performance and efficiency.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can the proposed method deal with generation task directly, as mentioned in Weaknesses?
2. The ablation study for the four tasks is needed, as mentioned in Weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review. We are encouraged by the comments that TextCtrl serves as an innovative approach and ScenePair provides a more holistic assessment to STE methods. The concerns are taken care to address point by point in the following.
> **[W1/Q1]: Limitation in task scope to offer a dual functionality of both generation and editing.**
[A1]: Thanks for bringing the question and concern. To start off, we would like to highlight that the main focus of our work is to address scene text editing with throughly disentangled text style and glyph structure feature as the prior guidance. Differ from scene text generation, scene text editing possesses the unique challenge in faithfully representing text styles (e.g., font, color, serifs) as well as detail texture (e.g., grids, shadow). To this end, we constructe a style encoder for explict text style extraction and propose the glyph-adaptive mutual self-attention machanism for implicit consistency control, with the concentration on preserving fidelity in editing.
Some inpainting-based methods offer a dual functionality of both generation and editing, yet their editing ability mainly inherits from reasoning the text style according to surrounding unmasked context, which would be uncontrollable and unreliable sometimes and lead to style deviation, as shown in the ***Uploaded Pdf Fig.A***.
Although our current model focuses on high-fidelity text editing, it can also be easily modified to enable both high-fidelity text editing and unconditional creative text generation. Specifically, leveraging the classifier-free guidance (CFG) technique [*1*] which enables a joint training of conditional and unconditional model
$\widehat\epsilon_{\theta}(t, z_{t}, c) = \omega ·\epsilon_{\theta}(t,z_t, c) + (1-\omega)·\epsilon_{\theta}(t, z_t, \emptyset)$,
we could encode either the pure background image or the text removal output of style encoder, serving as an additional guidance $c_{bg}$. Along with the text style guidance $c_{style}$, we have
$\widehat\epsilon_{\theta}(t, z_{t}, c_{bg}, c_{style}) = \omega ·\epsilon_{\theta}(t,z_t, c_{bg}, c_{style}) + (1-\omega)·\epsilon_{\theta}(t, z_t, c_{bg}, \emptyset)$,
based on which our model could generate without $c_{style}$, which offers a dual functionality for generation and editing. This multi-condition approach has been proven to be feasible in other tasks [*2,3*]. We appreciate the reviewer's suggestion on model versatility and we will further the research in future work.
> **[W2/Q2] Further ablation study is needed to evaluate the contribution of each task in text style pretraining.**
[A2]: We have performed an ablation study of the text style pretraining from a broad perspective in ***Paper Tab.3*** to verify its joint contribution to foster style representation. Yet we agree that a further discussion on the contribution of each task would benefit the understanding of each component and optimization in implementation.
The text style pretraining is decomposed into four sub-tasks, wherein *Text Font Transfer* and *Text Color Transfer* focus on textural style (e.g., glyph style, text color), while *Text Removal* and *Text Segmentation* concentrate on capturing spatial style (e.g., background, size). Due to the time constraint of rebuttal, we augment our style ablations with further experiments on two groups of sub-tasks, namely texture group (font & color) and spatial group (removal & segmentation). Results show that texture group mainly achieves growth in fidelity reflected on FID, while spatial group improves the overall quality of editing reflected on all metrics. The reason lies in the fact that sub-tasks in texture group concentrate on detail texture transfer, while sub-tasks in spatial group reckon with both foreground text and background pixel, thus achieving better statistical similarity. Meanwhile, the two groups jointly contribute to the mutual collaboration in training and achieve finer performance.
|Font|Color|Removal|Seg|SSIM $\uparrow$|PSNR $\uparrow$|MSE $\downarrow$|FID $\downarrow$|
|:------:|:------:|:------:|:------:|------|------|------|------|
|✘|✘|✘|✘|0.3130|14.78|0.0475|66.10|
|✔|✔|✘|✘|0.3197|14.79|0.0470|58.81|
|✘|✘|✔|✔|0.3652|14.91|0.0453|47.19|
|✔|✔|✔|✔|0.3756|14.99|0.0447|43.78|
> **References**
[1] Ho et al. Classifier-Free Diffusion Guidance. NeurIPS Workshop, 2021.
[2] Sharma et al. Alchemist: Parametric Control of Material Properties with Diffusion Models. CVPR, 2024.
[3] Song et al. Arbitrary Motion Style Transfer with Multi-condition Motion Latent Diffusion Model. CVPR, 2024.
---
Rebuttal Comment 1.1:
Title: The rebuttal has been read
Comment: After reviewing the feedback and rebuttal, I would like to maintain my rating.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your time and effort throughout the review. Thank you again for your valuable comments to improve the quality of our manuscript! | Rebuttal 1:
Rebuttal: We thank all reviewers for their thoughtful and constructive feedback. It's encouraged to hear from the reviewers that
- The Model TextCtrl: *"Innovative Approach; addresses weak correlation between text prompts and glyph structures; generate edited image with both high style fidelity and high recognition accuracy;"* [Reviewer Ktb6, mqKG, 25Re]
- The Benchmark ScenePair: *"enable the comprehensive assessment on real-world images; providing a more holistic assessment of STE methods; is contributed to the community;"* [Reviewer MkTr, Ktb6, rTPt, mqKG, 25Re]
- The Evaluation: *"Comprehensive; extensive and well-discussed; demonstrate notable performance improvement;"* [Reviewer Ktb6, rTPt, mqKG]
- The Paper: *"is well written; the presentation is clear;"* [Reviewer rTPt, 25Re]
In response to the review, we provide individual replies below to address the remaining concerns from each reviewer. Notably, some concerns share a common topic yet are raised from different perspectives, to which our respective responses could serve as a mutual reference.
- Research and discussion of text style disentanglement pretraining; [A2 for Reviewer Ktb6, A1 for Reviewer rTPt, A5 for Reviewer 25Re]
- Reason and detail for ControlNet module involved in ablation study; [A1 for Reviewer mqKG, A4 for Reviewer 25Re]
Besides, an additional uploaded one-page pdf, some references and the main paper are used in our responses for clarity and precision.
- Figures from uploaded one-page pdf are denoted as ***Uploaded Pdf Fig.X***;
- Figures and tables from the main paper are denoted as ***Paper Fig.X/Tab.X***;
- Figures from the appendix of the main paper are denoted as ***Paper Appendix Fig.X***;
- Reference paper is denoted as [*X*];
We hope our responses could resolve your concerns. Please do not hesitate to let us know if you have further questions.
Pdf: /pdf/6fbd04465841fd1aa783fbad991871b391c632ce.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper addresses the challenges of Scene Text Editing (STE) by introducing a diffusion-based STE method, TextCtrl. Traditional GAN-based STE methods struggle with generalization, and existing diffusion-based methods face issues with style deviation. TextCtrl overcomes these limitations through two main components: (1) Style-Structure Guidance: By disentangling fine-grained text styles and robust text glyph structures to improve text style consistency and rendering accuracy. (2) Glyph-adaptive Mutual Self-attention Mechanism: it enhances style consistency and visual quality by reconstructing implicit fine-grained features of the source image.
Additionally, the paper introduces ScenePair, the first real-world image-pair dataset designed for STE evaluation.
Strengths: (1) Style-Structure Guidance: By disentangling fine-grained text styles and robust text glyph structures to improve text style consistency and rendering accuracy.
(2) Glyph-adaptive Mutual Self-attention Mechanism: it enhances style consistency and visual quality by reconstructing implicit fine-grained features of the source image.
(3) This paper propose an image-pair dataset termed ScenePair to enable the comprehensive assessment on real-world images.
Weaknesses: (1) In this paper, the authors mention that ‘their style guidance predominantly originates from the image’s unmasked regions, which can be unreliable in complex scenarios and fail in style consistency.’ In theory, the unmasked region can provide the editing model with more style prior information from the surrounding environment, which is beneficial for style transfer during the editing process.
(2) An important characteristic in text editing is to ensure that the edited word is compatible with the features of the surrounding unmasked region of the original image. Similar approaches have already emerged, such as TextDiffuser and AnyText, as mentioned in your paper. Therefore, the setting of editing only cropped images does not guarantee this compatibility. Additionally, the method proposed in the paper involves relatively complex design choices.
(3) About the ScenePair benchmark: how do authors ensure that the backgrounds of image pairs are consistent? Or are there some examples where the image-pair’s backgrounds are noticeably mismatched after your filtering rules?
(4) About the evaluation: During the evaluation of inpainting-on-full-size-image method DiffSTE, TextDiffuser, and AnyText, do authors replace the unedited region with the origin image when comparing with the cropped-based method.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see weaknesses section.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please see weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comprehensive review, which will certainly help improve the revised paper. The concerns are taken care to address point by point in the following.
> **[W1]: Discussion of more style prior information from unmasked region.**
[A1]: We agree that surrounding unmasked region could also provide style prior. Nevertheless, how to determine the informative unmasked region for providing accurate style prior is still an unresolved problem.
Current inpainting-based methods [*1,2,3*] take the whole masked image as input and reason the text style by surrounding unmasked region. As shown in ***Uploaded Pdf Fig.A***, this implicit and indirect style learning strategy lack controllability and reliability, leading to text style deviation sometimes. In contrast, TextCtrl focuses on cropped text region which provides direct and explicit guidance for maintaining style consistency in editing.
> **[W2]: Editing on cropped images does not guarantee compatibility; The method involves relatively complex design choices.**
[A2]: We would like to clarify that while inpainting-based methods possess the intrinsic ability for smoothening, cropped-based methods adopt a background preservation policy to guarantee compatibility.
Compatibility of TextCtrl is ensured through two aspects. Firstly, a text removal task is involved in style pretraining to explicitly foster the ability to preserve backgrounds. Secondly, during inference, we integrates K-V pairs from reconstruction branch to alleviate background degradation. Quantitative results on SSIM in ***Paper Tab.1*** verify the compatibility of TextCtrl comprehensively, along with visualization in ***Paper Appendix Fig.11***.
Though specialized design has been proposed in TextCtrl, the framework is simple and detachable. It's built upon a vanilla text-to-image diffusion model [*6*] (SD v1-5), with a replacement of text encoder and an increment of style encoder. Our design focuses on scene-text-oriented pretraining and inference control, while following vanilla training and sampling settings in [*6*]. The aforementioned settings jointly contribute to TextCtrl without bringing much complexity.
> **[W3]: About ScenePair: consistency measurement of backgrounds; mismatched examples after the filtering rules.**
[A3]: The collection pipeline is depicted in ***Paper Appendix Fig.8***. To ensure pair consistency, we design an automatic pairing algorithm with scoring function:
$Score = \lambda_{length} * S_{length} + \lambda_{ratio}*S_{ratio} + \lambda_{distance} * S_{distance} + \lambda_{SSIM} * S_{SSIM}$,
wherein $S_{length}$ is the similarity of text lengths, $S_{ratio}$ is the similarity of aspect ratio, $S_{distance}$ is the centre distance and $S_{SSIM}$ is the structure similarity of cropped-text regions. The consistency of backgrounds is ensured technically by $S_{SSIM}$ and empirically by $S_{distance}$ (nearer texts are more likely to share same background and lighting condition).
After pairing, pairs with $Score$ higher than a threshold are selected and in most cases they share a common background as shown in ***Paper Appendix Fig.9***. Still, we manually filter out a very few number of unsatisfied pairs to guarantee final quality. Examples are provided in ***Uploaded Pdf Fig.B***.
> **[W4]: About full-size-image evaluation: do authors replace the unedited region with the origin image for inpainting-based methods?**
[A4]: Official code of inpainting-based methods [*1,2,3*] is adopted and the output is directly used for full-image evaluation without replacing unedited regions, since it is generally considered as the final results which intrinsically distinguishes inpainting-based method.
Yet we acknowledge the necessity of evaluation with the unedited region replaced by origin images for a comprehensive comparison. To this end, we extend the full-size evaluation, wherein *(w)* and *(w/o)* denote with and without replacing unedited regions. Comparing within each inpainting-based method, a large margin emerges indicating the inpainting strategy downgrades the image quality of unmasked region, which mainly result from the lossy compression of VAE and the model capacity. Comparison across different methods verifies the compatibility achieved by TextCtrl. Besides, it is worth noting that cropped-image evaluation is regarded as the main indicator of text style consistency since full-image evaluation spends most calculation on unedited regions.
| Methods |SSIM $\uparrow(\times10^{-2})$ | FID $\downarrow$ |
| --- | --- | --- |
| SRNet [*4*] |98.91 |1.48 |
| MOSTEL [*5*] | 98.96 |1.49 |
| DiffSTE [*1*] *(w/o)* | 76.91 |96.78 |
| DiffSTE [*1*] *(w)* | 98.86 |2.37 |
| TextDiffuser [*2*] *(w/o)*| 92.76 |12.23 |
| TextDiffuser [*2*] *(w)* | 98.97 |1.65 |
| AnyText [*3*] *(w/o)* | 82.57 |16.92 |
| AnyText [*3*] *(w)* | 98.99 |1.93 |
| TextCtrl | **99.07** |**1.17** |
> **References**
[1] Ji et al. Improving Diffusion Models for Scene Text Editing with Dual Encoders. TMLR, 2024.
[2] Chen et al. TextDiffuser: Diffusion Models as Text Painters. NeurIPS, 2023.
[3] Tuo et al. AnyText: Multilingual Visual Text Generation And Editing. ICLR, 2024.
[4] Wu et al. Editing Text in the Wild. ACM MM, 2019.
[5] Qu et al. Exploring Stroke-Level Modifications for Scene Text Editing. AAAI, 2023.
[6] Rombach et al. High-Resolution Image Synthesis with Latent Diffusion Models. CVPR, 2022. | null | null | null | null | null | null |
Efficient Algorithms for Lipschitz Bandits | Reject | Summary: The paper proposes two algorithms for Lipschitz bandit problems, with improved time complexity and memory requirements.
Strengths: If the algorithms and proofs are sound, then this is an excellent contribution. Developing these sort of streaming/sketching methods for key bandit problems (such as Lipschitz bandits) is an important area of research, and many people are likely to care about the results of this paper.
Weaknesses: The paper is sloppy to the extent that it is difficult to understand the authors' algorithm or verify their claims. To be specific, consider the following sentences, all taken from a single two-paragraph subsection (section 2.2):
1. "Let $\\{\mathcal{X}_1, \dotsc, \mathcal{X}_N\\}[\mathcal{X}_i \subset \mathcal{X}]$ be an cover of the action space $\mathcal{X}$" --- okay, what is the $\\{\dotsc\\}[\dotsc]$ notation?
2. "Let $\epsilon$ denote the maximum diameter of $\mathcal{X}_i$ for all $i \in [N]$." --- okay, but what is a diameter? Are we in a metric space? This hasn't been specified.
3. "Then the arm set $S = \\{ x_i \mid x_i \in \mathcal{X}_i, i \in [N]\\}$ is an $\epsilon$-mesh." --- what set is this? Now, I presume that the authors mean to say that they want $S$ to be any set that contains a single element chosen arbitrarily from each of the $\mathcal{X}_i$, but that's not written, instead the authors said its __the__ set, but the right hand side does not specify any unique set. Also, the concept of an $\epsilon$-mesh has not been defined, and when its defined, it needs to be with respect to some metric. And if this description was meant to be the definition of an $\epsilon$-mesh... then that's not clear either (and the definition given by the work the authors state these definitions are from, i.e. Slivkins 2019, is _very_ clear---all the authors needed to do was copy it).
4. "The covering dimension $d$ of the action space $\mathcal{X}$ is defined as $d=\inf_{\alpha \geq 0}\\{|\mathcal{S}| \leq \epsilon^{-\alpha}, \forall \epsilon > 0\\}$." But the set $\mathcal{S}$ does not depend on $\epsilon$ (not even implicitly)... (the correct definition, I presume, would be to ask that $\mathcal{S}\_{\epsilon}$ is a minimal $\epsilon$-cover of $\mathcal{X}$ in some metric $D$, and then have that infimum include $\mathcal{S}_\epsilon$ and not $\mathcal{S}$.)
5. "Define $\mathcal{Y}\_j = \\{x \in \mathcal{X} \colon 2^{-j} \leq \Delta(x) \leq 2^{1-j}, j \in \mathbb{N}\\}$, then the set $\mathcal{Y}_j$ contains all arms whose gap is between $2^{-j}$ and $2^{1-j}$." --- but $j \in \mathbb{N}$ is within the constructions of the set on the RHS, which could be read as asking that the condition holds for all such $j$, or for some $j$, but it breaks the dependence of the right hand side on the subscript $j$ of $\mathcal{Y}_j$. Of course, the definition shouldn't have the $j \in \mathbb{N}$ inside the $\\{ \dotsc \\}$ on the right hand side of the definition.
6. "Consider the $\epsilon$-mesh $\mathcal{S}_j$ for space $\mathcal{Y}_j$." --- __the__ $\epsilon$-mesh? Also, $\mathcal{S}_j$ hasn't been defined. This should say instead 'fix some $\epsilon > 0$ and let $\mathcal{S}_j$ be an $\epsilon$-mesh of $\mathcal{Y}_j$', or something like that.
7. "[...] the zooming dimension focuses only on the set $\mathcal{Y}_j$" --- no, the zooming dimension depends on all the sets $\mathcal{Y}_1, \mathcal{Y}_2, \dotsc$, not only a single one of those sets.
While each individual mistake or ambiguity can be resolved easily enough, verifying the authors claims would require me to rewrite everything myself, and this goes beyond what I'm willing to do (and should do...). The whole paper is like this, and it's just not acceptable.
I would urge the authors to, in the future, have someone _not intimately familiar with the work_ proof-read the work.
Note, I put down confidence as "5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully." --- I am indeed very familiar with the related work, but I have not checked the math/other details carefully. It's too much work to read it. I am absolutely certain, however, that this level falls short of any level of clarity that might be expected in published work.
Technical Quality: 1
Clarity: 1
Questions for Authors: .
Confidence: 5
Soundness: 1
Presentation: 1
Contribution: 4
Limitations: .
Flag For Ethics Review: ['No ethics review needed.']
Rating: 2
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your valuable time and effort in reviewing this paper. We thank you for the thoughtful feedback you provided, which has significantly improved the quality of this paper. We also appreciate your recognition of the contributions our paper makes.
In Section 2.2, we primarily introduce the concepts of Covering Dimension and Zooming Dimension to provide background for readers who may be unfamiliar with them. We acknowledge that due to page limitations, some concepts may not have been clearly or comprehensively defined. We will revise this section based on your feedback. However, it is worth noting that even if this section were entirely omitted, it would not affect the paper's overall framework and contributions. While this section is not essential to the core of the paper, we appreciate your attention and suggestions regarding it. We hope you will consider the other sections of the paper to comprehensively evaluate the paper's contributions and value.
Please feel free to reach out if there are any other aspects you would like us to address or discuss further. We are more than happy to engage in further dialogue to ensure all concerns are comprehensively addressed.
---
Rebuttal 2:
Comment: To clarify, the issue isn't section 2.2. That is just an example. This level of "messiness" is present throughout.
This isn't a matter of the definitions being hard to understand due to a lack of space: clear definitions would not take more space at all.
And, on that point, if, as you claim, you could remove section 2.2 from the paper, and still fully understand the contribution... why is that section in your paper?
__To area chair:__ I fully stand by my recommendation to reject this work. I'd like to point out that the other reviewer's comments were:
1. Reviewer q7Fp: "I did have a little trouble reading the parts about the crosscut and generating cubes but it might just be me not being familiar with prior work in the area."
2. Reviewer svqm: "It is not simple to parse algorithms in their current form in a short amount of time. Although you give comprehensive descriptions in text, I believe adding illustrations or additional explanations will significantly improve clarity of your algorithms. "
It's not a matter of Reviewer q7Fp being unfamiliar with prior work---I am familiar with it, and those parts require deciphering, not reading. And its not a matter of Reviewer svqm struggling to parse the algorithms in a short time---the algorithms are not parsable. Additional illustrations are nice, but should not be necessary for a skilled researcher to parse an algorithm. I am certain that if I gave the algorithm blocks in this paper to 5 strong master's students to implement, they would each return with a different algorithm. Reviewer mKhe states that "The paper has good presentation"---I find this wild; I can only presume that the reviewer did not try to understand the details of how the algorithm works, or study the proofs in any depth; then yes, at a high level, the paper might seem reasonable---but this is a theory paper, high level does not suffice.
---
Rebuttal Comment 2.1:
Comment: Thank you very much for your prompt response and valuable feedback. We appreciate the opportunity to improve our paper through your insights. Below is our reply:
1. **Acknowledgment:** We sincerely appreciate the constructive feedback from each reviewer, which has helped identify aspects of our paper that were unclear. We would like to express our gratitude once again to all the reviewers for their thoughtful comments. In our rebuttal, we have provided further explanations and worked diligently to improve the paper's quality, which we believe is one of the key values and purposes of the rebuttal period.
2. **Presentation Feedback:** While reviewers q7Fp and svqm pointed out specific aspects where the algorithm's presentation could be improved, it is also worth noting that the other three reviewers, including q7Fp and svqm, rated the presentation as "**3 - good**". This suggests that the presentation is generally well-received, though we acknowledge there is always room for refinement to enhance clarity and understanding.
3. **Implementation Variability:** You mentioned that "if I gave the algorithm blocks in this paper to 5 strong master's students to implement, they would each return with a different algorithm". This variability is quite common, as many algorithms can be implemented in multiple ways. For instance, in the Lipschitz bandits area, the well-known Zooming algorithm, when only considering the original paper's description, can be implemented in several different ways.
4. **Section 2.2:** As mentioned in our rebuttal, we introduced the concepts of Covering Dimension and Zooming Dimension primarily to provide background for readers who may be unfamiliar with them. While this section enhances understanding, many papers in the field do not include such introductions. Of course, we will continue to refine this section to make it more accurate and clear, but its presence or absence does not affect the paper's overall framework and contributions. | Summary: The paper considers regret minimization for Lipschitz bandits with time horizont $T$ and proposes an algorithm that provably achieves nearly optimal regret while having strictly smaller (by a factor of $T$) time (of order $O(T)$) and memory complexity (of order $O(1)$). This is achieved by considering a tree-like embedding of the state space and pairwise comparison between elements of the tree. A suboptimal method with uniform discretization called MBUD has dependence on the covering dimension of the state space, while MBAD, a method with adaptive discretization, instead has dependence only on the zooming dimension.
Strengths: 1) Achieving nearly optimal regret bounds in minimax (MBUD) and instance-specific setting (MBAD), while reducing time and memory complexity.
2) Both proposed algorithms are non-trivial and seem to be novel and interesting on their own.
Weaknesses: 1) It is not simple to parse algorithms in their current form in a short amount of time. Although you give comprehensive descriptions in text, I believe adding illustrations or additional explanations will significantly improve clarity of your algorithms.
2) I would appreciate a more explicit comparison with previous work - what parts of the algorithms were already reported in the literature?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) Could you provide intuition behind your node exploration process i.e. line 2 in Algorithm 2?
2) (196) Should each cross exploration phase explore instead $O(\\log \\log T)$ cubes?
3) Could you please refer to previous works that worked on reducing time and memory complexity for bandits, and, more specifically, whether it is a common thing to achieve the same regret bounds for settings with restricted time and memory?
4) Could you please comment what part of the proposed algorithms has already been used before in literature? I am especially interested whether cube generation i.e. the way you do tree search has been used before in this context.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed limitations adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your valuable time and effort in reviewing this paper. We thank you for the thoughtful feedback you provided, which has significantly improved the quality of this paper. For the potential concerns you bring up, we would like to address them here.
**Q1: It is not simple to parse algorithms in their current form in a short amount of time. Although you give comprehensive descriptions in text, I believe adding illustrations or additional explanations will significantly improve the clarity of your algorithms.**
Thank you for your suggestion. Due to page limitations in the main text, we have included a flowchart in Appendix A of the paper, which visually represents the algorithm's process and main ideas to facilitate reader understanding.
**Q2: I would appreciate a more explicit comparison with previous work - what parts of the algorithms were already reported in the literature?**
The discretization method used in the algorithm is standard and widely used in the field. The novel aspect of our algorithm lies in how we explore these subcubes after discretization. Previous works typically consider all subcubes simultaneously. In contrast, we designed a new exploration process and algorithm that performs pairwise comparisons between these subcubes in each round, thereby reducing both the time and space complexity of the algorithm.
**Q3: Could you provide intuition behind your node exploration process, i.e., line 2 in Algorithm 2?**
We have included a flowchart in Appendix A of the paper to illustrate the intuition behind our approach. Before the final phase, the MBUD algorithm explores all subcubes. Therefore, it is crucial to allocate which subcubes to explore in each phase to maximize exploration efficiency. Our approach involves selecting every few subcubes in each phase, ensuring that most subcubes are explored before the final phase. Line 2 in Algorithm 2 provides the formula for this selection process.
**Q4: (196) Should each cross-exploration phase explore instead $O(\log \log T)$ cubes?**
Thank you for pointing this out. The original sentence, “Each cross-exploration phase will only explore $O\left(\frac{1}{\log \log T}\right)$ of them,” should be revised to “Each cross-exploration phase will only explore approximately $\frac{1}{\log \log T}$ of them.”
**Q5: Could you please refer to previous works that worked on reducing time and memory complexity for bandits, and, more specifically, whether it is a common thing to achieve the same regret bounds for settings with restricted time and memory?**
Typically, algorithms with limited arm storage tend to increase regret, which aligns with our intuition. Therefore, an interesting question is whether the regret gap can be reduced if additional information is gained during the exploration process or if there are connections between arms. For example, in Lipschitz bandits, once the reward range of a subcube is known, we can estimate the reward range of neighboring subcubes, helping us reduce the reward gap between full memory and memory-constrained settings.
**Q6: Could you please comment on what part of the proposed algorithms has already been used before in the literature? I am especially interested in whether cube generation, i.e., the way you do tree search, has been used before in this context.**
Thank you for your question. As we noted in our response to Q2, our exploration method is novel. The approach of cube generation and tree search is new in this context.
**Please kindly let us know if you have any concerns you find not fully addressed. We are more than happy to have a further discussion regarding them.**
---
Rebuttal 2:
Comment: Thank you for your response. I was aware of the flowcharts in Appendix A at the time of writing the review, but I did not find them particularly intuitive or explanatory. Considering the concerns raised by Reviewer Bcg5, I agree that the paper could have been written more clearly. I will maintain my score for now and closely follow the discussion surrounding the issues raised by Reviewer Bcg5. | Summary: The paper investigates a multi-armed bandit problem where the action space is a metric space a stochastic Lipschitz rewards. The authors present algorithms that use a constant amount of memory and achieve a near-optimal regret. This improves on previous results that had heavy memory usage.
Strengths: The paper has good presentation, and the figures in the appendices are helpfulץ The contribution itself is useful in practice.
Weaknesses: While the result is great, the ideas presented in the paper are modifications of existing methods
Technical Quality: 4
Clarity: 3
Questions for Authors: * Line 136: Should be norm rather than absolute value, right?
* Line 156: Definition of the mesh seems like it contains ALL elements of a cover set. Should be only one per set?
* Line 196: The use of $O$ notation for the fraction is nonstandard and confusing (as is it looks like $O(1)$)
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: I did not find the limitations presented as sufficient and I would like to see more discussion on the downsides of the presented algorithm and future directions of research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your valuable time and efforts in reviewing this paper. We thank the thoughtful feedback you provided, which significantly improved the quality of this paper. For the potential concerns you bring up, we would like to answer/address them here.
**Q1: "While the result is great, the ideas presented in the paper are modifications of existing methods.**
We thank you for your insightful comment. The novelty of our paper lies primarily in the design of the algorithm and the novel theoretical results we obtained. Regarding the algorithm design, our main contribution can be divided into two key insights.
The first insight involves the use of metric embedding, which maps elements from one metric space to another while preserving distance relationships as much as possible. The critical aspect of metric embedding is finding a mapping that maintains the original space's distance metrics, making data processing, analysis, and understanding more efficient and intuitive. Our algorithm essentially maps the metric space to a tree structure, where each node corresponds to a cube. Traversing the nodes of this tree is equivalent to traversing the entire metric space, allowing for a more structured and efficient exploration.
The second insight leverages pairwise comparisons of arms to reduce memory complexity. Rather than covering the entire space at all times, our algorithm generates a stream by traversing all tree nodes. From this stream, we continuously select nodes for pairwise comparisons, gradually converging to the optimal region. This approach minimizes memory usage while maintaining high accuracy in identifying the best arm.
In terms of theoretical analysis, we acknowledge that the techniques employed are relatively standard. However, the advantage of our approach lies in its ability to provide a clear and rigorous framework that supports the algorithm's effectiveness and efficiency.
**Q2: Line 136: Should be norm rather than absolute value, right?**
Thank you for pointing this out. We will make the correction to avoid any misunderstanding by the readers.
**Q3: Line 156: Definition of the mesh seems like it contains ALL elements of a cover set. Should be only one per set?**
A subset $S \subset X$ is called an $\epsilon$-mesh if every point $x \in X$ is within a distance $\epsilon$ from some point $y \in S$. This ensures that each point in the space is sufficiently covered by the mesh.
**Q4: Line 196: The use of $O$ notation for the fraction is nonstandard and confusing.**
Thank you for bringing this to our attention. It is indeed inappropriate to use $O$ notation in this context. The original sentence, “Each cross exploration phase will only explore $O\left(\frac{1}{\log \log T}\right)$ of them,” should be revised to “Each cross exploration phase will only explore approximately $\frac{1}{\log \log T}$ of them.”
**Q5: I did not find the limitations presented as sufficient and I would like to see more discussion on the downsides of the presented algorithm and future directions of research.**
Thank you for highlighting this area for improvement. We will add a more comprehensive discussion about the limitations of our algorithm and potential future research directions. This will include a deeper analysis of scenarios where the algorithm may face challenges and opportunities for further enhancement and application.
**Please kindly let us know if you have any concerns you find not fully addressed. We are more than happy to have a further discussion regarding them.**
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I still don't find the insights enough to increase my score, I will keep it as-is for now and wait for further discussion with the other reviewers.
I still don't think Line 156 is correct. I understand the definition, but the notation is wrong - from the current notation it seems you take all $x_i\in X_i$, not a single element.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your feedback. We will amend the notation to clarify this point explicitly in the revised manuscript. We greatly appreciate your attention to detail, and we are committed to improving the precision and clarity of the definition based on your valuable feedback. | Summary: This paper studies the Lipschitz bandit problem with a memory constraint. There are two algorithms proposed by the authors. The Memory Bounded Uniform Discretization (MBUD) algorithm uses a fixed discretization over the metric space and implements a strategy which explores first and then commits to an exploitation phase. The second algorithm, called Memory Bounded Adaptive Discretization (MBAD) , swaps arms in and out of the memory while creating a mesh over the metric space adaptively (ala zooming). The authors prove upper bounds which match lower bounds from previous work for Lipschitz bandits without memory constraints while maintaining linear time complexity and constant space complexity. Finally, the authors perform experimental validation of the theoretical results on small 1-dimensional datasets.
Strengths: 1. Novel problem formulation in the Lipschitz bandit setting.
2. The authors show upper bounds matching with lower bounds from prior work while maintaining a memory budget on arms.
3. The concepts introduced in the paper are well explained for the most part. I did have a little trouble reading the parts about the crosscut and generating cubes but it might just be me not being familiar with prior work in the area.
Weaknesses: I am unclear about the novelty and contributions of the paper. The problem formulation (limited memory) is new in the Lipschitz bandit setting but it has been studied in several papers in bandits with finite arms (as the authors point out in the related works). Moreover, the proof techniques used in the paper appear standard - MBAD is based on zooming introduced by kleinberg et al., the clean event analysis is from the recent textbook of Slivkins (and their papers), MBUD is based on an explore first strategy resembling the naive Explore-then-Commit algorithm (which trivially satisfies the O(1) memory constraint). In all, I’m not sure what specific parts of the paper are being claimed as novel vs that from prior work.
The experiments in this paper are very limited - only a 1 dimensional interval with an L1 metric. To show real world applicability, it would be nice to have results in higher dimensions and also on real world datasets (since that was the original motivation).
Minor/Typo:
I think the caption for Fig 1 should clarify what is on the X and Y axes. It is obvious from context but it would be nice to have from a readability perspective.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. From what I recall, the Kleinberg survey paper proves bounds for all metric spaces, whereas in this paper only the L1 metric on [0, 1]^d is considered. Is there any reason why similar techniques from that paper would not translate to the bounded memory setting for all metric spaces?
2. On line 229, the authors state “computational workload of the MBUD algorithm is characterized by a constant per-round operation,” but don’t the subroutines (like CROSSCUBE, GENERATECUBE etc) have time complexity depending on T? Is this statement meant to ignore sublunar factors?
3. Do we need to know the zooming dimension beforehand for MBAD? If so, it should be mentioned in the paper as it it was not clear to me.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The Lipschitz constant needs to be known beforehand to apply these algorithms.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your valuable time and efforts in reviewing this paper. We thank the thoughtful feedback you provided, which significantly improved the quality of this paper. For the potential concerns you bring up, we would like to answer/address them here.
**Q1: I am unclear about the novelty and contributions of the paper. The problem formulation (limited memory) is new in the Lipschitz bandit setting but it has been studied in several papers in bandits with finite arms (as the authors point out in the related works) ...**
We thank you for your comment. The novelty of our paper lies primarily in the design of the algorithm and the novel theoretical results we obtained. Regarding the algorithm design, our main contribution can be divided into two key insights.
The first insight involves the use of metric embedding, which maps elements from one metric space to another while preserving distance relationships as much as possible. The critical aspect of metric embedding is finding a mapping that maintains the original space's distance metrics, making data processing, analysis, and understanding more efficient and intuitive. Our algorithm essentially maps the metric space to a tree structure, where each node corresponds to a cube. Traversing the nodes of this tree is equivalent to traversing the entire metric space, allowing for a more structured and efficient exploration.
The second insight leverages pairwise comparisons of arms to reduce memory complexity. Rather than covering the entire space at all times, our algorithm generates a stream by traversing all tree nodes. From this stream, we continuously select nodes for pairwise comparisons, gradually converging to the optimal region. This approach minimizes memory usage while maintaining high accuracy in identifying the best arm.
In terms of theoretical analysis, we acknowledge that the techniques employed are relatively standard. However, the advantage of our approach lies in its ability to provide a clear and rigorous framework that supports the algorithm's effectiveness and efficiency.
**Q2: The experiments in this paper are very limited. - only a 1 dimensional interval with an L1 metric.**
We appreciate your valuable feedback. To keep the experiments straightforward and easy to understand, we initially focused on the one-dimensional case, a common practice in several related works within the field. However, we recognize the importance of demonstrating the algorithm's applicability in more complex settings. In the revised version of our paper, we will include multi-dimensional experiments to provide a more comprehensive evaluation of our algorithms.
**Q3: I think the caption for Fig 1 should clarify what is on the X and Y axes.**
Thank you for your suggestion. We will revise the caption to clearly specify what is represented on both the X and Y axes, enhancing the figure's clarity for readers.
**Q4: Is there any reason why similar techniques from that paper would not translate to the bounded memory setting for all metric spaces?**
According to Assouad’s embedding theorem, a (compact) doubling metric space can be embedded into a Euclidean space with some distortion of the metric. Therefore, considering $[0, 1]^d$ is sufficient for our paper, as it simplifies both the algorithm and the proofs. This approach is also commonly adopted in other works within the field. Our algorithm can be extended to the bounded memory setting for all (compact) metric spaces, leveraging the properties of the embedding to ensure efficient exploration and convergence in more general spaces.
**Q5: On line 229, ... Is this statement meant to ignore sublunar factors?**
In each round, the subroutines in the MBUD algorithm perform only a constant number of operations. Therefore, the overall time complexity remains $O(T)$.
**Q6: Do we need to know the zooming dimension beforehand for MBAD?**
The algorithm requires knowledge of the covering dimension rather than the zooming dimension, which aligns more closely with practical applications. We will include a detailed explanation in the paper to clarify this aspect.
**Please kindly let us know if you have any concerns you find not fully addressed. We are more than happy to have a further discussion regarding them.**
---
Rebuttal Comment 1.1:
Title: Response
Comment: I appreciate the authors response.
Since the paper's contribution is mainly theoretical, I'm not too worried about the state of the experiments currently (although more is always better). However, the technical significance of the paper seems limited to me from the authors responses to me and other reviewers.
For instance, the tree based exploration of a metric space has been done in prior work in lipschitz bandits. See section 8 in [1].
Thus, I maintain my score.
[1] Kleinberg et al. Bandits and experts in metric spaces | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Automatically Learning Hybrid Digital Twins of Dynamical Systems | Accept (spotlight) | Summary: The paper presents a neurosymbolic approach to model dynamical systems based on the usage of LLMs and gradient-based optimization. The model perform competitively against the reported baselines.
Firstly, the human modeller is required to define the problem, priors and target metrics in text. Then the first LLM generates python models that are optimized with SGD, subsequently a second LLM distill the results and pass it onto the first LLM which generates a new set of candidate models to be trained again.
Performance is reported on synthetic data from different models. Proposed model outperform the reported baselines.
Strengths: - The model allows to integrate domain knowledge by using LLM
- The proposed model outperform reported baselines.
- Paper is very well written.
- Annex incorporate comprehensive information about the experiments.
Weaknesses: - It is unclear whether the computational budget of the baselines (in terms of $\mathcal{L}(f_{\theta, \omega}(o), D_{\text{train}})$ evaluations) is similar to that of the proposed method.
- No code is provided.
- The work relies on a closed-sourced model (GPT-4) whose performance also depends on the specific checkpoints used.
Technical Quality: 3
Clarity: 4
Questions for Authors: - Line 55-56: since the work is only shown in the additive case, I suggest you mention this case here also as only showing compositional case results a bit confusing.
- Line 137-139: I find it confusing stating that the outer loss measures generalization. As far as I understand, the outer loss is responsible for model specification, whether this values captures generalization will the depend on whether it's evaluated on the validation dataset, not due to the fact that it is the outer loss.
- I'm unsure whether all the baselines are synthetic data or only some of them. Could you please clarify this?
- In terms of parameter size, how big are the found models $f_{\theta, w(\theta)}$ and how big are the baselines models? Both for the algebraic and neural part.
- Line 188: doesn't $P^{(g)}$ contain $w(\theta)$? If not, could you elaborate on why?
- Why are two LLMs needed? Couldn't a single one do both steps? Have you tried this?
- Suggestion: It'd be interesting to see how the algebraic expression evolve over time during training.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: - The model adds a substantial amount of complexity with respect to alternative modelling approaches like the reported baseline transformers. It's unclear that the performance improvements makes it up for the added complexity of the method.
- Approach is evaluated against simulated data. If one of the benefit of the approach is that it can handle small datasets, and generalize well, why not using a real dataset rather than synthetic data? This way benchmarks against SoTA on those benchmark would be standardized and therefor more meaningful that self-reported baselines. E.g. reported results for SINDy are using polynomials of order 2, this choice is not justified in the paper. Or, transformers perform similarly to paper's approach (within standard error), and it is not possible to know whether the choice of training hyper-parameters.
- Like other symbolic approaches, the paper's approach still requires a human to provide modelling priors to the system, hence the claim of line 79 "a new approach to automatically learn digital twins" is a bit weaken.
- Besides, Table 10, the explainability potential of the model remains under-investigated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *We appreciate the reviewer’s thorough evaluation and positive feedback.*
---
## [P1] Clarifying computational budget
Thank you for raising this question. To clarify:
1. All neural baselines and each HDTwinGen model use identical training settings (2000 epochs, 1000 batch size), consuming the same number of function evaluations per model.
2. HDTwinGen evolves one new model every generation for 20 generations, using a higher total computational budget.
3. Exact evaluation counts are provided in the **Questions** response below.
**Empirical comparisons with comparable budgets:** We have included additional results comparing performance against DyNODE and SINDY given equivalent budgets. This additional budget is used towards HPT, where each baseline underwent 15 iterations of HPT, matching HDTwinGen's 15 evolutionary iterations. Results provided in **[A3]** demonstrate HDTwinGen's superior performance, highlighting the performance benefits of evolving model specification beyond HPT.
**Conceptual differences:** HDTwinGen automates specification and parameter design of hybrid digital twins, unlike neural baselines that only optimize expert-specified models' parameters. This automation accelerates and scales model development, improving cost- and time- efficiency. The increased computational complexity trades off with higher automation levels and the potential for better-performing models.
**Actions taken:** We now include the above in App H.
---
## [P2] Code release
Upon acceptance of the paper, we will release the code, accompanied by extensive reproducibility instructions in App E, to ensure reproducibility.
---
## [P3] Understanding reliance on underlying LLM
We appreciate this comment and agree that the performance of HDTwinGen depends on the capabilities of the underlying model. To understand how the algorithm scales with different models, we reported the performance of HDTwinGen using LLMs of varying capabilities (App H6). We observed that performance correlated with the capabilities of the underlying LLMs, supporting the hypothesis that HDTwinGen's performance will improve as the underlying models advance.
**Actions taken:** We now include this in S8 of the paper.
---
## Questions
* **Composition operator:** We have revised L55-56 to mention that we focus on additive composition.
* **Bilevel objective:** Your understanding is correct. The upper-level objective concerns model specification (evaluated on the validation set), while the lower-level objective concerns parameterization performance (evaluated on the training set). This has been clarified in L137-139.
* **Additional details on benchmark:** The three variants of the Lung Cancer dataset are synthetically generated from PKPD models, while the COVID-19 dataset is generated using an agent-based COVASIM simulator. The Plankton Microcosm (three species) and Hare-Lynx (two species) datasets are real-world ecological datasets. The Plankton dataset is measured in laboratory replication experiments, whereas the Hare-Lynx dataset is measured outdoors.
* **Parameter count:** On Lung Cancer:
|Baseline|Parameter Count|Seconds per Epoch|# Function (Epoch) Evals|
|---|---|---|---|
|DyNODE|33,922|0.02|2,000|
|SINDy|13|0.01|2,000|
|RNN|569,002|0.41|2,000|
|Transformer|2,558,348|0.36|2,000|
|HDTwinGen|245|0.01|40,000|
* **$\omega(\theta)$ in $\mathcal{P}^{(g)}$:** Your understanding is correct, $\mathcal{P}^{(g)}$ is the set of optimized models and includes the optimized parameters $\omega(\theta)^*$. This has been revised in L188-L190.
* **Clarification on modeling workflow:** In our framework, the modeling process is divided into two subtasks: model generation (the modeling agent) and model evaluation (the evaluation agent). Each subtask has distinct instructions and performs different roles. Although there are two steps, both are implemented using a single LLM with different prompts and memory for each task. To clarify this, we will describe our system as a 'multi-step' agent rather than a multi-agent workflow. The main distinction is that multi-agent frameworks (e.g., AutoGPT, MetaGPT) involve dynamic interactions and task allocation, whereas our workflow is sequential and fixed. Thank you for this suggestion, which has improved the clarity and presentation of our work.
* **Evolution of mechanistic component:** We direct you to App H4, where we included the models returned by HDTwinGen in each generation and annotated it with some interpretations of the evolution (including mechanistic and neural components).
---
## Limitations
* **Complexity and performance tradeoff:** We addressed this concern in detail in **[P1]**. As a novel automated hybrid design strategy, HDTwinGen's computational complexity is indeed higher than manual model building, similar to other automated approaches (eg AutoML, NAS). However, this complexity is justified by: improved model performance, and enhanced scalability in model development, potentially reducing overall cost and time in modeling. This is further supported by our new results (**[A3]**), highlighting that given comparable budgets, HDTwinGen discovers better-performing models.
* **Standardized comparison:** Please see our response above clarifying synthetic vs real-world datasets. Regarding your comment about the performance benefits of HDTwinGen vs HPT of neural baselines, we addressed this with additional empirical results with comparable computational budgets, which we discussed in **[P1]**.
* **Sharpening claims:** We have refined our claim in L79 to "a new approach to automatically design digital twins given human modeling priors."
* **Explainability:** While we have provided some preliminary interpretations of discovered models in App H4 and App H10, we agree that this is an important future direction. We have highlighted this by revising Section 8.
---
*We hope the reviewer’s concerns are addressed and they will consider updating their score. We welcome further discussion.*
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications and the work put on getting the new baselines metrics. I think your work makes for a nice contribution, I will update the rate accordingly.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: We appreciate your insightful feedback, which has been crucial in enhancing the quality of our submission. We are delighted to have successfully addressed your concerns and grateful for your constructive input throughout this process. | Summary: This paper presents a LLM-powered evolutionary multi-agent algorithm to automate the composition of hybrid digital twins of dynamical systems. Experiments were conducted on several datasets including variants of PKPD model synthesized data, simulated covid-19 data, simulated plankton miscrocosm data, and a real-data of hare and lynx populations. Results versus existing neural ODE, SINDy, RNN, and Transformer demonstrated favorable performance especially in terms of generalizability, sample efficiency, and evolvability of the hybrid model.
Strengths: Hybrid models are gaining increasing importance for combining white- and black-box modeling. LLM-driven systems for automating hybrid model design, composition, and optimization are critical for enabling the practical adoption of these complex models. The proposed work is thus highly novel and of high potential impact.
The experimental evaluation considered several interesting datasets and targeted on several important desiderate of hybrid models.
Weaknesses: While focused on hybrid models, the writing of the paper is missing important related works in recent hybrid models, such as physics-integrated VAE and APHYNITY, in both discussion of related works, and experimentation.
In experiments, the baselines of the works are primarily black-box models (NODE, RNN, Transformers). As the focus of the paper is the automated optimization of hybrid models, it’d be important to 1) include existing hybrid-models as baselines, and 2) demonstrate the benefits of automated design of these hybrid models (vs. the current approach for optimizing these hybrid models where both the white- and black-box components are predefined and only their parameters optimized).
The writing of the paper overall is high-level lacking necessary details for properly understanding and assessing the paper, potentially due to the complexity of the method. For instance, methodologically, it was not clear what is the search space of the hybrid model, for either the white-box or the black-box components; experimentally, it was not clear what the hybrid model is trained to do, what is the rough design of the hybrid components, and the MSE metric is being evaluated on what.
The performance improvements reported (e.g., Table 1) overall seems to be associated with very large standard deviation (in most cases are larger than the mean). Its gain over transformer based approaches is thus not clear given this large fluctuation and the increased complexity of the method.
Most of the experimental data seem to be low in dimension (at each time frame). Please clarify.
Technical Quality: 3
Clarity: 3
Questions for Authors: It’d be helpful for the authors to clarify the relation of the presented work with existing hybrid models, and how it can benefit with the optimization of these hybrid models.
It’d be helpful if the authors could add details about the search space of the hybrid models, both in the general methodological settings and in each experimental dataset.
For each experimental dataset considered, it’d be helpful for the authors to add details about the design of the hybrid model (what white box components, what black box components, what search space, to generate what output for what task).
Clarification on the “spatial” dimension of the experimental datasets would be appreciated.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors provided adequate discussion about the limitation and future work for the presented work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *We thank the reviewer for their thoughtful and helpful review. We’re glad that the reviewer finds our approach to be both highly novel and of high potential impact, though we agree that the addition of existing hybrid models improves the paper significantly.*
---
## [P1] Extending the literature review
We appreciate the references to related works, which consider hybrid models combining mechanistic equations with neural components. **[R1]** integrates prior ODE/PDE knowledge into a hybrid model, using specialized regularization to penalize the neural component's information content. **[R2]** investigates hybridization in the latent space, employing semantically grounded expert latent variables and neural latent variables that are regularized to reduce divergence from the expert component.
Our work differs significantly in both motivation and methodology:
* **Motivational difference:** Our method introduces an automated framework to jointly optimize hybrid model specification (evolving both mechanistic components and neural architecture) *and* its parameters. This contrasts with related works, where experts specify the hybrid model design and optimization is limited to parameters.
* **Methodological novelty:** Our approach uniquely integrates LLMs within an evolutionary framework for automated design, guided by expert-provided task context and data-driven feedback. This leverages LLMs' capabilities in symbolic manipulation, contextual understanding, and learning, enabling the exploration of a vast combinatorial space of hybrid models previously infeasible with standard techniques.
**Actions taken:** *(1)* Extended discussions of related works **[R1,R2]** in L269-280. *(2)* Introduced APHYNITY, an existing hybrid model, as an additional baseline in the global response **[A1]**, demonstrating HDTwinGen's superior performance.
---
## [P2] Demonstrating the benefits of automated design of existing hybrid models
We agree and have added an experiment demonstrating HDTwinGen's ability to further optimize a human-specified APHYNITY model. Specifically, we seed the process with an expert-designed hybrid model (combining logistic tumor growth model **[R7]** with three-layer MLP). The evolution process is visualized in **[A2]** (in the PDF), showing HDTwinGen further evolving the model specification, resulting in improved performance over the initial model specification.
Our automated approach offers substantial benefits: it enables model development at scale, significantly reducing the time and cost compared to human-driven development, involving humans only at key moments (e.g., providing initial context or iterative feedback if desired). Importantly, our approach has the potential to uncover novel and effective hybrid model designs that might elude human designers.
**Actions taken:** *(1)* Added additional results demonstrating HDTwinGen's ability to further evolve an expert-specified APHYNITY model in **[A2]**, *(2)* Included discussion on benefits of automated design in S8.
---
## [P3] Clarifications on the algorithm
Thank you. We highlight that low-level details are already contained within App E,F,G. Here we clarify each aspect:
* **Search spaces:** We do not impose predefined restrictions on mechanistic models (e.g., primitive/terminal sets in symbolic regression). We also do not specify any architecture primitives (e.g., in neural architecture search) for neural components. The search space is implicitly constrained by the symbolic language, as the evolved model must be valid/executable PyTorch python code.
* **Hybrid model:** Our hybrid model considers an additive composition of mechanistic and neural components of the form $dx(t)/dt = f(x(t))$, and $f = f_{neural} + f_{mechanistic}$. Evolution optimizes the specification of both mechanistic and neural components, and corresponding parameters.
* **Training/evaluation and metrics:** The hybrid model is trained jointly to minimize the training MSE loss (Eq 5, App F) using the Adam optimizer. Model evaluation is based on the val MSE and val MSE per component (corresponding to val loss per state dimension, Eq 6).
**Actions taken:** We have highlighted App E more prominently in the main paper.
---
## [P4] Understanding the variability of results
**Variability.** Thank you for your observation regarding performance variability in Table 1. As HDTwinGen optimizes both model specification and parameters, this is equivalent to searching in a much larger hypothesis space compared to baselines (that only search in the parameter space). The variability stemming from model specifications is also evident in the higher std dev of ZeroOptim results (optimized model based on zero-shot LLM generated specifications). Additionally, despite higher variability, HDTwinGen-discovered models exhibit several key strengths: superior OOD performance (Table 2), sample efficiency (Fig 2) and easier evolvability to changing dynamics (Fig 4).
**Actions taken:** Updated manuscript to discuss variability and the trade-offs of automation explicitly in S8.
**Empirical comparisons with comparable budgets:** We have included additional results comparing performance against DyNODE and SINDY given equivalent budgets. This additional budget is used towards HPT, where each baseline underwent 15 iterations of HPT, matching HDTwinGen's 15 evolutionary iterations. Results provided in **[A3]** (in additional PDF) demonstrate HDTwinGen's superior performance, highlighting the performance benefits of evolving model specification beyond HPT.
**Actions taken:** Included empirical comparisons given comparable computational budget in App H.
---
## [P5] Clarifying spatial dimensions
The spatial dimensions of considered benchmarks are Cancer PKPD (4), COVID-19 (4), Plankton-Microcosm (3), Hare-Lynx (2).
---
*We hope that most of the reviewer’s concerns have been addressed and, if so, they would consider updating their score. We’d be happy to engage in further discussions.*
---
Rebuttal Comment 1.1:
Comment: Thanks for a thorough rebuttal which has clarified my main concerns -- the expansion of literature review and the addition of comparison to existing hybrid models is highly appreciated. I will raise my rating.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for your valuable feedback. We are glad to have addressed your concerns and appreciate your insights, which have significantly enhanced the quality of our work. | Summary: This paper describes a method to improve digital twins (DTs) with an evolutionary and human driven dynamics, aided by an LLM. The process is meant to optimize hybrid models with both human-driven and computer search-based optimization.The empirical study uses six datasets from the medicine domain. Suitable baselines and ablation tests are also shown. The authors show that a hybrid approach could lead to superior performance with respect to traditional approaches.
Strengths: I have identified the following strengths:
- The paper addresses an important problem with DT: the need for continuous optimization in front of unseen condition to main the fidelity and effectiveness of DT
- The method suggests that significant autonomy in the optimization could be achieved by means of an evolutionary process, while still retaining the valuable contribution of human domain knowledge for the setting of the initial parameters and context.
- The approach is general and could be applied to a variety of problem domains.
- The approach combines LLMs with optimization techniques such as evolutionary search, producing an advanced algorithm with high potential in the successful use and deployment of DT.
- The paper provide significant details for reproducibility in the Appendix.
Weaknesses: 1. I don't think that having two agents: a modelling and an evaluation agent makes the system a "multi-agent" system. I find it confusing. While this is not technically a weakness, I feel that this system should not be considered a multi-agent system.
2. As the authors also state evolutionary computation could be computationally expensive and hide several levels of complexity, e.g., how thorough should the fitness evaluation be, how many individuals in a population and how many generations, what kind of mutation is applied and what intensity. I feel that all such aspects are not well described in the main paper despite the evolutionary part of the algorithm is a fundamental part of the algorithm.
3. Following-up from the previous point, more discussion on the computational complexity of the algorithm in the main paper could be beneficial.
4. The paper could improve the clarity in relation to which domains, or rather, which domain characteristics are more suitable to the proposed algorithm. It is likely that the algorithm might not perform equally well in different domains
5. Despite a fair review of existing related methods in section 5, the paper does not state too clearly what are the core novelty aspects that are being introduced with respect to the state of the art: I can infer part of them by carefully examining the cited papers, but it would be better if the authors could highlight the main novelty in a more technical way with specific references to the most similar papers from the literature.
Technical Quality: 2
Clarity: 3
Questions for Authors: In section 4, the paragraph Evolutionary Optimization overview: is it possible to provide more details in relation to the computational time and various aspects as per my point 2 above (weakness) in relation to the evolutionary search?
In the limitations below, I have two questions that can be answered.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The limitations section highlights relying on human input, the knowledge of the LLM, and the limitation to continuous time systems. However, I feel that the following other limitations should be considered:
- how can we guarantee that the LLMs, regardless of its knowledge, can provide correct information? Is there a way to test or assess the performance of the LLM?
- The variability and stochasticity of the evolutionary search could imply that the algorithm might not be consistent across multiple runs. Can the authors speculate on how to measure variability and uncertainty introduced by the evolutionary search?
Both points and questions above are particular relevant in critical domains and scenarios such as the medical scenarios used for demonstration in the paper.
I think a discussion on the implications and ethical aspects of using LLMs, human domain knowledge and evolutionary search all in combination for a wide range of domain, e.g. medical domain, is necessary.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *We thank the reviewer for their constructive feedback. We’re glad the reviewer finds the approach general and of high potential.*
---
## [P1] Clarifying "Multi-agent" terminology
We appreciate your comment on our system's classification as "multi-agent". Upon reflection, we agree that our framework doesn't fully embody all characteristics typically associated with LLM multi-agent systems. While our approach incorporates two distinct components (modeling and evaluation) with specialized roles, they are applied sequentially, lacking dynamic, real-time inter-agent communication and autonomous behavior adjustment in multi-agent systems **[R8, R9]**.
We reclassified our approach as a "multi-step" agent, which more accurately reflects its nature:
* A modeling step that generates and optimizes new model specifications.
* An evaluation step that assesses generated models reflects on requirements, and provides targeted improvement feedback.
**Actions taken:** We have now updated this in the paper. Thank you for helping us refine terminology and more precisely position our work.
---
## [P2] Analyzing computational complexity
Thank you for raising this question. Allow us to clarify the evolutionary process and analyze the computational complexity.
**Evolutionary process:**
* Each iteration proposes and optimizes one new model specification, conditioned on the top-K previous models
* The population size N=g in the $g^{th}$ iteration
* Only the new model's fitness is evaluated on the validation set
* The selection step retains the top-K models
* Process repeats for G iterations (G=20, K=16 in our experiments)
**Computational complexity.** We introduce lower-case notation to indicate constant time parameters:
* Evolution/model generation (LLM): $\mathcal{O}(c)$ (LLM inference)
* Parameter optimization: $\mathcal{O}(d)$ (training on $\mathcal{D}_{train}$, in practice, depends on model/dataset complexity)
* Fitness evaluation: $\mathcal{O}(e)$ (on $\mathcal{D}_{val}$)
* Model evaluation (LLM): $\mathcal{O}(f)$ (LLM inference for improvement suggestions)
* Selection: $\mathcal{O}(NlogN)$ (top-K)
The complexity per step is $\mathcal{O}(c+d+e+f+NlogN)$, with total complexity for G generations: $\mathcal{O}(G*(c+d+e+f+NlogN))$. We note that constant complexities vary with different datasets and models. For practical insight, we report average wall-clock times on the Lung Cancer dataset: $c=20s$, $d=12s$, $e=0.004s$, $f=20s$.
**Actions taken:** Added complexity analysis and wall-clock times in App H.
---
## [P3] Discussing domains suitable for the algorithm
We appreciate this suggestion and clarify the domain characteristics our algorithm is designed for:
* Continuous-time dynamical systems with a) Markov property and b) continuous time evolution.
* Scenarios where hybrid models are likely to outperform purely mechanistic or neural models: a) partial knowledge of underlying dynamics exists, with some non-negligible unclear aspects, b) limited or unevenly distributed empirical data across the state space.
* Complex model design spaces with multiple hypotheses: where automated exploration can help experts scale model development, review and discover a wider range of potential designs, improving cost- and time-efficiency of model development.
Our experiments focused on systems with partially understood mechanisms and limited observational data. We observed that our algorithm discovered performant hybrid models, leading to more accurate model dynamics, OOD generalization, and improved evolvability with changing conditions.
**Actions taken:** Included a version of this discussion in S8 and in App A.
---
## [P4] Highlighting technical novelty
Thank you for your insightful comment. Our work introduces the first automated algorithm for designing hybrid digital twins, with the following key innovations:
* **Automated design:** We present an evolutionary algorithm that optimizes both model specification and parameterization based on an initial expert prompt. This contrasts with existing hybrid models where experts specify closed-form equations and neural network designs, with only parameters being optimized.
* **Hybrid model discovery:** Our approach leverages LLMs' capabilities in symbolic manipulation, contextual understanding, and learning to explore a vast combinatorial space of hybrid model specifications. This exploration was previously infeasible with standard techniques, as existing discovery approaches are limited to closed-form equations or neural architecture searches within predefined spaces.
**Actions taken:** Added this to the RWs section to better position our work and showcase its novelty.
---
## Limitations
We acknowledge the importance of these discussions. **Actions taken:** We have integrated an extended discussion in App A and a summarized version in S8:
* **Verification of HDTwin models:** First, HDTwins are composed of human-interpretable components, enabling higher degrees of expert verification. Second, evolved models should undergo rigorous functional testing (e.g., held-out datasets, robustness/fairness metrics) pre-deployment. Future works could incorporate hallucination mitigation strategies (e.g., RAG, constitutional AI, multi-model consensus) to further improve reliability.
* **Variability in evolutionary search:** One strategy lies in quantifying variability through multiple independent runs and confidence intervals. Another lies in controlling variability by providing tighter requirements/constraints in the initial prompt or iterative expert model steering to narrow the hypothesis space.
* **Ethical implications:** We recognize the potential for bias propagation from black-box LLMs. To address this, we recommend rigorous review verification for fairness, bias, privacy, and other ethical concerns.
---
*We hope that most of the reviewer’s concerns have been addressed and, if so, they would consider updating their score. We’d be happy to engage in further discussions.*
---
Rebuttal Comment 1.1:
Comment: The authors have carefully considered my comments and improved the paper accordingly. As a consequence, I'm happy to increase my evaluation.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: We're happy to have resolved your concerns and are thankful for your input, which has played a key role in refining the quality of our work. | Summary: An evolutionary search framework for building hybrid dynamics models, especially for so-called digital twins, is proposed. Its notable feature is the use of LLMs for the model proposal and model evaluation in the search. The effectiveness of the method is validated with multiple datasets.
Strengths: - Learning hybrid digital twins is indeed an important research topic.
- The idea of using LLM for the model proposal and evaluation in architecture search is interesting.
- The experimental analysis is done carefully.
Weaknesses: Only a major concern is that in the experiment, methods with (evolutionary or whatever) search but without LLMs are not used. As the use of LLMs in the search is the most notable feature of the method, the experiments should analyze such an aspect particularly. The current baseline methods are okay but do not fit this purpose because they are not based on any search algorithms.
A minor thing. Although the paper nicely overviews some of the relevant studies, it seems to lack a series of studies on hybrid modeling in a part of the ML community, for example:
- Yuan Yin, Vincent Le Guen, Jérémie Dona, Emmanuel de Bézenac, Ibrahim Ayed, Nicolas Thome, and Patrick Gallinari. Augmenting physical models with deep networks for complex dynamics forecasting. Journal of Statistical Mechanics: Theory and Experiment, 2021(12):124012, 2021.
- Naoya Takeishi and Alexandros Kalousis. Physics-integrated variational autoencoders for robust and interpretable generative modeling. Advances in Neural Information Processing Systems 34, pp.14809–14821, 2021.
- Zhaozhi Qian, William. R. Zame, Lucas. M. Fleuren, Paul Elbers, and Mihaela van der Schaar. Integrating expert ODEs into neural ODEs: Pharmacology and disease progression. Advances in Neural Information Processing Systems 34, pp. 11364–11383, 2021.
- Antoine Wehenkel, Jens Behrmann, Hsiang Hsu, Guillermo Sapiro, Gilles Louppe, and Jörn-Henrik Jacobsen. Robust hybrid learning with expert augmentation. Transactions on Machine Learning Research, 2023.
Technical Quality: 2
Clarity: 3
Questions for Authors: As noted in the weaknesses section, the lack of baselines using non-LLM search (e.g., a mere evolutionary search) limits the significance of the experimental results. Could you please elaborate on why the authors did not include such baselines? Or, if the current results already imply something in this direction, it would be helpful if the authors could clarify.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Limitations clearly discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *We thank the reviewer for their thorough review. We are pleased that the reviewer finds our approach interesting, addressing an important research topic, with careful experimental analysis.*
---
## [P1] Incorporating additional baselines
Thank you for this suggestion. While we included SINDy for discovering closed-form governing equations, it doesn't represent an evolutionary search method. To address this, we've added genetic programming (GP) for symbolic regression **[R5]** as a baseline to discover symbolic equations on $\mathcal{D}\_{train}$, using implementation and tuned hyperparameters from **[R6]**.
We have included the additional results in global response **[A1]**, observing that our method discovered superior models. This can be attributed to two key factors:
1. **Hybrid model design:** HDTwinGen simultaneously optimizes both the specification (functional form) and parameters of mechanistic and neural components. This hybrid approach enables more flexible and powerful modeling compared to symbolic discovery methods, which are limited to purely symbolic equations.
2. **Efficient search:** By leveraging LLMs, HDTwinGen utilizes domain knowledge, contextual understanding, and learning to enhance search efficiency, leading to more accurate solutions.
**Actions taken:** We have incorporated GP-based SR as an additional baseline in our empirical analysis.
---
## [P2] Hyperparameter optimization search
To broaden our comparison with non-LLM methods, we conducted an additional experiment using Bayesian hyperparameter tuning (HPT) search for SINDy and DyNode baselines. For a fair comparison, we matched the number of HPT searches to HDTwinGen's evolutionary search steps (15 precisely). Results presented in **[A3]** demonstrate HDTwinGen's superior performance against these HPT-optimized baselines, highlighting the performance benefits of evolving model specification beyond HPT.
---
## [P3] Extending the literature review
We appreciate the references to related works, which consider hybrid models combining mechanistic equations with neural components. **[R1]** integrates prior ODE/PDE knowledge into a hybrid model, using specialized regularization to penalize the neural component's information content. An alternative approach investigates performing hybridization in the latent space. **[R2, R3]** consider settings where an expert equation is known, but equation variables are unobservable. Correspondingly, they employ two sets of latent variables: one governed by expert equations and another linked to neural components. **[R2]** introduced specialized regularization to reduce the divergence of the overall model from the mechanistic component, and semantically grounded expert latent variables. **[R4]** leverages the assumptions that expert models remain valid OOD to sample augmented training distributions from expert latent variables.
Our work differs significantly in both motivation and methodology:
* **Motivational difference:** Our method introduces an automated approach to jointly optimize hybrid model specification (evolving both mechanistic component and neural architecture) *and* its parameters. This contrasts with related works, where experts specify the hybrid model design and optimization is limited to only its parameters. The benefits of our automated approach are substantial: it enables model development at scale, significantly reducing the time and cost associated with human-driven development. Our method involves humans only at key moments (e.g. providing initial context or iterative feedback, if desired), minimizing efforts required from experts. Importantly, our approach has the potential to uncover novel and effective hybrid model designs that might elude human designers.
* **Methodological novelty:** Our approach uniquely integrates LLMs within an evolutionary framework to discover optimal models, guided by human-provided task context and data-driven feedback. By leveraging LLMs' advanced capabilities in symbolic manipulation, contextual understanding, and learning, we enable the exploration of a vast combinatorial space of hybrid models. This exploration was previously infeasible with standard evolutionary techniques, marking a significant advancement in automated model discovery and optimization.
**Actions taken:** In response to your suggestion, we have *(1)* extended discussions of related works **[R1-R4]** in L269-280. *(2)* We introduced APHYNITY as an additional baseline in global response **[A1]**, where we observed HDTwinGen outperforming APHYNITY. *(3)* We additionally demonstrated HDTwinGen's ability to further evolve a human-specified APHYNITY model in **[A2]**, highlighting the flexibility of our framework in accommodating various degrees of expert involvement through automated optimization.
---
*We hope that most of the reviewer’s concerns have been addressed and, if so, they would consider updating their score. We’d be happy to engage in further discussions.*
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. The new experiments seem to be informative. I would keep my originally positive score.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for your valuable feedback. We are glad to have addressed your concerns and appreciate your insights, which have significantly enhanced the quality of our work. | Rebuttal 1:
Rebuttal: *We are grateful to the reviewers for their insightful feedback and constructive comments that have improved the paper.*
We are encouraged by the reviewers' recognition of our work's novelty and potential impact. Reviewers highlighted our approach as "highly novel and of high potential impact" (**8ECc**) with an approach that "integrate domain knowledge by using LLM" (**jD19**) and achieves "significant autonomy in the optimization by means of an evolutionary process" (**YMit**). They noted HDTwinGen as an "advanced algorithm with high potential" that is "general and could be applied to a variety of domains" (**YMit**).
We are pleased that the reviewers agreed that our work addresses "an important research topic" (**UNnb**, **YMit**) and "LLM-driven systems for automating hybrid model design, composition, and optimization are critical" (**8ECc**). Regarding our empirical analysis, reviewers commented, "the analysis is done carefully" (**UNnb**), "outperforming reported baselines" (**jD19**), and "targeted several important desiderata of hybrid models" (**8ECc**).
We have also taken the reviewers’ feedback into account and made the following key changes to improve the paper:
* **[A1] Additional baselines:** We have added comparisons against *(1)* APHYNITY and *(2)* Genetic Programming for symbolic regression.
* **[A2] HDTwinGen optimization of an existing model:** We have provided an additional experiment demonstrating HDTwinGen's ability to further optimize human-specified APHYNITY models used to seed the evolutionary optimization. This highlights the flexibility of our method in accommodating various degrees of expert involvement and its ability to automate the optimization of an existing hybrid modeling technique.
* **[A3] Empirical results with comparable computational budget:** We further compared HDTwinGen against *(1)* DYNODE and *(2)* SINDy with comparable computational resources, allowing 15 iterations of hyperparameter optimization for baselines and 15 iterations of evolutionary model optimization for HDTwinGen.
* **[A4] Extended related works:** Following concrete recommendations, we have expanded the related works section to discuss and compare against existing hybrid models, including Physics-VAE, APHYNITY, LHM, and AHMs.
These revisions have been reflected in the updated manuscript, with additional empirical results provided in the attached PDF.
We believe these updates and our individual responses address the reviewers' concerns and strengthen our paper. We remain open to further feedback.
With thanks,
The Authors of #17161
---
### Additional References
**[R1]** Yin, Y., et al. Augmenting physical models with deep networks for complex dynamics forecasting (2021)
**[R2]** Takeishi, N. and Kalousis, A. Physics-integrated variational autoencoders for robust and interpretable generative modeling, (2021)
**[R3]** Qian, Z., et al. Integrating expert ODEs into neural ODEs: pharmacology and disease progression (2021)
**[R4]** Wehenkel, A., et al. Robust hybrid learning with expert augmentation (2022)
**[R5]** Koza, J.R., Genetic programming as a means for programming computers by natural selection (1994)
**[R6]** Petersen, Brenden K., et al. Deep symbolic regression: Recovering mathematical expressions from data via risk-seeking policy gradients (2020)
**[R7]** Vaghi, C., et al. Population modeling of tumor growth curves and the reduced Gompertz model improve prediction of the age of experimental tumors (2020)
**[R8]** Hong, S., et al. MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework (2024)
**[R9]** Significant-Gravitas. (2023). AutoGPT. GitHub repository. https://github.com/Significant-Gravitas/AutoGPT
Pdf: /pdf/c06bc1e704066aa16d7a669e5f092d01f4b1ebed.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Ensemble sampling for linear bandits: small ensembles suffice | Accept (poster) | Summary: This paper presents theoretical results for an exploration algorithm using ensemble sampling in the stochastic linear bandit setting. The proposed algorithm creates $2m = O(d \log T)$ ensemble models, selects one model uniformly at random, and then chooses the greedy action based on the parameters of that model.
The proposed algorithm achieves a regret bound of $\tilde{O}(d^{5/2} \sqrt{T})$, and it does not depend on the size of the action set.
Strengths: - Unlike previous ensemble sampling algorithms (Lu & Van Roy, 2017; Qin et al., 2022), the proposed algorithm presents a frequentist regret analysis with a reasonable ensemble size. The authors also explain well the differences (e.g., symmetrization) compared to existing randomized algorithms like Thompson sampling and perturbed history exploration.
- The master theorem (Theorem 1) for analyzing the regret bound of the randomized algorithm using the probability that the selected parameter by the algorithm is more optimistic than the true parameter is also intriguing. Proof seems correct, even though I didn’t check very carefully.
Weaknesses: - As noted in Remark 4, although this paper provides a meaningful theoretical result for ensemble sampling, compared to randomized algorithms like Thompson sampling and perturbed history exploration, the proposed algorithm is neither computationally efficient nor statistically efficient. Ensemble sampling is generally effective in complex settings such as DeepRL. However, the regret bound presented in this paper does not seem sufficient to prove the validity of the ensemble algorithm even in the linear setting.
- Thompson sampling algorithms generally have looser regret bounds than UCB algorithms but are known to perform better empirically. It would be beneficial if there were experimental results showing that ensemble sampling algorithms could empirically outperform existing randomized algorithms.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. When I first saw the ensemble sampling setting, it reminded me of the LSVI-PHE algorithm for linear MDPs with perturbed history exploration (Ishfaq et al., 2021). Ishfaq et al. (2021) address algorithms in the MDP setting, but with a horizon length of 1 ($H=1$), it can align with the linear bandit setting. Ishfaq et al. (2021) estimate perturbed parameters using Gaussian noise on observed rewards from previous episodes, repeating this process $M$ times. Actions are chosen based on the most optimistic of these estimated parameters. If we consider each perturbed estimator as an ensemble model, we have an ensemble size of $M$. However, the algorithm in Ishfaq et al. guarantees a high probability regret bound of $\tilde{O}(d^{3/2} \sqrt{T})$ (when $H=1$), with a theoretical sampling size of $M=O(d)$.
The only difference from the proposed ensemble sampling in this paper is that this paper selects the ensemble model to play uniformly at random. If the most optimistic model were chosen as in Ishfaq et al., could a better regret bound be guaranteed? Additionally, are there any other differences between the algorithm in Ishfaq et al. and the one proposed in this paper?
* * *
Ishfaq, Haque, et al. "Randomized exploration in reinforcement learning with general value function approximation." International Conference on Machine Learning. PMLR, 2021.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed the limitations in Section 4.
The content discussed in this paper appears to have little to no negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. There is an unfortunate, but crucial, misunderstanding, in the relation between our work and algorithms such as Thompson sampling and Perturbed History Exploration (PHE), which we believe is responsible for your low scoring of our work. We have clarified these relations in our main rebuttal, which we would ask you to consider.
With the contents of the main rebuttal in mind, let’s discuss LSVI-PHE. Yes, LSVI-PHE fits an ensemble of $m$ estimates, just like we do. But, being an instance of PHE, it fits $m$ estimates from scratch at every single iteration! In contrast, Ensemble Sampling only updates each of the $m$ ensemble elements with the new received observation—so ensemble sampling does something akin to a single gradient step for each of the $m$ estimates at each timestep, whereas LSVI-PHE trains $m$ models to convergence at each step. This is a huge difference in the practicality of the methods, and the reason ensemble sampling is so interesting.
As to taking a maximum over the ensemble: this is a very good question. Yes, you could do that with our iterative updates, and you’d get a better regret bound; you would recover the regret of Thompson sampling (so, $\sqrt{d}$ loose on LinUCB). But it is completely unclear that this would give a better algorithm: after all, it is random-element-type algorithms that are used in practice (see Osband et al. 2016, Osband et al. 2018 and papers that cite these), and practitioners are pretty good at trying out different sensible alternatives. We thank the reviewer for this suggestion. We will insert a comment discussing this point in the manuscript, acknowledging the reviewer.
Regarding how the proof works for the version where the maximum is taken: Whether taking a random element or the maximum, the technical challenge in getting a bound is the same: it is showing that there exists at least one optimistic ensemble element in each “direction” of the parameter space when incremental updates are used. This result and the techniques we use to show it are the main contributions of our paper.
We hope that the above and our main rebuttal clarified the key aspects of the paper. We will use the additional page available at the camera ready stage to clarify these points to future readers.
With that said, we must emphasise that our paper solves a long standing open problem, which attracted two failed attempts previously published at NeurIPS. We believe that a well-written, sound paper with novel proof ideas and techniques that solves a hard open problem—one clearly of interest to the community—is a strong contribution to NeurIPS. As such, we would ask the reviewer to reconsider their assessment of our manuscript.
_Osband, Ian, et al. "Deep exploration via bootstrapped DQN." Advances in neural information processing systems 29 (2016)._
_Osband, Ian, John Aslanides, and Albin Cassirer. "Randomized prior functions for deep reinforcement learning." Advances in Neural Information Processing Systems 31 (2018)._
---
Rebuttal 2:
Comment: Dear reviewer,
The author-reviewer discussion period is shortly coming to an end. We were hoping to get a confirmation from you that you've considered our rebuttal, and to let us know if you have any further questions.
---
Rebuttal Comment 2.1:
Comment: I appreciate the authors for the detailed explanation. Now, I understand the difference between ensemble sampling and PHE better. It would be great to include this explanation in the main text. However, according to the author’s explanation, PHE does not seem to be equivalent to Thompson sampling, at least in the linear case, from the perspective that PHE requires refreshing the noise for observations from previous episodes in each round. Note that in LinTS (Agrawal & Goyal, 2013), the mean vector and covariance matrix are updated incrementally (i.e., $\mathbf{V}\_t = \lambda \mathbf{I}\_d + \sum\_{i=1}^t X\_i X\_i^\top = \mathbf{V}\_{t-1} + X\_t X\_t^\top$, $\mathbf{b}\_t = \sum\_{i=1}^t Y\_i X\_i = \mathbf{b}\_{t-1} + Y\_t X\_t$ ).
Now consider a variant of Thompson sampling where sampling $M$ parameters $\theta\_t^{(j)} \sim \mathcal{N}(\mathbf{V}\_{t-1}^{-1} \mathbf{b}\_t, \nu \mathbf{V}\_t^{-1})$ for $j \in [M]$, and select an action based on the most optimistic parameter, i.e., $X_t = \arg \max\_{x, j} \langle x, \theta_t^{(j)} \rangle$ (or we may choose a parameter uniformly random as this paper does). This variant of Thompson sampling would guarantee the same $\tilde{O}(d^{3/2} \sqrt{T})$ regret as traditional TS (Agarawal & Goyal, 2013), but it also can work online. It would be helpful if you could explain the advantages of the ensemble approach in this paper compared to this variant of Thompson sampling.
Moreover, as the author mentioned, although this paper theoretically addresses a longstanding open problem that had previously been a failed attempt published in NeurIPS, I do not believe the paper offers a complete solution to the open problem.
Certainly, ensemble sampling has shown to be tractable and empirically effective for nonlinear, complex tasks like Deep RL. Given this, one would intuitively expect that ensemble sampling would be at least equivalent to or provide a tighter regret bound than existing randomized exploration algorithms, even in a linear setting. However, I believe the theoretical results proposed in this paper, while very interesting, are only a partial solution.
According to the rebuttal to Reviewer Pgmx, the author claims that “if an algorithm provably cannot work in the linear setting, chances are it won’t work beyond it, either”. However, by the same token, the results of this paper indicate that ensemble sampling is not optimal even in the linear setting. Furthermore, if ensemble sampling is advantageous only when a closed form is not provided, wouldn’t there be no reason to use ensemble sampling in linear settings where a closed form is given? Although the paper presents a novel proof technique, I do not think it offers a solution to the longstanding open problem, especially since even this technique, which is based on the closed-form of the linear setting, fails to demonstrate the efficacy of ensemble sampling in the linear setting where a closed form is provided.
---
Rebuttal 3:
Comment: __Clarifying the confusion about the goal of studying ES in the linear setting__ A minimum requirement for an algorithm that aims at some level of generality is that it should also work reasonably well in simpler settings. Moreover, if the algorithm achieves some generality, compromising on optimality in a simpler setting (in our case, the linear setting) should be acceptable. We are sure that the reviewer is also aware of the many examples where such a compromise exist or is even unavoidable (there is a large body of literature on problems of this type). Ours is the first paper that gives *any* sort of theoretical support for ES. Given the amount of work devoted to ES (empirical and theoretical), we think that a paper that changes this poor status quo should be of major interest (news!) to the community. We note in passing that all previous papers on analyzing ES started with the linear setting (Lu and Van Roy'17; Phan et al,'19; Qin et al,'22 -- all papers published at NeurIPS!), and their rationale for studying ES in the ensemble sampling was exactly the same as ours (see, e.g., the introduction of the paper by Qin et al.)
Another confusion that we would like to clarify is that our work does not give a definite answer to whether ES can be optimal (or even meet the guarantee TS can). While we could not obtain a result like this, ours is an *exponential* improvement over the previous result in terms of what ensemble size gives the optimal rate of growth of regret (up to logarithmic factors).
While we also like to think that we have high standards, expecting that every paper will close a whole area of study by providing a final answer is unrealistic and unhealthy for the community. Just to mention an example close to our work, by this standard, the LinTS work of Agrawal & Goyal (2013) also cited and perhaps liked by the reviewer, which we think is a breakthrough paper, should not have been published: As is well known, the regret bound given in this work falls short of that available for LinUCB by a multiplicative factor of size $\sqrt{d}$. Yet, this paper sparked a lot of good work in the community. Today, we even know that the extra $\sqrt{d}$ factor is even unavoidable (Hamidi et al'20). But achieving this result required the cooperation of many researchers through many years, which was greatly facilitated by reviewers who let the many papers that provided some small steps towards the final solutions to be published. In light of this, we respectfully ask the reviewer to reconsider their position as to what constitutes an interesting, publishable result.
Finally, at the risk of repeating earlier arguments, we would like to note that even if in the linear setting ES is inferior to alternatives, the alternatives loose their edge against ES as soon as the setting becomes slightly more complicated. We show this now in the deep learning setting.
__On ES vs PHE in Deep Learning__ Suppose that our model is a neural network with $p$ parameters, and we use a standard NTK-style linearisation (see, e.g., Jia et al., 2022). TS/PHE with incremental updates gives:
- $O(p^2)$ per step computational complexity for the matrix-vector product, used for Sherman-Morrison updates
- $O(p^2)$ memory for storing the covariance matrix.
Consider on the other hand Ensemble Sampling in the NTK setting. Let $d$ denote the effective dimension of the RKHS with feature map given by the gradient of the neural network. Then the required ensemble size will be $m=d \log T$ (crucially, not $p \log T$). Updating each ensemble element requires a step (or a number of steps) of backpropagation, so order $O(p)$ runtime and $O(p)$ memory. With that, ensemble sampling uses
- $O(dp \log T)$ runtime,
- $O(dp \log T)$ memory.
Observe that $d\leq p$ always. In cases where $d$ might be much smaller than $p$, ensemble sampling is beneficial. __This is why ES works in the deep learning setting, and TS/PHE does not.__
One might then say: okay, why don't we implement TS/PHE with sketching? That will make the runtime of TS depend on $d$ in place of $p$. But implementing sketching for large models is _very difficult,_ due to both numerical stability issues and choosing parameters involved in sketching. We haven't seen popular practical methods built around sketching.
We will be happy to include some extra discussion of why ensemble sampling is a sensible choice if the paper gets accepted using the extra page. This seems something easy to address would the reviewer think this to improve the paper.
In conclusion, we are not claiming that we solve all longstanding open problems related to ES, or even that we solve the hardest of these, but we claim that we solve some of these problems, which we think are of major interest to the community, especially given the new proof techniques we employ.
---
Rebuttal Comment 3.1:
Comment: Thank you for the author's detailed response. I understand the motivation behind ES research in the linear setting and believe that the contributions of this paper should be appreciated. (Please note that I support the acceptance of this paper.) However, I feel that the current manuscript and the author rebuttal do not fully explain that motivation, and I believe this aspect needs to be supplemented.
I am not trying to depreciate this work; rather, I believe the results in this paper can be further strengthened through appropriate comparisons and motivation with other works.
I sincerely hope that the content of the author-reviewer discussion has helped the authors to further polish their work.
Accordingly, I will raise my score from 5 to 7.
---
Rebuttal 4:
Comment: Thank you for the comment. We will be happy to expand the discussion of randomized methods, as indicated beforehand.
We agree with the AC on the motivations for using PHE. And if one thinks of PHE, running ES instead is a natural choice. We prove that doing so still achieves reasonable regret–-even if our result is not optimal (or maybe it is, in the sense that maybe there is a real cost to ES over PHE/TS, and a tighter result is not possible).
On "discussing in detail the pros and cons of the algorithm studied in this paper compared with those randomized methods in the literature":
As confirmed in our rebuttal, we would be happy to spend the additional page granted at camera ready on this discussion. We could also include a more extensive related works section in the appendices. Nonetheless, we must emphasize that our novel contribution is theoretical, not algorithmic–-we do not introduce ensemble sampling, its already a commonly used method in the applied literature–-and there is a strict page limit. We are also not writing a tutorial paper on randomized methods, but providing a novel theoretical contribution. We are writing for an expert that understands randomized methods, and wants to know what specific tricks are required to show that ensemble sampling works: for an expert that can make the comparison between methods themselves. Nevertheless, again, we are happy to use the extra space for a discussion of the relative merits of all methods mentioned. | Summary: The paper presents a regret analysis of ensemble sampling within the stochastic linear bandit framework. It demonstrates that an ensemble size scaling logarithmically with time and linearly with the number of features suffices, marking a theoretical advancement. The paper shows that under standard assumptions for a \(d\)-dimensional stochastic linear bandit with an interaction horizon \(T\), the regret is of the order \((d \log T)^{5/2} \sqrt{T}\). Although the regret bound is not as tight as the existing bounds for the linear bandits, this is the first analysis of ensemble sampling in a structured setting that avoids the need for the ensemble size to scale linearly with \(\sqrt{T}\), which previously compromised the purpose of ensemble sampling, while still achieving near-optimal order regret.
Strengths: - The paper makes a decent theoretical contribution by reducing the required scaling of ensemble size, which enhances the applicability and efficiency of ensemble sampling in linear bandit problems. By showing the better scaling of ensemble size, the paper provides a pathway for more efficient implementation of the ensemble sampling algorithms.
- To my knowledge, this is the first correct frequentist regret analysis of linear ensemble sampling.
Weaknesses: - One drawback is that, as the authors acknowledge, the regret bound of \(O(d^{5/2} \sqrt{T})\) is relatively loose compared to existing linear bandit results. For example, TS-based algorithms like LinTS typically achieve \(O(d^{3/2} \sqrt{T})\) regret, making the proposed bound clearly suboptimal.
- The dependence on \(m\) in the regret bound indicates that as the ensemble size increases, the algorithm's regret performance worsens. This superlinear dependence, with regret being \(O(m^{3/2})\), raises concerns. Given this result, it is unclear why one would opt for this particular way (as proposed in the paper) of linear ensemble sampling.
**Minor comment (but for clarity)**
- I recommend that the authors explicitly write the linear \(d\)-dependence on the ensemble size \(m\) instead of shoving \(d\)-dependence into \(N\) in the theorem statement of Theorem 1 if possible, as I see that authors are transparent about $d \log T$ dependence on $m$ in Remark 1.
- I am not sure whether the expression "**slightly** worse than that obtained for Thompson sampling" (in Line 120) is adequate given that such a gap (extra $d$) can be seen as **significantly** by among many bandit researchers. I understand the authors' intention, but I suggest removing such an expression of "slightly."
Technical Quality: 3
Clarity: 3
Questions for Authors: - Regarding the comments in Remark 3, under unknown $T$, can't you derive some regret bound with the doubling epoch trick though? Or, do you argue that you cannot get any bound (sublinear in T) even with the doubling trick?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: There is no separate "Limitations" section. But the authors discuss the weakness of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We must clarify two important misunderstandings:
1. Our upper bound scales superlinearly with m, but that does not mean that the regret of the algorithm scales superlinearly with m. Our bound is simply not tight when m goes to infinity (we will adjust the wording of Remark 2 to make this clear). It is tight for small ensembles, which is the crucial regime (we do not think anyone has any doubts whether ensemble sampling can work when m goes to infinity! Even previous results of Qin et al. 2022 show that it can, at least in the Bayesian set-up). Our interest is in showing that ensemble sampling as it is used in practice, with small ensembles, can work; and that’s what we demonstrate.
2. Our algorithm is not a “particular version” of ensemble sampling. It's almost the standard version, with the only requirement being a symmetrisation of the ensemble, which can clearly only be beneficial in practice and should be done anyway. In any case, we have no doubt that our result would hold without symmetrisation; but showing this would involve a much, much more tedious argument, from which it would be significantly harder to extract useful insight on how and why ensemble sampling actually works. We address this point in remark 9 of our paper.
We are happy to correct the two issues raised as “minor comments” in the way suggested by the reviewer.
Regarding remark 3: yes, one can of course use the “doubling trick” to obtain an algorithm the regret bound of which is only a constant larger. The problem is that the doubling trick doesn’t just increase the regret bound (which is harmless) by some factor, but it will increase the actual regret incurred by said factor (which is bad). The aim of our paper is to analyse the algorithm as practitioners might use it, and for the aforementioned reason, practitioners would use the doubling trick only as a last resort if ever. Thus, implementing the algorithm without knowledge of the horizon in a way that does not increase the actual regret incurred much in comparison to if the horizon were known is a problem we do not have an answer to—and one where we suspect that a good answer might not exist.
Regarding the weakness of our bound: yes the bound is likely suboptimal; but it is also exponentially better in two quantities than the previously given bound, and holds for the more stringent minimax regret criterion, rather than for Bayesian regret, as in the previous paper. We must point out that the previous paper with the much weaker guarantees was published at NeurIPS in 2022 (Qin et al. 2022), and therefore we believe that our much stronger results are more than sufficient for publication.
Note that we also detail what it would take to get a bound better than ours in Remark 10 of our paper; the required arguments around order statistics in a sequential setting are far beyond the current literature on the topic, but there is hope that it might be solvable in the coming years—time uniform concentration bounds are currently a hot topic, and work on similar questions is beginning to pop up in the literature.
In light of our rebuttal, we must ask the reviewer to reconsider their recommendation of only a “weak accept” for a paper that improves exponentially in multiple respects over a previously published NeurIPS work; a paper that solves a rather long-standing problem in theory; a paper comprising many novel proof ideas and interesting techniques.
_Qin, Chao, et al. "An analysis of ensemble sampling." Advances in Neural Information Processing Systems 35 (2022): 21602-21614._
---
Rebuttal 2:
Comment: I think these results deserve recognition. I am raising my score from 6 to 7.
---
Rebuttal Comment 2.1:
Title: thanks
Comment: Thank you! | Summary: The authors study an upper confidence bound (UCB) type of ensemble sampling method specific to the stochastic linear bandits. Based on previous work, it improves the analysis by introducing Rademacher variables for symmetrising the ensemble. The authors are thus able to obtain $\sqrt{T}$ dependency in the regret with an ensemble size scaling with $d$ (dimension) and $\log T$ instead of $T$ (horizon).
Strengths: - Removing the linear dependency of $m$ (ensemble size) on $T$ (horizon) for this ensemble sampling method is significant.
- The master theorem in Section 3 introduces intermediate filtrations $A'$ that may be of independent interest.
Weaknesses: - The point of introducing the ensemble sampling in [Lu and Van Roy 2017] seems to be approximating Thompson sampling for complex models where computation is intractable. Meanwhile, the regret of linear bandits is known and achievable by LinUCB. Although the analysis improves for using smaller ensemble size in linear bandits, it deviates from the difficult setup and hence undermines the contribution of this paper.
- The number and omnipresence of remarks in the main text obscures the core idea of this paper. For example, Remark 3 does not add much to its section (the upper bound) and is under-explained.
Technical Quality: 3
Clarity: 2
Questions for Authors: - I may have missed this - can the authors point out the significance of the regularization parameter λ? It seems from the theorems (2 and 5) λ only needs to be larger than 5 (i.e. an absolute constant)?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 1
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would ask that the reviewer confirms that they have read our main rebuttal, and to acknowledge that they understand and agree that our paper solves an open problem in the theory community that has attracted two previous failed attempts—both published at NeurIPS. With that in mind, we would ask the reviewer to reconsider their recommendation to reject our work: NeurIPS is precisely the place for well-written, sound papers that address hard open problems highlighted by the community with novel proof ideas and techniques.
We would also ask the reviewer to clarify the issue with respect to remark 3: to us, the lack of a good anytime version of ensemble sampling poses an interesting problem. We would have also liked to see some more details of as to why the reviewer thinks that remark 3 is underexplained. We acknowledge that the text can be dense, but we ask the reviewer not to forget that we are bound by a length constraint, and we think the text has just enough information to understand the issue (here and elsewhere).
As to the question regarding $\lambda$: the significance of $\lambda$ is that it is the scale of the noise injected into our estimates of the instance parameter prior to observing any data. The lower bound on lambda shows that it is crucial to inject some noise—this has already been understood by practitioners, and indeed the main contribution of Osband et al. 2018 over Osband et al. 2016 is the introduction of such “prior noise”. Our theory successfully confirms what practitioners have observed and developed an intuitive/heuristic understanding of (this intuitive understanding is outlined in Osband et al. 2019). This is a non-trivial contribution, and we will highlight it in our next revision of the manuscript.
_Osband, Ian, et al. "Deep exploration via randomized value functions." Journal of Machine Learning Research 20.124 (2019): 1-62._
---
Rebuttal Comment 1.1:
Comment: I agree that the work can be extended with NTK and itself is of interest to the theoretical community. And it would be a good addition to provide the discussion on $\lambda$.
Given the placement of its previous attempts and the authors' response, I'll change my rating toward having this work published i.e. from 4 to 6. | Summary: This paper studies the ensemble sampling in the stochastic linear bandit setting, and shows that the ensemble sampling can achieve $\mathcal{O}( (d\log T)^{5/2} \sqrt{T})$ regret with an ensemble size of order $\mathcal{O}(d log T)$. The authors claim this is the first meaningful result of ensemble sampling as it does not require sub-linear ensemble size.
Strengths: 1. Though I have skimmed the proof of several lemmas, the analysis part seems to be rigorous and mathematically correct. The notations are clear defined before their references.
2. The result which shows that ensemble learning is able to deal with linear bandit problem is quite interesting. This result would be applied in other similar settings.
Weaknesses: 1. It would be better to emphasis the differences between Algorithm 1 and LinUCB and explain how these differences work. The beta is quite similar.
2. Though the authors refer the audience to the previous works for the motivation of studying ensemble sampling, it's still beneficial to include the related works on linear bandits and briefly discuss the specific advantages of using ensemble sampling.
Technical Quality: 3
Clarity: 3
Questions for Authors: The questions are raised in the weakness section. I am willing to re-evaluate the scores if these questions are properly answered.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: This paper is pure theoretical and does not have any limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: While our focus is on the novel proof ideas and techniques needed to get a bound for ensemble sampling, we would be happy to use the additional page available at the camera ready stage to provide more context on ensemble sampling and its relation to other methods.
To answer the reviewer’s two “weaknesses” points directly:
1. The beta featuring in our work is indeed the same as that used in LinUCB, as it is the standard confidence width multiplier used for the confidence ellipsoids of linear ridge regression, as obtained from the method-of-mixtures. However, the difficulty with ensemble sampling lies in proving that the ensemble parameters are “sufficiently spread out” in the confidence ellipsoid in an appropriate sense. Thus, while we reuse part of the ideas from standard bandit proofs (optimism), much work had to go into showing that optimism thus indeed holds (in a certain sense).
2. Regarding advantages of using ensemble sampling, as discussed in Remark 4: Ensemble sampling should only be used in settings where closed forms are not available. There are no advantages to using ensemble sampling in the linear setting studied in this work, where Thompson sampling has a closed-form solution. The linear setting is studied in theoretical work because it provides a testing ground for the soundness of algorithms: if an algorithm provably cannot work in the linear setting, chances are it won’t work beyond it, either.
With this out of the way, we ask the reviewer to consider our main rebuttal. In particular, we ask the reviewer to reconsider their assessment of our contribution as merely “fair”; we solve a long-standing open problem that has had two failed attempts published at NeurIPS. And we would ask that the reviewer reconsiders their scoring of the paper as “Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject”, since to our reading, the reviewer has not provided any significant reasons to reject, whereas reasons to accept are plentiful: a very strong theoretical contribution with interesting, novel proof techniques, on a question that has attracted much attention in the community.
---
Rebuttal 2:
Comment: Dear reviewer,
The author-reviewer discussion period is shortly coming to an end. We were hoping to get a confirmation from you that you've considered our rebuttal, and to let us know if you have any further questions.
---
Rebuttal Comment 2.1:
Comment: Sorry for the late reply.
Thank the authors for their response. I would like to increase my score to 6. | Rebuttal 1:
Rebuttal: We respectfully disagree with the current assessment of our work and would like to encourage the reviewers to reconsider the following: We believe that the successful resolution of a long-standing open problem, one that has attracted multiple prior attempts, holds significant value within the NeurIPS community. Notably, two of these prior attempts were themselves published at NeurIPS (Lu & Van Roy 2017 and Qin et al. 2022). A well-written and sound paper addressing this open problem is a significant contribution. Novel proof ideas and techniques are of interest and contribute to the advancement of the field. It appears to us that none of the reviewers considered these aspects of our work; hence, again, we respectfully ask the reviewers to reconsider their assessment taking the above into account. We will also make an effort to revise the paper to make it (even) more clear that the above is what makes our paper interesting for the community, along with some other revisions (which we think are minor) to clarify a few things that seemed to be missed by the reviewers; in particular, the relation of ensemble sampling and Thompson sampling/perturbed history exploration, and the applicability of our results in the deep learning setting.
__Relation to TS/PHE__ There was a common misunderstanding amongst the reviewers as to the relation between ensemble sampling (ES), Thompson sampling (TS) and Perturbed History Exploration (PHE). In the linear setting, PHE and Thompson sampling are equivalent, and so we will make the comparison only between Ensemble sampling (ES) and PHE. In PHE, at every round, you take all past observations, add fresh noise to all of the data, then fit a model, and select a greedy action. That means that at every iteration, you have to fit a new model! If the models are neural networks, this is extremely expensive.
Could you, perhaps, instead of fitting a new, fresh model at every round, have an ensemble of $m$ models, and update these incrementally? That is, at each round, you do not add fresh noise to all previous observations, but only to the observation just received. This is ensemble sampling; in contrast to PHE, it does not require fitting a new model from scratch at every round; and it does not require storing past data—it is a completely online algorithm.
This leaves the question of whether ES works; whether we can get away with using incremental updates, and not train models from scratch. This is the question that we resolve with an affirmative answer: yes you can! The answer to this question was genuinely not known at all before our work. Our regret bound isn’t as good as if you had trained $m$ new models each iteration… maybe our bound is suboptimal, or maybe it's a real trade-off: we do not know; we discuss this in Remarks 2, 4 and 10.
__Neural network function approximation__ The second concern of our reviewers was whether we had dodged the difficult “deep learning” setting by considering only the linear setting. We did not: the difficulty of proving a result for ensemble sampling comes from the incremental nature of the updates, and is tricky whenever there are any dependencies (correlations) between actions in the problem; the linear setting fully captures these difficulties. The core contribution of the paper is a completely novel proof addressing this difficulty. With a result on the regret proven for the linear setting, one can extend the result to deep learning using neural tangent kernel techniques—this is standard and requires no novel proof ideas or techniques; but at the same time rather tedious. Given the short page limit of NeurIPS papers, we decided not to include this in our manuscript. We believe that our contribution in the linear setting is valuable for its techniques and ideas, regardless of application to neural networks.
Furthermore, the difficulty is not in showing that an algorithm based on the linear bandit setting can have bounded regret when combined with neural network function approximation, but that it remains tractable in that setting. PHE and TS are not tractable in this setting. Ensemble sampling, on the other hand, is the method behind the BootstrapDQN and Ensemble+ algorithms of Osband et al. 2016 & Osband et al. 2018; as evidenced by the citation counts of these papers, these ensemble sampling algorithms are commonly used.
We are thus proving results for methods actually used in practice, rather than the PHE/TS algorithms, which while simple to analyse (and often analysed and extended in the theory literature), are not really used in deep reinforcement learning. We believe that proving results for algorithms that practitioners actually use, rather than for simpler theoretical constructs, is of crucial importance to the community, and often overlooked in theory work.
__Moving forward__ When writing the paper, given that PHE, Thompson sampling and ensemble sampling are not our inventions and are thoroughly described and discussed in the literature, we assumed that it was not necessary to describe the above mentioned points and that it was sufficient to refer to the literature; and that we should use the scarce space for describing the novel parts of our work instead. As the camera ready allows for an extra page, we now think that the extra space should be used to include a discussion of the relationship of these methods, making the motivation to analyse ensemble sampling as we do it, clear, regardless of the familiarity of the readers with the literature.
We must again emphasise that our paper solves a long-standing open problem, which attracted two failed attempts, both published at NeurIPS. We think that a well-written, sound paper that successfully solves this open problem is surely a strong contribution publishable at NeurIPS.
With the above in mind, we must ask that all reviewers reconsider their scoring of our paper. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
4Real: Towards Photorealistic 4D Scene Generation via Video Diffusion Models | Accept (poster) | Summary: 4Real presents a novel framework for dynamic 3D scene generation. Given an input video, the framework selects a frame and prompts a text-to-video diffusion model to generate a "freeze-time video" that reconstructs the canonical 3D scene. Then, video-SDS-based optimization is used to optimize a dynamic 3DGS with reference to the input video. Compared to existing approaches that focus more on object-centric generation, the new method is able to generate more complex 4D scenes including multiple objects.
Strengths: 1. 4Real generates photo-realistic 180-degree 4D scenes for the first time. In comparison, previous 4D generation works are mainly object-centric, restricted by the object-centric image models that they rely on. 4Real only relies on native text-to-video diffusion models, thus addressing the bias. The approach is more favourable than 4Dfy and Dream-in-4D as suggested by a user study.
2. Although the work mainly follows the previous video-SDS-based 4D generation frameworks, it proposes several effective techniques to enable modeling more complex scenes that include multiple objects. A key design is to generate the canonical 3D through freeze-time video and video extension. An additional dynamic 3DGS fitting and a heuristic loss (small motion regularization) are proposed to offset the remained motions in the freeze-time videos, which I find reasonable and novel.
3. The paper also provides empirical values to the video-SDS-based 4D generation frameworks. It reduces the video-sds computational cost dramatically by leveraging low-resolution pixel diffusion models.
Weaknesses: 1. The description of context conditioning (L149) is not clear to me. "One dataset consists of static real-world objects with circular camera trajectories", is this dataset similar to [CO3D](https://ai.meta.com/datasets/co3d-dataset/), does it contain background, and how critical is the context embedding? Including this information can be important to reproduce this work.
2. The evaluation did not compare with AYG [28], whose results could also be easily accessed. AYG optimizes a dynamic 3DGS while the compared 4Dfy and Dream-in-4D optimize a dynamic NeRF, so I would say AYG is closer to this work. Although similar conclusions could be drawn for both AYG and 4Dfy/Dream-in-4D as they are all object-centric, the argument in L271 that claims "These methods are known to achieve some of the best visual quality compared to AYG and ..." is neither well-grounded nor convincing. L270 is also wrong to claim that Dream-in-4D is based on D-3DGS.
3. Generating a freeze-time video for the static 3D generation makes a lot of sense, but I wonder if it is possible that the generated freeze-time video could conflict with the reference video. Moreover, the canonical 3D generation stage seems to be solving a standalone image/text-to-scene task, it would be interesting to know if recent 3D scene generation works like RealmDreamer[A] and CAT3D[B] could be a drop-in replacement.
Minor:
- Editing is mentioned in the contributions, but it has not been discussed in any other places in the paper.
[A] Shriram et al., RealmDreamer: Text-Driven 3D Scene Generation with Inpainting and Depth Diffusion
[B] Gao et al., CAT3D: Create Anything in 3D with Multi-View Diffusion Models
Technical Quality: 3
Clarity: 2
Questions for Authors: Please kindly address the weaknesses above.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Context embedding.** First, the dataset for “static real-world objects with circular camera trajectories”, consists of data similar to the MV-ImagNet, with real-world backgrounds. Second, the use of context embedding is critical, as it controls the video model to generate freeze-time-like video. Without this step, the model predominantly generates dynamic videos whenever nonrigid subjects are included in the text prompt, regardless how the text prompt is engineered. The usage of context embedding can definitely be abandoned in the future when the video models are equipped with better camera-control and better consistency to input prompts.
**Compare with AYG and correct description of Dream-in-4D.** We thank the reviewer for pointing out our mistake in wrongly describing Dream-in-4D as a 3D GS-based method. We will correct this error accordingly. Additionally, we have included a comparison against AYG, as shown in Figure 3 and Table 1 in the PDF. We find that our overall assessment of the advantages of our method over prior object-centric methods remains valid.
**Consistency between reference video and freeze-time-video.** To minimize inconsistency, we generate reference videos without camera motion. This approach reduces disoccluded regions caused by camera movement, which is a major source of potential inconsistency between the reference video and the freeze-time video. Given this setup, we find the low-res video model is able to generate mostly consistent pixels in the remaining disoccluded regions caused by object motion, partially due to any inconsistency is hard to notice in low-res video. To further improve consistency, we merge two videos into a single video when feeding it into the video upsampler, leveraging the temporal self-attention in the upsampler to improve consistency.
**Applying recent 3D scene generation works.** Yes, we believe recent image-mv models trained for 3d scene generation can be a drop-in replacement for the part of generating freeze-time video. Unfortunately, none of these concurrent approaches (i.e., RealmDreamer and CAT3D) are open-sourced yet, so it is not clear how they perform compared to the video model we employed. We will include a discussion of these methods in the revised version.
**Video editing.** Thanks for the reminder. We mentioned the flexibility in selecting and editing videos in our contributions based on our generation-via-reconstruction pipeline that allows users to generate, select, and edit the reference video they want. We will include our results of applying simple video editing techniques, such as face-swapping or attributes manipulation, in the revised draft of the supplementary material.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. It clarifies most of my concerns. The numerical comparison with AYG looks good to me. Figure 2,3,4 in the rebuttal would be more helpful if more frames could be visualized.
On 3D-video inconsistency, a static camera in reference video does not avoid the inconsistency as the objects can move as well, like turning around. "Merge two videos into a single video when upsampling" sounds like a reasonable trick, and should be discussed more in detail in the paper. It would also be better if more principled solutions could be discussed for future works.
I will keep my score. | Summary: The paper introduces a pipeline designed for photorealistic text-to-4D scene generation, discarding the dependency on multi-view generative models and instead fully utilizing video generative models trained on diverse real-world datasets. The method begins by generating a reference video using the video generation model, then learns the canonical 3D representation of the video using a freeze-time video, delicately generated from the reference video. To handle inconsistencies in the freeze-time video, the method jointly learns a per-frame deformation to model these imperfections, then learns the temporal deformation based on the canonical representation to capture dynamic interactions in the reference video.
Strengths: 1. The paper delivers impressive visual results for 4D scene level generation. The quality is outstanding and scene level generation is more general than previous object level generation.
2. The paper is well written with elaborated technical details.
Weaknesses: 1. The method seems extremely slow due to multiple steps being applied. The method hasn't provide enough latency comparisons.
2. Besides user studies, I wonder if any qualitative results can be shown, for example, FVD. Or some video sequences from multiview data (in door or outdoor) and compute reconstruction loss, but applying the lifting for reference video (using one view video in the data set) and render the other view to compare with the gt novel view video.
3. The model has a freeze time view deformation field, which indicates fitting static 3dgs scene from freeze-time video is unable to deliver good initial 3dgs field. How to balance the 3d consistency and plausibility from freeze time camera prior? More reasoning and analysis is desirable.
4. The paper uses freeze time SDS some part and multi-view SDS somewhere else. Better to improve the consistency.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The paper mentioned a freeze time view deformation field. Does every frame has it or it is only applied to the first frame.
2. How important is the freeze time view deformation field.
3. The motion video diffusion needs to sample with the first frame, how does the video diffusion handle different sampled fps? Using it as a input? This setting potentially limits the extension of this method to longer video.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Multiview video can be a better prior, it is limited by training data, but will improve in the future. 4D generation might be benefited from this prior and reduces the requirement of multiple stages.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Latency comparison.** In Ln. 64, we mentioned that our overall runtime is approximately 1.5 hours on a single A100 GPU, compared to over 10 hours for 4Dfy and Dream-in-4D. To provide additional latency analysis, we break down our runtime as follows: approximately 10 minutes for reference and freeze-time video generation, about 2 minutes for running COLMAP, around 20 minutes for reconstructing the canonical GS, and less than 1 hour for the remaining reconstruction steps with SDS.
**Additional qualitative metrics.** We included additional metrics to quantitatively evaluate the rendered videos (see Table 1 of the PDF). We employ the XCLIP score [1], a minimal extension of the CLIP score for videos. We also run VideoScore [2], a video quality evaluation model trained using human feedback, which provides five scores assessing visual quality, temporal consistency, text-video alignment, and factual consistency. Our method significantly outperformed other methods in these metrics. However, we remain conservative about the effectiveness of these metrics since they have not been thoroughly studied for text-to-3D/4D generation.
Method | **X-CLIP $\uparrow$** | **Visual Quality $\uparrow$** | **Temporal Consistency$\uparrow$** | **Dynamic Degree $\uparrow$** | **T-V Alignment $\uparrow$** | **Factual Consistency $\uparrow$**
-----------------|-----------------------|-------------------------------|------------------------------------|-------------------------------|------------------------------|------------------------------------
**4Dfy** | 20.03 | 1.43 | 1.49 | 3.05 | 2.26 | 1.30
**Ours** | **24.23** | **2.43** | **2.17** | **3.15** | **2.91** | **2.49**
**Dream-in-4D** | 19.52 | 1.34 | 1.37 | 3.02 | 2.27 | 1.20
**Ours** | **24.77** | **2.41** | **2.15** | **3.14** | **2.89** | **2.46**
**AYG** | 19.87 | **2.49** | 2.09 | 3.15 | 2.80 | 2.47
**Ours** | **23.09** | 2.44 | **2.16** | **3.16** | **2.90** | **2.50**
Additionally, we excluded the FVD metric, as computing it with a limited number of rendered videos is statistically not meaningful. Evaluating with real-world multi-view videos is also not easily applicable to our pipeline. This is because we made specific design choices, such as requiring the reference video to have no camera motion to avoid inconsistency between the reference video and the freeze-time video. This assumption does not hold for casually captured dynamic videos.
**Analysis of freeze time view deformation field.** The reason for introducing view-dependent deformation (referred as per-view deformation in the paper) is to treat inconsistency in the generated freeze-time videos as geometric deformation. Without the view-dependent deformation, the canonical GS tends to underfit the freeze-time video, leading to blurrier regions, particularly in the background, where the video model often produces geometrically incorrect results.
However, while view-dependent deformation can improve underfitting, it can also lead to overfitting to the reference video, resulting in noisy reconstruction. To mitigate this, we employ a regularization loss of the magnitude of the deformation (as mentioned in lines 180-186) to prevent overfitting. Through empirical testing with a few samples, a weighting of 0.01 was found to be effective and is used consistently throughout the experiments.
Additional visual comparisons are provided in Figure 5 of the PDF to support this analysis. These comparisons demonstrate the effects of removing either the per-view deformation or the small motion regularization.
**Does freeze time view deformation field apply to other temporal frames?** No, it is only applied to fit the frames from the freeze-time video. It is not used for any other frames.
**How does the video model handle different sampled fps?** The video model we used can take FPS as input. Additionally, as described in Ln. 142, it is a frame-conditioned model capable of performing either autoregressive generation or frame interpolation. Therefore, theoretically, our method can handle longer videos in the cost of longer processing time in the following reconstruction stage.
**Multiview video prior.** Yes, we also strongly believe that developing a generalizable multiview video diffusion model is one of the most promising directions for 4D generation. We briefly mentioned potential implementations, such as cross-frame attention, in Ln. 308. In the new draft, we will include a discussion of more concurrent efforts in multiview video generation. At this point, however, our method serves as a pioneering effort in demonstrating the possibility of generating realistic 4D scenes.
**References:**
[1] Expanding Language-Image Pretrained Models for General Video Recognition.
[2] VideoScore: Building Automatic Metrics to Simulate Fine-grained Human Feedback for Video Generation. | Summary: This paper proposes a straightforward method to distill a 4D dynamic scene from a video diffusion model. Given a reference video (only generated videos are shown in the paper, no real video), they first generate a freeze-time video as a multi-view to reconstruct a reference 3DGS. Because they don't have a good multiview image model to capture the general scene multiview prior, they can only use a video model, which might produce small deformations. Some engineering effort is paid here to accept such deformation. After the reference Gaussians is reconstructed, an SDS stage is applied to both the frozen time multiview images and the temporal deformed frames.
Strengths: - This is an early attempt to generate 4D scenes from video diffusion models
- Although very straightforward, the system works for generating some 4D videos.
Weaknesses: - Object-centric? One important constraint this paper assumes is that because they want to generate dynamic scenes, they don't use sophisticated object-centric models. However, most video examples this paper shows do not have significant background motion, so one strong baseline/alternative is to just distill a static background and put a dynamic subject (from a dynamic object-centric SDS) in the foreground. Maybe this would achieve much better results than the paper's pipeline.
- How consistent? One question arises when you have a reference temporal video. Because there is a hallucination of the freeze-time view only considering a specific frame, however, the hallucinated freeze-time multiview images may not be consistent with what is observed in the temporal video, how does the system handle such in-consistency?
- Failure case: when i open the static website files in the supply, there is a folder called "good results", which the reviewer guesses may be because of some case selection, it would be critical to also show failure case.
Technical Quality: 2
Clarity: 2
Questions for Authors: The authors also have to highlight their novelty, since the method is straightforward.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Some limitation is discussed, but no failure cases are shown.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Baseline by 3D scene generation + 4D object-centric generation.**
Although this is a straightforward baseline, implementing it with high quality is challenging.
- First, inserting 3D objects with realistic and physically correct placement is not trivial. Common issues include floating objects, misaligned scales between objects and background, and unrealistic scene layout.
- Second, generating object motions that align well with the backgrounds is difficult, such as ensuring a generated target subject sits on a sofa rather than stands on it.
- Third, complex procedures are needed to relight objects to match environment lighting.
There exists several recent works either specialized in human-environment interaction [3] or generating simple 3D scene layout [4]. Currently, we are not aware of methods able to achieve all above points automatically. We tried our best to create a few baseline results with heavy manual efforts, as shown in Fig. 2 in the PDF. We create the 3D background by removing the object from our generated freeze-time video. We then employ 4Dfy to generate the foreground objects. Finally, we manually insert the objects into the background with the most plausible position and scale we can find. Despite these efforts, we find the inserted object appears disconnected from the background, especially when compared to 4real’s results. For example, in the first example, the fox stands on the sofa instead of lying on the sofa; in the 2nd example, the racoon appears floating above the DJ equipment, and the lighting is inconsistent with the purple/blue environment lighting; and in the last example, the racoon appears popping out of the newspaper instead of lying on it.
Overall, we think that 3D scene generation + 4D object-centric generation could be a viable alternative. However, this approach requires significant and complex research and engineering efforts. At this point, directly lifting 3D from generated videos, as demonstrated in our pipeline, appears to be a more generalizable and straightforward method. This is because the video model inherently produces realistic layouts, motion, and lighting. Furthermore, we envision that incorporating concurrent advancements in 4D reconstruction (e.g., MoSca [1]) and 4D object-centric generative priors (e.g., SV4D [2]) will further improve the quality of our pipeline.
**Consistency between reference video and freeze-time-video.** To minimize inconsistency, we generate reference videos without camera motion. This approach reduces disoccluded regions caused by camera movement, which is a major source of potential inconsistency between the reference video and the freeze-time video. Given this setup, we find the low-res video model is able to generate mostly consistent pixels in the remaining disoccluded regions caused by object motion, partially due to any inconsistency is hard to notice in low-res video. To further improve consistency, we merge two videos into a single video when feeding it into the video upsampler, leveraging the temporal self-attention in the upsampler to improve consistency.
**Failure cases.** As stated in Ln. 303, our method may fail under conditions such as rapid movements, the sudden appearance and disappearance of objects, and abrupt lighting changes. We will include failure samples in the new draft. To clarify what we mean by “good results,” we initially generated approximately one hundred results with various styles of reference videos without human filtering to understand the limits and ideal use cases of our method. After analyzing these results, we identified our limitations as mentioned above and retained the set of “good” videos—those with moderate movements, no sudden appearance or disappearance of objects, and no significant lighting changes—as example results for the intended use case of our method. The selection process can be entirely automatic via heuristic rules based on our observations.
**Highlight of Novelty.** In the 3D generation field, the paradigm where a 3D reconstruction following multiview image generation is one of the mainstream approaches. Although the idea is straightforward, achieving convergence to this solution and making it work effectively is not trivial. Similarly, the proposed generation-via-reconstruction pipeline for 4D generation is non-trivial in practice. Naively combining existing video models and 4D reconstruction techniques does not yield desirable results. The major contributions and novelties of our approach are (1) a workflow to obtain plausible freeze-time videos from reference videos, given current video models of limited capability, (2) a deformation field to handle the inconsistencies in imperfect freeze-time videos, and (3) a joint temporal and multiview SDS to help reconstruct temporal deformation. With these proposed techniques along with other efforts to adopt various existing components and regularization effectively, we achieve the presented realistic 4D generation results.
**Reference:**
[1] MoSca: Dynamic Gaussian Fusion from Casual Videos via 4D Motion Scaffolds.
[2] SV4D: Dynamic 3D Content Generation with Multi-Frame and Multi-View Consistency.
[3] GenZI: Zero-Shot 3D Human-Scene Interaction Generation
[4] GALA3D: Towards Text-to-3D Complex Scene Generation via Layout-guided Generative Gaussian Splatting
---
Rebuttal Comment 1.1:
Title: Keep my positive recommendation
Comment: After reading the reviews and rebuttals, I still think this paper provide valuable contribution to the community. The authors should include the above references [1-5] and discussions in the revision to better clarify the potential concerns, also, if possible, report the failure ratio that how many bad or good results will be generated from the 100 videos. I keep my positive recommendation. | Summary: This paper proposes a new pipeline called 4Real, aiming for more photorealistic text-to-4D generation than prior work. The method first generates a reference video, then learn a canonical 3D representation from a freeze-time video. Afterwards, per-frame and temporal deformation are learned to model the gap between the canonical representation and the targeted video. This paper compares with two baselines and includes both visualizations on 30 examples and user study. The method is also much faster than prior work.
Strengths: 1. This paper has lifted the requirement of training a 3D generative model using limited synthetic data from prior work. This is important since there are much less available 3D assets than videos in the real world. This paper shows a potential way of fully unleashing the power of the abundant diverse video data in improving the text-to-4D models.
2. The model takes 1.5 hour for testing, which is much faster than 10+ hours from previous methods. Although it is still very slow compared to other text-to-X models, it's already a great improvement.
3. The paper has a good presentation and is well-written. It also shows all the testing results in the supplementary.
Weaknesses: This paper has clearly identified quite a few key limitations of prior work and aims to solve them, which is great. But I don't think all of these claims have been validated. For example:
(1) this work captures the interaction between objects and environments
From all the submitted video results, they do not seem to include much (or any) interaction between the object and the environment (e.g., relative global movement between them). For example, for the "bear driving a car" comparison between ours and 4Dfy, the car in "our result" is not moving on the lawn. The movement only comes from the bear inside the car.
(2) prior work being object-centric while this work not.
Although it's clear that the compared baselines only generate foreground object while the proposed method also creates the background environment. Since there's not much interaction between the background and foreground (as explained above), a better baseline would be generating an empty background/environment that suits the prompt/foreground and then putting the baseline-generated object properly in the environment.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. From the results, it does seem that the baselines generate much less realistic objects than the proposed method as the paper has claimed. However, existing text-to-3D models have proved their power in generating realistic 3D assets [20, 24, etc.], so could it because the 3D generation model used in baselines are a bit outdated or the prompts were not the best ones? Maybe the realism in the baseline results can be largely improved by a bit prompt engineering (e.g., adding keywords like "realistic" in the prompt) or updating their used 3D generation model?
2. Does the deformation model also time-varying appearance or only time-varying geometry?
3. Minor suggestions:
(1) maybe adding the input prompt to Fig. 4 may help the readers better understand the output content and quality.
(2) Some typos:
L24: ", E" -> ". E"
L292: missing a period.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The limitations seem to have been sufficiently discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Interaction between objects and environments.** Our approach aims for a generalizable approach to generate realistic interaction between objects and environments, which includes natural multi-object placement, realistic environmental lighting effects, and relative motions between objects and environment. We acknowledge the ambiguity of the term “interaction” and will clarify these points. As per the reviewer’s request, we provide additional samples (see Fig. 4 in PDF) with moderate global motion between the object and the environment, such as “a storm trooper walking forward …” and “a dog running from left to right in front of a chair”. While our method currently faces challenges in reconstructing content with more rapid motion (as stated in Ln. 303), this limitation may be alleviated using concurrent advancement in 4D reconstruction method with long-term point tracking, such as MoSca [1].
**Baseline by 3D scene generation + 4D object-centric generation.** Although this is a straightforward baseline, implementing it with high quality is challenging.
- First, inserting 3D objects with realistic and physically correct placement is not trivial. Common issues include floating objects, misaligned scales between objects and background, and unrealistic scene layout.
- Second, generating object motions that align well with the backgrounds is difficult, such as ensuring a generated target subject sits on a sofa rather than stands on it.
- Third, complex procedures are needed to relight objects to match environment lighting.
There exists several recent works either specialized in human-environment interaction [4] or generating simple 3D scene layouts [5]. Currently, we are not aware of methods able to achieve all above points automatically. We tried our best to create a few baseline results with heavy manual efforts, as shown in Fig. 2 in the PDF. We create the 3D background by removing the object from our generated freeze-time video. We then employ 4Dfy to generate the foreground objects. Finally, we manually insert the objects into the background with the most plausible position and scale we can find. Despite these efforts, we find the inserted object appears disconnected from the background, especially when compared to 4real’s results. For example, in the first example, the fox stands on the sofa instead of lying on the sofa; in the 2nd example, the racoon appears floating above the DJ equipment, and the lighting is inconsistent with the purple/blue environment lighting; and in the last example, the racoon appears popping out of the newspaper instead of lying on it.
Overall, we think that 3D scene generation + 4D object-centric generation could be a viable alternative. However, this approach requires significant and complex research and engineering efforts. At this point, directly lifting 3D from generated videos, as demonstrated in our pipeline, appears to be a more generalizable and straightforward method. This is because the video model inherently produces realistic layouts, motion, and lighting.
**Could up-to-date text-3D models improve realism.** The reason why 4Dfy and dream-in-4d is less realistic is because relying on text-MV model (i.e. MVdream) trained with synthetic data. As a result, the model is biased to generate synthetic-style objects, even when “realistic style” is added to the text prompt (see Fig.1). Some text-3D methods achieve a certain level of realism by using generative models trained with real data, either performing SDS with text-image model [2], or training image-MV model using real data [3]. In this regard, our approach can be loosely considered an updated version of these methods with an improved 3D prior.
We will include this discussion in the next draft.
**Does the deformation model also time-varying appearance?** No, we only model geometry deformation. We find that incorporating time-varying appearance worsens reconstruction results by making the 4D reconstruction problem more ambiguous.
We thank the reviewer for the valuable comments, and will modify accordingly.
**Reference:**
[1] MoSca: Dynamic Gaussian Fusion from Casual Videos via 4D Motion Scaffolds.
[2] HiFA: High-fidelity Text-to-3D Generation with Advanced Diffusion Guidance
[3] CAT3D: Create Anything in 3D with Multi-View Diffusion Models
[4] GenZI: Zero-Shot 3D Human-Scene Interaction Generation
[5] GALA3D: Towards Text-to-3D Complex Scene Generation via Layout-guided Generative Gaussian Splatting
---
Rebuttal Comment 1.1:
Comment: I have read through all reviewers' comments and the rebuttals by authors. My concerns have been addressed. Therefore, I would like to raise my rating to weak accept. Please add the relavant discussions in the rebuttal to the final version of the paper if accepted. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for the positive feedback and insightful comments. We appreciate all the suggestions and will revise the paper accordingly.
To address major concerns from the reviewers, we include the following additional results in the PDF:
- **Additional baseline results from combining 3D scene generation and 4D object-centric generation (@yWZo, 7sxF)**: We found that this seemingly straightforward baseline is actually non-trivial to produce high-quality results with realistic object placement, object motion, and lighting effects, even with significant human intervention such as setting the placement position and object scale.
- **Additional results on text-mv models (@3AzP, yWZo)**: Text-MV model trained with synthetic data is biased to generate synthetic-style images, even when ”realistic style” is added to the text prompt.
- **Examples showing some global relative motion between the object and the background (@yWZo)**
- **Comparison against AYG (@vJ8m)**
- **Quantitative evaluation using XCLIP and VideoScore (@oa6s)**: XCLIP is a minimal extension of the CLIP score for videos. VideoScore is a video quality evaluation model trained using human feedback, which provides five scores assessing visual quality, temporal consistency, text-video alignment, and factual consistency.
- **Additional visual ablation of deformation (@oa6s)**: We add visual analysis of per-view deformation and the effect of using deformation regularization to balance between 3D consistency and plausibility from freeze time video prior.
We will elaborate on and address every question and suggestion in the following individual response.
Pdf: /pdf/8990d30d0802b196427d6a5297ceb7476ae0c778.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper presents a method for 3D scene generation using video diffusion models. The main contribution is to present a method that does not require a multi-view generative model trained on synthetic data. The paper identifies the use of multi-view models as a strong limitation limiting existing works to generate high-quality scenes. The method first generates a video from the text prompt, and then also generates a second video starting from a reference frame that corresponds to a time-freezed camera scan of the scene. The second video allows for a static 3D reconstruction of the reference frame, that is then used in reconstructing the motion in the original video. The proposed method is novel and achieves better results than the state of the art.
Strengths: The paper is very well-motivated and well-written. The key limitation in existing works is very clearly and explicitly identified.
The method is explained clearly. The paper does a good job explaining a complex pipeline and the supplemental provides further details.
The evaluations demonstrate very clear improvements over the state of the art.
Weaknesses: The paper shows high-quality results that outperform existing work, and also present ablations. I was curious whether the identification of the use of synthetic data could also be demonstrated as an ablation, e.g., finetuning the snap video model on synthetic data and adding another SDS loss to demonstrate that it is indeed the key factor that limits high quality.
The paper in the intro mentions "reliance on multi-view generative models". While the video generative model used in the paper does not take camera poses as conditioning, it is in fact trained on static multi-view data. If so, I would still call such a model capable of generating multi-view images as a multi-view model. How would the overall approach perform if the snap video model was not trained on static multi-view scans? It would be good to provide more details on the training data. What percentage of the training data consists of such multi-view recordings?
The paper claims efficiency as a contribution; however, later in L225-228, it correctly points out that most of this improvement is due to the use of snap video model, and not due to any contributions of this paper. If true, the claims should be adjusted.
From L161-163, it seems like the sufficient amount of pose variation is chosen manually? Is it not possible to automate this? If this step is manual, how is it accounted for in the reported processing times?
Technical Quality: 4
Clarity: 4
Questions for Authors: Does Colmap work for all the dynamic videos generated, even if the motion is large?
The results are generally very impressive but it would be very helpful to the community to also show failure examples so that the remaining challenges are clear.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Ablation of finetuning with synthetic data.** This is a great suggestion to rigorously prove the benefit of using models trained with real world data. However, conducting such experiments requires significant GPU hours to reach a meaningful conclusion, which we are unable to fulfill for rebuttal. Instead, we refer to results from text-MV image models finetuned using synthetic data, e.g. MVDreamer. As shown in Fig 1 of the PDF, images generated by MVDreamer are cartoonish, even when “realistic style” is included in the input text prompt.
**Importance of training with static multi-view data.** Training video models with static multi-view data is crucial not only because it supplements the training set with videos of static scenes but, more importantly, because it allows us to learn a context embedding (associated to the multi-view dataset) that serves as a tool to control the model to generate freeze-time-like video. Without this step, the model predominantly generates dynamic videos whenever nonrigid subjects are included in the text prompt, regardless how the text prompt is engineered.
**Details of training data.** The full training set combines a dozen of datasets, each corresponding to different type/style of videos, and is associated with a context embedding. For the static multi-view dataset, it is mainly created from MVImageNet, and sampled based on a predefined ratio during training. The ratio is set according to the number of video hours in each dataset, which results in roughly 1% for the static multi-view dataset.
**Efficiency contribution claim.** It is partially correct that one source of efficiency comes from the video model we used. However, the main source of efficiency, as stated in Ln. 58, is that we "transform the generation problem into a reconstruction problem and reduce reliance on time-consuming score distillation sampling steps." Specifically, as described in Ln. 263, during each stage of the reconstruction procedure, we perform SDS only during the last 5,000 iterations out of the total 20,000 iterations. This choice significantly reduces runtime, as each SDS iteration takes over 20 times longer than simply minimizing reconstruction losses.
**Is choosing amount of pose variation automatic?** This part is automated in our experiments. First, we perform auto-regressive view extension until the video reaches a maximum of 72 frames, which is often more than enough for 180+ degree coverage. Next we run Colmap with the generated video. If Colmap fails to converge, indicating the current video is significantly geometrically inconsistent, the video is automatically shortened until Colmap is able to reach a solution. We will include this in the supplementary.
**Does Colmap work for all the dynamic videos generated?** We recognize the challenge of pose estimation for dynamic videos. Therefore, we purposely generate dynamic videos without camera motion. This is achieved by using text prompts “camera is static”. We will include this detail into the supplementary.
**Failure case.** As stated in Ln. 303, our method may fail if there are “rapid movements, the sudden appearance and disappearance of objects, and abrupt lighting changes”. We will include failure samples in the new draft.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I would still argue for acceptance. I would urge to add more rigorous experiments to study the role of synthetic data in the final version if accepted. For efficiency, papers such as Wonder3D (and more recently Cat3D) demonstrate 3D results without score distillation. It would be good to tone down the arguments there and mention both the role of the snap video model and the existence of other non-SDS papers.
---
Reply to Comment 1.1.1:
Comment: Thanks for the great suggestions! We will add more rigorous analysis with synthetic data, and include thorough discussion of related concurrent works. | null | null | null | null | null | null |
Mixtures of Experts for Audio-Visual Learning | Accept (poster) | Summary: They propose a novel and simple method based on Mixture of Experts (MoE) for audio-visual learning.
Strengths: - Simple yet effective method.
- State-of-the-art results.
- Overall, is well written.
Weaknesses: - Does not include demos with audio to listen in a website.
- Does not include code.
Minor notes on the writing:
- Mention AVE, AVVP, AVS, AVQA in the abstract without definition.
- Section 2.1 first paragraph references to "early researchers" without citations.
- line 82: typo missing closing ).
- Section 3 there is no space before parenthesis.
- line 232: space missing after the parenthesis.
- I did not understand how the unimodal adapter works. I have the feeling that cross-modal adapter is well explained, but the unimodal adapter it is not. Maybe figures for how they work would be useful.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can you include demos and code?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The evaluation framework is limited to the evaluation tasks the authors have chosen.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer usTX,
Thank you for the positive feedback and valuable suggestions! Please see the following for our point-by-point reply.
---
**Weakness - Demos**
Thanks for your advice! We totally agree with the importance of providing audio-visual demos for a more comprehensive understanding of our work, and we will build a dedicated project page to provide source codes and demos.
---
**Weakness - Code**
Thanks for your suggestion! We have provided our code in the supplementary materials. We will release our source codes and provide more implementation details on the project website.
---
**Weakness - Minor Typos**
Thanks for your careful review! We have corrected the typos and revised our manuscript. Besides, the architectures of cross-modal and unimodal adapters are shown in Appendix E, and we will consider moving this section to the main text for better clarity. | Summary: This paper introduces an interesting observation i.e. for the off-screen sound cases or some mixed audio cases, ``injecting cross-modal adapters only brings disturbing information``. To solve the problem. this paper proposes a Mixture of expert strategy that can let the model focus on different aspects of the input data, ignoring the interference. However, I found that the technique's novelty is limited, it seems just have little modification based on [1], just the input audio-visual tokens are first compressed via a cross-attention operation. Moreover, The improvements this approach brings to downstream tasks are also limited, for the AVEL task, compared to DG-SCT, there is just a 0.4% improvement on ACC. For AVS, there is just a 0.2% improvement on $M_j$ for the S4 setting, and so on.
[1] Vision Transformers are Parameter-Efficient Audio-Visual Learners
Strengths: This paper presents an interesting problem that the off-screen sound or the mixture audio signals may hinder the effectiveness of the audio-visual interactions based on cross-attention.
Weaknesses: 1. The technique novelty is limited, there is no obvious difference between this method design and [1]. Moreover, the architecture are same as [2]. Figure 7 is totally the same as Figure 2 in [1].
[1] Vision Transformers are Parameter-Efficient Audio-Visual Learners
[2] Cross-modal Prompts: Adapting Large Pre-trained Models for Audio-Visual Downstream Tasks
2. This paper gives the off-screen and mixed sound cases to illustrate their motivation, but in the experiment part, they did not present the final results of these cases.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see the weakness.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 1
Limitations: 1. Limited technique innovations.
2. Lacking experiments to demonstrate the effectiveness of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer k9Vn,
Thank you for taking the time to read our paper. We hope our responses adequately addresses your concerns.
---
**Summary - The improvements this approach brings to downstream tasks are limited.**
Thanks for your comments! However, we argue that focusing only on some of the experimental results might be partial.
Firstly, it is important to note that our method achieves better results with fewer parameters compared to DG-SCT. While DG-SCT incorporates a more complex design for the adapter structure, we adopt a standard bottleneck structure of adapter, similar to that in LAVisH.
Secondly, regarding more challenging scenarios like the MS3 setting of AVS task (multi-sound sources), our AVMoE using fewer parameters improves by 1.0% on $\mathcal{M_J}$ and 4.5% on $\mathcal{M_F}$ compared with DG-SCT. Besides, for more complex tasks like AVQA and AVVP, our AVMoE improves by 0.9% ~ 2.0% compared with DG-SCT. It can be viewed that AVMoE achieves significant performance across multiple audio-visual tasks, which has also been acknowledged by other reviewers.
---
**Weakness - Limited technique Novelty**
Firstly, LAVisH and DG-SCT only inject cross-modal adapters into frozen audio-visual pre-trained models. LAVisH and DG-SCT only focus on cross-modal adapters and ignore the intra-modal information, and only introducing cross-modal adapters may bring irrelevant information in some challenging cases. To dynamically introduce inter-modal and intra-modal information, we propose an approach of Mixture of Experts (MoE) that combines cross-modal and unimodal adapters according to different scenarios. **The MoE strategy for audio-visual learning is our main contribution, rather than input audio-visual tokens of adapters.** Regarding your concerns about the architecture of the entire model, the overall framework of multimodal parameter-efficient transfer learning methods all consists of two frozen backbones and trainable modules, and our proposed AVMoE module is illustrated on the right side of Figure 2.
Secondly, as for the adapter architecture in Figure 7, this is the typical bottleneck architecture of adapters, which is always used in adapter-related works (like the transformer architecture in many works). This is not our main contribution, but in order to facilitate readers who are not in this field to have a clearer understanding of adapters, we present the details in Appendix E. Besides, we also mentioned in the main text (line 143) that the adapter structure is based on LAVisH.
---
**Weakness - Effectiveness of the Proposed Method.**
Thanks for your suggestion! Yes, Figure 1 illustrates our motivation. However, this is the example in the AVVP task, which has more labels of audio and visual events. Due to space limitations, we present the qualitative and quantitative experiments of the AVVP task in Appendix D.
Given the limited time available for finalizing the appendix, Figure 6 may not fully illustrate the complexity of audio-visual scenarios. We apologize for this and show a more challenging example in Figure 2 of the response PDF.
In this example, the video label only includes *car*, and the audio label includes *car* and *speech*, which are not exactly the same. We observe that the visual predictions of DG-SCT are affected by the audio features, resulting in a visual prediction of *speech*, which does not exist in the video frame. On the other hand, our AVMoE retains unimodal information and has MoE modules to adjust the weight distribution for audio-visual features. This avoids the interference of inconsistent information and obtains the correct result. As for the audio predictions, our model can better localize the temporal boundaries of events compared to DG-SCT, which reveals that our model is capable of dealing with complex scenarios. | Summary: The manuscript proposes the Audio-Visual Mixture of Experts (AVMoE) approach for multiple audio-visual tasks, aiming to explore parameter-efficient transfer learning for audio-visual learning. Specifically, AVMoE introduces both unimodal and cross-modal adapters as multiple experts, specializing in intra-modal and inter-modal information respectively. A lightweight router dynamically assigns the weights of each expert based on the specific requirements of each task. The effectiveness of the proposed method is demonstrated across multiple task datasets.
Strengths: 1. The writing is easy to read.
2. The proposed AVMoE shows performance improvements across multiple audio-visual tasks.
Weaknesses: 1. The AVMoE is somewhat similar to LAVISH, and it would be beneficial to discuss the similarities and differences between the two works.
2. Using all audio and visual tokens as latent tokens can be very computationally expensive.
3. In Table 3, note that the Visual Encoder for both AVSD and Pano-AVQA methods uses the ImageNet pre-trained ResNet-18 model, not VGG-19 and Faster RCNN.
4. Although the experimental results demonstrate the method's effectiveness, does this effectiveness truly capture audio-visual scenarios well? The submitted manuscript should also include temporal and spatial visualizations for the AVQA task.
5. It is recommended to include a more comprehensive comparison of methods in the experiments, especially for the AVQA and AVS tasks, as many recent works have focused on them.
6. The effectiveness of the Router in the method needs to be validated.
7. The rationale for setting the batch-size to 1 requires further discussion.
8. Some writing conventions need attention, eg. the LAVish reference on line 250.
9. Providing the code would be beneficial.
Technical Quality: 3
Clarity: 2
Questions for Authors: If the author can clarify the above questions or suggestions, I will consider raising the score accordingly.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer YVYv,
Thank you so much for these very valuable and constructive suggestions! Please kindly find the point-to-point responses below.
---
**Differences between AVMoE and LAVisH**
Thanks for your feedback! The similarity between AVMoE and LAVisH is the bottleneck architecture of adapters. However, LAVisH only introduces cross-modal adapters, whereas we aim to design a more flexible approach to combine the capabilities of different adapters. Therefore, we introduce cross-modal and unimodal adapters synergistically and design a router to dynamically activate adapters according to different scenarios, thereby alleviating potential conflicting information in multimodal data. We will include these analyses in the main text to better clarify the differences between AVMoE and LAVisH.
---
**Latent Tokens**
There might be some misunderstanding. As we mentioned in Line 146, latent tokens are small, randomly initialized vectors for compressing audio and visual tokens to reduce the computational complexity. Audio and visual tokens are not latent tokens. We will refine this part to make it clearer.
---
**Visual Encoder of AVQA Methods**
Thanks for your comment! We have double-checked the original paper of AVSD [A] and Pano-AVQA [B]. AVSD utilizes VGG-19 as visual encoder and compares it with I3D visual features. Pano-AVQA indeed uses Faster R-CNN as visual encoder.
AVSD: In the Sec. 2 of [A], "we assess a naive feature extractor based on **VGG**, and demonstrate that for video- reasoning, careful reduction of the spatial dimension is more crucial than the type of extracted features used to embed the video frames."
Pano-AVQA: In the Sec. 4.1 of [B], "we use **faster R-CNN** trained with ImageNet Detection to extract and represent region proposals".
*[A] Schwartz I, Schwing A G, Hazan T. A simple baseline for audio-visual scene-aware dialog, CVPR 2019.*
*[B] Yun H, Yu Y, Yang W, et al. Pano-avqa: Grounded audio-visual question answering on 360deg videos, ICCV 2021.*
---
**Temporal-Spatial Visualizations for AVQA**
Thanks for your suggestion! We have already conducted experiments and provided visualizations for the AVS task, which demonstrate that our method effectively captures the correlation between audio and visual elements. Therefore, we did not include temporal-spatial visualizations for the AVQA task in the submitted manuscript. However, we appreciate your suggestion and present the visualizations of the AVQA task in Figure 1 of the response PDF.
It can be seen that our AVMoE localizes the instruments more concentratedly and more precisely. In the first case that contains the question of left and right instruments, our AVMoE assigns greater weight to the instruments on the left and right accordingly, whereas DG-SCT appears to distribute attention across all three instruments. In the second case, our AVMoE can even completely localize the entire cello and accordion, whereas DG-SCT mistakenly localizes to the stand rather than the instruments in the last frame.
---
**More Comparisons for the AVQA and AVS tasks**
Due to space limitations, our work focuses on the methods of audio-visual parameter-efficient transfer learning. Following LAVisH and DG-SCT, we choose the same methods of AVS and AVQA for comparison. However, we also agree that adding more comparisons would help understand the progress of research tasks, and we present the comparison results in the Table 2-4 of the response PDF. We observe that there still exists the gaps compared to task-specific methods, but our AVMoE can reduce these gaps.
---
**The Effectiveness of the Router**
Thanks for your advice! The activation probability of experts in Figure 4 of the original manuscript demonstrates that the router can dynamically activate adapters according to different scenarios. Besides, we also conduct additional experiments to illustrate the effectiveness of the router quantitatively, and the experimental results are shown in Table 1 of the response PDF. The comparison between Setting #1 and #2 demonstrates the importance of routers in our model.
---
**Batchsize**
Due to the limitations of computational resources, the batch size on a single GPU is set to 1 for AVE and AVVP tasks. However, to mitigate the limitations of using a small batch size, we employed a gradient accumulation strategy, which is a common technique for training large models with limited GPU memory. This approach involves accumulating gradients from multiple mini-batches before performing a weight update. By doing so, we can effectively simulate a larger batch size (the same as other models).
---
**Writing**
Thanks for pointing this out! We have carefully checked the manuscript and corrected the typos.
---
**Code**
Thanks for your advice! Actually, we have provided the code in supplementary materials. We will release our source codes and provide more implementation details on GitHub.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer YVYv,
Thank you for your valuable reviews! We are not sure whether our previous response has adequately addressed your concerns. If you have any further questions or comments, please kindly let us know, and we will be glad to respond. | Summary: This paper proposed an integrated adaptor architecture that makes use of cross-modal attention mechanism and Mixture-of-Experts. The architecture connects (separately) trained audio and visual encoder to do audio-visual tasks, including audio-visual event localization, audio-visual segmentation, audio-visual question answering, and audio-visual video parsing. Experiments show that the proposed architecture outperforms other adaptors while having less tunable parameters.
Strengths: The method is straight forward and the paper is well-written. The introduced architecture is novel and intuitive. The improvements over other methods in terms of parameter count and efficiency is clear.
Weaknesses: The comparison with other approaches stays at parameter count and task performance, but inference cost, training time, and flops are not reported.
How does the approach compare with SotA on these datasets (rather than just adaptor models)?
It would be helpful if the author could provide some explanation on why the proposed approach is better than DG-SCT and LAVisH.
The main text and appendix are disconnected, it would be helpful to put some pointers in the main text which refers to appendix.
Technical Quality: 3
Clarity: 3
Questions for Authors: I wonder why is audio visual event classification not compared, it's also not compared in LAVisH and DG-SCT, so I wouldn't blame the author for that, but I'm curious why such an common an popular audio visual task is not considered in audio visual adaptor papers.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The approach is only for audio visual tasks (but not other joint modal modeling scenarios) and the amount of tasks considered is limited.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer jXV1,
Thanks for your careful review and kindly comments on this paper. Please see our point-by-point responses below.
---
**Weakness - The inference cost, training time, and flops are not reported.**
Thanks for your suggestion! Following LAVisH and DG-SCT, we only report the trainable and total parameters, which can also demonstrate the efficiency of our approach. In order to provide a more comprehensive comparison of the efficiency, we will add these metrics in the Appendix.
| **Method** | **Inference cost** | **Train Time** | **GFLOPS** | **Total Params↓** | **Acc** |
|:----------:|:------------------:|:--------------:|:----------:|:-----------------:|:--------:|
|LAVisH|0.13s|7h|406.7|238.8M|81.1|
|DG-SCT|0.13s|15h|460.8|460.8M|82.2|
|**AVMoE**|0.13s|12h|422.6|404.0M|**82.6**|
**Inference Cost: The average time it takes for each batch to be processed by the model on a NVIDIA GeForce RTX 3090.*
**Since these two methods have not released this information, their inference costs and training times are calculated by our runnings of their open source codes.*
---
**Weakness - Comparison with SOTA on these datasets (rather than just adaptor models).**
Thanks for your question. We have reviewed recent related works and reported their results in Table 2-4 of the response pdf. It should be noted that these excellent related works [A, B, E, F, G, H, I, J] are specially designed according to the characteristics of these tasks and datasets. When comparing audio-visual parameter-efficient transfer learning methods ("General Methods" in the tables) with them, we observe that there still exist gaps, but our AVMoE can help to reduce the remaining gaps.
*[A] Learning Event-Specific Localization Preferences for Audio-Visual Event Localization, ACM MM 2023.*
*[B] CACE-Net: Co-guidance Attention and Contrastive Enhancement for Effective Audio-Visual Event Localization, ACM MM 2024.*
*[C] Vision Transformers are Parameter-Efficient Audio-Visual Learners, CVPR 2023.*
*[D] Cross-modal prompts: Adapting large pre-trained models for audio-visual downstream tasks, NeurIPS 2024.*
*[E] Stepping stones: A progressive training strategy for audio-visual semantic segmentation, ECCV 2024.*
*[F] Can Textual Semantics Mitigate Sounding Object Segmentation Preference?, ECCV 2024.*
*[G] Unveiling and Mitigating Bias in Audio Visual Segmentation, ACM MM 2024.*
*[H] CAD-contextual multi-modal alignment for dynamic AVQA, WACV 2024.*
*[I] Semantic Enrichment for Video Question Answering with Gated Graph Neural Networks, ICASSP 2024.*
*[J] Parameter-Efficient Transfer Learning for Audio-Visual-Language Tasks, ACM MM 2023.*
---
**Weakness - More explanation of why AVMoE is better than DG-SCT and LAVisH.**
Thanks for your advice! We will analyze more about the advantages of our method compared with LAVisH and DG-SCT in the manuscript. LAVisH and DG-SCT only inject cross-modal adapters into frozen audio-visual pre-trained models. LAVisH adopts the typical bottleneck architecture of adapters, whereas DG-SCT designs a more complex architecture of dual-guided spatial-channel-temporal attention mechanism in adapters. However, they only focus on cross-modal adapters and ingore the intra-modal information, and only introducing cross-modal adapters may bring irrelevant information in some challenging cases. Hence, to dynamically introduce inter-modal and intra-modal information, we propose an approach of Mixture of Experts (MoE) that combines adapters according to different scenarios.
---
**Weakness - The main text and appendix are disconnected.**
Thanks for your valuable comments! We will include more pointers in the main text.
---
**Questions - No mention of the audio-visual event classification task.**
Thanks for your question! We are not sure if you are asking about the audio-visual classification task on datasets such as VGGsound [K]. Although the VGGsound dataset contains a large number of videos and rich types of annotations, it only has coarse video-level annotations. As far as we know, the VGGsound dataset is more commonly used for pre-training. The AVE and AVVP tasks, which involve classifying audio-visual events second-by-second and localizing their boundaries, are more challenging and provide a more rigorous evaluation of the model's performance.
*[K] Vggsound: A large-scale audio-visual dataset, ICASSP 2020.*
---
Rebuttal Comment 1.1:
Title: Thanks for the reply
Comment: Thanks for taking the time to reply to my review! I'll keep my rating.
---
Reply to Comment 1.1.1:
Comment: Thanks for your reply! We are glad that you keep the original positive rating. | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for your valuable and constructive comments. Our motivation is to propose a general, efficient, and flexible method for audio-visual learning, and your suggestions will definitely help us strengthen our work. Please kindly find the point-to-point responses below.
Pdf: /pdf/df1253be2880dfa40e54baa70e60f15f4b90a7a9.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper proposes AVMoE, a new way of adding adapters to pretrained audio-visual models.
AVMoE extends the LAVisH approach in two ways:
(1) Using unimodal adapters in addition to cross-modal LAVisH adapters.
(2) Adding a router network to dynamically combine the output of multiple unimodal/cross-modal adapters.
AV-MoE outperforms two existing methods (LAVisH and DG-SCT) on three audio-visual tasks: event localization, segmentation, and question answering.
Strengths: Results show solid improvements:
- Extensive experiments demonstrate that AVMoE outperforms the existing DG-SCT approach on all three tasks, while using less trainable parameters.
- Modality ablation studies clearly show that AVMoE also handles missing audio modality much better than DG-SCT, which is a much desired property for audio-visual models.
Weaknesses: ### Experimental Design Ablation
- It is less clear how much each component of the proposed method contributes to the improvement over LAVisH.
What if only unimodal adapters were added, without the router? What if we only add the MoE router to 2 or more cross-modal LAVisH adapters?
Additionally, experiments with adjusted LAVisH hyperparameters to increase trainable parameters to better compare with AVMoE could also be insightful.
### Unclear how hyperparameters are chosen
- The appendix uses completely different num. of latent tokens and learning rate compared to LAVisH, without justifying (1) how they were chosen and (2) how much they affect the results.
### Presentation
- Some important details of the proposed method are either left in the appendix (adapter architecture) or omitted entirely (what is the variance of Gaussian noise used? How much does it affect the results?).
I would suggest moving non-essential results (Results of all systems using incomparable encoders, Qualitative examples of Figure 3, etc.) to the appendix and moving the description of important details such as adapter architecture to the main text.
- Minor comments:
1. **The references section contains multiple mistakes.** Some papers are cited more than once, some lack venue titles, and some NeurIPS 2023 papers are cited as NeurIPS 2024.
2. I would consider prior works in multimodal parameter-efficient transfer learning to be much more relevant than MoE. It would be nice to see some discussion of parameter-efficient methods in vision-and-language (A), audio-visual (B, or even concurrent works such as C) beyond what was used in the paper.
3. Regarding MoE related work, Akbari et al. also work on audio-visual inputs.
4. Qualitative examples of the M3 setting in Figure 3 are a bit hard to see with tiny white masks.
[A] Yi-Lin Sung, Jaemin Cho, and Mohit Bansal. Vl-adapter: Parameter-efficient transfer learning for vision-and-language tasks. In CVPR, 2022
[B] Liu, Hongye, et al. "Parameter-Efficient Transfer Learning for Audio-Visual-Language Tasks." Proceedings of the 31st ACM International Conference on Multimedia. 2023.
[C] Kai Wang, Yapeng Tian, Dimitrios Hatzinakos. Towards Efficient Audio-Visual Learners via Empowering Pre-trained Vision Transformers with Cross-Modal Adaptation. In CVPR, 2024
Technical Quality: 2
Clarity: 2
Questions for Authors: - When comparing results with shared Swin-V2-L encoder versus using seperate Swin-V2-L & HTS-AT encoders, why does LAVisH performance degrade whereas AVMoE improves?
- What is the meaning behind the asterisk in LAVisH row of Table 3: Audio-Visual Question Answering?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: As mentioned in the Weaknesses section:
1. Importance of each component of proposed method is not well studied,
2. The choice/effect of different hyperparameters compared to LAVisH is not justified,
3. Method details such as model architecture are left in the appendix or omitted entirely.
However, AVMoE seems to shows strong improvements over existing approaches. If the proposed method consistently excels across hyperparameters, and the method section is fleshed out, I would be inclined to accept the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer pdXh,
Thank you so much for these very insightful and constructive comments, please see the following for our point-by-point responses.
---
**Weakness - Experimental Design Ablation**
Following your great advice, we conducted more experiments to analyze the contribution of each component, and the results are shown in Table 1 of the response PDF.
- **Effect of Routers**: The comparison between Setting #1 and #2 reveals a consistent performance drop across all tasks when routers are not included in the model. This result highlights the importance of routers in our approach.
- **Effect of only adding more Cross-modal Adapters without Unimodal Adapters**: The results between Setting #3 and #4 illustrate that adding more CMAs without UAs does not significantly improve the model's performance. However, when we compare Setting #4 to Setting #1, there is a noticeable decrease in performance. This result highlights the importance of incorporating UAs into the model, which demonstrates that a balance between unimodal and cross-modal adapters is crucial in our model to effectively capture both intra-modal and inter-modal information.
- **Adjustment of LAVisH Hyperparameters**: We have attempted to adjust LAVisH hyperparameters, such as downsample size and latent token number, but could not match AVMoE's parameter size. Moreover, the comparison between DG-SCT and AVMoE in the manuscript shows that our AVMoE can achieve significant performance with fewer parameters. These comparison results illustrate our approach's superior parameter efficiency and effectiveness.
**Weakness- Hyperparameters**
- The number of latent tokens: Our hyperparameters of latent tokens and learning rates in these audio-visual downstream tasks are based on DG-SCT. From the experimental results in the table below, it can be viewed that the number of latent tokens has little effect on the total parameters and performance of models. Hence, for comparison with DG-SCT, we follow their hyperparameters and set the number of latent tokens for AVE and AVVP tasks to 32. Moreover, even if we use the same number of latent tokens as LAVisH, the performance of our model is still better than LAVisH.
**AVE:**
| **Laten Tokens** | **Total Params** | **Acc** | **LAVisH Acc** |
|---|---|---|---|
| 2 | 403.7M | 82.2% | 81.1% |
| 8 | 403.8M | 82.3% | - |
| 32 | 404.0M | **82.6%** | - |
**AVS:**
| **Laten Tokens** | **Total Params** | **S4** | | **MS3** | |
|---|---|---|---|---|---|
| | | $\mathcal{M_J}$ | $\mathcal{M_F}$ | $\mathcal{M_J}$ | $\mathcal{M_F}$ |
| 2 | 501.2M | **81.1** | 89.7 | **54.5** | **68.7** |
| 8 | 501.6M | 80.8 | 89.7 | 54.3 | 68.5 |
| 32 | 503.0M | 80.9 | **89.9** | 54.2 | **68.7** |
**AVQA:**
| **Laten Tokens** | **Total Params** | **Avg Acc** |
|---|---|---|
| 2 | 456.6M | **75.7** |
| 8 | 456.7M | 75.6 |
| 32 | 456.9M | **75.7** |
**Weakness - Presentation**
* Some details of architecture: Thanks for your constructive comments! We will adjust the manuscript structure based on your suggestion.
* The detail and effect of Gaussian Noise: Adding noise (e.g., Gaussian noise) to MoE in the training stage is very important, which can prevent the model from over-relying on certain experts. The Gaussian noise is implemented with a standard normal distribution (mean 0 and variance 1). We conducted an additional ablation study to evaluate the effect of Gaussian noise. The results indicate that incorporating Gaussian noise not only improved the model's performance on various datasets but also contributed to the training stability of the MoE model with less standard deviations.
| Num. | Settings | AVE (Acc. ± Std Dev) | AVS-S4 ($\mathcal{M_J}$ ± Std Dev) | AVQA (Acc. ± Std Dev) |
|---|---|---|---|---|
| #1 | AVMoE (full model) | 82.6 ± 0.2 | 89.7 ± 0.3 | 75.7 ± 0.2 |
| #2 | AVMoE w/o Gaussian Noise | 81.5 ± 1.3 | 88.4 ± 1.1 | 74.4 ± 0.9 |
* References: We copied the bibtex directly from Google Scholar without making any changes, maybe there was something wrong. We have carefully checked these mistakes and corrected them.
* Related work: We totally agree that parameter-efficient transfer learning is more relevant to our work. However, Mixture of Experts (MoE) is our main contribution, we present a separate subsection (Sec. 2.2) for MoE and discuss audio-visual parameter-efficient methods in the subsection of audio-visual learning (Sec. 2.1). But thanks for your suggestion, we will add a subsection for discussing multimodal parameter-efficient methods.
* Clarity of Qualitative examples: Thanks for pointing this out! Actually, we have tried two methods to display the qualitative results of AVS: adding white masks to original images and directly showing the binary masks.
We thought the former one could better demonstrate the masks of semantic objects, but the white masks are indeed not clear enough. We will consider the second plan and add bounding boxes on important areas for better comparison.
---
**Question 1 - Why does LAVisH performance degrade whereas AVMoE improves when using separate Swin-V2-L and HTS-AT encoders?**
The AVE performance of LAVisH with separate Swin-V2-L and HTS-AT encoders in our Table 1, is referenced from Table 1 of the DG-SCT paper. The authors of DG-SCT mentioned that the reason why the LAVisH dropped maybe it only utilizes latent tokens for cross-attention. The coarse extracted cross-modal information may not adequately counteract the negative effects of domain gaps introduced by the use of different encoders. We also argue that audio representations may bring disturbing information sometimes, and our AVMoE is designed to alleviate this problem.
**Question 2 - What is the meaning behind the asterisk in LAVisH row of Table 3?**
LAVisH* denotes our implementation version of LAVisH. We will add this annotation to the caption of Table 3.
---
Rebuttal 2:
Comment: Thank you for adding comprehensive ablations studies.
### **Experimental Design Ablation**
Extensive results show that each component of the proposed method is beneficial. I encourage the authors to add these results to the Table 4 in the main paper.
### **Hyperparameter selection**
The ablation experiment on num. of latent tokens is greatly appreciated. Results show that AVMoE is robust to the num. of latent tokens.
However, the learning rate and batch size seems to be different from both LAVisH and DG-SCT? To quote each paper on hyperparams for the AVE task:
> LAVisH: We set the learning rate of LAVISH adapter to 5e−6, and 4e−6 for the final prediction layer for audio-visual event localization, ...
| Task | Batch Size |
--- | ---
| AVE | 2 |
> DG-SCT: For the AVE ... We train the model with a batch size of 8 and a learning rate of $5 × 10^{−4}$ ...
> AVMoE: ... weight updates after every 16 batch of training ... We set the learning rate of the AVMoE adapter to 5e-4, while the learning rate of the final prediction layer is determined by the task, 5e-6 for AVE ...
### **Presentation**
Thank you for showing the importance of Gaussian noise for the router. Regarding the references, [2], [10], [24] & [25], [38], [42] and [43], [60] and [61] are the errors that stand out the most.
### **Questions**
If Swin-V2-L + HTS-AT is reimplemented, the relevant rows in Table 1 and Table 2 probably should be marked as well.
Overall, while hyperparameter selection details are a bit vague, the extensive experiments in the rebuttal has addressed most of my major concerns. I would like to change my rating from 4 to 7.
---
Rebuttal Comment 2.1:
Comment: Dear reviewer pdXh,
We sincerely thank you for your thorough reviews and valuable suggestions!
---
**Hyperparameter Selection**
We have double-checked the source code, and confirmed that our learning rates are the same as those used in DG-SCT.
* For the AVE task, the learning rate of DG-SCT adapter is also 5e−4, and 5e−6 for the final prediction layer (not mentioned in DG-SCT paper).
* For the AVS task, the learning rate is 3e−4 for the S4 setting and 1.5e−4 for the MS3 setting, consistent with DG-SCT. Sorry for missing the learning rate of S4 setting in the original manuscript.
* For the AVVP task, the learning rate is 3e−4, we apologize for this typo and will revise the learning rate of AVVP task and AVS S4 setting in the next version.
Regarding the batch size, we can only use a smaller batch size due to computational resource limitations. However, we employ a gradient accumulation strategy to mitigate the limitations, which is a common technique for training large models with limited GPU memory.
**Presentation**
We would like to extend our sincere appreciation for your effort in carefully reviewing our work! We have carefully checked these references & tables and will update them in the next reversion. | null | null | null | null | null | null |
Achieving Tractable Minimax Optimal Regret in Average Reward MDPs | Accept (poster) | Summary: The paper proposes the first tractable algorithm that achieves minimax optimal regret for average reward tabular MDPs. The algorithm does not require prior information on the span of the optimal bias function.
Strengths: The paper proposes the first tractable algorithm that achieves minimax optimal regret for average reward tabular MDPs.
Weaknesses: See questions.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. The minimax lower bound in [4] is sqrt{DSAT}. Because H=span(h^*) <= D, I wonder why sqrt{HSAT} is even achievable by the algorithm developed in the paper.
2. The authors cited [14, 25] for the lower bound sqrt{HSAT}, but I can not find this result/theorem in these two papers with a rigorous proof. In fact, [14] mentioned it is an open problem whether the “actual” lower bound depends on diameter D or the bias span H. Please provide a proper reference if the actual lower bound depends on H.
3. As shown in Figure 2, PMEVI behaves almost the same with its EVI counterparts when no prior information on bias is given. This does not illustrate the advantage of PMEVI compared with UCRL2 or KLUCRL, even though the regret of the algorithm in this paper is claimed to be better.
4. In Figure 2 (to the right), PMEVI is run with bias information c, but it is compared with UCLR2 which does not use c. This is not fair comparison. If PMEVI is given bias information, then it should be compared with other algorithms such as SCAL or UCB-AVG which require bias information for implementation.
5. Computing max_u beta_t(s,a,u) may not be tractable in general. Algorithm 5 provides a way to bound/approximate this quantity. I wonder what is the intuition? I also can not find the proof that it is indeed a bound that does not impact the regret efficiency.
6. The regret proof in Section 4 does not highlight how the key components projection and mitigation in the algorithm impact the regret. In particular, it is not clear from the proof sketch where these components are used, and how the beta-mitigated extended bellman operator is applied to achieve the minimax optimal regret.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: I do not find the discussions of limitations in the main body of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We address your concerns below.
> 1. The minimax lower bound in [4] is $\sqrt{DSAT}$. Because $H=\mathrm{span}(h^*) \le D$, I wonder why $\sqrt{HSAT}$ is even achievable by the algorithm developed in the paper.
The lower bound of [4] is indeed $\sqrt{DSAT}$, but like pointed out by [14], the "hard instance" provided by [4] is one such that $\mathrm{span}(h^*) = D$. To quote [14], "*The proof of the lower bound relies on the construction of an MDP whose diameter actually coincides with the bias span (up to a multiplicative numerical constant), thus leaving the open question whether the “actual” lower bound depends on $D$ or the bias span.*"
What [4] actually shows is that, whatever the algorithm, there exists a MDP with $c = \mathrm{span}(h^*) \asymp D$ such that the regret is $\sqrt{c S A T}$. It means that the lowerbound of [4] can be read $\sqrt{\mathrm{span}(h^*) S A T}$ just as $\sqrt{DSAT}$.
> 2. The authors cited [14, 25] for the lower bound $\sqrt{HSAT}$, but I can not find this result/theorem in these two papers with a rigorous proof. In fact, [14] mentioned it is an open problem whether the “actual” lower bound depends on diameter $D$ or the bias span $H$. Please provide a proper reference if the actual lower bound depends on $H$.
Indeed, [14, 25] do not provide any explicit lower bound. Like said in 1., [14] precises that the lower bound of [4] can be read as $\sqrt{\mathrm{span}(h^*) S A T}$, and [25] shows that it can be achieved by an intractable method with prior knowledge on the bias span, implicitely showing the the lower bound is tight. It is even tighter than the diameter lower bound, since $\mathrm{span}(h^*) \le D$ and $\mathrm{span}(h^*) \ll D$ in general.
> 3. As shown in Figure 2, PMEVI behaves almost the same with its EVI counterparts when no prior information on bias is given. This does not illustrate the advantage of PMEVI compared with UCRL2 or KLUCRL, even though the regret of the algorithm in this paper is claimed to be better.
The left part of Figure 2 was very disappointing at first. The poor advantage of `PMEVI` compared to `UCRL2` or `KLUCRL` can be explained as follows: The advantage of `PMEVI` depends on the quality of the bias estimator, that itself depends on the current quality of play so far. The thing is that the *early* quality of play is bad, because the algorithm still has to figure out which are the optimal actions. This early phase pollutes the bias estimator for quite some time, making it somehow irrelevant for a long time. This is what the right part of Figure 2 is there for: It shows that if the bias estimation is better (e.g., under prior information), `PMEVI` is very efficient at using it to reduce the burn-in time.
Now, the left part of Figure 2 shows that there are still room for improvement, for example by improving the inner bias estimation of the method. We think that this is a different work. This paper is mostly theoretical and gives a direction, showing how to use an external trajectorial bias estimation to improve the minimax regret guarantees. Optimizing this bias estimation subroutine is an interesting direction for future work.
> 4. In Figure 2 (to the right), PMEVI is run with bias information c, but it is compared with UCLR2 which does not use c. This is not fair comparison. If PMEVI is given bias information, then it should be compared with other algorithms such as SCAL or UCB-AVG which require bias information for implementation.
In the right part of Figure 2, we indeed compare `PMEVI` to `UCRL2` instead of `SCAL`. We do not claim that `PMEVI` is *better* than `UCRL2` there, but rather that it can efficiently make use of a good quality prior information. This is to nuance the left part of Figure 2, that seems to show that the projection-mitigation procedure has no effect on the regret. Also, the kind of prior information that we feed to `PMEVI` is not one that `SCAL` can use that can only take a bias span upper bound (like "$\mathrm{span}(h^*) \le c$") into account. Even with tight bias information, `SCAL` will behave like just like `UCRL2` on this instance because plain bias span information is not really helpful here (the bias span is proportional to the diameter on river swim). We can discuss this further in the section **Details on Experiments** in the Appendix.
> 5. Computing $\max_u \beta_t(s,a,u)$ may not be tractable in general. Algorithm 5 provides a way to bound/approximate this quantity. I wonder what is the intuition? I also can not find the proof that it is indeed a bound that does not impact the regret efficiency.
The correction of Algorithm 5 is established in section A.2.2 with Lemma 13. The estimation used by Algorithm 5 is based on Lemma 12 that provides a general bound of $\mathbf{V}(p, u)$ with respect to $\mathbf{V}(p, v)$ for two $u, v$ in the bias confidence region. It will be linked better in the main text to access the proof more easily.
Note that Algorithm 5 runs in polynomial time by providing , and that its output is only an upper-bound of $\max_u \beta_t(s,a;u)$. However, it is enough to achieve minimax optimal regret guarantees.
> 6. The regret proof in Section 4 does not highlight how the key components projection and mitigation in the algorithm impact the regret. In particular, it is not clear from the proof sketch where these components are used, and how the beta-mitigated extended bellman operator is applied to achieve the minimax optimal regret.
The impact of the projection-mitigation operation can be tracked by looking at where Lemma 13 is invoked. It is invoked pretty much everywhere but is mostly critical to bound the optimism overshoot (Lemma 9) and the second order error (Lemma 10). It is also used to bound the navigation error in probability (Lemma 7) but isn't necessary for this term if one is just looking for a bound in expectation.
---
Rebuttal Comment 1.1:
Comment: The complaints of reviewer 9hWK29 fall into three categories:
1) Role of D vs. H. This was a confusion on part of the reviewer, which stems from the fact that early literature focused on D, and it was only later that the role of H was discovered. I think the authors gave an excellent answer, clarifying all the confusion.
2) The experiments are not really demonstrating that the theory holds up. I have the same problem as reviewer 9hWK29 in this regard and while I understand the response, I still think that it would have been better either completely omitting the experiments, or going after checking whether the claimed theoretical improvement also holds up in the experiments in some limited, but well chosen setting (i.e., design MDPs with variable H and confirm that the algorithm adapts to H as claimed).
3) Complaints on the presentation/explanations in the paper. I share some of these concerns.
Overall, yet, I feel the rating of 4 is harsh and unjustified for a paper that is addressing a major open problem in the field and which, as far as we know, resolves this open problem. I, for one, would like to see papers at NeurIPS that do this, even if they are somewhat imperfect. In particular, I won't care even if there were absolutely no experiments. Of course, I also care about how well the paper is written and here I see room for improvement. However, I feel that in this regard the paper is passing the bar and I would hope the authors will improve the writing for the final paper. Eventually, what matters is whether the result is correct and whether it adds interesting new knowledge to the field and here I believe the answer is yes. | Summary: The authors propose a novel algorithm for weakly communicating Markov Decision Processes (MDPs) with an average-reward objective. This algorithm, at the same time, is a) tractable since it does not rely on the exact solution to a high-dimensional non-convex optimization problem as prior work, and b) achieves minimax optimal regret up to logarithmic factors.
Strengths: - First tractable algorithm that achieves minimax optimal regret bound;
- The algorithm does not require any prior knowledge of the span of optimal bias function;
- The algorithm does not require any reduction to a discounted setting and works directly in the average-reward setting.
Weaknesses: - The algorithm requires solving the linear programs for each doubling epoch, which is tractable but non-generalizable beyond the tabular setting;
- The large second-order term is dominated by the main asymptotic optimal term if $T > S^{20}$. This effect of a large second-order term is also observable during the experimental section.
Technical Quality: 4
Clarity: 3
Questions for Authors: - Discussion in Section 3.3 is a little bit misleading since it is formulated in terms of arbitrary vector u and may think that, afterward, the union bound over possible u should be applied to write down a bound of type $\max_{u} \beta_{t}(s,a,u)$ (and thus inducing additional $\sqrt{S}$ factor due to equivalent with purely $\ell_1$-confidence regions), whereas you need to do it only for $u = h^\star$. I would suggest rewriting this discussion a little bit to avoid the confusion I experienced during the first reading of the text.
- Appendix C6. It's not clear what (2k) means in the equation after line 796. I have the same question about (3k) and (4k) after line 816 and 825.
- Additionally, I would appreciate the additional discussion on the difference between regret minimization and best policy identification settings for average-reward MDPs, especially in a glance at existing concurrent work Tuynman et al. 2024.
Tuynman, A., Degenne, R., & Kaufmann, E. (2024). Finding good policies in average-reward Markov Decision Processes without prior knowledge. arXiv preprint arXiv:2405.17108.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The paper is of a theoretical nature and thus does not have any direct impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Concerning the weaknesses
> The algorithm requires solving the linear programs for each doubling epoch, which is tractable but non-generalizable beyond the tabular setting;
Yes, the algorithm needs to solve many linear programs at every epoch for the vanilla version of `PMEVI`. However, solving *too many* linear programs can easily be avoided, especially in the early steps. Indeed, in the early steps of the process, the bias confidence is very bad and pretty much uninformative. After all, in order to estimate the bias correctly, the regret of the algorithm has to be sublinear and this is simply not the case in the early stages of learning. In practice, it means the the projection operation of `PMEVI` does nothing in the beginning. This can be used to avoid a call to a linear program solver (by trying beforehand if a projection on the bias confidence region is necessary at all) hence having the nearly the same running time than `EVI`. In the later stages of learning, changes of episodes are more and more rare, so solving these linear programs is somehow amortized. This is how we obtain the complexity result of Theorem 1.
## Concerning your questions
> Discussion in Section 3.3 is a little bit misleading since it is formulated in terms of arbitrary vector $u$ and may think that, afterward, the union bound over possible $u$ should be applied to write down a bound of type $\max_u \beta(s,a,u)$ (and thus inducing additional factor $\sqrt{S}$ due to equivalent with purely $\ell_1$-confidence regions), whereas you need to do it only for $u = h^*$. I would suggest rewriting this discussion a little bit to avoid the confusion I experienced during the first reading of the text.
We suggest the following modification at line 214.
"(...) are fixed. [Many existing works [4,7,8,11,12,13,14] look for a bound on $(\hat{p}_t(s,a) - p(s,a))u$ that holds uniformly for all vector $u$, because the values of $u$ encountered along the iterations of `EVI` are not known in advance, and depend on the random data gathered by the algorithm. This is morally achieved by doing a union bound for all $u$, which is responsible for the extra $\sqrt{S}$ in the regret guarantees of these methods. In Appendix B, we show that we only need (11) to hold for $u = h^*$ that is a fixed constant, so] one is tempted to use (11) to mitigate (...)."
> Appendix C6. It's not clear what (2k) means in the equation after line 796. I have the same question about (3k) and (4k) after line 816 and 825.
$(2k)$ is the empirical bias error (introduced when starting the regret decomposition at line 256). We will recall the definition of this shorthand. Similarly, the definitions of $(3k)$ and $(4k)$ can be found at line 256. We will recall their definitions at the beginning of every proof section.
> Additionally, I would appreciate the additional discussion on the difference between regret minimization and best policy identification settings for average-reward MDPs, especially in a glance at existing concurrent work Tuynman et al. 2024. Tuynman, A., Degenne, R., & Kaufmann, E. (2024). Finding good policies in average-reward Markov Decision Processes without prior knowledge. arXiv preprint arXiv:2405.17108.
Indeed, it has been shown in the mentioned work that in the best policy identification setting (PAC learning), achieving performance that depend on the bias function rather than the diameter *requires* prior information on the bias function; While we achieve regret guarantees that depend on the bias function even though we have no prior information on it. This is because in PAC learning, the algorithms has (implicitely) to provide a certificate of optimality. It has to assess that the output policy is nearly optimal with high probability. This certificate of optimality is not necessary in regret minimization and in practice, `PMEVI` plays a policy that it thinks is optimal but it does not have a way to certify that it is optimal. This is very likely to mean that if the algorithm where to produce policy certificates in addition to minimizing the regret (see *Policy Certificates: Towards Accountable Reinforcement Learning*, Dann et. al., 2019), then the regret $\sqrt{\mathrm{span}(h^*) S A T}$ would not be achievable. This discussion can be added to the conclusion.
Note that there is sensible difference (in addition to the learning objective) between our work and the above mentioned work: **In our paper, the learner does not have access to a generative model**. With the generative model assumption, the learner can obtain a sample of any state-action pair in the MDP without constraints, making exploration much less difficult.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their response and I am happy to keep my score. | Summary: This paper studies learning in average reward MDPs, and presents the first minimax optimal algorithm (in terms of $sp(h^*)$) which is computationally tractable, and also simultaneously the first which does not require prior knowledge of $sp(h^*)$.
Strengths: The main theorem resolves a longstanding open problem which has been the subject of extensive research effort. Removal of knowledge of $sp(h^*)$ is a challenging problem in many settings beyond the one considered in this paper, and thus I hope that this work might lead to progress on this issue in other areas.
Weaknesses: The algorithm is rather complicated and has many components. The presentation also does not make it easy to determine the key ingredients of this method. While it is interesting that this method may incorporate different confidence regions, I wonder if it might be more clear to present things in less generality.
It is nice that experiments are included, but if anything they seem to contradict the claim that the method achieves minimax regret without prior knowledge of $sp(h^*)$ (due to the left plot in Figure 2). Since the bias estimation subroutine seems to be a key ingredient, it would be much better if an experiment was included where this subroutine was making a nontrivial contribution (by my understanding, in the left plot the bias estimation routine is not doing anything since it hasn't been given enough time, and in the right plot, it is not doing anything since much better prior information has been given to the algorithm).
Technical Quality: 3
Clarity: 2
Questions for Authors: Why is the main theorem stated with the parameter $c$? It seems like the best choice is always $c=sp(h^*)$? (And if $c \gg sp(h^*)$, then the theorem is not giving the minimax optimal regret.) Does the theorem hold for all $c$ simultaneously or is this parameter used somewhere that I didn't notice?
The main theorem mentions a confidence region system for communicating MDPs with a diameter-dependent computational complexity, but since the paper is for weakly communicating MDPs which generally have infinite diameter, I think a different system would be a better choice? Can the author(s) comment on the complexity when the diameter is infinite?
Many times notation like $h^*(\mathcal{M})$ is used, but is this well-defined? There can be multiple solutions to the Bellman optimality equation.
The paper https://arxiv.org/pdf/1905.12425 claims to achieve the optimal diameter-based $\sqrt{DSAT}$ regret, maybe it should be added to the related work.
In the display below line 785, the number of commutes between $S_t$ and $S_{t+1}$ is lower bounded by the number of transitions between these states. A similar bound is used under line 830. Does this suggest that the bias difference estimator could actually be formulated using only direct transitions between pairs of states?
Line 397: $h^*$ should be $g^*$?
Line 404: Some character I believe meant to be $S$ was used
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 4
Limitations: No major limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We have numbered your questions to gain a few characters and address all of them.
## Concerning the weaknesses
> The algorithm is rather complicated and has many components (...) While it is interesting that this method may incorporate different confidence regions, I wonder if it might be more clear to present things in less generality.
From a high level, there are two confidence regions. One is for the model parameters (rewards and kernels) and the other is for the bias function. This second confidence region can be considered as "external" and its purpose is to regularize the "vanilla" optimism machinery of `EVI` introduced by [4]. Actually, only the bias confidence region and the regularization mechanism (projection + mitigation) is necessary to achieve minimax optimal regret, although the model confidence region seems very important to reduce the burn-in time of the algorithm. We believe that the generality of the approach is a strong point, because it shows that many existing algorithms can be patched all at once with a single analysis.
> It is nice that experiments are included, but if anything they seem to contradict the claim that the method achieves minimax regret without prior knowledge of (due to the left plot in Figure 2). (...)
Yes, sadly, experiments are a bit disappointing. `PMEVI` provides solid formal grounds to use bias information in order to improve performance (this is displayed by the right part of Figure 2) and Figure 2 also shows that getting pertinent bias information is difficult (left part). This leaves opportunities for future work: Improving the bias estimation subroutine would directly improve the practical performance of `PMEVI`.
For instance, the performance of the algorithm is very bad in the early learning stages. This cannot be avoided because the algorithm has still to figure out which actions seem bad and which seem good. This early data pollutes the bias estimator, making the bias confidence region irrelevant for quite some time. To address it, one idea could be to trash data when it is considered too old. We do not do it in the paper because it would complicate the analysis even more. Also the current version of `PMEVI` was left "not tuned" on purpose, to ease the construction of algorithms based on `PMEVI` later on.
## Concerning your questions
1. We use $c > 0$ instead of $\mathrm{span}(h^*)$ because the results holds for all MDPs with bias span less than $c$. This is just out of formality because the bound doesn't hide any other term than $\mathrm{sp}(h^*), S, A, T$ and $\delta$ (like $p_\mathrm{min}$, or the diameter, or the mixing time, or else).
2. The main theorem mentions a confidence region system for communicating MDPs with a diameter-dependent computational complexity, but since the paper is for weakly communicating MDPs which generally have infinite diameter, I think a different system would be a better choice? Can the author(s) comment on the complexity when the diameter is infinite?
The confidence region for $M$ is not specific to the fact that $M$ is communicating (in fact, the confidence region of `UCRL2` [4] and its variants do not rely on the communicating property). However, the complexity result only holds for communicating MDPs as it is currently written indeed. We see two ways of addressing this.
+ Either we simply restrict the complexity result to communicating MDPs (no need for extra material required, just make the precision in Theorem 1);
+ Or we generalize the complexity result to weakly-communicating MDPs. This can be done like so. The complexity result is established with Proposition 17 and at line 559, we show that the `PMEVI` needs $O(\mathrm{span}(w_0)T/(S^{1/2} \log(T)))$ iterations to converge, where $w_0$ is the error between the initial value of `PMEVI` and the bias function $h^*(\mathcal{M}_t)$ of the MDP confidence region $\mathcal{M}_t$ seen as an extended MDP [4]. If $M$ is communicating, [4, 7] show that this bias span is bounded by $D$, the diameter of $M$. If $M$ is weakly-communicating, the argument of [4, 7] can be generalized to show that it is bounded by "diameter of the communicating part of $M$" + "worst hitting time to communicating part of $M$", that we refer to as the **weak diameter** (we say "weak" because it is finite if and only if the MDP is weakly communicating).
The first suggestion is better connected to the current literature because the diameter is a well-known object. The second is more complete, but requires a proof that is not currently written in the paper, and this notion of weak diameter is not standard.
3. The notation $h^*(M)$ refers to the *optimal bias function* which is a special solution to the Bellman equations. It is the maximal bias vector over policies with optimal gain, see [18]. The notation $h^*(\mathcal{M})$ is only used in the proof of Proposition 17 and is the optimal bias function of $\mathcal{M}$ seen as an extended MDP (see [4]).
4. Thank you for the pointer on https://arxiv.org/pdf/1905.12425, we will discuss discuss its contribution in the next revision opportunity.
5. You are completely right: the proof only keeps tracks of the commutes between $S_t$ and $S_{t+1}$. The bias difference estimator can not work with only direct transitions. The term $A$ in page 13 is no longer a martingale if we only count direct transitions.
> Line 397: should $h^*$ be $g^*$?
Yes, you are correct.
> Line 404: Some character I believe meant to be $S$ was used
Yes, it should be $S_{\tau_{i+1}}$ rather than $\S_{\tau_{i+1}}$.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I find the global rebuttal still leaves me with doubts about the experimental results. Overall, I will keep my score. | Summary: The paper shows that by replacing the extended value iteration in optimistic methods, like UCRL2, it is possible to obtain regret that scales optimally with the number of states, actions, the time horizon and the span of the optimal value function, instead of the diameter, despite not knowing the diameter, assuming weakly communicating MDPs. In addition, a novel analysis is presented that also gives a polynomial bound on the compute cost of the algorithm. The new extended value iteration method is designed based on new ideas to refine the set of plausible MDPs considered in a given step: For this, an inequality is provided that relates the deviation of the values assigned to two states to observable quantities; creating a new set of constraints on the MDPs considered.
Strengths: A major breakthrough if the proof holds up. Interesting insight.
Weaknesses: The presentation is not great; the paper feels rushed, the paper is full of (minor) grammatical and typographical errors (starting with the abstract: "encounter suffer"). While I did not go through the whole proof, there are many minor problems that are apparent: The paper is not very careful in tracking error events. For example, \tilde{beta} is sometimes appearing in a condition, but it is also constructed by the algorithm, perhaps things work out, but details are definitely missing.
The experimental results are unconvincing: Experiments that show scaling the optimal span would have been more convincing. The presented experiments do not help at all.
Technical Quality: 3
Clarity: 2
Questions for Authors: Are you sure you tracked all the error events correctly? (E.g. in Lemma 13, Lemma 3 is used, which needs $\tilde{g}\ge g^*$, but the proof of Lemma 13 does not mention why this would hold and exactly what value of $\tilde{g}$ is used here.)
One of the confusing aspects of the presentation of the insight was that on page 5, where the mitigation is explained, beta_t(s,a,u) is used and then a maximum over u is taken; while it is not explained whether beta_t(s,a,u) bounds the deviation that is a function of u for all u (needs a covering?). I guess this covering is not needed, but then why can we take the max as suggested in this paragraph (lines 155-161).
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 4
Limitations: n.a.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## About weaknesses
> The presentation is not great; the paper feels rushed, the paper is full of (minor) grammatical and typographical errors (starting with the abstract: "encounter suffer")
We will do our best to correct as many typos as possible, that can make understanding the paper difficult.
> While I did not go through the whole proof, there are many minor problems that are apparent: The paper is not very careful in tracking error events. For example, \tilde{beta} is sometimes appearing in a condition, but it is also constructed by the algorithm, perhaps things work out, but details are definitely missing.
Concerning the tracking of events, we discuss in further details in the next section (see **About questions**).
> The experimental results are unconvincing: Experiments that show scaling the optimal span would have been more convincing. The presented experiments do not help at all.
Regarding experiments, we have carefully entitled the section **Numerical illustrations** rather than **Experiments**, that we do not claim to account for a throughout experimental campaign. This paper is theoretical and our algorithm `PMEVI` was intentionally "purified" of any form of tuning and modification to improve its empirical performance. The point of `PMEVI` and Figure 2 is to show that the method can make very good use of pertinent bias information, provided that the quality of this information is high. By leaving `PMEVI` not tuned, we make it easier to build extensions of the algorithm for other settings. Obviously, one interesting follow-up of this work would be to provide a finely tuned version of `PMEVI` with a better bias estimation subroutine, a bit like what `UCRL3` [8] is to `UCRL2` [4]. However, such a heavily tuned version would become impossible to adapt to settings others than undiscounted infinite horizon reinforcement learning problems.
## About your questions
> Are you sure you tracked all the error events correctly? (E.g. in Lemma 13, Lemma 3 is used, which needs $\tilde{g} \ge g^*$, but the proof of Lemma 13 does not mention why this would hold and exactly what value of is used here.)
**Concerning events.** If we understand your concern correctly, you legitimately fear the presence of a circular argument, because the optimism guarantees of an episode seem to depend on the preceding episodes. This is indeed the case, but there is no circularity and all risks of circularity are encapsulated in Lemma 13, that establishes anytime optimism guarantees (and more). Roughly speaking, there are two types of events in the analysis. There is the "main event" specified by Lemma 13, that provides time-uniform (1) optimism guarantees, (2) correctness of the bias confidence region (i.e., of projection) and (3) correctness of the mitigation.
+ Indeed, Lemma 13 is established using Lemma 3. The blocked equation between lines 470 and 471 is universally quantified for $\tilde{g} \ge g^*$, i.e., we should read: "Let $E_2$ the event stating that, for all $T' \le T$ [and $\tilde{g} \ge g^*$], (...)". We will make the correction.
+ Then, we agree that $E_1, E_2, E_3$ do not depend on how $\mathfrak{g}_k, \mathfrak{h}_k$ are constructed, and hold **independently of how the driver chooses actions.**
+ So, on $E_1, E_2, E_3$, we show that the properties (1-3) (optimism, correctness of the projection and of the mitigation) *will propagate* from episode to episode.
+ This high level view will be added in the paper to help the understanding of the proof.
And then, there are "error terms specific" events, that are invoked on the fly. These are only necessary in the regret analysis. Note that the quality of the bias estimator depends on the regret of the algorithm, hence we end up with a self-bound on the regret (see line 265). However, the self-bound is of the form $x \le \alpha + \beta \sqrt{x}$ hence ends up being useful at all.
> One of the confusing aspects of the presentation of the insight was that on page 5, where the mitigation is explained, $\beta_t(s,a,u)$ is used and then a maximum over u is taken; while it is not explained whether $\beta_t(s,a,u)$ bounds the deviation that is a function of $u$ for all $u$ (needs a covering?). I guess this covering is not needed, but then why can we take the max as suggested in this paragraph (lines 155-161).
This indeed needs to be clarified, because this is one of the key ideas. Any bound on $(\hat{p}_t(s,a) - p(s,a))u$ that holds for *every $u$ simultaneously* will inevitably scale with $\sqrt{S}$ and compromise the minimax optimality of regret guarantees. Therefore, with `PMEVI`, we do not use the event $\{\forall u, (\hat{p}_t(s,a) - p(s,a)u \le \beta_t(s,a;u)\}$. In fact, we only need $\{(\hat{p}_t(s,a) - p(s,a))u \le \beta_t(s,a;u)\}$ to hold for $u = h^*$, and that does not require any covering.
The maximum is taken for "algebraic" reasons.
The mitigation operation makes sure that optimistic transitions $\tilde{p}(s,a)$ are chosen so that $\tilde{p}(s,a)u \le \hat{p}(s,a) u + \beta_t(s,a;u)$. It happens that using a mitigation coefficient $\beta_t(s,a;u)$ that depends on $u$ provides an ill-behaved operator (monotony is lost), and instead the mitigation coefficient $\beta_t(s,a) := \max_{u \in \mathcal{H}_t} \beta_t(s,a;u)$ is used, leading to the current equation (7).
---
Rebuttal Comment 1.1:
Title: acknowledgement of rebuttal
Comment: The rebuttal is fine; I wish I could read a polished version of the paper before it gets published. However, I am optimistically assuming for now that the presentation issues will be smoothened out and the result will still hold. | Rebuttal 1:
Rebuttal: We appreciate the reviewers for their constructive suggestions. Please find our responses to the reviews below.
**From your collective feedback, it emerges a shared concern about the numerical experiments.** Indeed, the left part of Figure 2 shows that the projection-mitigation operations of `PMEVI` have very little impact on the behavior of the algorithm.
By monitoring the execution of the algorithms more closely, this comes from the fact that the confidence region of the bias vector is very large in the early iterations, hence the projection operation does nothing, and the mitigation operation has a very minor effect. These two operations progressively trigger over the run, but in practice it takes time, for two reasons.
- The quality of the bias estimator depends on the regret. In the early stages of learning, the regret grows linearly hence the bias estimator is way off, and the bias confidence region is accordingly very wide.
- This early phase pollutes the bias estimator/confidence region for quite some time. One idea of optimization would be to discard the early history. This would directly improve the quality of the bias estimator.
This is what the right part of Figure 2 is for: If `PMEVI` has good bias prior information, or if the bias estimation is much better, the performance are greatly improved. The current bias estimator is sufficient to obtain minimax optimal bounds and was left not tuned **on purpose**: to make sure that the theoretical analysis is free of hard-to-understand terms that only come from heavy hand-tuning. For now, the general analysis of `PMEVI` shows that all the optimistic algorithms based on `EVI` can be patched to achieve minimax optimal regret, with a bias estimation subroutine and the projection-mitigation regularization of `EVI`.
This is why the experiment section is named "**Numerical illustrations**".
This section provides a clear direction to improve `PMEVI`: improving the bias estimation subroutine. We believe that this is an interesting direction for future work. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
ID-to-3D: Expressive ID-guided 3D Heads via Score Distillation Sampling | Accept (poster) | Summary: The paper introduces a novel method for generating 3D human heads guided by both identity and textual descriptions. The proposed method employs face image embeddings and textual descriptions to optimize a neural representation for each subject. By leveraging task-specific 2D diffusion models as priors and a neural parametric representation for expressions, the method achieves high-quality results while requiring minimal training. Extensive experiments demonstrate the proposed method could generate higher-quality 3D facial assets than state-of-the-art methods.
Strengths: 1. The proposed method is novel and promising in the field of 3D facial asset generation. The use of task-specific 2D diffusion models as priors for 3D head generation is a novel technique that reduces the dependency on large 3D datasets. The method incorporates a neural parametric representation to disentangle expressions from the identity, which is a reasonable way to manage the complexity of facial dynamics and ensure identity consistency in generated 3D heads.
2. Experiments evaluation is strong and demonstrates the superiority of the proposed method over existing methods. The quantitative analysis and visualization are impressive. The visual results also clearly show the advantage over state-of-the-art methods. Detailed metrics and visualization highlight the method's ability to produce high-quality, identity-consistent 3D models, showcasing significant improvements in terms of details and textures. The evaluation covers various aspects such as generalization to new identities, handling of different ethnicities, and avoidance of oversmoothing, providing a thorough validation of the method's effectiveness.
3. The generated 3D models are of high quality, with detailed textures and geometry. The method ensures that the identity of the input image is well-preserved in the 3D output. The versatility of the method is also demonstrated, which provides practical utility beyond simple model generation.
Weaknesses: 1. The generated 3D head models, while detailed, sometimes appear exaggerated and more like caricatures than realistic human faces. This diminishes the method's applicability in scenarios where photorealism is critical. Addressing this issue would involve refining the model to balance between preserving identity and achieving realistic facial features.
2. The performance and quality of the method on significantly larger and more complex datasets are not extensively discussed. Evaluating and demonstrating the scalability of the approach would enhance its credibility and applicability. Discussing potential limitations when scaling up and providing strategies to handle larger datasets would be valuable additions. It would be interesting to discuss or show what could be further improved if larger datasets are incorporated.
3. The writing can be improved for better clarity and accessibility. For instance, it should be explicitly described that each identity requires a training stage. This would help readers understand the necessity and implications of the training process. Additionally, simplifying and clarifying the description of the multi-stage pipeline would make the methodology more accessible to other researchers and practitioners.
Technical Quality: 4
Clarity: 3
Questions for Authors: See the weaknesses section.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Limitations are discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # 5 Response to reviewer `fD9E`
We thank the reviewer `fD9E` for recognizing our work as "novel and promising", acknowledging its "superiority [...] over existing methods" and appreciating its "versatility". Below we address the proposed concerns about the limitations of our method and its potential extensions. **We regularly refer to the general response above and the one-page pdf provided with the figures**
## 5.1 Bias towards exaggerated features.
Indeed, the current pipeline exhibits a bias towards exaggerated features. We identify 3 main causes of this phenomenon:
1. **Dataset size**. We train our 2D guidance model on the NPHM dataset, which is characterized by a relatively small number of subjects. Consequently, it may under-represent the wide spectrum of human head shapes and textures, resulting in output that may appear cartoonish. We provide a breakdown of dataset diversity in Fig.E and a more detailed comment in Sec.1.3 of the general response.
2. **Dataset texture quality**. While the NPHM dataset excels in expressive details, it lacks high-quality texture scans. As a result, the generated 3D model textures may appear unnatural in some cases. Examples of NPHM renders and our generated 2D textures can be seen in Fig.G and H respectively, and a more detailed discussion can be found in Sec.1.1 of the above general response.
3. **Geometry generation pipeline**. Our method employs a sequence of random camera poses to guide the geometry generation process. However, this can introduce point-of-view biases, leading to certain portions of the 3D model being overemphasized.
Considering the above observations, we theorize that using a larger-scale dataset could address the issue by improving the photorealism of the generated 3D head models, and recognize that it represents one of the immediate next steps to further advance our work.
## 5.2 Discussion on scaling up to larger and more complex datasets.
Scaling up our SDS pipeline to exploit larger or more complex datasets is an interesting problem, as long as such datasets are available.\
We identify 2 potential next steps:
- **Extending the current NPHM dataset** by acquiring more scans to overcome its distribution bias as discussed in the reply above. This extension would not entail any adjustments to the method but would improve its generalization capabilities.
- **Leverage more complex data types** by combining datasets with temporal signals (e.g. video) and very high-quality textures (e.g. lightstage data). Merging different data types into our pipeline would require method adjustments. One solution could be to employ a proxy representation, such as 3DMMs or a UV texture template, to allow for easy segmentation of semantics (i.e. hair/face subdivisions, etc.), that could be easier to link with large-scale SD textual embeddings. In this direction, we identify the recently released AVA256 ( 13th June 2024) [1] as a good candidate dataset to use in future work.
## 5.3 Improve the writing for better clarity and accessibility.
We appreciate and agree with the reviewer's recommendations to improve the manuscript. We will revise and update the text as suggested to improve clarity.
[1] [https://about.meta.com/realitylabs/codecavatars/ava256](https://about.meta.com/realitylabs/codecavatars/ava256)
---
Rebuttal Comment 1.1:
Title: Maintain rating
Comment: The rebuttal addressed some of my concerns. I recommend the authors explain the exaggerated caricature-like style in the paper and maybe need to change the "realistic" claims. I would maintain my initial rating.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the feedback. We appreciate the suggestions and will ensure to comment on the caricature-like style in the limitations section of the paper and revise the claims as suggested. | Summary: The paper presents a novel technique for creating 3D human head models from a single real-world image, guided by identity and text. This method is based on compositionality and uses task-specific 2D diffusion models as optimization priors. The authors extend a base model and fine-tune only a small training parameters to create 2D priors for geometry and texture generation. Additionally, the method utilizes a neural parametric representation for expressions, allowing the creation of highly detailed geometry and albedo textures.
Strengths: 1. The method demonstrates significant innovation in the field of 3D head generation, particularly in generating high-quality models without the need for large-scale 3D datasets.
2. The capability to generate 3D models with disentangled expressions is a notable advancement, as this has been a challenge in previous research.
Weaknesses: 1. The paper should discuess dataset diversity. Does the training dataset used in this study possess enough diversity to prevent potential biases?
2. There's a lack of analysis on the robustness of ID embeddings. Are these identity embeddings robust enough to accurately depict identity features across various expressions and poses?
3. Some citations are missing, specifically at line 191 where template-based approaches require references.
4. Is it logical for geometry diffusion to be fine-tuned using style-transfer methods? It could be reasonable if a pre-trained standard SD model like RichDreamer[1] is utilized.
5. Some results aren't convincing; for instance, in the last row of Fig9 within supplementary materials, reconstruction seems to lose eye-catching hair details.
6. Supplementary materials have been placed in a separate zip file instead of being attached to the main paper, which might breach some submission rules.
[1] RichDreamer: A Generalizable Normal-Depth Diffusion Model for Detail Richness in Text-to-3D
Technical Quality: 3
Clarity: 3
Questions for Authors: * Discussing the diversity of the dataset during training.
* Discussing the robustness of id embeddings
* Correcting citations.
* Explaining why fine-tune using a geometry diffusion model with style-transfer instead of another geometry diffusion model.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors acknowledged the limitations and deliberated on the possible negative impacts of the suggested technology on society.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # 4 Response to reviewer `G3ux`
We appreciate the reviewer's thorough feedback and the definition of our work as a "significant innovation in the field of 3D head generation". We provide all the requested clarification and experiments below. **We regularly refer to the general response above and the one-page pdf provided with the figures.**
## 4.1 Discussion on dataset diversity.
Please refer to Sec.1.3 of the above general response.
## 4.2 Analysis of the robustness of ID embeddings.
We provide analyses on the robustness of the ID embeddings w.r.t. expressions and camera poses in Fig.C and Fig.D, which we will include in our work.\
We use our training dataset (NPHM dataset). For each identity and expression, we collect renders for 9 different camera rotation angles [-60°, -45°, -30°, -15°, 0°, 15°, 30°, 45°, 60°] and extract their identity embedding (ArcFace [1]). Examples of renders are visible in Fig.G. We use a neutral pose with a 0° rotation angle as a reference.\
Note that, per definition, a similarity score above 0.5 signifies the same identity [1].
- **The impact of expressions on ID embeddings (Fig.C)** is isolated by computing the cosine similarity between the neutral reference and all the remaining expressions captured with a rotation angle of 0°.
- **The impact of the camera pose on ID embeddings (Fig.D)** is isolated by computing the cosine similarity between the neutral reference and the neutral expression captured with all the 9 possible rotation angles.
As visible from Fig.C and D, ArcFace reliably captures identity features across various expressions and poses, showcasing robust behavior even for extreme expressions (e.g. "Squeeze" Avg-SimID: 0.62) and substantial camera rotation (e.g. -60° Avg-SimID: 0.7).
## 4.3 Missing references.
We added references to line 191 [2,3,4]
## 4.4 Initialization of the 2D models.
ID-to-3D uses two ID-driven and task-specific guidance models during geometry and texture generation.\
We create both geometry-oriented and texture-oriented models starting from the same pre-trained weights. This design choice avoids the introduction of two different initialization biases and has the goal of favoring ID consistency between the two finetuned representations (Fig.7 Additional, Fig.H rows 1 and 2).
## 4.5 Artifacts.
Despite avoiding any Janus artifacts or major misalignments, failure cases of our ID-to-3D might involve slight misalignment between texture and geometry (e.g. Fig.9 last row in additional material). We suggest that this is due to the lack of specific optimization for physically bounded textures and geometries and plan to improve on this portion of the pipeline in future work.
[1] Arcface: Additive angular margin loss for deep face recognition. CVPR 2019\
[2] DreamFace. SIGGRAPH 2023\
[3] TADA. 3DV 2024\
[4] FLAME: Learning a model of facial shape and expression from 4D scans. SIGGRAPH 2017
---
Rebuttal Comment 1.1:
Comment: Thanks for the author’s response. I would raise the score to 6. | Summary: This work proposes a new approach for the generation of 3D human heads, which enables guidance with identity, facial expressions, and text descriptions. The approach is structured around two principal components: 1) the authors fine-tune a previously established text-to-image diffusion model through LORA on a specialized dataset of 3D human head models, to obtain the 2D guidance with separated texture and geometric details. 2) the method executes the generation of geometry and texture in separate stages, utilizing the SDS loss to optimize the process. It considers specific designs, such as the learnable latent code of facial expression, to enrich the geometry and texture details. It obtains better performance than several existing works.
Strengths: 1. The proposed method is reasonable. The finetuning of the diffusion model on a specific 3D head dataset provides better guidance for geometry and texture.
2. It supports various types of conditional inputs, including identity, expression, and text description, thereby enriching its versatility in application.
3. It obtains better performance than several existing works.
Weaknesses: 1. The generated head is of low visual quality. We are aware that text-to-image diffusion models are capable of producing very high-resolution images. However, the learned facial textures (resolution, clarity) showcased in this study are relatively poor. In comparison, other single-image 3D reconstruction methods, such as 3D GAN inversion combined with technologies like NeRF, can achieve very high visual quality. The text guided 3D portrait generation method, Portrait3D (siggraph 24), is also of high visual quality.
2. The innovativeness of the method is modest. Fine-tuning diffusion models on specific datasets and using SDS as a supervisory loss for 3D modeling are both fairly common practices. The methodological innovation in this study seems insufficient.
3. The expressions and text-guided editing scenarios demonstrated in the experimental section are quite basic (such as "eyes closed," "brow lowerer," "de-aged"), which limits their practicality. It is suggested to showcase more practical editing effects, such as changes in hairstyle or face shape, richer text-based guidance, to better understand its editing performance.
Technical Quality: 3
Clarity: 3
Questions for Authors: There are a few questions that should be addressed in the rebuttal, please see paper weakness for more information.
******** after rebuttal
Thanks for providing additional experiments, that addressed some of the concerns. I raised my rating to borderline accept, mainly for its contribution of providing a relatively complete method for simultaneously controlling facial ID, expressions, and characteristics.
Yet, the generated head, especially the texture, is still low in quality. Although this may be influenced by the dataset used, it is still a limitation of the method as there is currently no better dataset available (per the author's rebuttal), and it is also unlikely to construct a higher-quality dataset. Is there any other possible ways to further improve the image quality?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # 3 Response to reviewer `vm8B`
We thank the reviewer for the feedback. We thoroughly clarify and respond to all the concerns raised. **We regularly refer to the general response above and the one-page pdf provided with the figures.**
## 3.1 Discussion and comparisons on the quality of the generated textures.
We address the 3 comparisons asked by the reviewer in the following:
### 3.1.1 Comparison with Stable-Diffusion text-to-image models.
In Fig.H, we provide comparisons of our specialized 2D models against two well-established Stable-Diffusion text-to-image models usually used in SDS pipelines [7,8].
As visible, these Stable-Diffusion models have three main shortcomings that limit their applicability to the task at hand (i.e. creation of ID-driven 3D heads with expression control):
- __Low ID retention.__ The models struggle to consistently create outputs of specific identities since they rely only on textual prompts.
- __Low expressivity.__ The use of natural language to enforce expression conditioning is ineffective. The models overlook expression-related prompts to boost photorealism.
- __Inconsistent Lighting.__ The models generate a wide range of lighting conditions, to enhance photorealism and artistic effect. This complicates the separation of lighting and albedo contributions when creating renderable-ready assets with SDS.
Please refer to Sec.1.1 of the above general response for a more in-depth discussion on the impact of the NPHM dataset bias on texture quality.
### 3.1.2 Comparison with 3D-aware image synthesis methods.
Please refer to Sec.1.2 of the above general response for a description of the advantages of our 3D representation against single-image 3D reconstruction methods that use 3D GAN inversion combined with NeRF.
### 3.1.3 Comparison with Portrait3D (siggraph 24).
We acknowledge this concurrent work (as per NeurIPS guidelines) published on 19 July 2024 in the ACM Transactions on Graphics (TOG), Volume 43, Issue 4. We will include and discuss it in the main manuscript.
## 3.2 Limited Innovation.
As noted by `G3ux`, `HnyJ` and `fD9E`, our approach introduces significant advancements in 3D head generation, addressing key challenges in the field. In particular, we present to the reviewer 2 overlooked contributions, further substantiating the innovativeness of our model:
- __Production of editable identity-driven 3D heads with expression control via SDS.__ To the best of the author's and other reviewers' knowledge, we propose the first SDS model producing identity-driven, editable, and highly detailed 3D heads with expression control from in-the-wild images of subjects without the need for large-scale 3D datasets, which allow us to set a new SoTA for human heads generation via SDS.
- __Design of a neural parametric expression model compatible with SDS pipelines.__ This methodological innovation allows ID-to-3D to disentangle expressions from identity in 3D assets, a challenge in SDS research [1,2]. The control over facial expressions ease editing while ensuring consistent identity preservation across diverse 3D models.
## 3.3 Expression and text-guided editing capability.
We address the 2 concerns raises below:
### 3.3.1 Expression Conditioning.
> The expressions [...] demonstrated in the experimental section are quite basic.
Our expression controls can convey extreme expressions (e.g. ’squeeze’ and ’cheeks puffed’), as well as handle subtle changes from the neutral reference (e.g. ’dimpler’ and ’lip roll’) with unprecedented levels of geometric detail and expression-conditioned wrinkles (Fig.5 and 6 of the main paper).\
As highlighted by `G3ux` and `fD9E`, our method goes beyond basic scenarios and shows promising results compared to relevant literature:
1) The capability to generate 3D models with disentangled expressions has been a challenge in previous SDS research [1, 2]
2) Managing the complexity of facial dynamics while ensuring identity consistency in generated 3D heads is a well-known problem in literature, with specialized methods like [3, 4, 5, 6] all struggling to create expression-driven wrinkles.
### 3.3.2 Text-Guided Editing.
> The [...] text-guided editing scenarios demonstrated in the experimental section are quite basic.
As requested, we provide evidence on text-guided editing capability of our model beyond basic scenarios. Fig.A presents the text-guided editing scenarios suggested by the reviewer. In particular, we showcase:
- 11 unique rich text prompts associated with 6 hairstyles (Fig.A1-8, A10, A11).
- 2 head accessories (Fig.A3, A4, A9).
- 5 face shape changes driven by 3 different ethnicities (Fig.A1, A2, A5, A6, A9).
Note that :
- Our method can interpret and exploit text-based inputs not addressed in previous works (e.g. id-driven changes in 'aging', 'gender', 'heritage').
- Even when using the exact same text prompt (Fig A1, A2) our model generates unique identity-consistent assets that simultaneously align with the text and retain the characteristic facial features of the input ID.
Our approach is able to address practical scenarios and opens new avenues for expressive text-guided editing of 3D assets.
[1] HumanNorm. CVPR 2024.\
[2] HeadSculpt. NeurIPS 2023.\
[3] Learning neural parametric head models. CVPR 2023.\
[4] DECA: Detailed Expression Capture and Animation. SIGGRAPH 2021.\
[5] SMIRK: 3D Facial Expressions through Analysis-by-Neural-Synthesis. CVPR 2024.\
[6] GANHead: Towards Generative Animatable Neural Head Avatars. CVPR 2023.\
[7] Stable Diffusion 2.1. [https://huggingface.co/stabilityai/stable-diffusion-2-1](https://huggingface.co/stabilityai/stable-diffusion-2-1).\
[8] DreamLike-Photoreal 2.0. [https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0](https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0).
---
Rebuttal 2:
Title: Will the code and models be open sourced?
Comment: Will the code and models be open sourced?
---
Rebuttal 3:
Comment: Yes, we confirm that the code and the models will be made publicly available.
Title: Code publicly available | Summary: **Summary
This paper focuses on the task of 3D head generation. Specifically, the authors first extend a traditional diffusion model to a text-to-normal version and a text-to-albedo version with ID-aware and expression-aware cross-attention layers. Then, with the trained diffusion models, the authors optimize a neural parametric head model with a score distillation sampling loss. Extensive experiments demonstrate that the proposed method outperforms existing text-to-3D and image-to-3D methods in terms of 3D head generation. However, I have some concerns about this paper. My detailed comments are as follows.
Strengths: **Positive points
1. The authors introduce the first method for arcface-conditioned generation of 3D heads with score distillation sampling loss.
2. The proposed method can also achieve ID-conditioned text-based 3D head editing (e.g., age editing, changing hair color and gender).
Weaknesses: 1. Although the proposed method generates a similar geometry to the input identity, the synthesized texture appears much unrealistic. What might be the cause of this phenomenon? Additionally, some implicit 3D representations, like NeRF [A-C] and 3DGS [D-F], can model high-fidelity surfaces for 3D heads. Why do the authors choose DMTET over these representations? It would be better if the authors could provide more discussion about the above questions.
2. The authors use five images as identity references for each 3D head. How is this optimal number determined? What is the relationship between identity similarity and the number of reference images? Quantitative results in terms of this should be provided.
3. There are some misalignments between Table 1, Figure 3, and Figure 4. For example, Fantasia3D is included only in Table 1 but does not appear in Figure 3 or the right column of Figure 4..
4. In the original paper of Fantasia3D [G], the proposed method can only synthesize 3D assets given a text prompt as input. How do the authors adjust this methods to generate 3D heads when conditioned on specific identity?
5. For each 3D head asset, the authors extract identity features from multiple RGB images. How are these features combined? Are they added or concatenated? It would be helpful to provide more details about this operation.
**Minor issues
1. On page 7, line 263, there is a missing space between “ID-to-3D” and “as”.
**Reference
[A] NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. ECCV 2020.
[B] Implicit and Disentangled Face Lighting Representation Leveraging Generative Prior in Neural Radiance Fields. TOG 2023.
[C] Geometry-enhanced Novel View Synthesis from Single-View Images. CVPR 2024
[D] 3D Gaussian Splatting for Real-Time Radiance Field Rendering. SIGGRAPH 2023.
[E] Photorealistic Head Avatars with Rigged 3D Gaussians. CVPR 2024.
[F] Relightable Gaussian Codec Avatars. CVPR 2024.
[G] Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation. ICCV 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weakness section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # 2 Response to reviewer `HnyJ`
We thank the reviewer for recognizing the novelty of our method and appreciating the experimental section. Below we respond to the doubts put forward by the reviewer. **We regularly refer to the general response above and the one-page pdf provided with the figures.**
## 2.1 Details about the method.
We provide the additional requested clarifications:
### 2.1.1 Cause of unrealistic texture.
Please refer to Sec.1.1 of the above general response.
### 2.1.2 Choice of 3D representation.
Please refer to Sec.1.2 of the above general response.
## 2.2 Relationship between identity similarity and number of reference images.
We analyze the relationship between identity similarity and the number of reference images in Fig.B. We consider 40 identities, 25 in-the-wild images for each subject, and extract for each image its identity embedding (ArcFace) [1]. For each identity, we consider the center of the distribution as representative of its facial features. We report the similarity between the center of the distribution and the mean ArcFace created with a subset of N number of reference images.
The plot shows the averaged trend for 40 identities (blue line) together with its standard deviation (light blue).
The trend reaches a plateau after 20 images, while 5 images is enough to reach an identity similarity of more than 0.95 for all the IDs considered. In our experiments, we selected 20 images to use as references for our comparisons and kept 5 images to use as input to our method, ensuring a good trade-off between identity similarity retention and practicality.
## 2.3 Misalignments in Figures 3,4 and Table 1.
We clarify the misalignments:
- We perform a quantitative evaluation on all the evaluated models using similarity ID (Fig.4 left) FID on geometry and texture (Table 1).
- We compare the best 2 performing text-to-3D and image-to-3D methods against ID-to-3D in the user survey (Fig.4 right), as this enabled us to gather more responses.
- We omit the worst-performing method (Fantasia3D) in the qualitative comparisons (Fig.3) due to space constraints.
We included a qualitative comparison against Fantasia3D in Fig.F. We will amend the revised version of the paper to include comparisons and clarification.
## 2.4 Creation of Fantasia3D assets.
Stable Diffusion textual embeddings struggle to represent a specific identity but have knowledge of named celebrities included in the training data, which can be generated by prompting "name + surname". To compare ID-to-3D and text-to-3D methods, we create a dataset of celebrity names suggested by ChatGPT and use the textual prompt "A DSLR portrait of [name + surname]" to create Fantasia3D / TADA / Human-Norm / DreamFace 3D assets.
## 2.5 Identity features concatenation.
For each image, we extract the ArcFace identity embedding after cropping and centering. The identity embeddings are then concatenated and processed by a shallow 2-layer MLP to match the dimension of the text features in the pretrained diffusion model. This representation serves as identity conditioning for the geometry-oriented and albedo-oriented 2D diffusion models.
[1] Arcface: Additive angular margin loss for deep face recognition. CVPR 2019
---
Rebuttal Comment 1.1:
Comment: The authors have addressed my concerns. I have decided to maintain my score as a Borderline accept. | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewers’ insightful feedback, which acknowledges the novelty of our method (G3ux, HnyJ, fD9E) and the quality of our experimental results (G3ux, HnyJ, fD9E, vm8B). The reviewers recommended additional experiments and visualizations to emphasize strengths, address limitations, and highlight future improvements. We have diligently conducted all the suggested experiments, as detailed in our responses to both general and reviewer-specific comments. **To prevent confusion with the original submission’s figures, we have labeled the new figures in the attached document using alphabetical letters.**
# 1 General Response
## 1.1 Generation of unrealistic textures. [ Reviewers `HnyJ`, `vm8B`, `fD9E` ]
ID-to-3D creates the textures of the expressive 3D heads with 2D guidance specialized in pseudo-albedo generation. This 2D model reliably generates ID-driven images under consistent and diffuse lighting conditions (Fig.H row 1), while also allowing direct control over the subject's expression -- two features absent in previous text-to-image SD models [1, 2] (Fig.H rows 3 and 4).
We train our model using the NPHM dataset [3], which is rich in expression data and geometric quality. \
Nevertheless, we acknowledge 2 of its main shortcomings that affect the synthesized texture quality:
- **Dataset bias.** Due to its limited size, it is inherently prone to biases, especially when compared to standard datasets used for SD training (e.g. Lion 5B [4], Lion-Face 20M [5]).
- **Dataset texture details.** The dataset provides scans with only UV diffuse albedo maps, which exhibit a relatively low level of texture photorealism (as visible from its renders in Fig.G).
Despite these drawbacks, NPHM remains the best choice of its type in the public domain, and given higher-quality data, our method could greatly benefit.
*The generated 3D textures resemble the NPHM suboptimal 2D albedos encountered during the training of the texture-guidance model. Nevertheless, this limitation is substantially outweighed by the capability to create images and 3D assets with control over the subject's expression and identity.*
## 1.2 Alternative 3D representations. [ Reviewers `HnyJ`, `vm8B` ]
In our 3D representation, we use DMTET for geometry generation and a Transformer predicting spatially varying reflectance during texture generation. This 3D representation offers 2 key advantages:
- **Combines benefits from implicit and explicit 3D representations** [6]. This allows the generation of high-frequency details and expression-dependent wrinkles while directly producing render-ready 3D heads (i.e. textured 3D meshes).
- **Disentangles geometry and appearance generation.** It allows a separate optimization of geometry and appearance, ensuring flexibility in editing and mitigating the generation of artifacts.
Compared to other methods:
- **3D-aware image synthesis methods** [7,8,9,10] focus on generating novel 2D views of input images by using 3D GAN inversion combined with NeRF-based representation. They generally struggle to handle side or back views effectively and do not produce textured meshes that can be used as-is in downstream applications. We consider this line of work orthogonal to ours.
- **NeRF-based methods** [11] jointly optimize density and RGB color for a given scene. Similarly, **3DGS-based methods** [12, 13, 14] optimize at once explicit 3D gaussians and their spherical harmonics for appearance. Although they achieve high visual quality when trained on pixel-perfect views of a scene, when used in SDS pipelines they both a) struggle to perform effective surface recovery of high-frequency details and b) train intertwined geometry and appearance representation that cannot be easily manipulated.
We agree that other representations also offer interesting possibilities for future research.
In particular, 3DGS shows very good results in human 3D reconstruction from lightstage data [14] or high-quality videos [13]. How to extend the 3DGS representation to achieve photorealistic generation from in-the-wild images using SDS is an open but interesting line of research.
## 1.3 Dataset bias. [ Reviewers `G3ux`, `fD9E` ]
We appreciate the insightful questions and agree with the importance of discussing dataset diversity and minimizing potential biases.
We provide a detailed breakdown of the NPHM dataset [3] in Fig.E. During the 2D guidance training, we use gender, age, ethnicity, and hairstyle as textual prompts to guide generation. As a result, our method generates diverse identities, ethnicities, and ages (Fig.A, Figures of the main paper). We are aware and acknowledge the limitations inherent in a small-sized dataset, such as the underrepresentation of minorities, and we revise the main manuscript to add clarifications and highlight this critical issue.
[1] Stable Diffusion 2.1. https://huggingface.co/stabilityai/stable-diffusion-2-1.
[2] DreamLike-Photoreal 2.0. https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0.
[3] Learning neural parametric head models. CVPR 2023.
[4] LAION-5B: An open large-scale dataset for training next generation image-text models. NeurIPS 2022.
[5] General Facial Representation Learning in a Visual-Linguistic Manner. CVPR 2022.
[6] Deep Marching Tetrahedra. NeurIPS 2021.
[7] Implicit and Disentangled Face Lighting Representation Leveraging Generative Prior in Neural Radiance Fields. TOG 2023.
[8] Geometry-enhanced novel view synthesis from single view images. CVPR 2024.
[9] Efficient Geometry-aware 3D Generative Adversarial Networks. CVPR 2022.
[10] PanoHead: Geometry-Aware 3D Full-Head Synthesis in 360°. CVPR 2023.
[11] NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. ECCV 2020.
[12] 3D Gaussian Splatting for Real-Time Radiance Field Rendering. SIGGRAPH 2023.
[13] Photorealistic Head Avatars with Rigged 3D Gaussians. CVPR 2024.
[14] Relightable Gaussian Codec Avatars. CVPR 2024.
Pdf: /pdf/674d2496cfda3feecdfb698a288ff8873afdf4c9.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Provably Robust Score-Based Diffusion Posterior Sampling for Plug-and-Play Image Reconstruction | Accept (poster) | Summary: The authors propose 'diffusion plug-and-play' (DPnP), a plug-and-play diffusion framework that alternatively calls what amounts to a consistency sampler based on the likelihood of the forward model, followed by (essentially) and unconditional diffusion step using the score function. The authors provide many subsequent theoretical results for this framework.
Strengths: 1. The paper is well-written and easy to follow/understand (barring some notation issues, see Weaknesses).
2. Each step in the process of developing the DPnP algorithm is well-motivated and explained thoroughly.
3. The provided proofs for the main theorems in Appendix F are well-written and seem correct.
4. DPnP outperforms competitors in non-linear inverse problems in nearly all tested metrics in Section 5 and Appendix G.
5. The overall contribution is impactful with respect to non-linear inverse problems.
Weaknesses: 1. This paper would benefit from improvements to the mathematical notation. Namely, it would be easier to read the math if scalar quantities were better differentiated from vector quantities (e.g., bold-faced vectors).
2. The formatting structure of the main paper is, at times, very granular with many lists. Personally, I appreciate this, but I feel that others may think that things could be ordered/structured better. This is a very minor issue, but worth keeping in mind.
3. I acknowledge that the primary contribution of this paper is the theoretical results associated with the DPnP framework, but I think that more robust experimental evaluation would have benefited this work. In particular, it would have been nice to see how the DPnP framework performs on more 'typical' linear inverse problems (e.g., inpainting, deblurring). This would have given a better sense of the overall performance of the approach, even if the focus is performance on non-linear inverse problems. I do not expect you to perform these experiments in the revision/for the rebuttal, I just wanted to note that the results would be more convincing if there were more of them.
4. It would be nice to discuss the connections between the proximal consistency sampler and projection-type approaches (e.g., that leveraged by Chung et al.'s 'Come-Closer-Diffuse-Faster) when the inverse problem is linear. In fact, I suspect that, for linear inverse problems, DPnP can be reduced to a simple projection step, followed by the reverse diffusion step. It would be worthwhile to discuss these connections, even if the discussion is relegated to an appendix.
5. In Algorithm 1, proximal consistency sampling is done before the denoising diffusion sampling. Why is this? The previously mentioned projection methods have the steps flipped, so I wonder why you have decided to structure the DPnP algorithm this way.
Technical Quality: 4
Clarity: 3
Questions for Authors: See Weaknesses.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors adequately discussed the limitations of their method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback and insightful comments. We will make sure to take your invaluable suggestions into account in revising the paper and in our future work. Below we address the two questions you raise.
**Connections with projection-type approaches.**
- Thank you for your thoughtful question. We are willing to clarify our connection and difference with projection-type approaches as follows.
> While our algorithm draws inspiration from projection-type approaches, the essential difference between our algorithm and these approaches persists even for linear problems. Roughly speaking, our proximal consistency step can be reduced (cf. Line 274-277 in our paper) to a slightly more complicated, yet theoretically rigorous version of the (proximal) projection step in these approaches for linear problems, but the same _cannot_ be said about our denoising diffusion step.
>
> A manifestation of such essential difference is that our denoising diffusion step requires multiple steps of reverse diffusion in a single iteration of outer loop, and has an accurate, simple expression for the (stationary) distribution it aims to sample from. In contrast, for projection-type approaches, it is typical that only one step of reverse diffusion is run in a single iteration of outer loop, and the distribution of its output, even in the ideal continuous setting, cannot be expressed in a simple way, which significantly hinders theoretical analysis. From a big picture, by running more steps of reverse diffusion every time after we do proximal consistency sampling, we stabilize the process since we better compensate for the drifts from the data manifold induced by (proximal) projection onto the null space of consistency equations. This is the heuristic reason for the improved robustness of our algorithm, which is backed by rigorous mathematical proofs.
**Order of proximal consistency step and denoising diffusion step.**
- Thank you for your sharp observation. The first reason for such an order is that the algorithm would have visually better outputs if ending with the denoising diffusion step instead of the proximal consistency step, as denoising diffusion by design outputs visually satisfactory images while proximal consistency sampler only cares about consistency. Such a difference was not present in previous projection-type approaches, as the denoising stage there runs only one step of reverse diffusion, thus it is insignificant whether it comes as the penultimate step or the last step. This again constitutes an example of the difference of our algorithm. The second reason is pedagogical, since this order better parallels that of projected gradient descent, which may facilitate the understanding of the logic of our algorithm.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my concerns. I think that this work has value and I will keep my original score of 7. | Summary: This paper introduces a diffusion plug-and-play method (DPnP) that uses score-based diffusion models as expressive data priors for nonlinear inverse problems with general forward models. By combining a proximal consistency sampler and a denoising diffusion sampler, the method offers provably robust posterior sampling, with performance guarantees and demonstrated effectiveness across various tasks.
Strengths: This paper establishes both asymptotic and non-asymptotic performance guarantees for DPnP and provides numerical experiments to demonstrate its effectiveness across various tasks. The theoretical analysis presented is a valuable contribution to the field.
Weaknesses: Although I appreciate the theoretical aspect of this paper, as mentioned in the abstract, “this paper develops an algorithmic framework for employing score-based diffusion models as an expressive data prior in nonlinear inverse problems.” However, there are already existing works on embedding denoising diffusion models into plug-and-play frameworks as data priors, such as [1-2]. It would be helpful if the author could clearly highlight any new insights within this paper to distinguish it from previous work; otherwise, the novelty of this paper may appear somewhat incremental.
Minor one: There are several typos, e.g., in the abstract, the sentence "Score-based diffusion models, thanks to its impressive empirical success, have emerged as an appealing candidate of an expressive prior in image reconstruction." The correct pronoun should be "their" instead of "its" to match the plural subject "Score-based diffusion models."
Technical Quality: 3
Clarity: 3
Questions for Authors: I wonder if the asymptotic consistency and non-asymptotic error analysis of DPnP, as established in this paper, demonstrate convergence. If so, it is necessary to verify this through numerical experiments.
Could you please explain why the result of DPnPDDPM shown in Table 1 is smooth, while the one shown in Table 5 is noisy with a lot of noticeable noise? Thank you.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: As noted in the weaknesses section, the primary limitation is that diffusion-based PnP methods have already been proposed in [1-2] and applied to various inverse imaging problems.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback on our theoretical contribution and for your insightful comments. It seems that the reference items [1-2] in the review are unfortunately missing, so this rebuttal will be based on our understanding of general related literatures. We would really appreciate it and would be happy to discuss in more details if you could provide these references during discussion period.
**Distinction from previous work and new insights.**
- All of the previous works that used both score-based model and plug-and play, e.g. [RS18, LBA+22, BB23, CDC23] are more or less heuristic and do not have provable consistency or robustness. While some of them realized the potential of split Gibbs sampling as a plug-and-play posterior sampler, none of them found a way to solve the denoising step rigorously. One of our new insights is to show that the denoising step can be solved in full rigor with a reverse diffusion process using only unconditional score function, and it is even possible to make this diffusion process a DDIM one which enables application of many acceleration techniques. Moreover, the gain of our denoising diffusion step is not only mathematical rigor, but also improved robustness, since by running a full, multiple-step reverse diffusion process instead of a one-step heuristic denoiser as in previous works, we are able to better compensate for the drifts from the data manifold induced by the consistency step. It is with this insight that we are able to establish the first provably consistent and robust diffusion posterior sampling algorithm.
**Demonstration of convergence.**
- To further demonstrate the convergence of our algorithm as shown by our theory, we run a proof-of-concept experiment with data generated from a Gaussian mixture model, so that the convergence of distribution can be examined directly. The detailed settings can be found in General Response to All Reviewers, and the results can be found in the rebuttal pdf. It can be clearly verified from these results that our algorithm does converge to the true posterior distribution.
**Additional noise in DDS-DDPM.**
- Thank you for your sharp observations. As far as we can see, only the second row in Table 5 has noticeable noise for DPnP-DDPM. However, we also note that in this row, DDS-DDPM also recovers visibly finer details and textures of the board and the text in the original image. This tradeoff between the capability of reconstructing finer details and the risk of introducing additional noise is indeed a general phenomenon that has been observed in previous works, e.g., in [SKZ+23, page 21].
**Typos.**
- Thank you for your careful reading. We will proofread our paper more carefully and correct this typo among a few others in the final version.
---
Rebuttal Comment 1.1:
Title: Response to the author rebuttal
Comment: The reviewer has read the authors' rebuttal as well as the comments from other reviewers. Based on these, the reviewer prefers to maintain the initial score. | Summary: This paper introduces a diffusion-based sampling framework closely related to plug-and-play methods for solving general inverse problems. The technique alternates between two steps: calling a proximal consistency sampler that enforces data-fidelity, and regularization via a denoising diffusion sampler leveraging strong diffusion-based image priors. Theoretical results demonstrate asymptotic consistency and robustness to sampling errors. Numerical experiments show promising reconstruction quality.
Strengths: - The theoretical analysis is a valuable contribution. A lack of robustness to sampling errors and the resulting error accumulation has been a key challenge of diffusion-based solvers, especially in highly nonlinear tasks such as phase retrieval.
- The paper is well-written overall and the structure is logical.
- The experimental results are promising. In particular the proposed plug-and-play sampler achieves significant improvement over DPS, a well-established technique in the literature.
Weaknesses: - The experimental evaluation is somewhat lacking. It would be interesting to see comparison with more contemporary solvers such as ReSample [1], which has improved robustness against sampling errors due to a posterior mean correction scheme. Moreover, in-depth ablation studies on the multiple hyperparameters of the algorithm are missing. Thus, it is unclear how much hyperparameter tuning is necessary.
- The proposed technique appears to have a very high compute cost (3000 NFEs). More discussion on the compute requirements and possible ways to accelerate the algorithm would be valuable.
[1] Song, Bowen, et al. "Solving Inverse Problems with Latent Diffusion Models via Hard Data Consistency." The Twelfth International Conference on Learning Representations.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can the findings be extended to latent domain samplers? This would greatly improve the efficiency of the technique.
- How does the method compare to other samplers such as ReSample?
- How does performance scale with NFEs?
- I would recommend changing DDS to some other abbreviation to avoid confusion with the Decomposed Diffusion Sampling method [2].
- What does G denote in line 136?
[2] Chung, Hyungjin, Suhyeon Lee, and Jong Chul Ye. "Fast diffusion sampler for inverse problems by geometric decomposition." arXiv preprint arXiv:2303.05754 3.4 (2023).
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are not clearly addressed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful review and insightful suggestions. Below we address your concerns in a point-to-point manner.
**Comparison with ReSample [SKZ+23].**
- Thank you for your suggestion. There are a few recent algorithms that included a stochastic correction step for better robustness. The LGD-MC algorithm [SZY+23], which we compared against in our paper, is also one of them. We chose it due to its simplicity for implementation, but we are also more than happy to include comparison with the non-latent version of ReSample in the final version; comparison of latent versions, on the other hand, requires developing a latent version of our algorithm, which we feel more appropriate to leave to future work as discussed in General Response to All Reviewers. There are some preliminary comparisons in General Response to All Reviewers and in the rebuttal pdf, and we plan to include a more detailed comparison in the final version.
**Computational cost.**
- Thank you for your suggestion. In our paper, "~3000 NFEs" is the computational cost of the DDPM version of DPnP; we also proposed a DDIM version in the paper, which costs ~1500 NFEs, only 1.5x the cost of DPS (1000 NFEs). Moreover, the DDIM version allows to incorporate acceleration techniques for unconditional DDIM sampling, e.g. DPMSolver++, which can reduce the computation significantly. We would be more than happy to add more discussions in the final version.
**Performance scaling with NFEs.**
- Thank you for raising this good point. We have included a few samples with different NFEs in the rebuttal, Figure 3, and are willing to investigate it more systematically in the final version if possible.
**The name DDS for Denoising Diffusion Sampler.**
- Thank you for your suggestion. We will change it to an unambiguous name in the final version.
**Notation $G$ in line 136.**
- Thank you for your very careful reading and sharp observation. This is a typo for $M$, the forward diffusion process. We will proofread the paper carefully and make corrections in the final version. | Summary: This paper focuses on developing a plug-and-play algorithm for using score-based diffusion models as an expressive prior for solving nonlinear inverse problems with general forward models. While going from current state $x_{k+1}$ to $x_{k}$, this paper makes two gradient updates: (1) from $x_{k+1}$ to $x_{k+\frac{1}{2}}$ using gradients from the measurement error and (2) from $x_{k+\frac{1}{2}}$ to $x_k$ using denoising diffusion score function. Theoretically, the authors prove robustness of the proposed algorithm in sampling from the posterior, and empirically, they show that the proposed algorithm outperforms one commonly used baseline DPS.
Strengths: 1. This paper provides a robust algorithm for solving nonlinear inverse problems using unconditional score-based diffusion priors.
2. The paper is nicely written and theoretical analysis in a simple setting clearly demonstrates the major contributions of the paper.
Weaknesses: ### **Weaknesses and comments**
1. Line 171: "Assumption on forward model is applicable to *many* applications of interest" what are the applications?
2. Assumption 1: what are some examples of $\mathcal{L(\cdot, y)}$ that are differentiable almost everywhere and used in practice?
3. Step 1 in **A stochastic DDPM-type sampler via heat flow**: How do you know the noise level $\eta$? This is typically unknown.
4. The paper is overloaded with lots of notations. To sample from the posterior, you need the measurement conditional score. By Bayes' theorem you can write this term as the combination of prior and likelihood. How do you get the unknown likelihood term exactly? Please explain the key idea without overloading with notations.
5. Inverting $\Sigma$ is difficult in high-dimension posterior sampling problems. How do you get around this issue unless you are making further approximations like diagonal covariance or scalar? In that case, how do you sample from the posterior exactly?
6. One of the interesting parts of the theoretical results in this line of research is the characterization of the discretization error. But this seems to be out of scope of this paper according to the authors (line 299).
7. Theoretical results are valid provided you get a tight TV bound for both the consistency and diffusion samplers. Any ideas how to obtain these bounds?
8. Experimental results are compared with DPS which the authors claim to be state-of-the-art. Many of the recently developed methods [1,2,3,4] outperform DPS and there is no comparison with any stronger baselines. See citations below and references therein.
9. For evaluation which dataset is used. Is it the same subset used in the original paper and other follow-up papers? The results differ quite a lot if you pick some smaller subset.
10. The authors are encouraged to cite the published version of the papers where applicable (see for instance CCL+22).
References
1. Solving Linear Inverse Problems Provably via Posterior Sampling with Latent Diffusion Models.
2. Prompt-tuning latent diffusion models for inverse problems
3. Beyond First-Order Tweedie: Solving Inverse Problems using Latent Diffusion
4. Tweedie Moment Projected Diffusions for Inverse Problems
Technical Quality: 3
Clarity: 2
Questions for Authors: Please see the weakness section above.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes, the authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and valuable comments. Below we address your comments in a point-to-point manner.
**Examples for Assumption 1 on the forward model and the likelihood (point 1,2).**
- A typical scenario is where the measurement noise $\xi$ in the measurement model $y=\mathcal{A}(x^\star)+\xi$ is Gaussian, e.g., $\xi\sim\mathcal N(0,\sigma^2I_m)$. In such case, we have$$\mathcal L(x;y)=\log p(y|x^\star=x)=-\frac1{2\sigma^2}\Vert y-\mathcal A(x)\Vert^2 - \frac{m}2\log(2\pi\sigma^2).$$
Therefore Assumption 1 holds for *all* applications where the forward model $\mathcal A$ is almost everywhere differentiable, in particular, if $\mathcal A$ is linear. This includes inpainting, deblurring, super-resolution, phase retrieval, etc. It is noteworthy that most of the existing works (e.g. [CKM+23, SKZ+23, SZY+23, DS24, MSKV24], and [1-4] you listed), have the same differentiability assumption or requires the stronger condition that $\mathcal A$ is linear.
**The noise level $\eta$ (point 3).**
- The noise level $\eta$ in the DPnP framework is a hyperparameter used for annealing. It is not related to, and should not be confused with, the *measurement noise level* (the variance of $\xi$ in the measurement model $y=\mathcal A(x^\star)+\xi$). The annealing noise level at the $k$-th iteration, denoted by $\eta_k$ in Algorithm 1, is set manually, hence is always known.
**How to know the likelihood term (point 4).**
- As the example above (please refer to our response to point 1 and 2) demonstrates, the likelihood can be computed if the forward model $\mathcal{A}$ and the noise distribution is known. This setting is adopted by most of the existing works (please refer to our response to point 1,2), including [1-4] you listed (in fact, most of them assumed $\mathcal{L}(x;y)=-\frac1{2\sigma^2}\Vert y-\mathcal A(x)\Vert^2 + \rm const$, which is a special case of ours with Gaussian measurement noise). It might also be helpful to note that our log-likelihood function $\mathcal L$ is fully, simply determined by the measurement model, and should not be confused with $\log p(y|x_t)$, which was also called likelihood by some of the previous works.
**Computation cost of inverting $\Sigma$ (point 5).**
- While most of the existing works (including [1-4]) simply assumed (implicitly or explicitly) $\Sigma=\frac1{\sigma^2}I$ is scalar (please refer to our response to point 4), it is possible to handle general $\Sigma$ with standard numerical tricks in our work. For example, one can pre-compute, *once and for all*, the vector $A^\top\Sigma^{-1}y$ (which amounts to solving a linear equation) and a prefactorizization of the PSD matrix $A^\top\Sigma^{-1}A$. Using such precomputed information, each iteration only needs to compute several matrix-vector products, which has a negligible computation cost compared to Neural Function Evaluations. The cost of computing the prefactorization of $A^\top\Sigma^{-1} A$ has also been observed to be acceptable [SVMK22, KEES22]. Another option is to use MALA for the PCS step as we did for general non-linear inverse problems, so that there is no need to invert any matrix.
**Discretization error and TV bound for subsamplers (point 6,7).**
- As Theorem 2 indicates, discretization error affects our algorithm only through total variation error of subsamplers DDS and PCS, so we respond to point 6 and 7 together. The reason we did not include an analysis of TV error of subsamplers is, as explained in the paper, that both subsamplers can be analyzed with existing techniques in the references we provided. For example, applying the results in [LWCC23], one may show that DDS-DDPM has total variation error $$\varepsilon_{\sf DDS-DDPM}\le \tilde C\frac{d^2}{\sqrt{T'}}+\tilde C\sqrt d\varepsilon_{\sf score},$$ where $\tilde C$ hides logarithmic dependence on parameters, $T'$ is the number of steps (cf. Algorithm 2), $\varepsilon_{\sf score}$ is score estimation error defined in [LWCC23]. Similar bound for DDS-DDIM can also be proved using the results therein. On the other hand, utilizing results on the convergence of MALA [CLA+21], one may prove that PCS has total variation error $$\varepsilon_{\sf PCS}\le C\exp(-C\sqrt{N}\eta^{-1/6}d^{-1/4}).$$
**Comparison with works other than DPS (point 8).**
- Thank you for the references. As we mentioned in General Response to All Reviewers, we compare our algorithm not only with DPS, but **also with more recent, highly competitive** algorithm like LGD-MC [SZY+23], and plan to add comparison with ReSample (pre version available in the rebuttal pdf). Note that comparison with such stronger baseline as we have done is actually not too common in the literature, given the highly active status of the field and the well-established position of DPS, e.g. in all the above references and [1-4] you listed, the strongest non-latent baseline is DPS (latent models are discussed in the next paragraph). In addition, none of these previous works come with a provable robustness guarantee, thus our algorithm is also of independent theoretical interest.
- Concerning [1-4], all of them except [2] are confined to linear inverse problems, while our focus is on general, non-linear inverse problems (e.g. phase retrieval and quantized sensing). [1-3] rely on the use of latent diffusion models ([2] further requires prompting the model) which is again a narrower setting than ours as discussed in General Response to All Reviewers. Unavailability of code also hinders comparison with [2-4].
**Whether the dataset is the same with DPS (point 9).**
- We have strictly followed the original DPS paper on the usage of dataset. We used the same dataset without picking smaller subset than DPS. Our experiments are done for different inverse problems, though, as our focus is on more non-linear problems. In doing so, we made our best effort to ensure fair comparison by fine-tuning all competing algorithms and listing explicitly our problem parameters in the paper.
---
Rebuttal 2:
Title: Thank you for your review. Have we addressed your concerns?
Comment: Dear Reviewer 43oG,
We've taken your initial feedback into careful consideration in our response. Could you kindly confirm whether our responses have appropriately addressed your concerns?
If you find that we have properly addressed your concerns, could you please kindly consider increasing your initial score accordingly? Please let us know if you have further comments.
Thank you for your time and effort in reviewing our work!
Many thanks, Authors
---
Rebuttal Comment 2.1:
Title: Discussion with Authors
Comment: The reviewer thanks the authors for the detailed response. The reviewer is satisfied with the clarifications in Q1, Q2, Q3 and Q9. However, the major concerns still remain. Regarding Q4, the term $\log p(y|x_t)$ is typically not computed and existing methods (e.g. DPS) approximate this using $\log p(y|E[x_0|x_t])$, which is similar to the term $L(x;y)$ used in this paper except the additive score function and a cross-term from the Tweedie's formula. Since the noise level $\sigma$ is anyway not known and needs to be tuned in stepsize, the reviewer suspects that the proposed method doesn't offer any major advantages over existing methods at the cost of more compute. This observation is also supported by experiments in Tables 2 and 3.
For Q5, how would you precompute $A^T\Sigma^{-1}y$ for inverse problems typically considered in practice, such as Gaussian deblur, motion deblur or super-resolution? What would be the storage space complexity in high-dimensional applications where images could be of size 1024x1024? The reviewer thinks that these issues have not been properly addressed in the paper.
For Q6 and Q7, there is no discussion regarding this important piece of information in the main paper, which would essentially help the reader when to choose this algorithm over others. Especially, the dependence on $d$ in $\varepsilon_{DDS-DDPM}$ seems problematic for large-scale applications with $d$ of the order $10^6$ as in recent state-of-the-art inverse solvers.
For Q8, the compared baselines are weak and do not adequately justify the claims of the paper. The reviewer was referring to more recently developed pixel space diffusion based inverse solvers such as TMPD or ReSample. The reviewer thanks the authors for providing some preliminary experiments on ReSample.
The reviewer will follow the guidelines in revising the score if needed after all the questions have been addressed properly.
---
Reply to Comment 2.1.1:
Title: Thank you for your response, and further clarification
Comment: Thank you for engaging with us! We are happy to hear that many of your concerns have been addressed successfully, and appreciate your detailed comments that provide us an opportunity to clarify further the remaining points.
Regarding your comments on Q4, we would like to clarify two points:
- Our approach does not involve approximating $\log p(y|x_t)$ as in the previous algorithms. Our $\mathcal L(x; y)$ is not an approximation to $\log p(y|x_t)$; it is simply the likelihood $\log p(y|x_0)$ (where $x_0$ is the ground truth signal), using the notation in DPS paper, which is assumed known in most of the previous works. Our approach deviates significantly from DPS; the reason we were able to bypass this approximation is that we directly tackle the whole posterior distribution $\propto p^\star(x_0)p(y|x_0) = p^\star(x_0) \exp(\mathcal{L}(x_0; y))$ using a split Gibbs sampler, which is made practical by developing proximal samplers for both $p^\star(x_0)$ and $\exp(\mathcal{L}(x_0; y))$ with diffusion and MALA.
- The measurement noise level $\sigma$ is assumed known or is a tunable parameter, which is consistent with most of the previous works (e.g. DDRM, DPS, LGD-MC, ReSample). As can be seen from the experimental results (including our rebuttal pdf), our algorithm demonstrates significant improvement for highly non-linear problems like phase retrieval, over recent works like LGD-MC and ReSample.
Regarding your comments on Q5, we would like to note that (i) it is a common choice as in many previous works (e.g. the references above) that $\Sigma$ is chosen as a scalar identity, in which case it is not necessary to perform a matrix inversion; (ii) The storage cost of prefactorization has been found to be managable (of $O(n)$ order) with memory efficient SVD in many practical inverse problems, e.g. denoising, inpainting, super resolution, deblurring, and colorization, cf. DDRM [KEES22]; (iii) If memory is really of concern, there is also the option to simply use MALA for the proximal consistency step that avoids direct inversion, which is still theoretically sound as our theory tolerates errors for both subsamplers.
Regarding your comments on Q6 and Q7, we would be happy to include more discussion in the final paper when space permits, which we suppressed in the submission and left a few references. Note that the dependence on $d$ in $\varepsilon_{\sf DDS-DDPM}$ can be further improved to $\tilde{O}(\sqrt{d/T}) + O(\varepsilon_{\sf score})$ using sharper results in [BDBDD24]. However, such dependency with $d$ is known to be tight and generally non-avoidable when plain diffusion models are used.
Regarding your comments on Q8, since there is only one day left before the discussion deadline, it is challenging to provide more experiments result in time before the discussion period ends. Nonetheless, we are committed to include more algorithm evaluation in the final version. We also want to provide a bit more discussion regarding the additional experimental results regarding the full evaluation of ReSample on FFHQ dataset, which is already in the rebuttal pdf. Therein, it can be seen that LGD-MC, one of the baseline in our original submission, is a competitive baseline with performance close to that of ReSample. Our algorithm demonstrates significant advantages over both, especially on the phase retrieval task. We expect similar conclusions will hold when we compare ReSample on other datasets/tasks.
Thank you again for your careful review. We appreciate your constructive feedback and are happy to discuss more. | Rebuttal 1:
Rebuttal: # General Response to All Reviewers
We would like to express our cordial thanks to all the reviewers for their careful review and constructive feedback. Below we address some common concerns raised by the reviewers. Our point-to-point response can be found in the separate rebuttal to each reviewer.
**Comparison with latent diffusion.** Our work is complementary (rather than parallel) to previous literatures on latent diffusion. It is mostly straightforward to incorporate latent diffusion models into our framework (it suffices to modify the Denoising Diffusion Sampler in our framework to a latent version), which will hopefully further allow our algorithm to take advantage of latent diffusion models. On the other hand, such pre-trained latent model is not always available for imaging tasks (e.g. the experiments of our work and many previous works [CKM+23, CLY23, MSKV24]), and our framework still works when only non-latent models are available. We opt to present only the non-latent version in the paper to better highlight the key novel ideas of our work (together with its theory) within limited pages and without compromising generality of settings, which is already quite lengthy. However, we agree that incorporating latent diffusion is a promising direction for future research.
**Experimental evaluation.** We compare our algorithm with the well-established DPS algorithm **as well as more recent, highly competitive algorithm** such as LGD-MC [SZY+23] which already demonstrates significant advantage over DPS. We abide to the experimental settings in these previous works with best efforts, and the size of testing dataset is kept strictly the same as that in DPS. Following the suggestions of reviewers, **we have also included a brief comparison with ReSample [SKZ+23] below and in the rebuttal pdf**, which we will try to make more complete in the final version. In addition, we would like to emphasize that none of the previous works have a provable robustness guarantee, thus our work is also of independent theoretical interest.
------------------------------
# Rebuttal pdf
Please find in attached our one-page pdf containing the additional experimental results cited in our responses. The settings of these results are explained below.
**Comparison with ReSample.** For reasons explained above, we compare our algorithm with the non-latent (also termed "pixel-based" in the [SKZ+23]) version of ReSample, and leave the comparison of latent versions for future work after we developed a latent version of our algorithm. For ReSample, we use DDIM with $T=1000$ steps, which is the same as all the other algorithms. We let ReSample run without stochastic resampling for the first $250$ steps as specified in the original paper. After that, we run stochastic resampling every $10$ steps as in the original paper. All other parameters, including the resampling noise factor $\gamma$, the (conjugate) gradient stepsize, early stopping criterion, etc., has been fine-tuned with reasonable effort for best performance. We evaluate its performance on the same set of inverse problems on FFHQ-1k dataset as in our paper, i.e. super-resolution, phase retrieval, and quantized sensing (problem parameters was given in our paper). **The results are shown in Table 1 in the rebuttal pdf.** We make a few comments on the results here.
- Non-latent ReSample does not work on phase retrieval, despite our considerable effort in trying different parameters. This may be accounted by the complicated non-linear, non-convex landscape of the phase retrieval objective, as in such case the hard consistency enforcing step in ReSample may easily stuck at local minimum. It is possible that using the latent version of ReSample may alleviate this issue, but as we said, this awaits future research.
- Overall, on the tasks where ReSample works, i.e., super-resolution and quantized sensing, the performance of ReSample is better than DPS and close to that of LGD-MC which we already compared with. All of DPS, LGD-MC and ReSample are surpassed by our algorithm in quantitative metrics.
**Demonstration of convergence.** We verify our theoretical claim of convergence of our algorithm on a Gaussian mixture model as a toy example. We consider the setting where the unconditional distribution $p^\star(x)$ is a 2D GMM given by
$$x = [x_1, x_2] \sim 0.6 \cdot\mathcal N([-3, -1]^\top, I_2) + 0.4 \cdot\mathcal N([1, 1]^\top, I_2).$$
We consider a linear inverse problem where $\mathcal{A}(x) = x_1-x_2$, and suppose the measurement $y=-0.5$. For simplicity of demonstration, we further consider the noiseless setting, so that the posterior distribution $p(x|y)$ is a degenerate distribution supported on the 1D line $\\{x\in\mathbb{R}^2: y=\mathcal{A}(x)\\}=\\{x\in\mathbb{R}^2: x_1-x_2=-0.5\\}$. **A depiction of the setting is given in Figure 2.(a) in the rebuttal pdf.** We run DPnP on this linear inverse problem assuming the unconditional score function is exactly known (which can be computed from the GMM expression). We use an annealing schedule with $K=120$, $\eta_0=0.5$, $\eta_K=10^{-4}$. **The distribution of $\hat x_k$, the $k$-th iterate of DPnP, is shown in Figure 1 in the rebuttal pdf.** We further compute the total variation distance between the distribution of $\hat x_k$ and the true posterior distribution. **The result of total variation error is shown in Figure 2.(b) in the rebuttal pdf.** It can be inferred clearly that the distribution of the iterates of DPnP eventually converges (in total variation) to the true posterior distribution.
Pdf: /pdf/063064b6e805f3a4ffa9f7a4ae673c948792a6b4.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
End-to-End Autonomous Driving without Costly Modularization and 3D Manual Annotation | Reject | Summary: This paper handles the costly modularization and 3D manual annotation in current end-to-end autonomous driving, which proposes an unsupervised pretext task to provide necessary environmental information, as well as a direction-aware training strategy to enhance the robustness in safety-critical steering scenarios.
The authors conduct comprehensive experiments in both open- and closed-loop evaluation benchmarks, which demonstrate the effectiveness in various metrics. Moreover, the improvements are obtained with much less resource cost and faster inference speed, which is surprising and impressive.
In addition, this paper gives in-depth discussion and performance comparison about the usage of ego status in the open-loop evaluation of nuScenes. The considerable improvement in the intersection rate with the road boundary, which is proposed in recent BEV-Planner, again proves the superiority of the designed pretext task.
Strengths: Overall, I am rather positive on this paper. In particular, I really like the motivation of this work that aims at finding a solution to relieve the heavy annotation and computation overload in current end-to-end autonomous driving. I believe this paper can inspire other works and facilitate this field. The strengths in this work include:
(1) Enough novelty. This paper introduces an innovative unsupervised pretext task to perceive the environment, which is completely different from other works that accumulate subtasks requiring massive 3D annotation and computation resources.
(2) Good performance. This paper demonstrates excellent performance and fast inference speed in both open- and closed-loop evaluation compared with other end-to-end methods. In specific on the challenging metric, i.e., intersection rate in BEV-Planner, the proposed approach surpasses other methods by a considerable margin. This clearly shows the effectiveness and advantages of the proposed method.
(3) Insightful analysis. The authors provide extensive experiments and analysis for the proposed method. I appreciate this. The experimental analysis with various ablation studies allows a better understanding of each module. Notably, the authors observe the different computation ways of open-loop evaluation metrics between ST-P3 and UniAD and provide performance comparison with different settings, showing the comprehensiveness.
(4) Good writing and organization. This paper is well-written and organized. Each section has a clear motivation. It’s easy to follow the ideas. I enjoy reading the paper.
Overall, I believe this paper is significant to the autonomous driving community because it shows new insights and directions in designing simple but effective E2EAD framework with SOTA performance.
Weaknesses: (1) In this work, the 2D ROIs are crucial for the designed pretext task. I noticed that the authors adopt the open-set 2D detector GroundingDINO to generate the ROIs. Then the results and discussion of using other third-party detectors should be presented.
(2) The proposed method is shown to be efficient with the unsupervised pretext task and self-supervised training strategy, which is nice. It is suggested the authors show the influence of the training data volume (e.g., 25% and 50%).
Technical Quality: 4
Clarity: 4
Questions for Authors: In the appendix, the authors show that the proposed method still achieves comparative performance even without backbone pretraining, while UniAD dramatically degrades without the pretrained weights of BEVFormer. What do you think are the reasons causing this?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: It is suggested to provide discussion of limitations and broader impact in the revision.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer YXrs:
We appreciate your careful review and thoughtful comments. We are encouraged and grateful that the reviewer found our approach to be well-motivated and insightful. Below, we address the concerns that were raised, and remain committed to clarifying further questions that may arise during the discussion period.
***
***Q1: In this work, the 2D ROIs are crucial for the designed pretext task. I noticed that the authors adopt the open-set 2D detector GroundingDINO to generate the ROIs. Then the results and discussion of using other third-party detectors should be presented.***
**A1**: Thanks for the constructive comment. As suggested, we have evaluated different methods for generating 2D ROIs, including two open-set 2D detectors (GroundingDINO and DE-ViT[1]) and a 3D detector MV2D[2] (where the 3D predictions are projected onto the camera plane to obtain 2D ROIs). As shown in #Rebuttal-PDF-Fig.3(a), using ROIs from GroundingDINO results in the best performance. This highlights GroundingDINO's effectiveness in generating high-quality 2D ROIs compared to DE-ViT and MV2D.
For further investigation, we assessed the 2D detection performance on nuScenes by matching the 2D ROIs with projected ground-truth 3D boxes. The PR and ROC curves are presented in #Rebuttal-PDF-Fig.3(b). GroundingDINO consistently outperforms the others, demonstrating its capability to perceive objects effectively in challenging driving scenarios. Notably, the 3D detector MV2D, which is trained on nuScenes data, does not show superior performance in either planning or 2D detection tasks. This suggests that current open-set 2D detectors, such as GroundingDINO, have significant potential for auto-annotating with prompt information and enhancing downstream tasks.
We will include these results and analyses in the revision. Thanks.
[1] Zhang X, et al. Detect every thing with few examples. In arXiv 2023.
[2] Wang Z, et al. Object as query: Lifting any 2d object detector to 3d detection. In ICCV 2023.
***
***Q2: The proposed method is shown to be efficient with the unsupervised pretext task and self-supervised training strategy, which is nice. It is suggested the authors show the influence of the training data volume (e.g., 25% and 50%).***
**A2**: Thanks for this thoughtful comment. As suggested, we trained our UAD model with different volumes of training data, and the results are presented in #Rebuttal-PDF-Tab.4. The comparison shows that even with only 50% of the data, our UAD outperforms the baseline models UniAD and VAD. Increasing the amount of data further enhances the model's performance. It is also observed that more challenging steering scenarios, such as turning right and left, offer greater potential for improvement. We believe that the performance in these challenging scenarios could reach new heights with the availability of more data. This will be a focus of our future research.
We will include the analysis and results in the revision. Thanks, again.
***
***Q3: In the appendix, the authors show that the proposed method still achieves comparative performance even without backbone pretraining, while UniAD dramatically degrades without the pretrained weights of BEVFormer. What do you think are the reasons causing this?***
**A3**: Thanks for this insightful comment. As discussed in #Reviewer-d489-Q1-A1, our framework can perform lossless information transmission from the regional perception of the environment to the downstream planning task. This endows our UAD with efficient utilization of training data to learn task-relevant knowledge, hence not requiring even backbone pretraining to provide initialization and prior information. Another possible reason lies in the balance of multi-task learning. The numerous optimization losses from preceding subtasks in UniAD would distract the optimization of the oriented planning task in the E2E model. This forces UniAD to load the pretrained weights of BEVFormer to initialize the upstream perception task, which decreases the training difficulty by reducing part optimization items. In contrast, our framework is simple with only five losses for optimization, of which three ones are explicit planning-relevant. The design thus guarantees that the model allocates more attention to the core planning goal without requiring pretrained weights, which is friendly for real-world deployment.
We will include the discussion and analysis in the revision. Thanks.
***
***Q4: It is suggested to provide discussion of limitations and broader impact in the revision.***
**A4**: Thanks for the helpful comment. As discussed in Sec. 4.5 of the manuscript, #Reviewer-d489-Q4-A4, and #Reviewer-3W73-Q4-A4, the coarse perception in our method may occasionally lead to inaccurate planning. However, the flexibility of our simple framework allows the easy integration of customized perception modules (e.g., 3D detection/mapping heads) and the implementation of post-processing pipelines. This adaptability addresses practical considerations in current autonomous driving applications.
Nevertheless, in this way, the need for costly 3D annotations and modular designs persists, posing challenges to the development of efficient end-to-end autonomous driving systems. We believe that in the future, redundant perception and prediction sub-tasks will be optimized or fused in an efficient manner for practical products. We hope our efforts contribute to accelerating this progress.
We will include the discussion in the revision. Again, thanks!
---
Rebuttal Comment 1.1:
Title: Post Rebuttal Comments
Comment: Thanks the authors for their detailed feedback. All my concerns have been addressed. I very like this paper due to its novelty, great performance and simplicity. I believe this is a potentially right way for end-to-end autonomous driving.
I also see other reviewers' comments and the authors' rebuttals, and think that the authors have done a good job to explain their work better.
Thus, I keep my original score (Very Strong Accept). | Summary: This paper addresses the limitations of current end-to-end autonomous driving models that still rely on modular architectures with manually annotated 3D data. The authors propose an unsupervised pretext task that eliminates the need for manual 3D annotations by predicting angular-wise spatial objectness and temporal dynamics. This is achieved through an Angular Perception Pretext that models the driving scene without the need for manual annotation. A self-supervised training approach is introduced to enhance the robustness of planning in steering scenarios. This strategy learns the consistency of predicted trajectories under different augmented views. UAD demonstrates significant improvements in performance over existing methods like UniAD and VAD in both open-loop and closed-loop evaluations. It achieves these improvements with reduced training resources (44.3% of UniAD) and faster inference speed (3.4× faster than UniAD).
Strengths: 1. The paper introduces UAD, an unsupervised autonomous driving framework that eliminates the need for costly 3D manual annotations, which is a significant departure from traditional modular approaches.
2. The Angular Perception Pretext is an innovative approach to spatial-temporal understanding without manual labeling, offering a new perspective on autonomous driving perception.
3. The experiments conducted are comprehensive, including both open-loop and closed-loop evaluations, which demonstrate the method's effectiveness across different scenarios. The paper provides a detailed comparison with state-of-the-art methods like UniAD and VAD, showcasing the improvements in performance metrics, which adds to the quality of the research.
Weaknesses: 1. UAD treats an entire sector as occupied when only a part of it contains an object. This seems imprecise. This could potentially lead to less accurate spatial understanding of the environment. In autonomous driving, overly coarse representations might result in the vehicle making less accurate decisions, such as unnecessary braking or incorrect path planning. Have the authors tried some open world segmentation models for more accurate spatial information?
2. The paper draft does not provide explicit evidence or analysis on whether UAD can indeed benefit from training on a larger scale of data. The authors could conduct experiments with varying sizes of datasets to empirically evaluate how performance metrics change as more data becomes available. This could provide insights into the benefits of scaling up.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. it is not explicitly stated whether UAD and UniAD use the exact same training and test split within the nuScenes dataset.
2. As UAD only uses basic obstacles for training. I wonder how will UAD reacts to traffic lights, lane lines, and policeman's gestures?
3. What is the BEV area designed in the experiment?
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: In the current draft, UAD might be limited to basic obstacle detection and does not extend to the interpretation of traffic signals.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer 3W73:
We appreciate your careful review and thoughtful comments. We are encouraged and grateful that the reviewer found our approach to be well-motivated and innovative. Below, we address the concerns that were raised, and remain committed to clarifying further questions that may arise during the discussion period.
***
***Q1: In autonomous driving, overly coarse representations might result in the vehicle making less accurate decisions, such as unnecessary braking or incorrect path planning. Have the authors tried some open-world segmentation models for more accurate spatial information?***
**A1**: Thanks for the insightful comment.
(1) In the context of human designed planning stack, we agree with the reviewer that inaccurate or overly coarse environment representations can lead to suboptimal or even unsafe vehicle behaviors. We argue that in the context of end-to-end driving stack, this assumption may no longer hold, as our methods have demonstrated the effectiveness, following standard evaluation protocol. We believe that the capability of lossless information transmission can compensate for such coarse representations, as discussed in #Reviewer-d489-Q1-A1.
(2) Different from sending sparse object queries to the planning head, as done in previous works like UniAD and VAD, our model passes regional angular queries to the downstream planning module instead of the predicted objectness probability. Angular queries provide a compact and comprehensive representation of the environment, ensuring lossless transmission of environmental information. In contrast, the object queries used in prior works may miss critical environmental details due to the limitation of detection accuracy.
(3) In the ablation experiment presented in Tab.6 of the manuscript, we explored predicting dense pixel-wise segmentation masks. This comparison may align with and can address the reviewer’s concern. As shown in the results, more precise predictions did not yield better outcomes. We believe this is because the planning module receives more comprehensive and understandable environmental information from angular queries rather than segmentation masks.
(4) Following the suggestion, we also experimented to generate 2D segmentations within GroundingDINO's 2D boxes using the open-set segmentor SAM. We retrained our planner with these 2D segmentations, and the comparison is presented in #Rebuttal-PDF-Tab.2/Fig.2. The results indicate that more fine-grained 2D segmentations offer only marginal performance gains compared with using 2D boxes from GroundingDINO, which further supports our hypothesis. For economic and efficiency reasons, using 2D boxes is more favorable for real-world deployment.
We will include the results and analysis in revision. Thanks, again.
***
***Q2: The authors could conduct experiments with varying sizes of datasets to empirically evaluate how performance metrics change as more data becomes available. This could provide insights into the benefits of scaling up.***
**A2**: Thanks for this constructive comment. As suggested, we trained our UAD model with different volumes of training data, and the results are presented in #Rebuttal-PDF-Tab.4. The comparison shows that even with only 50% of the data, our UAD outperforms the baseline models UniAD and VAD. Increasing the amount of data further enhances the model's performance. It is also observed that more challenging steering scenarios, such as turning right and left, offer greater potential for improvement. We believe that the performance in these challenging scenarios could reach new heights with the availability of more data. This will be a focus of our future research.
We will include the analysis and results in the revision. Thanks.
***
***Q3: It is not explicitly stated whether UAD and UniAD use the exact same training and test split within the nuScenes dataset.***
**A3**: Following the standard protocol in previous works (e.g. UniAD, VAD), we perform training on the 700 scenes and evaluate on the 150 scenes of the nuScenes dataset. Notably, with our proposed unsupervised pretext task, the training process requires no human annotations in nuScenes (e.g., 3D bounding boxes). To make it more clear, we will add clarification on this point in revision. Code, models and configs will be released upon publication. Thanks, again.
***
***Q4: As UAD only uses basic obstacles for training. I wonder how will UAD reacts to traffic lights, lane lines, and policeman's gestures?***
**A4**: Thanks for the thoughtful comments. We follow the paradigm of our baseline UniAD, which does not explicitly send traffic light states or police gestures to the model. Our aim is motivating the E2E planner to understand the world from the data itself, i.e., data-driven learning. Surprisingly, even without the explicit input of such information, #Rebuttal-PDF-Fig.1 shows that our model can correctly interpret the red traffic light and brake the vehicle accordingly, as well as adjusting the driving direction when approaching the lane lines.
We also agree with the reviewer that incorporating such information could help the model understand and react to changes in the environment more easily. An intuitive approach is to transform the traffic light state or other information into queries that interact with the ego planning query. We plan to explore this in future research. Thanks again for the insightful feedback.
***
***Q5: What is the BEV area designed in the experiment?***
**A5**: As mentioned in Section 4.1 of the manuscript, we adopt the view transformer from BEVFormer as the BEV encoder, which sets the BEV region to -51.2m to 51.2m in both the x and y directions. The default BEV resolution is 200 x 200, which aligns with our baseline UniAD. Tab.14 in the appendix of the manuscript lists the performances at different resolutions. We will clarify this in the revision. Thanks.
---
Rebuttal Comment 1.1:
Comment: Thank you for answering the questions. All my concerns have been addressed. I will keep my initial rating.
---
Reply to Comment 1.1.1:
Title: Thank you so much for the feedback!
Comment: Dear Reviewer 3W73,
Thank you again for your kind review and constructive comments. Your suggestions greatly strengthen our work and improve our manuscript's quality. We will include them in the revision.
Best regards,
Authors | Summary: This paper aims to discard the requirement of 3D manual annotation in end-to-end autonomous driving by the proposed angular perception pretext task. Besides, this paper proposes a direction-aware learning strategy consisting of directional augmentation and directional consistency loss. Finally, the proposed method UAD achieves superior performance in both open-loop and closed-loop evaluation compared with previous vision-based methods with much lower computation and annotation costs.
Strengths: 1) This paper aims to discard the requirement of 3D manual annotation in end-to-end autonomous driving, which is important and meaningful for training larger end-to-end autonomous driving models at scale. I totally agree and appreciate this.
2) This paper proposes a direction-aware learning strategy, which will further improve prformance by self-supervised learning.
3) UAD is evaluated in both open-loop and closed-loop evaluation and different metrics (UniAD, VAD, and BEV-Planner).
Weaknesses: 1) There is a lack of explanation on how to use ego status. Besides, there should be more experiments about the performance of UAD without ego status.
2) There is a lack of explanation on how many frames are fused and what method is used for temporal fusion (sliding window or streaming).
3) The angular perception pretext task introduces 2D detection information for perception learning on BEV features. According to Table 6, it seems that angular design is very important for UAD. However, BEV-Planner can achieve a not-so-bad result without any 3D manual annotation and 2D detection information. Therefore, verifying the effectiveness of angular design on BEV-Planner will be more convincing.
4) For the proposed angular design, I do not think is very novel. Because the effectiveness of the 2D detection auxiliary head has been verified in BEVFormerV2 [1] and StreamPETR [2], the UAD just converts 2D object detection to BEV segmentation.
5) For the proposed direction-aware learning strategy, although it is useful, it is a method of data augmentation in EBV space, which I do not think is very novel.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1) I have serious concerns about the use of ego status and temporal fusion. If UAD uses GT ego status when fusing many frames, because there is no cumulative error in the ego trajectory, it will lead to falsely high performance. A better solution is to use predicted ego status instead of GT ego status (such as SparseDrive [3]).
2) Why does the use of a 3D detection head lead to a decrease in performance? What will be the result if UAD uses the online mapping head?
3) What is the performance if UAD uses a 2D detection auxiliary head instead of an angular design? For example, angular queries can be obtained by 2D detection auxiliary head and depth prediction (such as Far3D [4]).
[1] BEVFormer v2: Adapting Modern Image Backbones to Bird's-Eye-View Recognition via Perspective Supervision
[2] Exploring Object-Centric Temporal Modeling for Efficient Multi-View 3D Object Detection
[3] SparseDrive: End-to-End Autonomous Driving via Sparse Scene Representation
[4] Far3D: Expanding the Horizon for Surround-view 3D Object Detection
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: See the weakness and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer UZRG:
We thank the reviewer for providing helpful comments on our work. We provide our responses below to address reviewer’s concerns, and remain committed to clarifying further questions that may arise during discussion period.
***
***Q1: Lacking explanation on how to use ego status, and experiments without ego status.***
**A1**: As mentioned in Sec.4.2 of the manuscript, we follow previous works (VAD and BEVPlanner) to embed ego status into planning module. In specific, we concatenate the ego states with ego query along channel dimension. Besides, we have claimed that **only** the rhombus mark in Tab.1 of manuscript denotes the version using ego status. The other experiments are all conducted without ego status for fair comparisons. We will clarify this in revision. Thanks.
***
***Q2: Lacking explanation on how many frames are fused and temporal fusion.***
**A2**: It should be noticed that we have mentioned using the view transformer from BEVFormer to encode BEV features in Sec.4.1 of manuscript, which performs temporal fusion with BEV feature of the last frame through deformable attentions. We will clarify this in revision. Thanks.
***
***Q3: Applying angular design on BEVPlanner is more convincing.***
**A3**: (1) It's noted that BEVPlanner claims good performance with only ego status due to dominant "go straight" scenarios in nuScenes. However, in challenging steering scenarios, using solely ego status is insufficient, as also proved in BEVPlanner. In contrast, our UAD achieves superior performance across all scenarios, even without ego status.
(2) Our **ego status version** can be seen as BEVPlanner with our module, since there is no big difference in the planning head. We wanted to follow reviewer's advice to apply our design on BEVPlanner. However, (a) their code was not released when we submitted our paper; and (b) although some code recently released, there was no README in repository until now, making it hard to reproduce official results. We will attempt this when complete code and training guidelines are released. Thanks.
***
***Q4: Angular design is not very novel as a 2D detection auxiliary head.***
**A4**: The authors would like to re-state the core idea and contributions of the paper. Our work mainly aims to clarify (1) the unnecessity of complex auxiliary tasks for E2E models and (2) E2E models can achieve impressive results even without 3D annotations. Designing simple unsupervised pretexts embodies the implementation of our spirit, where specific module structure in the paper is not the only choice. Hence, simplicity and effectiveness are much more crucial to our work.
Moreover, our work fundamentally differs from papers that utilize 2D detection priors for 3D detection tasks. In our case, 2D detection is applied offline and only used to generate labels for angular perception, which is still conducted in 3D BEV space. This is entirely different from adding an auxiliary 2D detection head. Besides, the predicted objectness is not passed to downstream tasks. We will clarify this in revision. Thanks.
***
***Q5: Direction-aware learning is not very novel as a data augmentation.***
**A5**: It should be noticed that we greatly alleviated the previous unsolved data bias issue in E2E driving problems by this design. As noted in Fig.2 of BEVPlanner, ego trajectories from E2E training set have highly skewed distributions with most simple go-straight scenarios. Yet, there are no **efficient** solutions in recent E2E works for such severe data bias from the perspective of data augmentation. We then design direction-aware learning to attempt at solving the bias in an efficient data augmentation manner. Despite its simplicity, we believe such finding and strategy can greatly benefit the E2E driving research community. Thanks.
***
***Q6: Using predicted ego status is better than GT ego status for causing false high performance.***
**A6**: Thanks for this constructive comment. (1) For the non-interactive open-loop evaluation in nuScenes, we follow the way using ego status of BEVPlanner to ensure a fair comparison. (2) The ego status for the closed-loop evaluation in CARLA indeed uses the predicted values provided by the IMU sensor of the ego car, which satisfies the requirement of reviewer. Please refer to #Reviewer-XcsQ-Q1-A1 for updated closed-loop results with ego status. Thanks.
***
***Q7: Why 3D detection head lead to performance decrease? Applying online mapping head?***
**A7**: (1) The optimization of an additional 3D detection task might distract planning-oriented learning, which is also claimed by UniAD. For instance, the planning performance when training five task heads jointly is much lower compared with planning-oriented training, which transfers features from preceding subtasks to planning (#0 vs #12 in Tab.2 of UniAD). For more discussion, please refer to #Reviewer-d489-Q1-A1. (2) As suggested, we arranged an additional map head to our model as shown in #Rebuttal-PDF-Tab.3, which doesn't improve planning performance.
Notably, in our manuscript, we highlight the "convenience" of integrating traditional perception tasks with our model, not to enhance planning quality but to demonstrate flexibility. Our work aims to, and successfully proves, that these typical perception tasks are not necessary for E2E planning. Thanks.
***
***Q8: Using a 2D detection auxiliary head instead of angular design?***
**A8**: As discussed in #Reviewer-d489-Q1-A1, lossless information transmission should be the core principle of E2E models. Passing 2D object queries to planning module is no different from previous works using 3D object queries, which miss important environmental information outside the scope of human-defined object categories. Hence our model performs angular perception primarily to summarize comprehensive environmental information from BEV feature, rather than using its output as in previous works. We will clarify this in revision. Thanks.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. However, after reading the author's rebuttal, some issues still need to be addressed, especially the experiments.
1. Experiments without ego status are required. The BEV-Planner contains relevant experiments without ego status.
2. How many frames are fused? Is streaming time fusion used to fuse many frames?
3. I don't think angular perception is very different from 2D detection and depth estimation. Angular perception is just projecting the 2D detection box into BEV, which only involves the additional operation of extrinsic parameter transformation.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer UZRG
Comment: Thank you for taking the time to offer suggestions amidst your busy schedule, yet we still need to point out several misunderstandings by the reviewer about our rebuttal. The detailed responses are provided below, hoping to address your concerns adequately:
***
***Discussion-Q1: Experiments without ego status are required. The BEV-Planner contains relevant experiments without ego status.***
**Discussion-A1**: As clarified in #Rebuttal-Q1-A1, **only the rhombus mark** in Tab.1 of the manuscript denotes the version using ego status. The other experiments are **all** conducted **without ego status** for fair comparisons. Our planner achieves the best open-loop performance in nuScenes for both w/. and w/o. ego-status settings, as shown in Tab.1 of the manuscript. We will clarify this in the revision. Thanks.
***
***Discussion-Q2: How many frames are fused? Is streaming time fusion used to fuse many frames?***
**Discussion-A2**: As clarified in #Rebuttal-Q2-A2, we use the view transformer from BEVFormer[1] to encode BEV features, which performs temporal fusion with the BEV feature of the last frame through deformable attentions. In other words, there are total 2 frames for temporal fusion, i.e. t-1 and t. For more details, please refer to [1]. We will clarify this in the revision. Thanks.
[1] Li Z, et al. Bevformer: Learning bird’s-eye-view representation from multi-camera images via spatiotemporal transformers. In ECCV 2022.
***
***Discussion-Q3: I don't think angular perception is very different from 2D detection and depth estimation. Angular perception is just projecting the 2D detection box into BEV, which only involves the additional operation of extrinsic parameter transformation.***
**Discussion-A3**: In #Rebuttal-Q4-A4, we have detailed the fundamental differences between our angular perception and 2d detection/depth estimation, as well as the core idea and contributions of our work. It should be noted that the projection of 2D boxes is only used to generate the pseudo labels for training our pretext task, instead of 'detecting' the object areas for downstream tasks. **The processes of projection and objectness prediction are not performed during inference, since the angular queries have provided compact and comprehensive environmental knowledge**. We will clarify this in the revision. Thanks.
***
Again, we thank the reviewer for the time and efforts, and remain committed to clarifying further questions.
---
Rebuttal 2:
Title: Discussion Invitation
Comment: Dear Reviewer UZRG,
We thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work.
Sincerely,
Authors.
---
Rebuttal 3:
Comment: Thanks for your response. I only raise my score to 4. We are still confused about the design of angular perception, and this paper needs a lot of revisions to clearly describe these details, especially the use of ego status and temporal information. | Summary: The article proposes an end-to-end (E2EAD) autonomous driving method called UAD (Unsupervised Autonomous Driving), which achieves autonomous driving on a visual basis without the need for expensive modular design and 3D manual annotation. UAD aims to overcome the limitations of existing E2EAD models that mimic traditional driving stack module architectures. These models typically require carefully designed supervised perception and prediction subtasks to provide environmental information for planning, which require a large amount of high-quality 3D annotation data and consume significant computational resources during training and inference processes.
Strengths: 1. The method is novel and a good direction for exploring the end-to-end model's dependence on 3D manual annotation.
2. The paper has rich ablation experiments to demonstrate the effectiveness of the method.
3. The paper has advantages in both speed and accuracy compared to previous articles
Weaknesses: 1. The paper lacks sufficient comparison with other methods(such as Interfuser[1],DriveAdapter[2],DriveMLM[3],VADv2[4]), which acheve better closed-loop performance on carla town 05 long benchmark.
2. In Table 6, the angular design brings too much gain, especially in terms of collision rate (from 1.37% to 0.19% ). It's strange. The angular design is about how to encode the sensor data and should not have so much impact on collision rate.
3. Angular design is widely used in BEV-related works, like PolarFormer[5]. The paper lacks citation about these works.And it's not proper to regard it as the main contribution of the work.
4. The paper lacks a part to introduce the use of 2D tasks as auxiliary tasks.
[1] Hao Shao, Letian Wang, Ruobing Chen, Hongsheng Li, and Yu Liu. Safety-enhanced autonomous driving using interpretable sensor fusion transformer. In Conference on Robot Learning, pages 726–737. PMLR, 2023
[2] Xiaosong Jia, Yulu Gao, Li Chen, Junchi Yan, Patrick Langechuan Liu, and Hongyang Li. Driveadapter: Breaking the coupling barrier of perception and planning in end-to-end autonomous driving. 2023
[3] Wenhai Wang, Jiangwei Xie, ChuanYang Hu, Haoming Zou, Jianan Fan, Wenwen Tong, Yang Wen, Silei Wu, Hanming Deng, Zhiqi Li, et al. Drivemlm: Aligning multi-modal large language models with behavioral planning states for autonomous driving. arXiv preprint arXiv:2312.09245, 2023
[4] Shaoyu Chen, Bo Jiang, Hao Gao, Bencheng Liao, Qing Xu, Qian Zhang, Chang Huang, Wenyu Liu, and Xinggang Wang. Vadv2: End-to-end vectorized autonomous driving via probabilistic planning. arXiv preprint arXiv:2402.13243, 2024
[5] Yanqin Jiang, Li Zhang, Zhenwei Miao, Xiatian Zhu, Jin Gao, Weiming Hu, Yu-Gang Jiang. PolarFormer: Multi-camera 3D Object Detection with Polar Transformer. AAAI 2023
Technical Quality: 3
Clarity: 4
Questions for Authors: Why use 2D boxes instead of 2D segmentation of objects? Is it more reasonable to use 2D segmentation labels to link the points in BEV space with the points in image space?
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: 1. Incompele comparison on CARLA benchmark.
2. Lack citation about angular design.
3. Angular design is not novel.
4. Some of the experiment results are not that convincing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer XcsQ:
We thank the reviewer for providing thoughtful comments on our work. We provide our responses below to address reviewer’s concerns, and remain committed to clarifying further questions that may arise during discussion period.
***
***Q1: The paper lacks sufficient comparison with other methods such as ...VADv2.***
**A1**: Thanks for this helpful comment. Recent works, including VAD and some papers mentioned by the reviewer, don't provide their code and models for CARLA closed-loop evaluation. Therefore, in the comparison of the manuscript, we **don't use ego-status** (in the planning module) as we want to guarantee fair comparison with our closed-loop baseline TransFuser, which releases the code and is a standard closed-loop framework.
We then construct fair comparisons with mentioned works by providing a **using ego-status version** in #Rebuttal-PDF-Tab.1, which again proves its effectiveness. Particularly, our model outperforms all mentioned methods except the driving scores of VADv2. Notably, no 3D manual annotation is needed with such a SOTA performance. Since VADv2 estimates multi-modal ego trajectories while our work (also our baselines) predicts single-modal one, we believe our UAD can also benefit from the multi-modal design, as it has been proved by motion prediction works. Yet this study is not the topic of our work, we leave it to future study.
We will include the results in revision. Thanks.
***
***Q2: The angular design should not have so much impact on collision rate.***
**A2**: Thanks for the comment, yet we respectfully disagree. As discussed in #Reviewer-d489-Q1-A1, the E2E frameworks provide potential of lossless information transmission from sensors to planning head, compared with previous modularized design. However, planners in most prior works perceive the environment from sparse object queries with limited and manually defined categories (e.g., vehicles). Any information not covered by these categories or related perception subtasks is inevitably lost. Moreover, the errors from upstream sub-tasks like object detection would directly hinder the performance of downstream planning. In contrast, our angular queries contain compact and lossless information about the driving scene, motivating the E2E model to make more intelligent planning decisions, especially after training with large-scale datasets.
In addition to the analyses above, we demonstrated our claim with experiments, as Tab.4 and Tab.6 in the manuscript have clearly evidenced that both our spatial and temporal perception designs can decrease the collision rate. We will include the analysis in the revision. Thanks.
***
***Q3: Angular design is widely used like PolarFormer and not proper as main contribution.***
**A3**: Thanks for the comment. To the best of our understanding, our angular design non-trivially differs with previous works like PolarFormer, which builds grid-wise BEV feature by **projecting** polar coordinates instead of Cartesian coordinates to 2D images. Yet, the essence of our angular design, is not the process of building BEV feature. Instead, we aim to explore effective strategies to **losslessly transfer the information** of the BEV feature for planning, i.e., how to **summarize** the driving scene from BEV grids. We believe our angular design is both compact and efficient, which is compatible with different BEV construction paradigms such as LSS, PolarFormer or BEVFormer. How to build better BEV features, and how to pass BEV features in a compact and lossless manner to downstream tasks, are orthogonal research directions in the larger E2E driving context. We believe our core novelty is the spirit of relaxing costly modularizations and removing manual annotation. Again, we thank the reviewer for this comment and will add clarification in revision to make it more clear.
***
***Q4: The paper lacks a part to introduce 2D auxiliary tasks.***
**A4**: Thanks for the helpful comment. With the spatial relationship between the multi-view images and BEV space, a few works exploit 2D tasks to provide auxiliary clues for accurate BEV perception. For instance, MV2D exploits 2D detectors to generate object queries conditioned on rich image semantics, which help to recall objects in camera views and localize 3D objects. Far3D similarly utilizes a 2D detector and a depth predictor to generate reliable 2D box proposals and their corresponding depths, which are then concatenated and projected into 3D space as object anchors to predict the locations.
Notably, the intermediate results from the auxiliary tasks, e.g., 2D boxes, are used for the downstream tasks in the aforementioned works. But differently, the mask from our introduced angular perception is not used, since we tend to losslessly transfer the environment information from the pretext to the planning module.
We will include the discussion in revision. Thanks.
***
***Q5: Why use 2D boxes instead of 2D segmentation of objects?***
**A5**: Thanks for the constructive comment. (1) The design concept of our work is to provide crucial environmental knowledge while avoiding heavy annotation and computation costs. Compared with easily achievable 2D boxes, the 2D segmentations undoubtedly bring more overload. (2) Since the BEV grids are sparsely distributed, there are no clear differences after projecting the grids in a sector to the 2D boxes or segmentation masks and then summarizing their information as pseudo labels.
As suggested, we also try to generate 2D segmentations within the 2D boxes by the open-set segmentor SAM. Then we retrain our planner with the 2D segmentations, and the results are listed in #Rebuttal-PDF-Tab.2/Fig.2. It shows that more fine-grained 2D segmentations only bring small performance gains compared with the one using 2D boxes. For economic and efficiency reasons, applying 2D boxes is more friendly for real-world deployment.
We will include the results and discussion in revision. Thanks.
---
Rebuttal 2:
Title: Discussion Invitation
Comment: Dear Reviewer XcsQ,
We thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work.
Sincerely,
Authors. | Rebuttal 1:
Rebuttal: **Dear Reviewers,**
We thank all reviewers for their very careful review, valuable comments and suggestions on our manuscript. We have worked our best to address all concerns with analyses and experiments according to the comments from reviewers.
In specific, for **Reviewer d489**'s comments, we analyze the working reasons of our angular perception (**Q1**) and provide more insights about the direction-aware learning strategy (**Q2**). In addition, we clarify the date settings in our experiments following the baselines (**Q3**). Furthermore, we show the implicit perception capability for traffic lights or signals by visualization, and discuss the extension to real-world scenarios and applications (**Q4**).
For **Reviewer XcsQ**'s comments, we update the closed-loop evaluation performance by adding ego status for fair comparisons with the mentioned planners (**Q1**). We analyze the working reasons of our angular design to collision rate (**Q2**) and clarify the difference compared with PolarFormer (**Q3**). We also review the use of 2D auxiliary tasks and explain the difference with our pretext (**Q4**). In addition, we explore the influence of 2D ROI form by replacing boxes with segmentations (**Q5**).
For **Reviewer UZRG**'s comments, we clarify the use of ego status (**Q1**,**Q6**), historical frames and temporal fusion (**Q2**). We again explain the working reason of BEVPlanner in nuScenes open-loop evaluation (**Q3**). We also discuss the difference between our angular design and 2D detection auxiliary head (**Q4**,**Q8**), the significance of direction-aware learning (**Q5**). Moreover, we analyze the influence of introducing 3D detection head or online mapping head (**Q7**).
For **Reviewer 3W73**'s comments, we explain the design concept of angular perception for regional environmental knowledge, and try using segmentations as ROIs for precise spatial information (**Q1**). We explore the influence of different volumes of training data (**Q2**). We also clarify the date settings in our experiments following the baselines (**Q3**). Moreover, we show the implicit perception capability for traffic lights or signals by visualization (**Q4**), and detail the configurations of the BEV area following the baseline UniAD (**Q5**).
For **Reviewer YXrs**'s comments, we try different methods to generate 2D ROIs and analyze the performances (**Q1**). We also explore the influence of different volumes of training data (**Q2**). Besides, we analyze the reason of achieving excellent performance even without backbone pretraining, compared with the baseline UniAD (**Q3**). Finally, we discuss the limitations and broader impact of our work (**Q4**).
**For more details, please check individual responses**. We thank all reviewers for their time and efforts! We hope our responses have persuasively addressed all remaining concerns. Please don’t hesitate to let us know of any additional comments or feedback.
**Note that we include all additional experimental results in the one-page PDF submitted along with this global rebuttal response**.
Pdf: /pdf/e60e30847c6efce3d0af49f211fdcab44bc558ee.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper proposes a new e2e driving model named UAD. In this paper, the authors propose an unsupervised method for effective training and inference of the e2e model. The paper mainly has two contributions: 1. It designs an angular-wise perception module. In this module, the authors directly project 2D GT labels onto the BEV and define a new BEV map label for perception training. This module design can efficiently reduce complexity and preserve effectiveness. 2. The authors propose a direction-aware method to augment the trajectory training and use consistency loss for further supervision.
The final results show the effectiveness and soundness of the proposed method.
Strengths: 1. The idea of the angular-wise perception module is interesting. It can utilize a huge amount of 2D annotated autonomous driving datasets to train the e2e model, which removes the restriction of the limited number of 3D annotations.
2. The proposed direction-aware method for trajectory prediction is also meaningful since it can add additional consistency loss for better supervision.
3. The results are promising and the efficiency improvement is impressive.
Weaknesses: 1. The design of the angular-wise perception module is a little bit counter-intuitive to me. From my perspective, it works because (1) It can greatly enlarge the size of the training data. (2) It makes the perception task simpler, thus the model can do it better (knowing an object in a direction is much simpler than detecting the BBox). (3) The efficiency improves because of the light design of the perception task. I think it can be simply treated as a low-resolution object detection task without depth. Except for (1), I do not understand why it can improve the final results.
2. The design of the direction-aware planning training strategy is effective but simple. I cannot see too much insight here. Could the authors provide more insight into this? Or is this just an engineering trick?
3. For experiments, do you use exactly the same training data for both the open-loop and close-loop experiments? If yes, could you provide more analysis about why the results are surprisingly good even if you use a low-resolution detection module? If not, could you provide details about the pertaining data info? Can the impressive results come from leaked data? How to prevent testing data leakage?
4. In real-world applications, when the vehicle plans its path, it needs to "see" a lot of things, including objects and some other elements, like traffic lights, traffic signals, or some special marks on the road. How can you handle these elements based on your model? For example, can your model understand traffic signals and see red traffic lights? If not, how to extend your model to real-world scenarios and applications?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weakness part for questions and suggestions.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: My main concern with this paper mainly comes from the insight of the angular-wise perception module. I still cannot understand why it works except for the huge amount of additional training data. Provide more details for this.
Besides, how to deploy and extend the model to real-world cases, that requires depth information (e.g., ) or semantic information (e.g., traffic light)? The authors should provide more discussion about this to make the work more promising.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer d489:
We thank the reviewer for providing valuable and thoughtful comments on our work. We provide our responses below to address the reviewer's concerns, and remain committed to clarifying further questions that may arise during the discussion period.
***
***Q1: The working reasons of the angular-wise perception module to improve final results.***
**A1**: Thanks for this insightful comment. Besides all the advantages mentioned by the reviewer, there are two main reasons that we believe can explain the effectiveness of our design:
(1) **Lossless Information Transmission.** The essential difference between the E2E paradigm and the previous modularized design, is that E2E approaches enable the possibility to transmit information without loss, from the input sensors to the planning module in the model architecture. However, planners in UniAD or most other works perceive the environment from sparse object queries with limited and manually defined categories (e.g., vehicles, pedestrians, maps). Any information not covered by these categories or related perception subtasks is inevitably lost. Moreover, the errors from upstream sub-tasks like object detection would directly hinder the performance of downstream planning. On the contrary, the angular queries in our method contain compact and lossless information about the driving environment, which benefits the E2E model in making more intelligent planning decisions, especially after training with large-scale datasets.
(2) **Less is More.** It's a common wisdom that the training of deep learning models is a process of balancing losses of different tasks towards Nash Equilibrium. Complicated designs from previous works make their training process unnecessarily complex and fragile. For example, UniAD needs careful pretraining on early modules. In addition, more sub-tasks can easily distract the E2E model from the planning objective. The results of previous work UniAD evidence this claim. Particularly, the introduction of "Track & Map & Motion & Occ" tasks in a multi-task learning (MTL) manner even degrades the planning performance (#0 v.s. #10 in Tab.2 of the UniAD official paper). Our simpler spatial perception design thus guarantees the model to allocate more attention to the core planning goal, which we believe is a representative practice of Occam's Razor.
We will include the discussion in the revision. Again, thanks.
***
***Q2: The direction-aware training strategy is effective but simple, it seems not having too much insight. Could the authors provide more insight into this?***
**A2**: The reason that the direction-aware training strategy being extremely effective is that we greatly alleviated the previous unsolved data bias issue in E2E driving problems. As noted in BEVPlanner[1] (see Fig.2 of BEVPlanner), ego trajectories from E2E training set have highly skewed distributions: in most scenarios the self-driving vehicles simply go straight. Yet, there are no **efficient** solutions in recent E2E works to mitigate such severe data bias problem from the perspective of data augmentation. We then design the direction-aware training strategy, which attempts at solving the data bias problem in an efficient data augmentation manner. Despite its simplicity, we believe such finding and the corresponding strategy can greatly benefit the E2E driving research community.
[1] Li Z, et al. Is ego status all you need for open-loop end-to-end autonomous driving. In CVPR 2024.
***
***Q3: (1) If the training data for both open-/close-loop experiments are the same? (2) Why the results are surprisingly good even if using a low-resolution detection module? (3) If the impressive results come from leaked data?***
**A3**: (1) The evaluation of open-/closed-loop performance is performed on two **different** benchmarks. In particular, nuScenes only supports open-loop evaluation, and CARLA supports closed-loop testing. Therefore, following the standard protocol in previous works (e.g. UniAD, VAD), we train the model on corresponding data respectively, i.e., train with nuScenes for open-loop evaluation and with CARLA for closed-loop one.
(2) As discussed in Q1, we believe lossless information transmission and the simpler structure for optimization are essential factors for our good improvement. Besides, both **spatial** angular perception and **temporal** latent world-model are the impulses of impressive results.
(3) We adopt the same training and testing pipelines following our baseline UniAD, which we promise no data leakage. Code and models for both open- and closed-loop experiments will be released.
We thank the reviewer for this comment and will add clarification in revision to make it more clear.
***
***Q4: If the proposed planner could understand traffic lights or signals, and how to extend to real-world scenarios and applications?***
**A4**: Thanks for the thoughtful comments. Recent E2EAD models are mostly designed to implicitly perceive and understand object-irrelevant elements like traffic lights and signals through data-driven learning. We follow this paradigm and illustrate the capability in the #Rebuttal-PDF-Fig.1. The visualization shows that our UAD can correctly understand the red traffic light and then brake the vehicle.
As discussed in Sec. 4.5 of our manuscript, it's easy to apply customized tasks like object detection, lane segmentation, also the mentioned traffic light detection that are required in application products under current auto-driving technical conditions to our framework. The intermediate results can be used for post-processing and refinement of the output trajectories from the E2E planner. However, we believe that someday the redundant perception and prediction sub-tasks will be optimized or fused in an efficient manner for practical products. We hope our efforts can speed up this progress.
We will include the results and discussion in the revision. Thanks.
---
Rebuttal Comment 1.1:
Title: Final comments
Comment: Thank the authors for the detailed response and information. Some of my concerns are addressed, but I still have problems with the design and working principles of the angular-wise perception module. I have strong concerns about its actual application values in real-world scenes. I will slightly raise the score, and the authors are encouraged to further refine the paper and make it more solid.
---
Reply to Comment 1.1.1:
Title: Thank you so much for the feedback!
Comment: Dear Reviewer d489,
We are sincerely grateful for your decision to raise the score. Your suggestions have greatly enhanced our paper and inspired our future research direction.
Regarding the application of our work in the real world, (1) we share the same expectation as the reviewer in extending our work to autonomous vehicles. And We are also actively working on implementing our approach in real-world auto-vehicles. (2) We provided a transition plan in the paper to integrate our approach into current autonomous driving systems. Specifically, our framework can easily accommodate a typical perception model for post-processing. (3) We believe our paradigm will unlock more potential from data. Given that billions of data points are used in real-world scenarios to ensure the robustness of autonomous driving, it is nearly impossible to precisely annotate all the data. Therefore, it is essential to advance efficient and unsupervised end-to-end research. In addition to addressing real-world application needs, we aim to set a potential direction for unsupervised methods in the future. We are grateful that the reviewer recognizes our contribution.
Once again, thank you for your positive feedback and valuable insights! Your suggestions regarding the deeper exploration of working mechanisms, practical considerations, and experimental details will greatly strengthen our work and improve the quality of our manuscript. We will incorporate them in the revision. Code and models will also be released to facilitate future research.
Best regards,
Authors | null | null | null | null | null | null |
Diffusing Differentiable Representations | Accept (poster) | Summary: This paper introduces a zero-shot method to sample neural representations with pre-trained diffusion models. By pulling back the measure over the data space through the representation, the authors express the PF ODE in the parameter space. Solving this ODE can directly provide parameter samples of the representation. The authors also discuss cases where the representation needs to model coupled images. In this scenario, the PF ODE is expressed by expectation and can be solved following the same principle. This method offers random samples rather than returning the mode, which can yield results with higher diversity compared to baselines without sacrificing performance. The method is also compatible with RePaint, which can ensure the implicit constraint (e.g., the inductive bias of the representation or practical consistency constraint).
Strengths: 1. This work introduces a zero-shot method to sample neural representations by pre-trained diffusion models. The method is fully zero-shot and does not rely on the specific form of the representation. Therefore, it may gain great potential in downstream tasks where training is difficult, or the representation is highly restrictive.
2. This method can generate samples with significantly higher diversity compared to the baselines, and can ensure consistency constraint using RePaint, improving global coherence and quality of generation.
3. The idea to handle the Jacobian matrix by approximating each ODE step using an optimization task is cute and neat.
Weaknesses: My primary concern is its runtime. If I understand correctly, you need to run the optimization for several iterations for each data point and each Euler step. And for tasks like NeRF, where the PF ODE involves expectations, this optimization also involves an MC estimator. However, this paper only reported the runtime for 3D NeRF experiments. I am curious on
a. what is the runtime for other experiments? What is the runtime for the baselines? Is this method significantly more expensive than others?
b. How many samples do you use to estimate the expectation? Does it contribute to the runtime? Will reducing the number of samples increase the variance and lead to suboptimal performance?
I would still appreciate this approach even though it may be more expensive. However, reporting the runtime and comparing it with the previous baselines can help us understand this method better.
Technical Quality: 4
Clarity: 3
Questions for Authors: I have two more questions regarding the experiments.
1. In Fig 1 (right), you compare your method with SJC and claim that "the samples produced by our method are almost indistinguishable from the reference." However, I do not necessarily agree with this claim. Your method yields much smoother images than SD. Even though I mainly suspect this may be due to SIREN's inductive bias, it is visible and distinguishable. Additionally, it seems that your method provides very different outcomes with CFG 30, while results by CFG 3/10/100 are pretty similar. Do you have any explanation for this? Is it a sign that this approach is not robust?
2. The PF ODEs in all experiments need to meet some implicit constraints due to the inherent bias of INRs. However, you only apply RePaint for Panorama SIRENs to ensure the consistency of the generated panorama. Do I understand this correctly? Is there a difference if you ensure the implicit constraints for other experiments? Is it possible to compare the results with and without ensuring the constraint?
In general, I appreciate the contribution of this work and would be happy to raise my score if my questions are adequately addressed.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: I did not find discussions on limitations even though Sec 6 is called Limitations and Conclusion. I am unsure if this work has no limitations, such as runtime. Could the author please clarify this?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and review. We will address your concerns in turn.
Experiment details: As mentioned in the paper, generating a NeRF with our method takes around 24 minutes for 199 NFEs. We used as many samples as possible in the 40GB VRAM of an NVIDIA A6000; our experiments were eight samples. Generating a comparable NeRF using SJC takes around 34 minutes for 10000 steps using the code they provide. Generating eight batched SIREN images using our method takes 82 seconds. Generating a SIREN Panorama with our method takes 218 seconds.
As you correctly pointed out, Figure 1 in the paper had some issues (the images were out of order). We have briefly described the problems in the common response and attached the corrected Figure 1 in the accompanying PDF.
We use RePaint for all our experiments, including the SIREN image and NeRF experiments. Without the RePaint method, the results are much less compelling. In fact, the NeRF experiments diverge without it. The INR implicitly defines the constraint, so it is impossible to remove the constraints from the INR. However, as a proxy for the constraint, one could consider using the CFG scale, where a high CFG corresponds to a stronger constraint because it enforces the renders from the INR to look a specific way. On the other hand, low CFG means more flexibility in the kinds of INR we can sample. Figure 3 shows the results of ablating RePaint on SIREN Panoramas with a high and a low CFG.
We thank you for your time spent reviewing. We have invested substantial effort in addressing your concerns and improving the quality of the paper, and we ask that you consider adjusting your score in light of our response. Please let us know if you have additional questions we can help answer.
---
Rebuttal 2:
Comment: Thank you for your reply and for providing more details. However, I have 3 more further questions:
1. How many iterations per Eular step do you use for optimization?
2. > Generating eight batched SIREN images using our method takes 82 seconds. Generating a SIREN Panorama with our method takes 218 seconds.
What is the baseline methods' runtime?
2. My concern regarding limitations has not been addressed yet. Did you discuss limitations in Section 6? Or do you think there is no clear limitation?
---
Rebuttal Comment 2.1:
Comment: Thank you for your comment.
In our experiments, we used 100 inference (reverse, Euler) steps of DDIM and a single forward step for RePaint at every increment except the first one. This schedule corresponds to 199 steps in total. We optimize the INR using 200 Adam steps for each inference step.
Generating a batch of 8 reference images takes 39 seconds. Generating 8 SIREN images using SJC depends on the number of iterations used. Running SJC for 3000 iterations to generate 8 SIRENs takes 694 seconds, corresponding to a per-iteration time of 2.31 seconds. Generating a non-SIREN panorama with eight views using our method takes around 41 seconds, which makes sense since this is essentially the same problem as generating eight batched images.
We highlight some specific limitations from Section 6 and include some additional ones here:
- One significant limitation of our method is the additional steps we need for RePaint. A priori, we do not know how many steps are required to harmonize the constraint into the diffrep. Moving to a conditional sampling method like MCG would allow us to skip the extra forward steps and directly integrate the constraint into the solver. However, this needs further exploration to work with the rest of our method.
- There is also often significant stochasticity in the Monte Carlo estimate of the pullback score over several views, especially when the Jacobian of the render function is ill-conditioned. We would prefer to take more sample views to reduce the variance in our estimate, but this slows down our algorithm or takes more memory. More work is needed to improve the view sampling techniques (like importance sampling) to decrease the variance in our estimates without slowing down our algorithm.
- Finally, we only investigated our method on NeRFs; since then, there have been significantly more advanced diffreps like Gaussian Splats and full-scene representations. While our method should also work for these cases, more work needs to be done to delineate the specifics of how to adapt our method for these cases.
We will add all these details to the final version of the paper. | Summary: The paper introduces a method for sampling a differential representation using a pre-trained diffusion model. Instead of sampling in the image space, the authors propose sampling in the parameter space of differential representations by 'pulling back' the probability distribution from the image space to the parameter space of the differential representation.
Strengths: - The paper introduces a novel method for sampling a differentiable representation for a pre-trained diffusion model, which is highly relevant.
- Their method appears to be well-principled.
- Although their results are limited, they seem promising.
Weaknesses: - The technical part of the paper is difficult to follow.
- Comparison to the state-of-the-art is provided only in Section 5.1, where a differential representation is not required.
- The code is not shared at this stage.
Technical Quality: 2
Clarity: 2
Questions for Authors: - The two main relevant papers (Poole et al. [2022] and Wang et al. [2022]) are not discussed in appropriate detail:
- The authors claim equivalence between the methods without providing a detailed argument or a citation where the argument is made.
- Lines 93-94 suggest that these methods act as "mode-finding algorithms," however, a discussion is lacking.
- The arguments in Section 3.1 (which I believe is the main argument of the paper) are hard to follow for a non-expert:
- What is the relevance of Equations 3-5?
- How are Equations (6) and (7) derived?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and review. We will address your concerns in turn.
SJC and Dreamfusion (SDS) are extremely similar in methodology when accounting for the conversion between the denoising model and the score function. The key differences are in the weighting function that the expectation is taken over, the optimization and sampled times in the schedule, and regularizers added for the 3D case. We show how the SJC and SDS objectives are identical up to a weighting term and delineate the major differences between the methods and how they compare to our method in the common response.
Reliable metrics for generative models are primarily image-based. This is why we used the SIREN image diffrep to evaluate our method quantitatively. While it is possible to generate NeRFs and SIREN panoramas using the other methods, comparing them would be infeasible since there are no good techniques to compare distributions of general diffreps yet.
We will make the code public in the final version of the paper.
We have significantly changed the technical description of our methodology to make it simpler and more intuitive. We hope you find this new description clearer.
The score model associated with the noise predictor implicitly defines a distribution on the data space $\mathcal X$, and the reverse time PF-ODE provides a means to sample from that distribution.
In this section, we will show how to use the score model and the PF-ODE to sample the parameters of a differentiable representation (diffrep) so that the rendered views look like samples from the implicit distribution.
In SJC and DreamFusion, they derive their parameter updates using the pullback of the sample score function through the render map $f$. Explicitly, this pullback is obtained by using the chain rule $\tfrac{d(\log p)}{d\theta} = \tfrac{d(\log p)}{dx}\tfrac{dx}{d\theta}$. The pullback score is then used __as is__ to perform gradient ascent in the parameter space. However, since $p$ is a distribution and not a simple scalar field, the change of variables formula requires an additional $\log \det J$ for the appropriate score function in the parameter space, where $J$ is the Jacobian of $f$. Carefully examining this approach through the lens of differential geometry reveals even deeper issues.
From differential geometry, recall that it is a vector field (a section of the tangent bundle $T\mathcal X$), not a differential form (a section of the cotangent bundle $T^*\mathcal X$), which defines the integral curves (flows) on a manifold. To derive the probability flow ODE in the parameter space, we must pull back the vector field $\frac{dx}{dt}\in T\mathcal X$ to the parameter vector field $\frac{d\theta}{dt} \in T\Theta$ using the render map $f$. The pullback of the vector field through $f$ is given by
$(f^* \frac{dx}{dt})|_\theta = (J^\top J)^{-1}J^\top \frac{dx}{dt}|\_{f(\theta)},$ and thus the pullback of the probability flow ODE is
\begin{equation}
\frac{d\theta}{dt} = -\dot\sigma(t)\sigma(t)(J^\top J)^{-1}J^\top\nabla \log p_t(f(\theta)).
\end{equation}
We note that this is in contrast to the SJC and DreamFusion score. The confusion arises when comparing the types of elements that are pulled back. While the left-hand side of the PF-ODE $\frac{dx}{dt}$ is a vector field, the score function $\nabla \log p_t(x(t))$ on the right-hand side is a covector field, i.e., it is a differential form. Pulling back the score function as a differential form correctly yields $J^\top\nabla \log p_t(f(\theta))$, the term used in SJC. The problem is that hidden in PF-ODE is the use of the (inverse) Euclidean metric to convert the differential-form score function into a vector field to update the parameters. In canonical coordinates, the Euclidean metric is the identity. As a result, the components of the score function do not change when they are transformed into a vector field by the Euclidean metric. Therefore, we can safely ignore the metric term in the original PF-ODE formulation for $\mathcal X$.
If we want to convert the pulled-back score function into the corresponding pulled-back vector field, we need to use the pulled-back Euclidean metric inverse given by $(J^\top J)^{-1}$. When we use this metric, we get the same pulled-back form of the PF-ODE as given above.
A more informal way of seeing why the correct update must look like the pulled-back ODE we have derived here rather than that given by SJC is to look at the case where the number of input and output dimensions is the same. From the chain rule, we have $\frac{d\theta}{dt} = \frac{d\theta}{dx} \frac{dx}{dt}$.
We can compute $\frac{d\theta}{dx}$ using the inverse function theorem $\frac{d\theta}{dx} = \left(\frac{dx}{d\theta}\right)^{-1} = J^{-1}$, where $J$ is the Jacobian of $f$. So we arrive at $\frac{d\theta}{dt}=J^{-1}\frac{dx}{dt}$. When $f$ is invertible, this equation is equivalent to the one given above. For non-invertible $f$, we can interpret the pullback as the solution to the least squares minimization problem $\min\left\|J\frac{d\theta}{dt} -\frac{dx}{dt}\right\|^2$ for $\frac{d\theta}{dt}$.
We thank you for your time spent reviewing. We have invested substantial effort in addressing your concerns and improving the quality of the paper, and we ask that you consider adjusting your score in light of our response. Please let us know if you have additional questions we can help answer. | Summary: This paper introduces a novel, training-free method to sample through differentiable functions using pretrained diffusion models.
Strengths: It sounds a general method that could apply to many different scenarios.
Weaknesses: More systematic evaluation of the method in addition to image examples would be great.
Technical Quality: 2
Clarity: 3
Questions for Authors: I do not have questions at this stage.
Confidence: 1
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors have discusses some limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive review. Please see the common response for additional SIREN panoramas. Let us know if there are any remaining concerns we might be able to address. | Summary: The present paper introduces a training-free method to sample differentiable representations using pre-trained diffusion models. This is achieved by pulling back the dynamics of the reverse-time process from the image to the parameter space. Moreover, training-free methods for conditional sampling are employed to (approximately) satisfy implicit constraints. Numerical experiments show the performance of the method for images, panoramas, and 3D NeRFs. In particular, it is shown that the method can improve upon the baselines in terms of quality and diversity of generated representations.
Strengths: The proposed framework is versatile and can be used for various differentiable representations. The presented numerical experiments show that the resulting methods can improve quality and diversity compared to other training-free baselines.
Weaknesses: 1. Baselines and related works: Further discussion of related works and additional numerical comparisons are needed.
- It would be helpful to provide further discussion on related works, such as Zero-1-to-3, HiFA, Magic123, LatentNeRF, Fantasia3D, ...
- While the presented results are promising, additional experiments and visualizations would strengthen the validation of the method.
- A more detailed comparison to methods requiring fine-tuning/training could provide a clearer picture of the method's performance. In particular, it would be good to add comparisons in terms of (overall) runtime/NFEs.
- Why are there no PSNR/SSIM/LPIPS numbers in Table 1 for the baselines, e.g., SJC?
- While DreamFusion is similar to SJC, it would still be interesting to add empirical comparisons.
2. Presentation and theory: Further details and explanations could be provided (e.g., also in the appendix).
- The geometric perspective and pullback operation might be challenging for readers not well-versed in these areas. One could improve accessibility by providing more intuitive explanations.
- Precise details on how RePaint is adapted to this setting seem to be missing. In this context it would also be good to clarify the connections to other methods for posterior sampling (DPS/ReSample/...).
- It would be helpful to add some explanation on the "separately managed noise object" $\epsilon(t)$ or $\epsilon(\pi)$.
Technical Quality: 2
Clarity: 2
Questions for Authors: Why is the score not the gradient of a scalar function?
Typos:
1. The visualizations in Fig. 1 (right) seem not to match the explanation "The example images in Figure 1 (right) show that the samples produced by our method are almost indistinguishable from the reference."
2. The order (top/bottom) in the caption of Fig. 4 is wrong.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Only a few limitations are mentioned and it would be good to enumerate potential failure cases of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and review. We will address your concerns in turn.
We have added a discussion of additional 3D asset generation from image diffusion methods like Zero-1-to-3, HiFA, Magic123, LatentNeRF, and Fantasia3D, which you mentioned. These works build off the Dreamfusion (SDS) / SJC method by adding additional inputs, finetuning, or regularization to improve the generation quality or by expanding the inputs for generating 3D assets. Zero-1-to-3 builds off of SJC using the same form of the score chaining, but with a score function fine-tuned to make use of a single input real image and additional view information. Magic 123 uses Zero-1-to-3 with some additional priors based on the 3D information. Fantasia3D separates geometry modeling and appearance into separate components, using SDS to update both. HiFA adds a different schedule and applies regularization to improve SDS. Finally, LatentNERF also uses SDS but parametrizes the object directly in the latent space of the stable diffusion autoencoder, rendered not to RGB values but instead into the dimensions of the latent space. These methods use the SDS/SJC backbone method for sampling/mode finding. In contrast, our work is focused on providing a more faithful sampling procedure to replace SDS/SJC for 3D generation and broader differentiable function sampling.
Similarity metrics for SJC: Initially, we did not report PSNR, SSIM, and LPIPS for SJC because the generations have little relation to the ones generated by the vanilla diffusion model with the same seed. We have added these in the table below. Notice that they are far, far worse than our method (a better comparison is a measure of distributional similarity like KID, which we report in Figure 1). This is because SJC does not follow the same trajectory as the PF-ODE, so even if it were performing sampling, the samples would not look identical to those produced in the image space. The PSNR, SSIM, and LPIPS scores are used to compare images with identical content but with different compression levels; they are not designed to compare distributions of images.
Table for metrics on SJC
| CFG | 0 | 3 | 10 | 30 | 100 |
|:------|-----------:|----------:|----------:|----------:|----------:|
| psnr | 8.50652 | 8.04771 | 7.13961 | 6.14309 | 5.54287 |
| ssim | 0.00938183 | 0.0419226 | 0.0988217 | 0.0973837 | 0.0843486 |
| lpips | 1.10491 | 0.982072 | 0.855826 | 0.813387 | 0.787548 |
Benchmarking Dreamfusion: SJC and Dreamfusion (SDS) are extremely similar in methodology when accounting for the conversion between the denoising model and the score function. The key differences are in the weighting function that the expectation is taken over, the optimization and sampled times in the schedule, and regularizers added for the 3D case. The SJC paper includes a section discussing the differences between the methods and provides a qualitative assessment of the assets generated by both methods.
Precise details of how RePaint is applied: The most straightforward approach to encourage the pullback of the PF-ODE to produce intermediate samples that are representable by $f$ is to adapt the RePaint method (designed for inpainting) to this consistency constraint. RePaint takes advantage of the complete Langevin SDE diffusion process instead of the PF-ODE we have been considering here. It works by intermixing several forward and reverse steps in the schedule. Since RePaint requires a stochastic process to apply the conditioning, we use the DDIM \citep{song_denoising_2022} sampling procedure with $\eta=0.75$ for both the forward and backward steps.
$\epsilon(t)$ and $\epsilon(\pi)$: Following the pulled-back PF-ODE we can find $\theta_t$ so that $f(\theta_t)$ represents a sample $x(t)\sim p_{0t}(x(t)|x(0)) = \mathcal N(x(t);x(0),\sigma^2(t)I)$. One limitation of this approach is that $x(t)$ is noisy, and differentiable image or 3D representations have a hard time expressing noise. In these settings, $J$ is ill-conditioned, leading to poor performance and long convergence times.
To address this issue, we can factor $x(t)$ into the noiseless signal and noise using the reparameterization of the perturbation kernel $x(t) = \hat x_0(t) + \sigma(t)\epsilon(t)$. If we let $\epsilon(t) = \epsilon$ be constant throughout the sampling trajectory and start with $\hat x_0(T) = 0$, we can update $\hat x_0$ using $\frac{d\hat x_0}{dt} = \frac{dx}{dt} - \dot\sigma(t)\epsilon$. Using this decomposition, we let $f(\theta_t)$ represent the noiseless $\hat x_0(t)$ instead of $x(t)$, which makes it much more likely that $J$ is much better conditioned.
Question: Why is the score not the gradient of a scalar function?
Here, we use scalar in the differential geometry sense of being a scalar quantity on the given manifold, not an object that has a value dependent on the chosen coordinate chart. For continuous probability distributions, the probability density function varies not just on the chosen point but also on the given coordinate chart, as made evident by its changing value under a change of variables (where the additional $\det J$ term arises). Informally, this can be seen from the fact that $\int_R p_X(x)dx = \int_R p_Y(y)dy$, but because of the changing volume element $p_X\ne p_Y$. In contrast, a scalar function will have the same value when evaluated at the same point but in two distinct coordinate charts.
We also agree that the method section could also be explained better. We have provided a revised method section in our response to Reviewer wR2k. We hope this more intuitive description of our method improves your consideration of our paper.
We thank you for your time spent reviewing. We have invested substantial effort in addressing your concerns and improving the quality of the paper, and we ask that you consider adjusting your score in light of our response. Please let us know if you have additional questions we can help answer.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttal and explanations and raised my score. I understand the difference between mode finding and sampling, however, I still think that this should be further validated experimentally, e.g., further visualizations like Fig. 2 (also for 3D NeRFs) as well as comparison to other baselines (even though they are *conceptually* similar to SDS/SJC). In that spirit, I thank the authors for providing additional metrics but agree that other metrics should be considered that can compare distributions instead of instances. | Rebuttal 1:
Rebuttal: Firstly, we would like to thank all the reviewers for their thoughtful comments and feedback on our submission. We appreciate the opportunity to address your concerns and clarify aspects of our work.
We want to reiterate the contribution of our work as a superior method to perform true sampling of differentiable representations (diffreps) using a pretrained diffusion model instead of merely mode finding as performed by DreamFusion and SJC. This allows our method to generate diverse and high-quality diffreps for a given prompt, even at low CFG levels. Also, we want to highlight that our method is training-free compared to methods like ProlificDreamer. This allows it to be applied directly with any score estimator regardless of architecture, making our contribution considerably more versatile and applicable to various domains and modalities.
We want to make an important note that may have given the impression that our method is not performing as well as it actually does in practice. In Figure 2, we have compared the raw image samples generated by following the PF-ODE in image space to the SIREN renders generated with our method and SJC. This figure is a substantively more faithful comparison of the generation abilities of the methods than Figure 1 (right). The results in Figure 1 (right) are presented out of order, giving the impression that there is no clear winner between the two methods and introducing the apparent tension with the statement, "samples produced by our method are almost indistinguishable from the reference." We apologize for this oversight and have corrected the figure, attaching it to the material below. The new Figure 1: (Corrected) demonstrates the clear improvement of our method, particularly at low CFG levels when the distinction between mode and typical samples is most relevant.
We provide a more detailed explanation of why DreamFusion and SJC are practically identical and why they perform mode-finding instead of sampling next.
In DreamFusion, they perform gradient ascent using $\mathbb{E}\_{t,\epsilon}[w(t)J^\top(\hat\epsilon(f(\theta) + \sigma(t)\epsilon)-\epsilon)],$ derived from the denoising objective, where $J$ is the Jacobian of the differentiable render function $f$ and $\hat{\epsilon}$ is the noise predictor. Score Jacobian Chaining (SJC) performs gradient ascent using $\nabla \log p(\theta):= \mathbb{E}_{t,\epsilon}[J^\top \nabla\log p_t(f(\theta)+\sigma(t)\epsilon)].$ If we rewrite the DreamFusion objective $\mathbb{E}\_{t,\epsilon}\big[w(t)J^\top (\hat{\epsilon}(f(\theta)+\sigma(t)\epsilon)-\epsilon)\big]$ using the Tweedie formula, we get $\mathbb{E}\_{t,\epsilon}\left[\tfrac{w(t)}{\sigma(t)}J^\top \nabla \log p_t(f(\theta)+\sigma(t)\epsilon)\right].$
This expression is identical to the $\nabla \log p(\theta)$ term from SJC if we let $w(t) = \sigma(t)$ for all $t$.
Both approaches estimate the objective using Monte Carlo sampling. In addition, SJC also uses a custom sampling schedule for the $t$s, which can be interpreted as gradient annealing to align the implicit $\sigma(t)$ of the diffrep with the $t$ used to evaluate the score function. Following the gradients to convergence leads to a critical point in the $\log p(f(\theta))$ landscape, finding a (local) maximum or mode. Neither procedure has a set stopping time, and running the PF-ODE using the parameter score function suggested by these methods would fail to produce samples from the distribution (consider the procedure on Gaussian data as a concrete example). Furthermore, while both methods can produce passable diffreps at high classifier-free guidance (CFG) levels, they struggle to produce coherent diffreps when the CFG weight is low, especially if the distribution is multimodal or has more degrees of freedom.
To appreciate the limitations of using the mode as a proxy for sampling, we need to consider two scenarios: the multi-modal setting and the high-dimensional setting. In the multi-modal setting, the sample distribution may contain several distinct high-density regions in the multi-modal setting. However, mode-finding algorithms typically focus on only one of these regions, potentially sacrificing sample diversity. This lack of diversity can be seen in the SJC samples (last row) of Figure 2.
Although the mode might look like a typical sample in low dimensions, the mode is anomalous when the sample space is high dimensional $d \gg 1$ (e.g., the space of images). This fact can be intuitively understood as a consequence of the thin-shell phenomenon, which states that while low-dimensional standard Gaussian samples concentrate around their mode, high-dimensional Gaussian samples predominantly reside in an exponentially thin shell around the boundary of a ball centered on the mode with a radius $\sqrt{d}$. As an illustrative example, consider sampling a normalized pure-noise image. Despite having a mode of $0$, we would almost never expect to generate a uniformly gray image. This provides some insight as to why the mode of a high-dimensional distribution will lack the quality and details present only in the samples from the thin shell.
We have also added more panorama images in the supplementary PDF.
Once more, we would like to highlight the uniqueness of our contribution as the true pullback of the diffusion process to the diffrep parameter space. Unlike fine-tuning methods like ProlificDreamer or other training-free methods, our approach provides a means to efficiently apply the dynamics of the diffusion process directly in the parameter space through the pullback to sample high-quality diffreps.
We thank the reviewers for their valuable insights and hope our responses have addressed their concerns. We believe that our work significantly contributes to the distillation of diffusion sampling space, and we hope this response alleviates any concerns the reviewers have.
Pdf: /pdf/997e8975f4710379667de0f4469ccf3bd19b1d30.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.