title string | paper_decision string | review_1 string | rebuttals_1 string | review_2 string | rebuttals_2 string | review_3 string | rebuttals_3 string | review_4 string | rebuttals_4 string | global_rebuttals string | dataset_source string | conference_year int64 | review_5 string | rebuttals_5 string | review_6 string | rebuttals_6 string | review_7 string | rebuttals_7 string | review_8 string | rebuttals_8 string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Enriching Disentanglement: From Logical Definitions to Quantitative Metrics | Accept (poster) | Summary: This paper investigates relating logical definitions of disentanglement and existing quantitative metrics via formal derivations of novel metrics.
Strengths: The topic is very interesting and it seems the authors were very rigorous in their investigations especially concerning the amount of all of the background material and derivations.
Weaknesses: However, the paper is very poorly motivated, structured and written. It is very difficult to follow the authors along what they are trying to achieve (the motivation), what they are doing (their ideas) and the details on how this is connected with other research (context via related work) and how this should be used for future ML research/ what the significance is of their work for the future. E.g. the Introduction dives right into formal background notations without any motivation/overview of what the authors wish to achieve/investigate in the work.
Maybe I have misunderstood some things, but what exactly are the results of the section 3? I.e which of the many metrics are the relevant ones and what do they mean intuitively? The authors provide many examples of specific “implementations” (for lack of a better word here) and then conclude by statements such as “Upon analyzing the metrics above, it becomes evident that what we need is not the best approximation itself (e.g., the mean) but rather the approximation error “ on page 7. So it seems the previous derivations are irrelevant for the overall goal (what this is is further unclear). So I wonder whether we really require this material then and can remove it, e.g., for the sake of more details on experiments (see below).
Overall, I noticed the supplementary materials has much more relevant information that I believe needs to be in the main text. E.g. related works as these are necessary to understand the context of the work.
Further, it was also very difficult for me to understand the experiments of the main paper. The experimental setup is not described in the main paper (there is not even a caption for Table 2). This needs to be greatly improved. There seems to be a lot of information in the supplementary material concerning experiments, and I would suggest moving this into the main paper. Overall, it would help me if the authors could summarize again what they were investigating with these experiments.
The claims made in the experiments of the main paper are not backed up by the experiment shown in Table 2, e.g. "The metrics derived from equivalent definitions may differ in terms of computation cost and differentiability.". Where are the results for this? If these can only be found in the supplementary currently, I think they should also be moved to the main text.
I am also puzzled by all of the experiments in the supplementary material (on interesting datasets), but that are not mentioned in the main text (unless I have missed this). Was there a reason for this?
Lastly, I am missing a thorough discussion of the overall findings and particularly potential limitations of the findings/proposed metrics.
Overall, it is too difficult to assess the quality of the work and proposed ideas with the current structure. I would recommend putting some time into rewriting and restructuring to make this more clear to the reader.
Technical Quality: 2
Clarity: 1
Questions for Authors: see above
Confidence: 3
Soundness: 2
Presentation: 1
Contribution: 3
Limitations: see above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your detailed review of our work!
We'd like to address your concerns as follows.
---
However, the paper is very poorly motivated, structured and written. It is very difficult to follow the authors along what they are trying to achieve (the motivation), what they are doing (their ideas) and the details on how this is connected with other research (context via related work) and how this should be used for future ML research/ what the significance is of their work for the future. E.g. the Introduction dives right into formal background notations without any motivation/overview of what the authors wish to achieve/investigate in the work.
---
Please allow us to clarify the motivation and why we chose the current presentation.
One of the major problems in the current research of disentangled representation learning is that we lack a clear, logical definition of properties such as modularity and informativeness, and **we don't know if a metric truly quantifies a property or not** (abstract `l.4`).
This is the research problem we want to address in this paper.
We started the introduction section with a discussion on the parallel between loss function and function equality, which all machine learning researchers are familiar with, to show our goal: "**measuring and optimizing other properties like this**" (`l.29`) *by connecting logical definitions and quantitative metrics and extending this parallel to other properties*.
This is the main idea of this paper.
We used informativeness as an example in Section 1.1, which is an important property in (disentangled) representation learning.
We used this example to show that there could be multiple ways (e.g., injectivity and left-invertibility) to formally define a property (e.g., informativeness), and we need to deal with **logical operations** (e.g., implication), **quantifiers** (e.g., universal quantifier and existential quantifier), composition, and identity, which are the **basic building blocks** that will be used in the following sections.
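As a concrete toy illustration of the two formulations mentioned above (this sketch and its names are ours, not the paper's code): for a finite "encoder" given as a lookup table, informativeness in the injectivity sense coincides with informativeness in the left-invertibility sense.

```python
# Toy check that injectivity and left-invertibility agree for a finite encoder.
domain = [0, 1, 2, 3]
f = {x: 2 * x + 1 for x in domain}  # the encoder, as a lookup table

# Injectivity: distinct inputs never collide in representation space.
injective = all(f[x] != f[y] for x in domain for y in domain if x != y)

# Left-invertibility: some g recovers the input, i.e. g(f(x)) = x for all x.
g = {v: x for x, v in f.items()}
left_invertible = all(g[f[x]] == x for x in domain)

print(injective, left_invertible)  # both hold for this injective f
```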
We then contextualized our work in Section 1.2 (and further explained related work in detail in Appendix D), gave a logical definition of an essential property of disentanglement (modularity), provided a concrete example to help the reader understand this concept, and introduced the research questions we want to answer: **how to find a (preferably differentiable) metric for a logically defined property**.
Due to the page limit, we had to move the detailed discussions on related work to Appendix D.
If we have a chance, we will move some of them back to the introduction section.
In Section 2, we immediately answered the questions we asked, and we summarized the proposed technique in **Table 1**.
The "meta-theorem" was stated in **Theorem 1**.
The rest of this paper is just explanations and concrete instantiations of this technique.
We tried our best to present this work in a way that a machine learning researcher may get the gist of this work by reading the first three pages, and a practitioner can implement the derived metrics using the technique summarized in Table 1.
Therefore, the introduction section is not just notation; we chose the examples very carefully to show the **goal, motivation, basic concepts, research questions, and our main idea**.
We are aware that this may not be a conventional way to present a work in the machine learning community, but we believe that the current "*let the math speak for itself*" style is more suitable for this work and straight to the point.
We are glad that `Reviewer GqiT` stated that `The paper is well-written. The math objects are introduced with good intuitions`, but we also know that this writing style is not for every reader.
We will try our best to further improve the readability of the introduction section for the machine learning audience.
---
Maybe I have misunderstood some things, but what exactly are the results of the section 3?
---
Section 3 demonstrated the use of the technique proposed in Section 2 and presented the modularity metrics (**Propositions 2 and 3**), their concrete instantiations (**Eqs. (17) and (25)**), informative metrics (**Definitions 8 and 9**), and an example of Theorem 1 (**Eq. (28)**).
"We need the approximation error but not the approximation itself" can be interpreted as "we can calculate the variance without calculating the mean".
**Metrics derived from logically equivalent definitions and using different aggregators may have different computational costs, sensitivity to outliers, and learning dynamics.**
They are all relevant and useful in different scenarios (e.g., robustness evaluation, gradient-based optimization), and further research is needed to understand their characteristics.
Please see Appendix E.3 for further explanation.
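The "variance without the mean" remark can be made concrete with a standard identity (our illustration, not the paper's derivation): for a sample, the average halved squared pairwise difference equals the population variance, so the approximation error is computable without ever forming the best approximation (the mean).

```python
import random

random.seed(0)
xs = [random.gauss(2.0, 3.0) for _ in range(500)]
n = len(xs)

# Route 1: compute the best approximation (the mean), then the error around it.
mean = sum(xs) / n
var_via_mean = sum((x - mean) ** 2 for x in xs) / n

# Route 2: never compute the mean; average halved squared pairwise differences.
var_pairwise = sum((xs[i] - xs[j]) ** 2 for i in range(n) for j in range(n)) / (2 * n * n)

print(abs(var_via_mean - var_pairwise) < 1e-8)  # the two routes agree
```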
---
Further, it was also very difficult for me to understand the experiments of the main paper.
---
The experiments in the main text were meant to confirm Theorem 1: **minimizers of the derived metrics must satisfy the properties the metrics quantify**.
We focused on the theoretical exposition of our work, and the experiments were indeed supplementary to demonstrate the benefits of the derived metrics.
We believe that the proposed logic-metric theory is our main contribution, which was supported by the formal proofs.
Nevertheless, we will follow your suggestions and clarify the meaning of experiments by moving some materials from the supplementary material to the main text in the revised version.
Thank you!
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed response and your patience to explain these things. I have now understood better what the work is about and that I had misunderstood a few things after the first read. However, I stand by the point that it takes very long (within the text) until the reader understands what the motivation is of the work. I.e. the goal is to investigate “we don't know if a disentanglement metric truly quantifies a property or not”. I do not agree with the authors that one requires the sections until section 1.2 to understand this. I would rather put in related work into the main paper and start the introduction from an updated version of section 1.2, i.e. rather than start the motivation from very basic definitions whereby the reader does not know where these are leading to.
Overall, I understand the work now and find the contributions sufficient to justify to raise my score. However, I really recommend the authors restructure/rewrite the motivation, etc, to make it easier for any reader to understand what the goal is upfront rather than after several formalizations which are difficult to follow without knowing the goal.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your reply and for raising your score! We are glad that our explanation was helpful.
We agree with you that the current introduction might be confusing on a first read for some readers. If granted an additional page, we will follow your suggestions by **clearly stating our goal and motivation in plain words** before explaining them in technical and mathematical terms. This approach should make the content more accessible to a broader audience.
Your suggestions from a reader's perspective have greatly helped us improve the readability of our paper. We sincerely appreciate your valuable feedback! | Summary: The paper considers the connection between logical definitions and quantitative metrics, proposing a systematic approach to design metrics from logical definitions. Particularly, the paper focuses on measuring disentanglement. The paper theoretically justifies the correspondence between logical definitions of disentanglement and quantitative metrics via topos theory and enriched category theory and empirically demonstrates the effectiveness in isolating various aspects of disentangled representations compared to existing metrics.
Strengths: The idea presented in the paper, especially the systematic method of converting logical predicates into quantitative metrics, is novel and very interesting. The authors also theoretically support this idea with topos theory and enriched category theory, which are sophisticated and advanced mathematical frameworks not commonly employed in machine learning literature.
Weaknesses: The superiority of the proposed metrics over the existing ones does not seem fully validated, either empirically or theoretically. The evidence provided does not show the proposed metrics dominate the existing ones universally.
Technical Quality: 3
Clarity: 3
Questions for Authors: Could you provide more evidence, either empirical or theoretical, to show the proposed metrics dominate the existing ones universally? Or if the proposed metrics do not outperform the existing ones universally, when are they better?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your appreciation of the novelty and soundness of this work!
We'd like to address your concerns as follows.
---
The superiority of the proposed metrics over the existing ones does not seem fully validated, either empirically or theoretically. The evidence provided does not show the proposed metrics dominate the existing ones universally.
---
Please let us summarize the benefits of the derived metrics over the existing ones:
### **No failure modes**
First, the biggest advantage is that the derived metrics are rooted in the logical definitions of the property we want to quantify and are governed by **Theorem 1**. Therefore, we know that minimizing these metrics won't lead to wrong answers. In contrast, several existing metrics have "failure modes", which means that these metrics can wrongly score representations (see `Carbonneau et al., [2022]`, `Mahon et al., [2023]`). This was theoretically supported by **Theorem 1** and empirically validated in **Table 2**.
### **Differentiability**
In the literature, a new evaluation metric is often introduced along with a new representation learning method `[Carbonneau et al., 2022]`, but it is usually unproven that the method can optimize the new metric. One reason is that many existing metrics are not differentiable (because, e.g., they need to train a predictor `[Higgins et al., 2017, Kim and Mnih, 2018, Eastwood and Williams, 2018]`) so we cannot directly optimize them. By choosing specific aggregators, we can obtain differentiable metrics for a property, which allows us to directly optimize the property using gradient-based optimization. This was supported by the existence of differentiable metrics in **Eqs. (17), (25), and (27)**.
### **Weakly supervised evaluation**
We revealed that it is possible to evaluate some modularity metrics using only similarity information because the modularity is invariant to bijections. This is impossible for some existing metrics. We demonstrated weakly supervised modularity metrics in **Appendix F.2**.
### **Efficiency**
Some existing metrics such as DCI need to train an additional predictor, which requires hyperparameter tuning and is usually computationally expensive. High computation cost may be acceptable if the metrics are only used in the evaluation phase, but it is not feasible to use them as learning objectives even in derivative-free optimization. In contrast, the derived metrics are more efficient. This was empirically supported by **Table 7** in **Appendix F.5**.
### **Fine-grained evaluation**
We proposed to evaluate modularity and informativeness separately. **Table 2** shows that existing single-score metrics cannot distinguish modularity and informativeness, which provides less information about the representations. We further demonstrated how to use the proposed metrics to diagnose issues of representation learning methods in a more fine-grained manner in **Appendix F.6**.
---
We will further clarify the benefits of the proposed metrics in the revised version.
Please let us know if you have any other questions or suggestions.
Thank you!
---
Rebuttal 2:
Comment: Thank you for your detailed response, which addresses some of my concerns. I will maintain my current score.
---
Rebuttal Comment 2.1:
Comment: Thank you for your reply! We are glad that our explanation addressed some of your concerns.
Due to the page limit, some of the advantages of the derived metrics were only fully explained in the appendix. We will **summarize the benefits of the proposed method** and emphasize them in the revised version. Thank you for your feedback! | Summary: The paper proposed to establish a connection between logical definitions of disentanglement and quantitative metrics from the perspective of topos theory and category theory. It then proposes metrics for disentanglement with stronger theoretical guarantees and compares them with some state-of-the-art metrics.
Strengths: The paper provided theoretical justification for establishing metrics from topos theory and category theory in disentangled representation learning. It then proposed to convert first-order predicates into real-valued quantities, which is innovative. The proposed metrics also have practical implications, as the authors mention differentiability and demonstrate effectiveness through some experimental results.
Weaknesses: 1, Despite the authors including extensive background information in the appendix, the advanced mathematical concepts make the paper very hard to follow and limit its accessibility. Moreover, the theoretical connections may not readily translate into practical and intuitive interpretations. Hence its use scenarios should be stated.
2, Even though the paper provides empirical results, its scope and effectiveness may not be fully demonstrated through the limited range of scenarios considered. Therefore its generalizability is not clear.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1, why were the specific modularity metrics used for evaluation? How do they contribute to assessing disentanglement in representation learning?
2, Line 308 mentioned that the results are transformed isomorphically using $e^{-x}$. What is the purpose of this procedure?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for acknowledging the innovation of our work!
We will answer your questions as follows.
---
... the advanced mathematical concepts make the paper very hard to follow and limit its accessibility.
---
Thank you for pointing this out.
We developed the theory with the help of these mathematical concepts, and we are aware that they are not as well known as other tools such as linear algebra and statistics for machine learning researchers. Therefore, in the main body of this paper, we tried to present the proposed technique (**Table 1**) and theorem (**Theorem 1**) without these terms. We simplified the complex structures into easy-to-follow rules, such as "*replacing the conjunction (logical AND) with the addition*", and used many examples to demonstrate the application of this technique. We believe that the reader does not need any category theory background to understand the proposed metrics.
To make the theory more accessible, we added a **non-categorical restatement** of the proposed theory in the revised version, which is less general but easier to follow. However, we have to introduce some necessary algebraic concepts such as *homomorphism*. We hope this part can help the reader understand not only the proposed technique but also the theory behind it.
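As a hypothetical sketch of the "replace the conjunction with addition, and the universal quantifier with an aggregator" rule (the names and toy predicates here are ours, not the paper's): the derived metric is zero exactly when the logical statement holds, and positive otherwise.

```python
# Logical side: the property holds iff every sample satisfies both predicates.
def holds(samples, p, q):
    return all(p(x) and q(x) for x in samples)

# Metric side: predicates become nonnegative penalties, AND becomes +,
# and the universal quantifier becomes a sum over samples.
def metric(samples, d_p, d_q):
    return sum(d_p(x) + d_q(x) for x in samples)

p = lambda x: x >= 0
q = lambda x: x <= 1
d_p = lambda x: max(0.0, -x)      # how far x is from satisfying p
d_q = lambda x: max(0.0, x - 1)   # how far x is from satisfying q

good, bad = [0.0, 0.5, 1.0], [-1.0, 2.0]
print(holds(good, p, q), metric(good, d_p, d_q))  # True 0.0
print(holds(bad, p, q), metric(bad, d_p, d_q))    # False 2.0
```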
---
1, why were the specific modularity metrics used for evaluation? How do they contribute to assessing disentanglement in representation learning?
---
We evaluated not only modularity but also informativeness (Section 3.3, the middle 5 columns in Table 2). Modularity is arguably the essential property of disentanglement (it is simply called *disentanglement* in `Eastwood & Williams [2018]`), informativeness is a basic property of representation learning, and they are considered to be more important than other properties such as completeness/compactness in practice `[Ridgeway and Mozer 2018, Duan et al. 2020, and Carbonneau et al. 2022]`.
---
2, Line 308 mentioned that the results are transformed isomorphically using e^{-x}. What is the purpose of this procedure?
---
Originally, we used ($[0, \infty]$-valued) strict premetrics as the quantitative versions of equality. In this case, $0$ means a property holds, and non-zero numbers mean that a property does not hold (*the lower the better*). However, many existing metrics used $[0, 1]$-valued metrics, where $1$ means a property holds perfectly (*the higher the better*). A reader may feel a subtle cognitive dissonance if we compare them directly. Therefore, we transformed $[0, \infty]$-valued metrics into $[0, 1]$-valued metrics using $e^{-x}: [0, \infty] \to [0, 1]$.
We say it's an isomorphism in the sense that it preserves the order ($a \leq b$ implies $e^{-a} \geq e^{-b}$), quantitative operations such as the addition ($e^{-(a + b)} = e^{-a} \cdot e^{-b}$), and so on.
If we don't need to compare the results with existing $[0, 1]$-valued metrics, we can use the original values (e.g., for optimization).
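The claimed structure preservation admits a minimal numeric check (our sketch): $e^{-x}$ reverses the order, turns addition into multiplication, and maps the endpoints $0 \mapsto 1$ and $\infty \mapsto 0$.

```python
import math

def to_unit(x):
    """Map a [0, inf]-valued 'lower is better' score to [0, 1], 'higher is better'."""
    return math.exp(-x)

a, b = 0.3, 1.7
print(to_unit(a) >= to_unit(b))                               # order is reversed
print(abs(to_unit(a + b) - to_unit(a) * to_unit(b)) < 1e-12)  # + becomes *
print(to_unit(0.0), to_unit(math.inf))                        # 1.0 0.0
```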
---
We hope these answers address your concerns.
Please let us know if you have any other questions.
Thank you!
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response and I think all my concerns are addressed. I have raised my score now.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply and for raising your score!
We will address your concerns in the revised version, and we hope the simplified theory can make this paper easier to follow. Thank you for sharing your insights and questions! | Summary: This study introduces a systematic approach to quantify properties of representation learning models. By translating logical definitions into quantitative metrics, the paper evaluates two key properties: modularity and informativeness. Two sets of metrics are derived for each property, one based on approximation and the other on distance computation and aggregation. Theoretical analysis of these metrics is conducted, including examination of their minimizers.
(Full disclosure: I have reviewed this paper before)
Strengths: 1. I think this paper is interesting and innovative and mathematically sound.
2. The paper is well-written. The math objects are introduced with good intuitions.
Weaknesses: 1. It might be difficult for people with less prior knowledge to read.
2. It's unclear if the metrics are practical to use as a loss to optimize in practice, because the experiments do not train with the proposed metrics.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Heyting algebra is mentioned a couple of times in the paper without a proper definition, which would be good for general ML audience.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks again for your kind evaluation of our work!
Regarding your concerns, our answers are as follows.
---
It might be difficult for people with less prior knowledge to read.
---
Thank you for pointing this out.
In the main body of this paper, we tried our best to avoid using abstract categorical concepts, which were necessary for developing the theory. The proposed technique was explained as simple rules such as "*replacing the conjunction (logical AND) with the addition*". We believe that the reader does not need any category theory background to understand the proposed metrics.
To make the theory more accessible, we added a **non-categorical restatement** of the proposed theory in the revised version, which is less general but easier to follow. However, we have to introduce some necessary algebraic concepts such as homomorphism. We hope this part can help the reader understand not only the proposed technique but also the theory behind it.
---
It's unclear if the metrics are practical to use as a loss to optimize in practice, because the experiments do not train with the proposed metrics.
---
Learning disentangled representations with the proposed metrics is indeed the immediate step after this work. We would like to point out that some existing metrics are much less practical as learning objectives for two reasons:
1. Some metrics such as DCI `[Eastwood and Williams, 2018]` need to train an additional predictor, which requires hyperparameter tuning and more computation. Evaluation using these metrics is time-consuming. In some cases, the calculation of the DCI metrics using `GradientBoostingClassifier` takes around 15 minutes (See **Appendix F.5**). It is not feasible to use them as learning objectives, even in derivative-free optimization.
2. For the same reason, many existing metrics are not differentiable, so it is impossible to use them in gradient-based optimization.
To use the proposed metrics to learn representations, we need to consider what supervision can be used, which is a different topic and beyond the scope of this work.
---
Heyting algebra is mentioned a couple of times in the paper without a proper definition, which would be good for general ML audience.
---
We mentioned Heyting algebra and quantale so that a reader with an algebra background can immediately understand our main idea. It is like saying *group* instead of *a set equipped with an associative, unital, and invertible binary operation*. However, we believe that a machine learning researcher who is not familiar with algebra can still understand the proposed technique because we spelled out all the components of a Heyting algebra (true, false, conjunction, implication, etc.). Roughly speaking, a Heyting algebra may be considered as a **Boolean algebra** without excluded middle. For interested readers, the formal definition of Heyting algebra is given in **Appendix B**.
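A standard three-element example (ours, not taken from the paper's appendix) makes the "Boolean algebra without excluded middle" remark concrete: on the chain $0 \le \tfrac{1}{2} \le 1$ with min/max as meet/join, implication defined as the residual yields a Heyting algebra in which excluded middle fails.

```python
# Heyting algebra on the chain 0 <= 0.5 <= 1: meet/join are min/max,
# and implication is the residual of the meet.
def implies(a, b):
    return 1 if a <= b else b

def neg(a):
    return implies(a, 0)  # "not a" is defined as a -> false

half = 0.5
print(max(half, neg(half)))  # 0.5, not 1: excluded middle fails
print(neg(neg(half)))        # 1, not 0.5: double negation fails too
```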
---
We hope we addressed your concerns.
Please let us know if you have any other questions.
Thank you!
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply. I am still keeping my score because I have low confidence in the topic, but I am leaning towards acceptance.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply and for confirming your positive opinion about this paper!
We understand that the technical details may be inaccessible to some readers.
We conjecture that this may be due to three reasons:
- While logic is fundamental in math and machine learning, the *algebraic approach* to logic --- where a predicate $p: A \to \\{\top, \bot\\}$ is considered as a function from a set $A$ to the set $\\{\top, \bot\\}$ of truth values, and logical operations like conjunction $\land: \\{\top, \bot\\} \times \\{\top, \bot\\} \to \\{\top, \bot\\}$ are viewed as binary operations over this set --- is less familiar to many.
- It is even less well known that the universal quantifier $\forall: \\{\top, \bot\\}^A \to \\{\top, \bot\\}$ can be viewed as a function from a set $\\{\top, \bot\\}^A$ of predicates (i.e., functions) to the set of truth values, and this function is also a kind of [algebraic structure](https://en.wikipedia.org/wiki/F-algebra). However, this algebraic perspective is valuable because it allows us to formulate the relationship between quantifiers and aggregators (e.g., sup or sum $[0, \infty]^A \to [0, \infty]$) as a homomorphism (e.g., *all values are zero if and only if the sup/sum value is zero*).
- Our logic-metric theory is general and applicable to various problems, but its compositional nature may render it too abstract for some readers.
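The homomorphism mentioned in the second point can be checked on a toy vector of errors (our sketch, not the paper's code): assuming nonnegative errors, "every error is zero" on the logical side coincides with "the sup (or sum) of the errors is zero" on the quantitative side.

```python
def forall_zero(errors):   # quantifier side: for all x, the error at x is zero
    return all(e == 0.0 for e in errors)

def sup_zero(errors):      # aggregator side, using sup
    return max(errors) == 0.0

def sum_zero(errors):      # aggregator side, using sum (nonnegative errors)
    return sum(errors) == 0.0

for errors in ([0.0, 0.0, 0.0], [0.0, 2.5, 0.0]):
    print(forall_zero(errors) == sup_zero(errors) == sum_zero(errors))  # True
```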
Despite these technical challenges, we still think this logical and algebraic perspective is worth sharing with the machine learning community. To make it more accessible, we will **simplify the presentation of the theory and minimize the prerequisites** in the revised version. Thanks again for reviewing this work! | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
DisCEdit: Model Editing by Identifying Discriminative Components | Accept (poster) | Summary: This paper applies model editing to address two active areas of research, Structured Pruning and Selective Class Forgetting.
Specifically, it adopts a distributional approach to identify important components useful to the model's predictions.
With the witness function-based lower bounds on the TV distance, it is able to discover critical subnetworks responsible for classwise predictions, thus achieving Structured Pruning and Selective Class Forgetting.
Strengths: 1. This paper uses lower bounds on the TV distance to approximate the Bayes error rate to quantify the discriminative ability of a filter in a neural network.
2. This paper introduces DISCEDIT-U, which selectively prunes components capable of discriminating each class to facilitate unlearning and DISCEDIT-SP, which prunes non-discriminative filters to achieve structured pruning.
3. The experimental results demonstrate the efficacy of DISCEDIT in both unlearning and structured pruning.
Weaknesses: 1. The code provided by the authors redirects to an empty GitHub repository, lacking reproducibility.
2. This paper focuses on the well-established research areas of machine unlearning and structured pruning.
However, the authors fail to provide an introduction to the field or an overview of the current state of existing research.
3. In the experiments of DISCEDIT-U, the authors do not compare its results with other existing unlearning methods, making it difficult to ascertain whether it outperforms the current baselines.
4. The experimental results of DISCEDIT-SP show no significant improvement over the compared baselines, especially when evaluated against CHIP on ImageNet.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. The authors should provide a more detailed description of machine unlearning and structured pruning, along with a comprehensive introduction to the current research landscape in these areas.
2. To demonstrate the effectiveness of the proposed DISCEDIT-U, the authors should compare its performance against other works within the field.
3. The authors should incorporate a more detailed introduction of the experimental setup and the results into the main text, rather than relegating them to the appendix without sufficient description.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback and useful suggestions.
### Weaknesses:
**The code provided by the authors redirects to an empty GitHub repository, lacking reproducibility.**
We wish to immediately amend the issue with the empty github and apologize for the mistake. We provide a corrected link below:
https://rb.gy/9zaxzz
**This paper focuses on the well-established research areas of machine unlearning and structured pruning. However, the authors fail to provide an introduction to the field or an overview of the current state of existing research.**
We provide a detailed related works section in Appendix A, lines 538-598. Moreover, in lines 30-38 and section 3, we introduce the problem of model editing, particularly by using component attribution.
**In the experiments of DISCEDIT-U, the authors do not compare its results with other existing unlearning methods, making it difficult to ascertain whether it outperforms the current baselines.**
We thank the reviewer for the suggestion, and have compared our results with additional baselines in the General response.
**Takeaway: our proposed method for model unlearning is competitive with current state of the art without requiring additional retraining, simply by identifying and masking discriminative components.**
**The experimental results of DISCEDIT-SP show no significant improvement over the compared baselines, especially when evaluated against CHIP on ImageNet.**
In the work, we show that with extensive fine-tuning, all models recover nearly the entire accuracy of the original model. However, in Table 7 of the Appendix, *we present results for pruning without fine-tuning*. Our results clearly outperform the nearest baselines, such as CHIP [1] and TVSPrune [2].
**Takeaway: We show that our proposed method outperforms the nearest baselines in this regime (without extensive fine-tuning).**
### Questions:
**The authors should provide a more detailed description of machine unlearning and structured pruning, along with a comprehensive introduction to the current research landscape in these areas.**
The problems of structured pruning and classwise unlearning are stated in Section 3 in the paper, on lines 145-163. However, we formally state the problems of machine unlearning and structured pruning in the sequel. Should the work be accepted, we will incorporate these problem definitions into Section 3, where we set up the two problems. Moreover, a detailed discussion of related work is presented in Appendix A.
**Machine Unlearning**
Let $\mathcal{D}$ be the data distribution, let $\mathcal{D}\_{c}$ be the distribution of class $c$ in the dataset, and let $\mathcal{D}\_{\bar{c}}= \mathcal{D}\backslash \mathcal{D}\_c$ be the distribution of the remaining classes. Let $f\_\theta(X)$ be a neural network with parameters $\theta$ trained on samples drawn from $\mathcal{D}$, with loss function $\mathcal{L}(\cdot)$. Unlearning class $c$ by editing $\theta$ can be formalized as finding parameters $\theta^*$, given some $\epsilon > 0$, using:
$\theta^* = \arg\max\_{\theta'} \mathbb{E}\_{X\sim\mathcal{D}\_c}\mathcal{L}(f\_{\theta'}(X))\quad \text{s.t. }|\mathbb{E}\_{X\sim\mathcal{D}\_{\bar{c}}}[\mathcal{L}(f\_{\theta'}(X))] - \mathbb{E}\_{X\sim\mathcal{D}\_{\bar{c}}}[\mathcal{L}(f\_{\theta}(X))]|\leq \epsilon$
That is, we edit the parameters to maximize the loss on $\mathcal{D}\_c$ while minimizing the effect on $\mathcal{D}\_{\bar{c}}$.
Similarly, structured pruning with a fixed budget $K$ can be written as a search for sparse parameters $\theta^*$
such that
$\theta^* = \arg\min\_{\theta'} \mathbb{E}\_{X\sim\mathcal{D}}[\mathcal{L}(f\_{\theta'}(X))] \text{ s.t. } ||\theta'||_0\leq K$
Thus, we aim to find a sparse set of parameters (with at most $K$ nonzero parameters) that minimizes the loss.
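For intuition, the fixed-budget pruning step above can be sketched in a few lines. This is a hypothetical illustration that uses parameter magnitude as a stand-in saliency; the paper's method instead ranks components by TV-distance-based saliency scores (equations 9/10).

```python
# Hypothetical sketch of fixed-budget structured pruning: keep the K
# highest-saliency parameters and zero out the rest. Absolute magnitude
# |theta_i| is used here as a stand-in saliency, NOT the paper's score.

def prune_to_budget(theta, K):
    """Return a copy of theta with at most K nonzero entries."""
    # Rank indices by saliency (absolute magnitude here), largest first.
    ranked = sorted(range(len(theta)), key=lambda i: abs(theta[i]), reverse=True)
    keep = set(ranked[:K])
    return [t if i in keep else 0.0 for i, t in enumerate(theta)]

theta = [0.9, -0.1, 0.5, 0.02, -0.7]
sparse = prune_to_budget(theta, K=2)
# sparse == [0.9, 0.0, 0.0, 0.0, -0.7]: only the two largest-magnitude entries survive
```

The same masking primitive applies to classwise unlearning, except that the *most* class-discriminative components are zeroed instead of the least salient ones.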
**To demonstrate the effectiveness of the proposed DISCEDIT-U, the authors should compare its performance against other works within the field.**
We thank the reviewer for the suggestion and present a comparison with additional baselines in the General Response.
**Takeaway: Our proposed methods are competitive with the additional baselines without requiring additional retraining/fine-tuning.**
**The authors should incorporate a more detailed introduction of the experimental setup and the results into the main text, rather than relegating them to the appendix without sufficient description.**
Should this work be accepted, we will add a more detailed description of the experimental setup to the main body of the work.
---
Rebuttal 2:
Title: Response to Reviewer
Comment: We have responded to the concerns raised by the reviewer in our previous response. In particular, we have
- addressed the empty github link
- highlighted our related works section in Appendix A
- shown the comparison of our models as compared to baselines *without fine-tuning*
- given formalizations of unlearning and pruning in addition to what we presented in Section 3 of the main document.
- Comparisons with baselines for model unlearning have been presented in the general rebuttal [**here**](https://openreview.net/forum?id=tuiqq1G8I5&noteId=8sBlgK4EB5).
- Moreover, we have added the requested ViT experiments (by reviewers YbA1 and FULk) as a response to the General Rebuttal [**here**](https://openreview.net/forum?id=tuiqq1G8I5&noteId=MiXju1ozmG).
We hope that we have adequately addressed your concerns, and are eager to find out what we might do to improve your appraisal of our work. We would like to gently remind the reviewer that the author-reviewer discussion period ends in 3 days, and we are eager to engage further with you.
---
Rebuttal 3:
Title: Gentle Reminder about approaching Author-Reviewer discussion deadline
Comment: We'd like to provide a gentle reminder that the author-reviewer discussion period ends in 36 hours. We hope that we've addressed the concerns raised in your review in our previous responses; we are very eager to engage with you further if you have additional concerns, and to learn what we might do to improve your evaluation of our work. | Summary: In this work, the authors propose to tackle two problems at once: class unlearning and structured pruning. To do so, they propose a novel way to compute a lower bound on the total variation distance between distributions of features.
In practice, this distance is approximated using the first and second order moments of a transformation of intermediate features generated from samples of specific classes. The aforementioned transformations are specific to the task of class unlearning or structured pruning.
The resulting method comes with a strong theoretical background, complemented by convincing empirical evidence.
Strengths: This work bridges the gap between selective class forgetting and efficient inference. In its approach, this work is quite original and generalizes previous methods by removing questionable assumptions.
The presentation of the result is overall clear, despite the strong mathematical grounding of the proposed method.
The empirical results show that the proposed technique slightly improves over previous similar approaches.
Weaknesses: I have one minor concern regarding the evaluation of the method:
In the main paper, the authors insist on the ability of the proposed method to better discriminate important filters in the case of pruning and selective class forgetting. However, the experiments all include fine-tuning and a quite extensive one on pruning with up to 100 epochs. In my understanding, this empirically validates that the filter selection leads to a selection that can be better fine-tuned, not necessarily that the selection alone better preserves the performance.
I think that some experiments in that regard were conducted and discussed in the appendices, but they should be moved to the main paper if possible. They should also be extended to ImageNet, if possible.
Technical Quality: 3
Clarity: 2
Questions for Authors: I have three questions:
1. (main point from weaknesses) can the author provide results on ImageNet without fine-tuning to highlight the better selection of filters from their method in the main paper?
2. It is now standard to include transformers when evaluating pruning techniques. If the authors could provide results on Bert, that would benefit the scope of the paper.
3. In the article, it is claimed that the method can perform structured pruning with no access to the data. However, from my understanding, the saliency score from equation 9 uses Y(X) which is computed with some data. Could the authors elaborate on this point, please?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have adequately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive appraisal of our work, and address the questions raised below.
### Questions:
**(main point from weaknesses) can the author provide results on ImageNet without fine-tuning to highlight the better selection of filters from their method in the main paper?**
We thank the reviewer for the insightful question. We have provided results for pruning ImageNet and CIFAR10 models without fine-tuning in Table 7 of the Appendix. In that table, we show that **our structured pruning algorithm, DisCEdit-SP, outperforms baselines to a greater extent in the regime without fine-tuning.** Note that at higher sparsity regimes, the accuracy of ImageNet models falls to less than 1% on all baselines.
**It is now standard to include transformers when evaluating pruning techniques. If the authors could provide results on Bert, that would benefit the scope of the paper.**
We thank the reviewer for the suggestion. We will present results on ViTs trained on CIFAR10 for both unlearning and pruning (without fine-tuning) shortly.
**In the article, it is claimed that the method can perform structured pruning with no access to the data. However, from my understanding, the saliency score from equation 9 uses Y(X) which is computed with some data. Could the authors elaborate on this point, please?**
As stated in lines 83-84 in Section 1, motivated by recent works highlighting the need to be able to compress models without training data or the loss function, our proposed method requires no access to *original training data (data upon which the model was trained)*; rather, we only require distributional access, either via samples from the original distribution not in the training set, or in the form of *finitely many moments*, which can be used in conjunction with Theorem 2 to identify discriminative components.
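As a rough illustration of "distributional access via finitely many moments," a classical Fisher-discriminant-style separation score can be computed from first and second moments alone, with no raw samples. This is a simplified, hypothetical analogue of the moment-based machinery in the paper's Theorem 2, not the paper's actual saliency score.

```python
# Hypothetical sketch: a Fisher-discriminant-style separation score that
# needs only the first and second moments of two 1-D feature distributions,
# not raw training samples. The paper's saliency scores are derived from
# TV-distance lower bounds; this is a simplified classical analogue.

def fisher_score(mu_p, var_p, mu_q, var_q):
    """Separation between two 1-D distributions from moments alone."""
    return (mu_p - mu_q) ** 2 / (var_p + var_q + 1e-12)

# A filter whose class-conditional feature mean differs strongly from the
# marginal mean (relative to the variances) is flagged as discriminative.
score = fisher_score(mu_p=2.0, var_p=0.5, mu_q=0.0, var_q=0.5)
# score is approximately 4.0 -> high separation, i.e., a discriminative component
```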
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I would like to thank the authors for their response. I would recommend moving Table 7 from the appendix to the main paper since it appears to answer many concerns from several reviewers. I consider that most of my points have been addressed, and I am looking forward to seeing your results on ViT.
---
Rebuttal 2:
Title: Response to Reviewer
Comment: We thank the Reviewer for engaging with us so early on, as well as for the patience shown! We hope that this adequately addresses the reviewer's concerns.
As regards Table 7, we thank you for the suggestion, and will incorporate Table 7 into the main document.
We have presented the requested experiments on ViTs as a response to the general Rebuttal [**here**](https://openreview.net/forum?id=tuiqq1G8I5&noteId=MiXju1ozmG). Our experimental results highlight the fact that the proposed approaches for model editing (pruning and classwise unlearning) are effective when used with Vision Transformers as well.
As the author-reviewer discussion period ends in 3 days, we are eager to engage further and would like to know what we might do to improve the reviewer's appraisal of our work.
---
Rebuttal Comment 2.1:
Title: Gentle Reminder about Author-Reviewer Discussion Deadline
Comment: We'd like to gently remind the reviewer that the Author-Reviewer discussion period ends in 36 hours. We sincerely hope that we've addressed the concerns you expressed, and we're eager to engage further with you, and identify how we might improve your rating of our work. | Summary: The paper addresses the task of model editing that focuses on modifying critical components within neural networks to improve performance. One of the cornerstone steps is to first identify these components. The authors adopt an approach based on recently proposed discriminative filters hypothesis. Instead of using a Total Variation distance (which is intractable in this case), the authors derive a lower bound on the TV that is subsequently used to discover critical subnetworks responsible for classwise predictions.
The authors introduce algorithms for structured pruning and selective class forgetting and experimentally show its performance.
Strengths: - The paper is well written, and the ideas are well motivated.
- The proposed method looks efficient. The problem that is being addressed is significant, so improvements in this area can potentially benefit the community.
- The authors provided a thorough analysis and theoretical justifications of the proposed method (including findings in Appendix).
Weaknesses: - It is quite challenging to evaluate the performance of the proposed solution due to the absence of comparisons to other methods. While it is possible to see that the method is capable of preserving the overall test accuracy almost unchanged while reducing the quality of predictions on a chosen class by ~80%, it is still challenging to interpret these values. It looks to me that the methods listed in A.4 could be relevant and included in the comparison.
- Related work is moved to the Appendix. In my opinion, it would be better to include at least a shortened version in the main part of the paper so that readers can more easily understand the relation of the proposed method to the existing literature.
Technical Quality: 3
Clarity: 4
Questions for Authors: NA
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive appraisal of our work, and we address the concerns raised below.
### Weaknesses:
**It is quite challenging to evaluate the performance of the proposed solution due to the absence of comparisons to other methods. While it is possible to see that the method is capable of preserving the overall test accuracy almost unchanged while reducing the quality of predictions on a chosen class by ~80%, it is still challenging to interpret these values. It looks to me that the methods listed in A.4 could be relevant and included in the comparison.**
We thank the reviewer for the suggestion, and provide a comparison with additional recent baselines in the general response. We refer the reader to the General Response for a summary of our experimental results.
**Takeaway: Our work achieves comparable performance to current state of the art without requiring any fine-tuning or retraining, simply by identifying and editing discriminative filters.**
**Related work is moved to the Appendix. In my opinion, it would be better to include at least a shortened version in the main part of the paper so that readers can more easily understand the relation of the proposed method to the existing literature.**
We thank the reviewer for the suggestion. Should the work be accepted, we will include a summary of the work presented in Appendix A in the main document.
---
Rebuttal 2:
Title: Response to Reviewer
Comment: We hope we have adequately addressed the reviewer's concerns, and thank you for your positive review of our work.
In response to the concerns you raised, we would like to point out that we have:
- provided comparisons with additional baselines in the general rebuttal [**here**](https://openreview.net/forum?id=tuiqq1G8I5&noteId=8sBlgK4EB5), as requested by the reviewer.
- committed to adding at least a truncated version of the related works section to the main manuscript.
- presented experimental results on a simple ViT, as asked by reviewers crkh, FULk, and YbA1, as an additional comment to the General Rebuttal [**here**](https://openreview.net/forum?id=tuiqq1G8I5&noteId=MiXju1ozmG).
We would like to gently remind you that the author-reviewer discussion period ends in 3 days, and we are eager to engage further, and to find out what we might do to improve your appraisal of our work.
---
Rebuttal Comment 2.1:
Title: Rebuttal
Comment: Thank you for your answers! I decided to keep my rating unchanged.
---
Reply to Comment 2.1.1:
Title: Thanks!
Comment: We thank the reviewer for the engagement! As the author-reviewer discussion period comes to a close in the next 36 hours, we're eager to continue engaging with you, particularly if there are any other unaddressed concerns you might have, and to learn what we might do to improve your appraisal of our work. | Summary: This paper proposes a method for model editing of convolutional classifier networks. The proposed approach assesses the class-discriminative ability of convolutional filters by looking at the distribution of their produced feature maps. Specifically, by comparing the class conditional feature distribution to the marginal feature distribution (which is related to the Bayes error rate of the filter), this work is able to produce a saliency score for each filter that determines its importance for classifying a specific class. For model pruning, the least discriminative filters are pruned, whereas for class forgetting, the most discriminative filters for the forget class are removed.
This work involves several technical innovations; primarily, they approximate computation of the TV distance between the class conditional and marginal feature distributions by proposing a moment-based lower bound, which does not make assumptions about the feature distributions, such as Gaussianity.
Numerical results are reported for model pruning and class forgetting using ResNets and VGG nets on ImageNet, CIFAR10, and CIFAR100. Results demonstrate compelling performance. There are interesting additional results in the appendix, e.g., analyzing the distribution of the feature maps.
The contribution of this work is both theoretical and empirical.
Strengths: **Originality**
* This work is novel to the best of my knowledge. In particular, the main innovation in this work is the removal of the Gaussian feature distribution assumption from previous works, achieved by devoting many pages to providing a moment-based lower bound on the TV distance, which can be estimated from few samples.
**Quality**
* This work is of high technical quality. Numerical results are compelling, demonstrating significant class forgetting with minimal model editing, and similarly, demonstrating strong performance even with significant pruning.
**Clarity**
* The paper is well written and relatively straightforward to follow, although some design choices (e.g., the choice of witness function in the numerical results) are not clearly motivated.
**Significance**
* The problem of model editing is likely of broad interest to the neurips community, and the proposed approach is conceptually simple and does not require access to the loss function.
Weaknesses: * Numerical results both in the main paper and the appendix do not report the actual accuracies of any of the models. Instead, they only report relative drops in performance on forget and non-forget classes. The link provided for the ImageNet classifier in line 910 in the appendix does not work.
* Computational complexity of the approach compared to related works is not clear; specifically, what is the cost of computing the saliency scores for each filter in your experiments, and how does it compare to those of related methods.
* The choice of witness functions in the numerical results is not well motivated; where do these choices of functions come from?
* Minor issue in Fisher discriminant eq. (2)... should be $\Sigma_q$ not $\Sigma_2$.
* Minor issue in algorithm description for class unlearning; the filter saliency scores should be class labeled $r^c_{\ell, j}$ and should be computed using equation (10) not equation (9).
* Applicability: method is currently restricted to image classification with convolutional networks.
Technical Quality: 4
Clarity: 3
Questions for Authors: * Please fix the minor issues described above.
* Consider reporting computational performance of proposed method compared to baselines.
* Motivate the choice of witness functions in numerical results, and provide any ablations where applicable.
* Report model accuracies in all tables, and not just relative performance drops.
* Minor; consider adding figures to summarize performance and confidence intervals across all classes.
* Please consider elaborating and including numerical results indicating the impact of the number of classes on the ability to find class discriminative filters. These can be conducted on ImageNet by training models on subsets of the classes; e.g., 10 classes, 100 classes, 500 classes, 1000 classes.
* Please consider including experiments with a small vision transformer (e.g., ViT-Tiny) and discuss how the approach might extend beyond convolutional networks.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: N.A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive appraisal of our work, and especially the detailed feedback and insightful questions. We address the reviewer's concerns below.
## Weaknesses
**Numerical results both in the main paper and the appendix do not report the actual accuracies of any of the models. Instead, they only report relative drops in performance on forget and non-forget classes.**
In this work, following standard practice (e.g. [3], [4]), we presented the accuracy drops. We present a table with the baseline accuracies, as requested by the reviewer in the Questions section. This can be seen as a reworking of Table 1, which will be repeated across tables in the paper.
**The link provided for the ImageNet classifier in line 910 in the appendix does not work.**
The reviewer refers to the Gdrive link from which we directly downloaded the VGG16 and ResNet50 models provided on the CHIP (our nearest baseline) GitHub page, which was later updated by the authors of CHIP. We will update the link to the CHIP GitHub repo in the document.
**Computational complexity of the approach compared to related works is not clear; specifically, what is the cost of computing the saliency scores for each filter in your experiments, and how does it compare to those of related methods.**
Typically, the complexity of the saliency computation is not provided in related work, such as CHIP [1] or TVSPrune [2]. In our work, we state the complexities of computing different types of witness functions in Appendix C.2, specifically Table 3. For comparison, we also provide a table of wallclock times for computing saliencies *for all filters in the model* with our approach, as compared to CHIP and TVSPrune.
**Imagenet (ResNet50)**
|Method |Time |
|--|--|
| CHIP[1] | >15 hours |
| TVSPrune [2] | >15 hours |
| Ours | ~13 hours |
**CIFAR10 (VGG16)**
|Method |Time |
|--|--|
| CHIP [1] | 224 minutes |
| TVSPrune [2] | 195 minutes|
| Ours | 24 minutes |
**Takeaway: Our proposed approach is computationally less expensive than common baselines and requires no loss function/backpropagation.**
**The choice of witness functions in the numerical results is not well motivated; where do these choices of functions come from?**
The choice of witness function is governed by relations to classical methods (i.e. Fisher discriminants/MPM), speed of saliency computation, and ease of estimation of moments. The witness function used in our pruning experiments (section 6.2) is described in lines 358-359. This enables us to recover the Fisher/MPM bounds stated in Theorems 1 and 2, and corollary 2, which are also easy to compute. Moreover, these witness functions are also in the spirit of connecting classical, discriminant-based classifiers to the TV distance/Bayes error rate, with which we can then derive novel bounds on the excess risk of those classifiers, as seen in section C.1.
In section 6.1, we use the witness function stated in line 335, which is similar to the RBF Kernel or the moment generating function. A detailed study on the choice of witness functions is, however, the focus of our ongoing research.
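For intuition about how a bounded witness function certifies a lower bound on TV (since TV(P, Q) = (1/2) sup over |f| ≤ 1 of |E_P[f] − E_Q[f]|), here is a sample-based sketch with a generic clipped witness. This is a hypothetical illustration; the specific witness functions and moment-based estimators in the paper differ.

```python
# Generic witness-function lower bound on total variation distance:
# TV(P, Q) >= 0.5 * |E_P[f] - E_Q[f]| for any witness f with |f| <= 1.
# Hypothetical sample-based illustration; the paper's bounds instead use
# finitely many moments of specific witness functions.

def tv_lower_bound(xs_p, xs_q, witness):
    f = lambda x: max(-1.0, min(1.0, witness(x)))  # enforce |f| <= 1
    mean_p = sum(f(x) for x in xs_p) / len(xs_p)
    mean_q = sum(f(x) for x in xs_q) / len(xs_q)
    return 0.5 * abs(mean_p - mean_q)

# Two well-separated sample sets: a threshold witness exposes the gap.
xs_p = [0.0, 0.1, 0.2, 0.1]
xs_q = [1.0, 0.9, 1.1, 1.0]
bound = tv_lower_bound(xs_p, xs_q, lambda x: 1.0 if x > 0.5 else -1.0)
# bound == 1.0 here, since this witness perfectly separates the two sample sets
```

A poorly chosen witness gives a loose (possibly zero) bound, which is why the choice of witness function matters for the tightness of the saliency scores.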
**Typos**
We have amended the typos pointed out by the reviewer in the main document.
**Applicability: method is currently restricted to image classification with convolutional networks.**
The results proposed in Theorems 1 and 2 can be applied whenever lower bounds on the total variation distance are required, and not just in the context of model editing. The techniques employed in deriving the key results can be applied to finding witness function-based lower bounds for other divergences as well.
Moreover, when applied to model editing problems such as pruning and classwise unlearning, the proposed methods can be applied whenever we can access conditional data distributions.
Our method applies to other model types as well. As requested by the reviewer, we will shortly present experiments showcasing the use of our methods on ViTs trained on CIFAR10 and CIFAR100.
## Questions:
**Minor; consider adding figures to summarize performance and confidence intervals across all classes.**
We will do so in the revised manuscript, and thank the reviewer for the suggestion.
**Please consider elaborating and including numerical results indicating the impact of the number of classes on the ability to find class discriminative filters. These can be conducted on ImageNet by training models on subsets of the classes; e.g., 10 classes, 100 classes, 500 classes, 1000 classes.**
We thank the reviewer for the interesting question. The discriminative filters hypothesis proposed in [2] was supported by several experiments, as well as those in this work (see Figs. 5, 6 of the manuscript). It is expected that discriminative filters are easy to identify if the width of the layers being investigated exceeds the number of classes. We see this for models trained on CIFAR100. The final layers of VGG16 (containing 512 filters) possess, on average, 103 (out of 3072) discriminative filters per class in the final 6 layers, whereas ResNet56 models trained on the same dataset possess on average 21 (in the final layer block). Experiments are ongoing for models trained on Imagenet and its subsets.
**Takeaway: The ability to identify discriminative components depends on whether the number of classes exceeds the width of the network or not**.
### References
[1] Sui et al. *CHIP: CHannel Independence-based Pruning for Compact Neural Networks*
[2] Murti et al. *TVSPrune - Pruning Non-discriminative filters via Total Variation separability of intermediate representations without fine tuning*
[3] Shah et al. *Decomposing and Editing Predictions by Modeling Model Computation*
[4] Jia et al. *Model Sparsity Can Simplify Machine Unlearning*
---
Rebuttal 2:
Title: Response to Reviewer
Comment: We thank you for your patience, and have provided the requested ViT experiments as a response to the General Rebuttal [**here**](https://openreview.net/forum?id=tuiqq1G8I5&noteId=MiXju1ozmG). We see that our proposed approaches for model editing (specifically structured pruning and classwise unlearning) are effective when applied to vision transformers as well.
In our previous response, we have addressed the following points:
- addressed various typos pointed out by the reviewer
- discussed the motivation for the choice of witness function
- discussed the applicability of our work beyond CNNs, as well as the broad applicability of our key theoretical results
- discussed how difficult it is to identify discriminative filters in different networks/different datasets
- discussed the computational cost of our approach against common baselines and provided clock times for our method compared to baselines.
We hope that we have adequately addressed your concerns, and are eager for further engagement. Given that the rebuttal period ends in 3 days, we would be grateful to know what we might do to improve your appraisal of our work.
---
Rebuttal Comment 2.1:
Title: Gentle Reminder about upcoming deadline
Comment: We would like to gently remind the reviewer that the author-reviewer discussion period ends in less than 48 hours. We hope we've addressed the concerns you raised in your detailed review of our work. We would really like to engage with you further and, in particular, address any additional concerns you may have. | Rebuttal 1:
Rebuttal: We thank the readers for their appreciation of our work. In particular, we thank reviewers for noting:
- The importance of the problem addressed by our work - that is, model editing with a view toward structured pruning and classwise unlearning.
- The efficient and simple nature of our solution to both the problems of structured pruning as well as classwise unlearning by way of model editing, and highlighting the hitherto unknown connection between them by using discriminative components.
- The rigorous derivations of the novel lower bounds on the Total Variation distance that require no assumptions on the distributions being compared (i.e. no Gaussianity assumption). We also take this opportunity to highlight three facts about our results. First, the bounds may be of general interest, and can be used whenever lower bounds on the TV distance are required. Second, the bounds reveal new connections between the TV distance and discriminant based classifiers (such as the minimax probability machine and the Fisher discriminant), which we use to derive novel excess risk bounds for these classifiers in Appendix C.1. Third, the techniques used to derive these bounds can be used to derive lower bounds on other divergences as well.
Below, we address common concerns raised by the reviewers.
**Comparison with Baselines for Model Unlearning**
As suggested by Reviewers **H2n6** and **crkh**, we compare our proposed method with recent additional baselines provided in [4]. Following [4], we state the test accuracies on the forget and remain classes, averaged over all classes, in the tables below. In the sequel, GA refers to Gradient Ascent, IU refers to influence unlearning (both as implemented in [4]), and l1-sparse refers to the approach proposed in [4].
**CIFAR10 models**
VGG-16
|Method | Forget Class accuracy |Remain Class accuracy|
|--|--|--|
| **Ours** | 9.66% | 82.5% |
| [4], GA | 22.49% | 88.80% |
| [4], IU | 11.42% | 89.81% |
ResNet-20
|Method | Forget Class accuracy| Remain Class accuracy|
|--|--|--|
| **Ours** | 6.37% | 83.90% |
| [4], GA | 11.52% | 85.46% |
| [4], l1-sparse | 1.42% | 90.18% |
**Takeaway: our model achieves superior forgetting compared to the baselines listed in [4] *notably without any fine-tuning of the model, and without utilizing the loss function in any way*, with minimal difference in remain accuracy.**
### References
[1] Sui et al, 2021. *CHIP: CHannel Independence-based Pruning for Compact Neural Networks*
[2] Murti et al, 2023. *TVSPrune - Pruning Non-discriminative filters via Total Variation separability of intermediate representations without fine tuning*
[3] Shah et al, 2024. *Decomposing and Editing Predictions by Modeling Model Computation*
[4] Jia et al, 2023. *Model Sparsity Can Simplify Machine Unlearning* | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Unveiling the Tapestry of Consistency in Large Vision-Language Models | Accept (poster) | Summary: This paper introduces a multimodal Consistency Benchmark (ConBench) to systematically evaluate the capabilities of LVLMs via diverse question formats.
ConBench has a total of 4k questions on 1k images and corresponding 3k discriminative ground truths, as well as two special metrics to evaluate the consistency of LVLMs.
Based on ConBench, this work conducts a comprehensive analysis of inconsistency in LVLMs.
Strengths: - The constructed ConBench can better evaluate LVLMs and encourage further advancements in the consistency domain.
- Their findings provide insight for future community research.
- This paper proposes a trigger-based diagnostic refinement method (TDR) to ameliorate the generation skill of LVLMs without any additional training.
Weaknesses: - The results in Table 4 suggest that TDR can markedly improve the consistency of baseline LVLMs. In addition to ConScore[C], how does TDR affect LVLMs' performance on comprehensive multimodal benchmarks?
Technical Quality: 4
Clarity: 4
Questions for Authors: See Weakness
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Yes, the authors discuss limitations in the Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the efforts in reviewing our paper and the positive evaluation. Our responses to the reviewer's comments are summarized as follows.
---
> 1. In addition to ConScore[C], how does TDR affect LVLMs' performance on comprehensive multimodal benchmarks?
Our proposed method TDR aims to utilize the discriminative prompt template and VLMs' judgment capability to enhance their open-ended generation ability.
Given the difficulty in evaluating open-ended responses, we randomly selected 100 images from MMMU, MMBench, and MMStar. Then, LLaVA-v1.5-7B was employed to generate initial captions for these images, which were then improved by our TDR method. After conducting five rounds of manual evaluation, we observed that the TDR method resulted in improvements in 37% of cases, no change in 60% of the cases, and a slight decrease in 3% of the cases (**GSB = 37:60:3**). The results further confirm the effectiveness of the TDR approach.
Furthermore, although TDR is not designed to directly improve discriminative responses, motivated by your question we also evaluated it on two discriminative benchmarks (rewriting prompts for some low-probability answers). As shown in the table below, we are pleased to see that our method improves LLaVA-v1.5-7B by $0.84$ and $0.92$, and boosts MiniGemini-7B by $0.53$ and $0.79$ on GQA and POPE, respectively. These promising results will inspire our further research.
| Method | GQA | POPE |
| :-------- | :----- | :----: |
| *LLaVA-v1.5-7B (baseline)* | 61.94 | 85.88|
| + TDR (ours) | **62.78**| **86.80**|
| | | |
| *MiniGemini-7B (baseline)* | 63.66 | 87.65|
| + TDR (ours) | **64.19**| **88.44**| | Summary: This paper presents ConBench, a multi-modal benchmark to intuitively analyze how LVLMs perform when different prompts are used for one model around a knowledge point. Based on the proposed benchmark, several interesting findings are pointed out, such as the relationships between the prompt space and the answer accuracy in both the discrimitive and generative realm. Based on the findings, the paper proposes a way to improve the consistency of LVLMs by trigger-based diagnostic refinement. This is specifically conducted on improving the caption capabilities of LVLMs.
Strengths: 1. The benchmark is comprehensive and the related findings are interesting.
2. The motivation for the proposed method to improve the consistency of LVLMs is straightforward based on the findings.
3. The evaluations conducted on the designed benchmarks provide evidence of the effectiveness. However, there are some concerns regarding the experimental results, which will be further discussed in the weaknesses section.
Weaknesses: 1. Some of the presentations are unclear. For example, the definition of the discriminative and generative domains should be clearly explained at the beginning to avoid obscure understandings.
2. Some metrics are not clearly explained, including but not limited to ConScore[C], metric[C] and metric[D] in Sec 4.1.
3. For the generative problem, the paper uses GPT4 to compare the consistency of the LVLM output answer and the ground truth. However, several issues arise here: (1) Does GPT4 have a certain bias; (2) Does GPT4 also have inconsistent judgment issues; (3) How to eliminate the impact of GPT4's inconsistent judgments, for example, is it feasible to conduct several experiments and provide results variation.
4. About the Trigger-based Diagnostic Refinement: it seems that previous work [a] also used the same way to give more accurate answers in LLM realm, that is prompting the models with previous answers and the same-meaning questions. Further explanations are suggested to clarify on the common and different things between the proposed Trigger-based Diagnostic Refinement and other works (e.g., [a]) in LLM realm.
[a] Chuanyang Zheng, etc. Progressive-Hint Prompting Improves Reasoning in Large Language Models, 2023.
Technical Quality: 4
Clarity: 3
Questions for Authors: Shown in the weaknesses part
Justification of final rating:
I appreciate the opportunity to evaluate this submission. Overall, the proposed benchmark is interesting. The findings based on the benchmark are reasonable. The proposed methods based on the findings are practical. I give my rating as borderline accept.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The author has claimed it in the checklist.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer aYRC for the efforts in reviewing our paper and the positive evaluation. Our responses to the reviewer's comments are summarized as follows.
---
> 1. Explain the definition of the discriminative and generative domains clearly.
(1) Discriminative questions provide prior information about the image in the text, restricting the model to answer based on this information, making it a closed-ended question.
(2) Generative questions, on the other hand, do not provide the model with specific image priors, allowing the model to generate answers totally based on its own reasoning, making it an open-ended question.
(3) Furthermore, we will carefully review the clarity of other presentations and make corrections in the revised version.
---
> 2. Explain the definition of ConScore[C], metric[C], and metric[D].
**(1) ConScore[C]**
The ConScore[C] can be clearly defined by the following formula:
ConScore[C] = (Con[T] + Con[C] + Con[V]) / 3,
where $Con[X]$ is the consistency ratio between discriminative answer type $X$ and the caption. ConScore[C] is a generative-domain evaluation metric for indirectly quantifying and assessing the actual user experience.
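As a minimal sketch of the formula above (the function name and the example values are ours, purely illustrative):

```python
def conscore_c(con_t, con_c, con_v):
    """ConScore[C]: the mean of the three consistency ratios Con[T],
    Con[C], and Con[V] between each discriminative answer type and
    the generated caption."""
    return (con_t + con_c + con_v) / 3

# Illustrative consistency ratios (percentages):
score = conscore_c(46.0, 52.2, 46.8)  # ~48.33
```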
**(2) Metric[C] and Metric[D]**
Metric[C] and Metric[D] actually refer to the tables with ConScore[C] and Score[D] as evaluation metrics, respectively. We will provide clearer explanations in the revised version.
---
> 3. The concerns about using GPT4: (1) Does GPT4 have a certain bias; (2) Does GPT4 also have inconsistent judgment issues; (3) How to eliminate the impact of GPT4's inconsistent judgments?
(1) Initially, we only employ GPT-4 to determine the Consistency of model responses (e.g., with prompts like "Based on the caption, is my response correct? Please answer with yes or no only."). Therefore, this is a binary-choice task for GPT-4, and our primary focus is on its accuracy. For this task, we manually checked its performance on five different VLMs by randomly sampling 200 cases in each trial. As shown in the following table, GPT-4 achieves an **accuracy of 95.4% (as of August 4, 2024)** with a standard deviation of 0.0035, which is highly reliable and stable.
| Responder | Accuracy rate of GPT-4 (6 trials) |
| :-------- | :-----: |
| GPT-4V | 95.1% |
| Gemini-Pro-Vision | 95.8% |
| Qwen-VL-Max | 94.9% |
| LLaVA-NeXT-34B | 95.7% |
| InternVL-v1.2P-40B | 95.5% |
(2) Inconsistency refers to the model providing inconsistent answers when faced with different prompt solution spaces for the same knowledge point. As noted above, GPT-4 in our evaluation always answers 'yes' or 'no', so the prompt solution space remains fixed and Inconsistency does not need to be considered in this situation.
---
> 4. The comparison between the proposed Trigger-based Diagnostic Refinement and other works (e.g., [1]) in the LLM realm.
**(1) Similarity**
The similarity between ours and [1] lies in asking follow-up questions based on the historical responses of the model and confirming the final answer, to enhance the quality of responses.
**(2) Differences**
However, the motivation and technical details of our method and [1] are completely different. [1] uses all the accumulated responses as options for the next round of prompts, iteratively searching through historical responses to find the correct answer, which is effective mainly for mathematical and reasoning abilities. Our method instead picks out the uncertain words in model responses and constructs discriminative templates for questioning, proving effective for comprehensive abilities.
[1] Chuanyang Zheng, etc. Progressive-Hint Prompting Improves Reasoning in Large Language Models, 2023.
---
Rebuttal 2:
Title: Discussion to Reviewer aYRC
Comment: Dear Reviewer aYRC,
We sincerely thank you for your efforts in reviewing our paper. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether your concerns have been addressed. Please let us know if any part of our work is still unclear.
Best,
Authors | Summary: The paper presented a comprehensive study LVLMs on their inconsistent answers given different prompt solution spaces (true/false, multiple choice, and limited QA). Specifically, the authors introduced the ConBench benchmark to evaluate the performance of various models. They investigated the relationship between the size of the solution space and the accuracy of answers provided by these models, and analysed the consistency between answers and the generated captions. They also presented a simple yet effective method to ameliorate the consistency.
Strengths: * It is an interesting perspective to construct a benchmark to evaluate the model's performance when prompt with different solution spaces are provided. In fact, this is a common problem that we might encounter when we used LVLMs to answer our questions.
* The analysis was comprehensive with some insightful findings. In particular, the positive correlation between the accuracy and consistency, and the relationship between the confidence and consistency.
* The authors also provided a simple yet effective method, i.e., picking those words with lower confidence scores for re-prompting, to generate more accurate captions. The method did not require re-training the model.
Weaknesses: * The proposed benchmark was not sufficient to evaluate the model, with the size of only 1K images, and three types of questions. It would be great if the bench can be more diverse and representative for various domains.
* It was not clear what the insight of Fact 4.4.1 is. Did the authors want to show that closed-source models put more effort into improving consistency compared with open-sourced ones?
* The current method to improve consistency was ad-hoc and based on such an idea, it is likely that simply changing the generation arguments like temperature, top_p or top_k might have similar effects.
* The paper seems more suitable to be submitted to the Datasets & Benchmarks track instead of the main track.
Technical Quality: 3
Clarity: 3
Questions for Authors: See above for questions.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors discussed potential social impacts as well as limitations of their paper in the appendix. I agreed with them that using GPTs for evaluation might pose a bias and a more rigorous human evaluation was preferred.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer AnYL for the efforts in reviewing our paper. Our responses to the reviewer's comments are summarized as follows.
---
> 1. It would be beneficial to increase the diversity and scale of ConBench.
We greatly appreciate your attention to the scale and diversity of ConBench. The current version of ConBench contains 1,000 images with 3,000 questions and covers 19 subdomains, comparable to popular multimodal datasets such as MME [1] and MMBench [2]. In this version, multimodal models have not yet achieved high scores (e.g., the current best model Qwen-VL only achieves 37.00 on ConScore[D]). With the further development of VLMs, we will update and scale the benchmark up to tens of thousands of cases, providing a more comprehensive evaluation of Consistency.
[1] MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models.
[2] MMBench: Is your multi-modal model an all-around player?
---
> 2. Is the Fact 4.4.1. to show that closed-source models put more effort into improving the consistency compared with open-sourced ones?
In the paper, we discovered Fact 4.4.1 to provide an alternative perspective to demonstrate why closed-source models, despite sometimes having lower accuracy on benchmarks, offer a better user experience in practical applications. One possible reason, based on our paper, is that closed-source models have a Consistency bias. Therefore, evaluating the Consistency of VLMs is a feasible approach to indirectly quantifying and assessing the actual user experience.
---
> 3. Changing generation arguments like temperature, top_p or top_k might have similar effects with the proposed method TDR.
In fact, adjusting these hyperparameters only affects the randomness in sampling; it does not improve the quality of the generation. As shown in the following table, we conducted comprehensive experiments on LLaVA-NeXT-34B and MiniGemini-34B. The temperature and top_p in the baselines are $0.0$ and $None$, and we set the temperature to $0.2/0.8$ and top_p to $0.9/0.7$ in the ablation studies. The results show that tuning the temperature or top_p values cannot enhance the Consistency of model responses. In contrast, our approach makes the model aware of where it lacks confidence and has errors, leading to corrections that boost generation quality. Therefore, changing generation arguments can hardly achieve similar effects.
| Method | ConScore[C] | Con[T] | Con[C] | Con[V] |
| :-------- | :-----: | :----: | :----: | :----: |
| *LLaVA-NeXT-34B (baseline)* | 48.3 | 46.00 | 52.20 | 46.80 |
| + temperature (0.2) | 48.0 | 45.75 | 51.75 | 46.51 |
| + temperature (0.8) | 44.1 | 42.55 | 47.15 | 42.57 |
| + temperature (0.2), top_p (0.9) | 47.8 | 45.57 | 51.72 | 46.12 |
| + temperature (0.2), top_p (0.7) | 48.2 | 45.88 | 52.12 | 46.60 |
| + TDR (ours) | **57.4** | 69.10 | 57.40 | 45.70 |
| | | | | |
| *MiniGemini-34B (baseline)* | 49.6 | 56.80 | 48.00 | 44.10 |
| + temperature (0.2) | 48.9 | 56.00 | 47.32 | 43.38 |
| + temperature (0.8) | 45.6 | 52.22 | 47.99 | 36.59 |
| + temperature (0.2), top_p (0.9) | 48.1 | 55.08 | 50.62 | 38.60 |
| + temperature (0.2), top_p (0.7) | 49.1 | 55.97 | 47.64 | 43.69 |
| + TDR (ours) | **60.2** | 76.10 | 53.80 | 50.80 |
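To illustrate why these decoding arguments only reshape randomness, here is a rough sketch of standard temperature and top-p (nucleus) sampling (the function name is ours, not from the paper or any library):

```python
import numpy as np

def sample_token(logits, temperature=1.0, top_p=None, rng=None):
    """Temperature rescales logits before the SoftMax; top-p truncates
    the distribution to the smallest set of tokens whose cumulative
    probability reaches p. Neither injects new information that could
    correct an uncertain answer -- they only reshape the distribution
    the next token is drawn from."""
    rng = rng if rng is not None else np.random.default_rng(0)
    z = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(z - z.max())
    probs /= probs.sum()
    if top_p is not None:
        order = np.argsort(probs)[::-1]             # tokens by descending prob
        cum = np.cumsum(probs[order])
        keep = order[: int(np.searchsorted(cum, top_p)) + 1]
        trunc = np.zeros_like(probs)
        trunc[keep] = probs[keep]
        probs = trunc / trunc.sum()                 # renormalize kept tokens
    return int(rng.choice(len(probs), p=probs))
```

For instance, with a near-zero temperature the draw collapses to the argmax token, which is exactly why tuning these knobs cannot make the model aware of its own low-confidence words the way TDR does.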
---
> 4. The paper seems more suitable to be submitted to the Datasets & Benchmarks track instead of the main track.
Our paper differs from the papers in the Datasets & Benchmarks track. It not only introduces a new dataset, ConBench, but, more importantly, provides deep analyses of the Inconsistency phenomenon based on it. Our findings establish a relationship between the discriminative and generative realms, highlighting the importance of Consistency between the discriminative answer and the caption. Moreover, we propose a solution by forcing VLMs to self-think, where a discriminative prompt is constructed via uncertain words in the caption. It is a comprehensive evaluation pipeline rather than solely the introduction of a dataset or benchmark.
---
Rebuttal 2:
Title: Discussion to Reviewer AnYL
Comment: Dear Reviewer AnYL,
We sincerely thank you for your efforts in reviewing our paper. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether your concerns have been addressed. Please let us know if any part of our work is still unclear.
Best,
Authors | null | null | Rebuttal 1:
Rebuttal: Dear ACs and Reviewers,
We thank all the reviewers for their valuable comments and efforts in reviewing our paper.
We are delighted that Reviewer AnYL, aYRC, and wv4L stated that our findings are interesting and the benchmark is comprehensive; Reviewer AnYL and wv4L acknowledged that our method TDR is efficient and has a technical contribution.
Regarding the questions and concerns of the reviewers, we have provided our responses in their respective sections. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Revisiting the Integration of Convolution and Attention for Vision Backbone | Accept (poster) | Summary: This paper addresses the scalability issue in vision transformers by integrating convolutions (Convs) and multi-head self-attentions (MHSAs) at different granularity levels, rather than at the finest pixel level. The authors propose using Convs for fine-grained per-pixel feature extraction and MHSAs for coarse-grained semantic slots in parallel. They introduce a pair of fully differentiable soft clustering and dispatching modules to bridge the grid and set representations, enabling effective local-global fusion. The proposed integration scheme, named GLMix, offloads fine-grained feature extraction to light-weight Convs and utilizes MHSAs in a limited number of semantic slots. Through extensive experiments, they demonstrate the efficiency and effectiveness of their approach, showing improved performance and interpretability in various vision tasks. The paper also highlights the potential of their method to induce better semantic segmentation with weak supervision.
Strengths: 1. Innovative Integration Approach: The idea of integrating Convs and MHSAs at different granularity levels is innovative and addresses the scalability issues inherent in vision transformers.
2. Efficient Local-Global Fusion: The use of soft clustering and dispatching modules enables efficient local-global feature fusion, which is a significant advancement.
3. Extensive Empirical Validation: The method achieves state-of-the-art results on ImageNet-1k, COCO, and ADE20K benchmarks, showing a favorable performance-computation trade-off. Furthermore, the semantic grouping effect observed in the clustering module enhances the interpretability of the model.
Weaknesses: 1. Complexity of Implementation: The introduction of soft clustering and dispatching modules adds complexity to the implementation, which might pose challenges for practical deployment.
2. Static Number of Semantic Slots: The use of a static number of semantic slots for all images may lead to redundancy and inefficiency in certain cases.
3. Limited Scope of Clustering Strategy: The clustering strategy, though effective, could be further optimized. The current implementation might still be computationally intensive for real-time applications.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1.How sensitive is the overall performance of the GLMix model to the choice of clustering strategy? Have the authors tested alternative clustering methods, and what were the outcomes?
2.What are the potential limitations of using a static number of semantic slots, and have the authors considered dynamic allocation of semantic slots based on the complexity of the input image?
3.How adaptable is the GLMix integration scheme to different types of vision tasks, such as object detection, instance segmentation, and semantic segmentation? Are there any task-specific adjustments needed for optimal performance?
4.What are the scalability limits of the GLMix models when dealing with very high-resolution images or videos?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1. Scalability to Higher Resolutions: While the method shows good performance on standard benchmarks, its scalability to very high-resolution images or videos remains to be fully explored.
2. Hardware Inefficiency: The use of depth-wise convolutions, despite their low arithmetic intensity, might still be inefficient on certain hardware, limiting the practical applicability of the model.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: W1: Introducing the soft clustering and dispatching modules adds complexity to the implementation, which might pose challenges for practical deployment.
The clustering and dispatching modules involve only standard and widely used operators, such as matrix multiplication and SoftMax, which are supported by all deep learning libraries on different hardware, such as CPUs and GPUs. Therefore, the complexity of the implementation should not be a problem.
---
W2: The use of a static number of semantic slots for all images may lead to redundancy and inefficiency in certain cases.
The static number of semantic slots is indeed a limitation of our method, as we mentioned in Section 5. However,
* On the one hand, we have observed that such a simple strategy is already sufficiently efficient to achieve a good efficiency-performance tradeoff, as demonstrated in Figure 2.
* On the other hand, dynamic allocation of slots still limits efficiency in typical batched inference scenarios: an image with fewer slots needs slot padding to align with images that have more slots.
Although it may not be easily addressed, it would be interesting for future work to consider the dynamic allocation of slots.
---
W3: The clustering strategy, though effective, could be further optimized. The current implementation might still be computationally intensive for real-time applications.
* Our clustering strategy is designed to be lightweight and only involves a pooling operation and a matrix multiplication followed by a SoftMax. These three operations are widely used and highly efficient in all deep learning libraries.
* The clustering takes less than 5% FLOPs of the whole models. It is far from being a computation bottleneck.
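A rough numpy sketch of the three operations described above (the function name, shapes, and pooling size are illustrative, not the paper's actual configuration):

```python
import numpy as np

def soft_cluster(features, pool_hw=8):
    """Sketch of lightweight soft clustering: slots are initialized by
    adaptive average pooling over the feature grid, a feature-slot
    similarity matrix is computed by matrix multiplication, and a
    SoftMax turns it into soft assignments used to aggregate features
    into slots."""
    H, W, C = features.shape
    # adaptive average pooling to a pool_hw x pool_hw grid of slot initializers
    hs = np.array_split(np.arange(H), pool_hw)
    ws = np.array_split(np.arange(W), pool_hw)
    slots = np.stack([features[np.ix_(h, w)].mean(axis=(0, 1))
                      for h in hs for w in ws])          # (pool_hw**2, C)
    sim = features.reshape(-1, C) @ slots.T              # (H*W, num_slots)
    assign = np.exp(sim - sim.max(axis=1, keepdims=True))
    assign /= assign.sum(axis=1, keepdims=True)          # row-wise SoftMax
    # aggregate grid features into slots with the soft assignments
    slots_updated = assign.T @ features.reshape(-1, C)
    return slots_updated, assign
```

All three steps are dense matrix operations, which is consistent with the claim that the clustering is far from a computational bottleneck.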
---
Q1: How sensitive is the overall performance of the GLMix model to the choice of the clustering strategy? Have the authors tested alternative clustering methods, and what were the outcomes?
The design choice of the clustering strategy is significant. According to our ablation study on the clustering strategy in L312-317 and Table 6,
* using k-means clustering not only produces a significantly lower throughput (835.9 im/s -> 440.6 im/s) but also incurs unstable training;
* clustering initialization with per-image adaptive pooling improves the performances over static initialization with learnable parameters (82.5% vs. 82.1%).
---
Q2: What are the potential limitations of using a static number of semantic slots, and have the authors considered the dynamic allocation of semantic slots based on the complexity of the input image?
We have addressed this issue in W2. Please refer to our reply to W2 above.
---
Q3: How adaptable is the GLMix integration scheme to different types of vision tasks, such as object detection, instance segmentation, and semantic segmentation? Are there any task-specific adjustments needed for optimal performance?
* Keeping the number of slots consistent (i.e., 64) with image classification is good enough for dense prediction tasks. For object detection (Table 4 top group), instance segmentation (Table 4 bottom group), and semantic segmentation (Table 5), we do not apply any task-specific adjustments and still obtain state-of-the-art performances.
* We have also tried using more slots in the GLNet-4G backbone for semantic segmentation with UperNet. Using more slots, such as 100 or 256, did not improve the performance (50.6 mIoU -> 50.5 mIoU or 50.6 mIoU).
---
Q4: What are the scalability limits of the GLMix models when dealing with very high-resolution images or videos?
* With GLMix, memory and computation grow linearly w.r.t. the input size, as the clustering and dispatching take linear complexity and the attention among slots takes constant complexity. Therefore, the scalability should be similar to linear-complexity models such as Swin Transformer.
* In practice, the scalability limits depend on memory and latency constraints, as well as engineering optimizations. For reference, below we show the peak memory occupancy and latency w.r.t. different input resolutions and batch sizes. Note that the single-input cases (batch size = 1) are heavily unoptimized in Pytorch and can be further improved with advanced inference engines like Nvidia Triton.
| Batch Size | Input Size | GLNet-STL | GLNet-4G | GLNet-9G | GLNet-16G |
|---|---|---|---|---|---|
|128|$224^2$|1957MB/149ms|1553MB/233ms|2373MB/392ms|3230MB/586ms|
|128|$448^2$|7469MB/562ms|5889MB/1012ms|8762MB/1690ms|11682MB/2517ms|
|1 |$896^2$|346MB/22ms|433MB/59ms|722MB/95ms|1057MB/137ms|
|1 |$1792^2$|1108MB/108ms|5027MB/628ms|7599MB/970ms|10218MB/1329ms|
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: I've thoroughly reviewed the authors' responses and appreciate their thoughtful engagement. I will stay in touch for further discussion as we approach the final rating.
---
Rebuttal 2:
Comment: Thanks for your response. We hope our previous reply has addressed your main concerns. Please feel free to reach out if you have any additional questions or require further clarification on any aspects of our work. | Summary: The authors propose to leverage the strengths of convolution layers and MHSA block to improve the performance of vision transformers. They propose to apply them in parallel at different granularity, such that the convolution layers are applied to the grid of local features and MHSA to slots for global features. Slots can be considered as clustering of patch features, where patches corresponding to the same object would be associated to a slot. To create slots, they use soft clustering that enables local-global fusion across patch features such that it produces some meaningful semantic grouping effect.
To connect the slots back to the patch features, they propose a dispatching module, whose output is then fused with a convolution block of 1x1 convolution and a depth-wise convolution operation (similar to ConvNeXt). They term this entire block GLMix, for Global-Local Mixing block.
Finally, to validate the effectiveness of their approach, the authors propose steps to adapt it to Swin-tiny-layout module and empirically show the performance for image classification on Imagenet-1k, semantic segmentation on ADE20K, object detection and instance segmentation on MS-COCO. The authors provide a comprehensive list of baseline methods (Table 3, Table 4) for different tasks and also provide visualization of the slots attending to different regions of the image. They also discuss different experimental settings and provide additional visualizations of the slots in the Appendix.
Strengths: The following are the strengths of the paper:
- The authors propose an idea to integrate the strengths of convolution, i.e., the ability to encode inductive bias, along with the MHSA block's ability to learn global representations, to improve vision transformers. The idea is quite interesting, and while convolution and transformer layers have been combined in hybrid architectures, their use in parallel is under-studied and is an exciting topic.
- The visualizations provided in Figure 5 of the object-level representations learned by the slots, and additional such visualizations in the Appendix, give a nice insight into the representations encoded by slots. It's also interesting to see the slots learning different regions of the image, not only focusing on the object, but also regions in the background that might be of interest.
- I also really liked the thorough and extensive experiments, along with comparisons to different SoTA methods, not only in terms of performance but also efficiency.
- The paper is also easy to understand, with exhaustive mentions of related works and their comparisons.
Weaknesses: I have the following concerns about the work:
- In L59, the authors state that they have applied several macro designs from SMT [32]. However, comparing performance, SMT-S achieves the same accuracy as GLNet-4G, i.e., 83.7%, but has much fewer parameters (20.5M vs. 27M) while having similar FLOPs. Additionally, the variant SMT-B has 2G fewer FLOPs and half the number of parameters, yet achieves nearly the same performance, i.e., 84.3% vs. 84.5%. Furthermore, SMT-S is also better than GLNet-4G on object detection on MS-COCO when using RetinaNet. The authors of SMT also provide visualizations of the attention heads clearly attending to the object, same as in GLNet. This suggests that the scale-aware modulation (SAM) achieves a much more efficient integration of convolution with MHSA. Could the authors please comment on the effectiveness of GLNet against SMT?
- Maybe I missed it, but I'm unable to understand the intuition of applying the self-attention operation on the slots (L161). Instead of just having a cosine similarity between the slots and patch features as in eq. (2), a softmax over the slots would ensure that the slots do not collapse to the same object. Is this the reason why the authors apply MHSA on S to obtain S'?
- Continuing from the above point, I further cannot understand the significance of the dispatching module. Could the authors please explain the intuition behind going from slots back to patch features? Is it because it would be easier to integrate with the features obtained after the conv layers?
- In Figure 4, the clustering before applying the GLMix block and after are exactly the same. It's not clear to me what is being learnt by the GLMix block.
- A popular work that uses slots to perform object-centric learning is DINOSAUR [1*], which shows in Figure 6 of their paper that increasing the number of slots results in a drop in performance. The authors, however, find that varying the slots has almost no effect on performance. Could the authors please comment on this difference in observation? Additionally, by looking carefully at Figure 5, we can observe that many slots represent the same object. This leads to some redundancy in learning. Can the authors vary the slots to somewhere between 5-10, as done in [1*], to see if they have the same observation?
[1*] Seitzer et al., Bridging the Gap to Real-World Object-Centric Learning, ICLR 2023
Technical Quality: 3
Clarity: 2
Questions for Authors: Apart from the ones mentioned in the Weakness section, I have the following questions
- Can the authors please clarify their architecture? I see references to Swin, ConvNeXt, LV-ViT, etc., but I'm unsure what a single layer of GLNet looks like. I'm also curious whether the authors use a pretrained backbone, because I'm unsure about the performance gains with just 1 V100 GPU and training the slots over just 1 iteration. Maybe I'm missing some detail here.
- Is there any specific reason apart from the smaller feature resolution that the GLMix block is applied to the 3rd layer?
- There are missing references to hybrid ViTs, which the authors can add and discuss.
[2*] Venkataramanan et al., Skip-Attention: Improving Vision Transformers by Paying Less Attention, ICLR 2024
[3*] Pan et al., Less is more: Pay less attention in vision transformers., AAAI 2022
[4*] Mehta & Rastegari, Separable self-attention for mobile vision transformers, arxiv 2022
[5*] Graham et al., Levit: a vision transformer in convnet’s clothing for faster inference, ICCV 2021
- Could the authors also provide the throughput details for table 3,4 and 5
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes the authors have discussed the limitation, but one can reduce the redundancy in slots by trying to empirically understand the performance of the proposed method with fewer slots, as suggested above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: W1: Please comment on the effectiveness of GLNet against SMT.
* For classification, the throughput/efficiency of SMT is not as good as its parameters and FLOPs indicate. This is because its core design, scale-aware modulation (SAM), relies heavily on depthwise convolutions (DWConvs), which cannot utilize GPUs well due to low arithmetic intensity. Note that although we also use DWConvs, we only use them once in each GLMix block.
| Model |#Params(M)|FLOPs(G)|Acc.@IN1k(%)|Throu.(im/s)|
|---|---|---|---|---|
|SMT-S|20.5|4.7|**83.7**|484|
|GLNet-4G|27|4.5|**83.7**|**541**|
| | | | | |
|SMT-B|32.0|7.7|84.3|298|
|GLNet-9G|61|9.7|**84.5**|**345**|
* For object detection (Table 4) and semantic segmentation (Table 5), GLNet performs better and has lower FLOPs. This implies that GLNet has better scalability to high-resolution inputs than SMT.
|Backbone|#Params(M)|FLOPs(G)|mAP (RetinaNet 3X+MS @ COCO) |
|---|---|---|---|
|SMT-S|30|247|47.3|
|GLNet-4G|37|**214**|**47.9**|
|Backbone|#Params(M)|FLOPs(G)|mIoU (UperNet 160k @ ADE20K) |
|---|---|---|---|
|SMT-S|50.1|935|49.2|
|GLNet-4G|56.8|**907**|**50.6**|
| | | | | |
|SMT-B|61.8|1004|49.6|
|GLNet-9G|91.7|**988**|**51.4**|
---
W2: What's the intuition of applying self-attention operation on the slots? Is it to ensure that slots do not collapse to the same object?
* The self-attention operation on the slots is for inter-object/inter-region relation learning.
* We have tried removing the self-attention over slots, and the slots do not collapse to the same object, possibly because the column-wise and row-wise SoftMax operations (Eq. 3 and Eq. 4) have introduced both inter-slot and inter-position competitions. However, the IN1k accuracy drops from 82.5% to 82.0% for the GLNet-STL.
---
W3: Could the authors please explain the intuition behind the dispatching module? Is it because it would be easier to integrate with the features obtained after conv layers?
The goal of the dispatching module is to propagate inter-region/object relation to spatial positions in the feature grid. Indeed, it is crucial for the fusion with the Conv layer processed features, as you point out here.
---
W4: In Figure 4, the clustering before and after applying the GLMix block are exactly the same. It's not clear what is being learnt by the GLMix block.
The GLMix block performs feature transformation. The input and output of the GLMix block are both feature maps. In Figure 4, we colorize both the input and output feature maps according to the expected clustering so that they look the same, which may be the reason for the confusion. We will revise the figure to be more clear (e.g., by removing the clustering colorization on the input feature map). Thanks for pointing out the issue.
---
W5: The observation on the effects of varying number of slots differs from DINOSAUR. Could the authors please comment on this difference? Can the authors vary the slots to somewhere between 5-10?
* There are at least two aspects that make our observation on slot numbers differ from DINOSAUR:
* Supervision signal. DINOSAUR is an unsupervised framework for object discovery while our GLNet is trained under a supervised learning framework. DINOSAUR may require setting a proper number of slots to serve as a prior on how many objects can emerge in an image from their datasets.
* Evaluation metric. DINOSAUR uses the foreground adjusted rand index (FG-ARI), a no-reference metric for clustering quality, while we use the IN1k accuracy in the ablation study.
* We have tried to use 9 slots (initialized with 3 $\times$ 3 pooling) in GLNet-STL; the IN1k accuracy drops from 82.5% to 81.9%.
---
Q1: Can the authors please clarify their architecture? What does a single layer look like in GLNet? Do the authors use a pretrained backbone?
The architecture design is specified in Section 3.3. To summarize,
* The macro architecture of GLNet follows most existing hierarchical transformers: it has four stages with gradually smaller spatial resolutions (1/4, 1/8, 1/16, 1/32) and larger hidden dimensions (C, 2C, 4C, 8C). Each layer contains a spatial mixing block (the GLMix block or MHSA block) followed by a channel mixing FFN/MLP. The detailed configurations can be found in Table 2.
* We do not use a pretrained backbone. Since there are architectural differences, it is impossible to inherit the weights from existing architectures. We train the model from scratch on ImageNet and then use the weights to initialize the backbone for downstream dense prediction tasks, following existing works such as Swin Transformer.
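For reference, the macro layout described above can be sketched as a small config helper. This is our own illustrative summary, not code from the paper: the actual depths and base width C are given in Table 2, and the stage-wise mixer assignment follows the description (GLMix in stages 1-3, standard attention in stage 4).

```python
def stage_spec(C, depths):
    """Sketch of GLNet's four-stage macro layout: strides 4/8/16/32,
    widths C/2C/4C/8C, GLMix in stages 1-3 and MHSA in stage 4."""
    strides = [4, 8, 16, 32]
    widths = [C, 2 * C, 4 * C, 8 * C]
    mixers = ["GLMix", "GLMix", "GLMix", "MHSA"]
    return [
        {"stride": s, "dim": w, "depth": d, "mixer": m}
        for s, w, d, m in zip(strides, widths, depths, mixers)
    ]
```

The depth list `[2, 2, 6, 2]` and base width `64` below are placeholders, not the configurations from Table 2.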
---
Q2: Is there any specific reason apart from the smaller feature resolution that the GLMix block is applied to the 3rd layer?
The GLMix block is applied to the 1st, 2nd, and 3rd stages, and each stage has several layers according to the model size (see Table 2). The 4th stage uses a standard attention block as the resolution (1/32 input size) is sufficiently small so that the standard attention is affordable and beneficial for the performance. See also Section 3.3 for more details.
---
Q3: There are missing references to hybrid ViTs, which the authors can add and discuss.
GLNet distinguishes itself from existing works, including the recommended references, by applying MHSAs and Convs at different granularity levels, i.e., the object/region level and pixel level. Thanks for bringing these works to our attention. We will reference and discuss them in our revision.
---
Q4: Could the authors also provide the throughput details for table 3,4 and 5.
* It isn't easy to provide throughputs for these tables because most of the compared works did not report the data.
* For classification (Table 3), Figure 2, together with the SMT throughputs in W1, has covered the latest SOTA models for comparison.
* For object detection (Table 4) and semantic segmentation (Table 5), the throughputs may not be that useful because the task-specific heads, instead of the backbone, consume most computations.
---
Rebuttal Comment 1.1:
Title: Response to author's rebuttal
Comment: Thanks to the authors for their response and pointing out the results for object detection and semantic segmentation. The authors have clarified some of my queries. The following are some points, which are still unclear to me.
- With regard to the GLMix block, my main query about intuition was trying to understand more about what each component is learning. To me, the architecture can be compressed into 3 components:
- slots to patch feature cross attention
- self-attention of output of above step to obtain refined slots
- patch features to refined slot cross attention, which is then combined with output of convolution features
If this is the case, like I mentioned before, I am curious to see visualizations on ImageNet-1k images that are not object-centric, unlike the ones shown in Figure 5. This would help better understand what different slots observe and whether they form nice clusters. I understand that, as per NeurIPS policy, the authors can upload a 1-page document containing figures and tables.
- Following up on the above point, the authors mention that they do not use a pre-trained model. In this case, during the initial phase of training, the slots would be random features. I'm curious to know how this affects the learning. Can the authors show any visualization of how the slots evolve as a function of #epochs? More importantly, I'm curious to understand how these random slots are not detrimental to the overall learning, as they would form incorrect object clusters.
- Instead of making the slots conditioned on the input image, I'm curious to know what happens when the slots are made independent, similar to Perceiver.
I'm open to discussing this with the authors in case of any misunderstanding on my end.
Thanks
---
Rebuttal 2:
Comment: Thanks for the response. Below are our replies to the new comments.
---
C1: further clarification on the GLMix block.
Your latest comments on the GLMix block are mostly correct. However, we would like to clarify one thing. Although the clustering and dispatching modules are similar to cross attention, the two modules are designed to be lightweight so that they have **no QKV projections**, **no multihead mechanism**, and **the two modules share the correspondence/assignment logits**.
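As a rough illustration of this design (our own simplified sketch, not the paper's code), the two modules can be written with one shared logits matrix, normalized column-wise for clustering and row-wise for dispatching, with no QKV projections and no multi-head split:

```python
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cluster_and_dispatch(feats, slots):
    """feats: (N, C) flattened feature grid; slots: (M, C) slot features.

    The clustering and dispatching steps share one logits matrix and
    differ only in the normalization axis (cf. Eq. 3 and Eq. 4 in the
    paper; residuals, scaling, and other details are omitted here).
    """
    logits = feats @ slots.T              # (N, M), shared by both modules
    assign = softmax(logits, axis=0)      # column-wise: positions compete per slot
    new_slots = assign.T @ feats          # (M, C) clustered slot features
    # (self-attention over new_slots for inter-region relations would go here)
    dispatch = softmax(logits, axis=1)    # row-wise: slots compete per position
    out = dispatch @ new_slots            # (N, C) propagated back onto the grid
    return out, new_slots
```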
---
C2: the slots are random features during early epochs. Does this affect the learning?
We did not encounter any difficulties caused by the random slots in early epochs. Previous Perceiver/DETR-like architectures, which involve something similar to the slots, also have no problems here. One possible reason for this phenomenon is that even random projection can preserve distances/similarities well.
---
C3: request for the visualization of non-object-centric images from IN1k, of semantic slots evolution against training epochs, and of semantic slots with Perceiver-like non-image-conditioned design.
* Unfortunately, while authors could optionally upload a PDF in the global response section during the initial rebuttal phase, we are not allowed to edit that section at this stage. Therefore, we are unable to submit a PDF now.
* By looking at the visualization of non-object-centric images, we still observe meaningful semantic grouping effects of the things and stuff.
* Unfortunately, we did not keep the model snapshots at different epochs or for the Perceiver-like non-image-conditioned design, which are required to visualize the latter two items. We will redo the experiments and will describe the results as soon as possible. However, since we are attending the ACL conference now, it may take a couple of days for us to get back to you, but we will try our best to reply soon.
Title: Clarification on the GLMix block, random slots during early epochs, and additional visualizations
---
Rebuttal 3:
Title: Update on the visualization
Comment: We have done some further visualizations as per your request. Below are our observations.
* At the end of the 1st epoch, a few semantic slots can already be associated with foreground objects, while the others are either uniform or scattered patterns. At the end of the 5th epoch, the semantic grouping effect becomes similar to Figure 5 in the paper. In subsequent epochs (e.g., 10th, 50th, ...), the grouping effect gradually becomes more obvious.
* With non-image-conditioned initialization for the slots, while a similar semantic grouping effect is observed, we find that nearly all slots are associated with the foreground objects in many cases. This possibly accounts for the degraded classification accuracy with such a design (82.5% -> 82.1%, Table 6): the background information is ignored, even though it may be helpful in some cases. | Summary: The paper discusses the use of Convolutions (Convs) and multi-head self-attentions (MHSAs) in vision backbones. Traditionally considered alternatives, the authors question the need for both to operate at the finest pixel granularity, particularly highlighting the scalability issues this causes in vision transformers. They propose a novel approach where Convs and MHSAs work in parallel but at different granularities: Convs handle local features on a fine-grained grid, while MHSAs manage global features on a coarse-grained set of semantic slots. The integration is facilitated by soft clustering and dispatching modules that bridge these representations, enabling local-global fusion. Their method, GLMix, leverages lightweight Convs for fine details and uses MHSAs on fewer semantic slots.
Strengths: 1. The paper re-evaluates current methods of integrating Convs and MHSAs and proposes a novel integration at different granularities. This approach harnesses the advantages of Convs, such as translation equivariance, and MHSAs, such as global interactions and data adaptivity, while mitigating scalability issues related to input resolution.
2. The paper presents a pair of fully differentiable clustering and dispatching modules that connect the set and grid representations of image features. This enables the fusion of global features from MHSAs and local features from Convs. A notable advantage of the soft clustering module is its ability to produce meaningful semantic groupings without requiring direct dense supervision.
Weaknesses: 1. In Line 6, this paper mentions "the scalability issue," but it only contains experiments on the IN-1K size dataset. Such a dataset cannot evaluate "scalability." I suggest that the authors revise this argument.
2. The novelty of introducing clustering into attention is limited. [1,2,3,4] have already studied clustering in attention. Please compare with these methods and highlight your novelty and contributions.
[1] Xie, Y., Zhang, J., Xia, Y., Hengel, A. V. D., & Wu, Q. (2022). Clustr: Exploring efficient self-attention via clustering for vision transformers. arXiv preprint arXiv:2208.13138.
[2] Liang, J., Cui, Y., Wang, Q., Geng, T., Wang, W., & Liu, D. (2024). ClusterFormer: Clustering as a universal visual learner. Advances in Neural Information Processing Systems, 36.
[3] Grainger, R., Paniagua, T., Song, X., Cuntoor, N., Lee, M. W., & Wu, T. (2023). PaCa-ViT: learning patch-to-cluster attention in vision transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 18568-18578).
[4] Zeng, W., Jin, S., Liu, W., Qian, C., Luo, P., Ouyang, W., & Wang, X. (2022). Not all tokens are equal: Human-centric visual analysis via token clustering transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 11101-11111).
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weakness.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This paper will address the limitations in the conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: W1: IN-1k size dataset cannot evaluate "scalability." It is advisable to revise this argument.
We thank this reviewer for the suggestion. However, in Line 6, we actually refer to the scalability w.r.t. the input size, instead of the dataset size. This can be reflected by the FLOPs-performance tradeoff on dense prediction tasks such as semantic segmentation (Table 5) and object detection (Table 4), where high-resolution inputs are used. We apologize for the confusion and will clarify it in our revision: "... the scalability issue w.r.t. the input resolution for vision transformers".
---
W2: Clarify the novelty and contribution in comparison with existing clustering attention works.
* As an integration scheme of attention and convolution, GLMix distinguishes itself from existing works in using Convs and MHSAs at different granularity levels, i.e., Convs on fine-grained feature maps and MHSAs on coarse semantic slots. We have found that with Convs applied on the feature grid, MHSAs can be aggressively applied to a few semantic slots while achieving comparable and even better performance than existing state-of-the-art models.
* Unlike ClusTR [1] and TCFormer [4], which use DPC-KNN for clustering, the soft clustering module in our work is fully learnable and does not rely on predefined rules. In comparison with ClusterFormer [2] and PaCa-ViT [3], which perform cross-attention between the feature grid and cluster representations/slots, our work performs self-attention over the slots (i.e., queries and key-value pairs both come from the slots), making the attention even more lightweight. We will incorporate this discussion into the Related Work section in our revision.
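As a back-of-the-envelope illustration of why self-attention over M slots is cheaper than grid-to-slot cross-attention when M is much smaller than the number of grid positions N (the numbers below are hypothetical, not taken from the paper):

```python
def attention_macs(n_queries, n_keys, dim):
    """Rough multiply-accumulate count for single-head attention:
    Q @ K^T plus attn @ V, each costing n_queries * n_keys * dim."""
    return 2 * n_queries * n_keys * dim

# Illustrative sizes: a 56x56 grid (N = 3136), M = 64 slots, C = 96 channels.
N, M, C = 3136, 64, 96
cross = attention_macs(N, M, C)   # cross-attention between grid and slots
slots = attention_macs(M, M, C)   # self-attention over slots only
```

With these placeholder sizes, slot self-attention costs a factor of N/M less than cross-attention, which is the source of the "even more lightweight" claim above.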
Rebuttal: We thank all reviewers for their constructive comments and suggestions. We are glad to see that reviewers consider our work novel/interesting/innovative (Reviewers fhiH/CazW/txiT), appreciate the semantic grouping effect brought by our design (Reviewers fhiH/CazW/txiT), and highlight the extensive experiments (Reviewers CazW/txiT). We address each reviewer's main concerns separately below. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
AirSketch: Generative Motion to Sketch | Accept (poster) | Summary: The authors present an interesting application of multiple vision components merged together. They introduce AirSketch to generate sketches directly from hand gestures. They present a self-supervised training procedure and augmentations used with an image diffusion model to generate realistic sketches. They also present two datasets building on existing datasets and the Unity engine. They claim that controllable image diffusion can produce a clear sketch from a noisy input and present a new technique for drawing without markers.
Strengths: The paper is clearly written and covers most of the related works in sufficient depth.
The authors clearly motivate why the problem is important and what could be some clear applications of the idea from an AR/VR perspective (which has become a popular field of research lately).
The authors highlight the lack of hand drawing to sketch datasets and to that end present novel datasets.
The authors also cover the different cases of failures and show that the augmentations can make it robust.
Weaknesses: The idea presented in the paper might be a combination of existing ones and can be easily materialized by using a combination of different models, but it is nevertheless an interesting area.
Very limited description of the hand tracker. The authors should expand on it.
I didn't find enough motivation for why only Quick, Draw! was used as the starting point.
Is there a reason for choosing Unity over other engines? Can stable diffusion be used to generate the hand gesture videos for the dataset?
The metrics used for evaluations can all be prone to errors. I am not sure if there is sufficient time to get IRB and run human experiments, but that could significantly strengthen the paper.
Technical Quality: 3
Clarity: 4
Questions for Authors: Is there anything specific in the model that can solve for jitters? Any particular component of the pipeline that authors think is helping with robustness?
Can something other than Quick, Draw! be used for generating the synthetic dataset?
For ControlNet, the authors use a batchsize of 8. Is that enough to maintain statistics with layers like BatchNorm?
Have the authors looked at more human-like augmentations to help the model better generalize to unseen objects and become more robust to the existing ones? Please see Atoms of Recognition, Extreme Image Transformations, etc.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: The authors find that the model relies on text prompts more when applied to unseen categories. The hypothesis is based on the absolute numbers. The authors should also try and show something like a saliency map for both visual and textual elements of the model to better understand the parts that the model is focusing on.
The authors did not release the code with this submission but promise a future release. Please make sure that the code does accompany the paper for better reproducibility.
The authors also state that there is no need for statistical significance. However, could SSIM not be a good metric for checking whether the differences between the generated maps and the ground truth are significant, without relying on the human eye?
SSIM and CD by themselves are weak metrics for measuring similarity and are prone to errors. The authors must address that in the text and also state what might be good ways to mitigate it (I am not asking the authors to devise a new metric, just to let the reader clearly know).
The technology can also have a negative societal impact, perhaps not directly but as a derivative of this being applied to another area. The authors should try and address those areas.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: #### **Limited description of the hand tracker**
We thank the reviewer for pointing this out! While we did discuss hand trackers in the appendix (A.3), we will add a more explicit description to the main body of the paper.
#### **Why only Quick, Draw! was used as the starting point?**
1. When drawing in the air without controllers or headsets, quick and convenient everyday sketches are likely to resemble the simplistic sketches found in the Quick, Draw! dataset, rather than the more elaborate sketches in the TUBerlin dataset.
2. All prior works on sketch generation, such as [a,b,c], only consider the Quick, Draw! dataset.
3. The Quick, Draw! dataset is the biggest sketch dataset, and is the only dataset with timestamp information for each point in the drawing. We needed this information to create the synthetic dataset, where the hand moves at a speed accurate to how the sketch was truly drawn.
#### **Is there a reason for choosing Unity over other engines? Can stable diffusion be used to generate the hand gesture videos for the dataset?**
Generating hand gesture videos with Stable Diffusion would be much harder than simply using a graphics engine, especially since it is prone to incorrectly generating hands (even despite recent improvements). Unity is a very mature graphics engine and does the job for us, but we are totally open to suggestions from the reviewer for future work.
#### **IRB and human experiments**
Unfortunately, we won't have enough time within the rebuttal period to get that done. Our institution has an extensive IRB protocol.
#### **Is there anything specific in the model that can solve for jitters?**
We account for jitters (as a result of the hand tracking and human tremors) by introducing the jitter augmentation during training. The model then learns to clean up jitters when they are encountered in the input.
#### **Any particular component of the pipeline that authors think is helping with robustness?**
The part of the pipeline that we think helps the most with the robustness are our suite of augmentations, which helps our model learn what aberrations in the sketches need to be removed. We made sure to carefully develop our augmentations to account for common errors in sketching that are encountered in our use-case, whether it be from human error, the hand tracking, or simply the nature of the task.
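To make the augmentation idea concrete, here is a rough sketch of the jitter and false-stroke perturbations described above. The function names, parameter values, and exact noise models are illustrative, not our actual implementation:

```python
import numpy as np

def jitter_points(stroke, sigma=1.5, seed=None):
    """Add Gaussian positional noise to a stroke's (N, 2) coordinates,
    mimicking hand-tracking noise and hand tremor."""
    rng = np.random.default_rng(seed)
    stroke = np.asarray(stroke, dtype=float)
    return stroke + rng.normal(0.0, sigma, size=stroke.shape)

def add_false_stroke(strokes, length=5, scale=100.0, seed=None):
    """Append a short random polyline to a list of strokes, mimicking an
    accidental mark (e.g., a pen-up transition captured as drawing)."""
    rng = np.random.default_rng(seed)
    start = rng.uniform(0.0, scale, size=2)
    steps = rng.normal(0.0, scale * 0.05, size=(length - 1, 2))
    return strokes + [np.vstack([start, start + np.cumsum(steps, axis=0)])]
```

During training, the model receives the augmented sketch as input and reconstructs the clean one, which is what teaches it to remove such aberrations at inference time.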
#### **Other datasets than Quick, Draw! for generating the synthetic dataset?**
Unfortunately, since Quick, Draw! is the only dataset providing timestamps associated with drawing coordinates, it is the only available dataset for creating synthetic drawing videos.
#### **For ControlNet, the authors use a batch size of 8. Is that enough to maintain statistics with layers like BatchNorm?**
ControlNet’s standard training regime has a batch size of 4, so our batch size of 8 is likely not too small.
#### **Have the authors looked at more human-like augmentations to help the model better generalize to unseen objects and become more robust to the existing ones?**
We thank the reviewer for the suggestions to include more “human-like” augmentations and for bringing up the interesting topics of Atoms of Recognition and Extreme Image Transformations [d,e]; we will consider them for improving robustness in future work. Meanwhile, our augmentations are designed to simulate a comprehensive range of “human-like” errors found in the noisy sketches produced by air drawing.
#### **Statistical significance**
While it is too computationally intensive to conduct training multiple times, we re-ran our main result in Table [1] from the paper by performing inference 10 times on each sample with different seeds, and report the mean ± std in the table below.
|w/ Aug.|SSIM (↑)|CD (↓)|LPIPS (↓)|CLIP I2I (↑)|CLIP I2T (↑)|
|-|-|-|-|-|-|
||||**Seen Categories**|
|✘| 0.55±0.03$e^{-3}$ | 31.92±0.12$e^{-3}$|0.41±4.0$e^{-3}$|0.77±0.78$e^{-3}$|0.21±0.41$e^{-3}$|
|✓| **0.64**±0.14$e^{-3}$ |**25.13**±0.22|**0.36**±1.0$e^{-3}$|**0.84**±1.2$e^{-3}$ | **0.29**±0.93$e^{-3}$|
||||**Unseen Categories**|
|✘|0.55±0.08$e^{-3}$| 33.53±0.31$e^{-3}$|0.41±3.8$e^{-3}$|0.78±0.84$e^{-3}$|0.21±0.51$e^{-3}$|
|✓|**0.63**±0.45$e^{-3}$|**24.26**±0.71|**0.38**±1.9$e^{-3}$|**0.85**±2.8$e^{-3}$|**0.28**±1.7$e^{-3}$|
We also conducted a paired t-test for significance. As the means have a clear margin and the std is low, the p-values are all on the scale of 1e-14. Since this is not a rigorous test, we are not including the results here.
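For concreteness, the paired t statistic has the standard form below (a minimal sketch of what we computed; the p-value then follows from the t distribution with n-1 degrees of freedom, e.g., via `scipy.stats.t.sf(abs(t), df) * 2`):

```python
import numpy as np

def paired_t_statistic(x, y):
    """t statistic and degrees of freedom for a paired t-test on two
    matched samples x and y (e.g., per-sample SSIM with/without aug.)."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1
```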
#### **The authors should also try and show something like a saliency map for both visual and textual elements of the model to better understand the parts that the model is focusing on.**
Figure [4] (right) in the rebuttal PDF shows the visualization of ControlNet’s hidden state throughout the reverse diffusion steps. We can clearly observe that while the model trained with augmentation (bottom) is able to infer the clean sketch contour (t>=16), the one trained without augmentation (top) fails to do so.
#### **Data & Code release**
We would like to assure the reviewer that we will absolutely release the code and datasets with the paper for better reproducibility, once the paper is accepted to a venue.
#### **Negative societal impact**
We thank the reviewer for bringing this up. The only potential negative impact we could think of is the use of our model for generating malicious content. However since we focus on sketch generation, the impact is much less than a regular image generation model. However, we would love to hear what the reviewer thinks, so we can include it in the final version of our paper.
[a] A Neural Representation of Sketch Drawings
[b] Sketch-pix2seq: a model to generate sketches of multiple categories
[c] Controllable stroke-based sketch synthesis from a self-organized latent space
[d] Atoms of recognition in human and computer vision
[e] Extreme Image Transformations Improve Latent Representations in Machines
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their time to answer the questions. I would like to see some clarifications on the following:
IRB: Given the human subjects involved, the authors should try to obtain IRB approval, even if the process is extensive (for a reason).
Jitters: Can the authors provide some visualizations/insights into how the model is doing that?
Human-like augmentations: Can the authors clarify why are the Atoms of Recognition and Extreme Image Transformations not useful here for robustness or generalization?
Negative societal impact: There could be an adverse adoption of this work for generating mockery/meme videos, especially in the teen category. What do the authors think?
I look forward to seeing the thoughts and/or clarifications on the above points in the manuscript.
Thanks.
---
Rebuttal 2:
Title: Reply to Reviewer THqa
Comment: Thank you again for your time and feedback!
#### **IRB: Given human subjects, the authors should try to obtain IRB, even if it is extensive (for a reason).**
We thank the reviewer for mentioning this. We have initiated the IRB process and will include the results in the final version of the paper.
#### **Jitters: Can the authors provide some visualizations/insights into how the model is doing that?**
During training, we apply augmentations (jitters, false strokes, distortions etc.) to the sketch and ask the model to reconstruct the original sketch, thereby forcing the model to learn to remove noise and distortions. In Figure 6 (main paper) we show ablations on models trained with different augmentations. As can be seen from the 1st, 2nd and 4th image where jitters exist on the top edges of these images, models trained with structural and false-stroke augmentations are not able to remove the jitter, whereas the model trained with Local (e.g. jitter) augmentation successfully removes the jitter.
In addition, we run additional inferences on jittered sketches. In the resulting generations, all the jitters are removed as expected. We have sent the anonymous sharing link to AC per policy, who shall in turn release it.
#### **Human-like augmentations: Can the authors clarify why are the Atoms of Recognition and Extreme Image Transformations not useful here for robustness or generalization?**
Due to the length limit we were not able to expand on this topic in the above reply. Below is a more detailed discussion:
Atoms of Recognition [a] discovers the phenomenon where humans fail sharply to recognize objects with a tiny degradation from Minimal Recognizable Configurations (MIRCs), which is not observed in computational models. Rather than improving the robustness/generalizability of computational models, this work mainly focuses on the pitfalls of human recognition rather than the machine's, which seems tangential to our focus. However, it could be interesting to construct sketch-based MIRCs for machines and see whether any improvement can be derived, which we leave for future work.
Extreme Image Transformations (EIT) [b] utilizes global transformations (e.g., grid/full shuffle, color flattening, etc.) to improve classification/detection robustness and generalizability by forcing the model to learn global representations. In our case, given black-and-white sketches, only grid shuffle is applicable, and we indeed considered grid shuffle (Jigsaw) as one of the augmentations initially, but the resulting sketch often became incomprehensible even to human eyes because of the high variability of sketches and their lack of color and texture information.
In terms of “Human-like augmentations”, the augmentations we implemented are intended to simulate “human-like” errors made during air-drawing, while both Atoms of Recognition and EITs are not “Human-like” augmentations.
#### **Negative societal impact: There could be an adverse adoption of this work for generating mockery/meme videos, specially in the teen category. What do the authors think?**
We thank the reviewer for raising this point. In general, generative AI can have a negative impact if misused, and we agree that in our case the negative impact could specifically affect teenagers. We will add this to the final version of the paper.
[a] Atoms of recognition in human and computer vision
[b] Extreme image transformations affect humans and machines differently | Summary: The paper presents a technique for generating raster handwritten sketches based on the tracking of the hand (from egocentric video), with the target application scenario of sketching in AR/VR.
The training is done mostly based on Quick, Draw! dataset of sketches combined with generated hand videos using Unity (5k samples). In addition, 500 real videos were collected by the authors for the purposes of training and evaluation.
The model is ControlNet-based (most experiments using Stable Diffusion XL).
The authors show qualitatively that this technique allows for generating sketches that are quite close to the ground truth and can be extended to sketch completion. They also show the importance of providing a text prompt describing the object category that needs to be generated. They also show the importance of using data augmentation when generating the synthetic dataset, to ensure that the synthetic sketch data is close to the real data obtained by the hand-tracking algorithm.
Strengths: 1. Originality. The paper presents a first-of-its-kind approach and a first-of-its-kind dataset to train the model on.
2. Clarity. The writing is mostly clear and easy to follow.
3. Quality. The ablation study is well done and highlights the importance of the main decisions made by the authors.
Weaknesses: The main weakness of the approach is the quality of the analysis, which brings under question the significance of the technique.
In particular, the authors use a Stable Diffusion model LoRA-tuned on the Quick, Draw! dataset (lines 209-210) for all their experiments. While the evaluation is done on a held-out set of Quick, Draw! classes (appendix, line 521), it seems that the tuning could allow leakage of the class set into the model's generation, suggesting that instead of the ability to generalize to unseen classes, the model actually simply memorized the class set.
Another similar issue is the technique for the selection of the held-out set of classes (appendix, lines 514-520) where the held-out classes were selected from the K-Means clustering of the classes in Quick, Draw - meaning that they are intentionally similar to the ones used during training.
More generally, the unanswered question of the applicability of this technique to anything beyond the given set of 50 common sketching classes is the main weakness of the paper - generalization to truly unseen sketch classes, ability of SDXL and CLIP to deal with more complex objects or multiple objects on the same image is unclear.
The second weakness is the limited reproducibility of the approach - the authors have not made even the synthetic dataset public (even though they claim to intend to), citing (appendix, line 659) "As of the submission of the paper, the new datasets presented in this paper need to be cleaned to ensure personal information of the creator isn’t released unintentionally" - which is unlikely to be an issue for synthetic data.
The third weakness is the set of evaluation metrics selected by the authors. While it does a good job of highlighting the importance of decisions made by the authors, for me as the reader, the most important missing piece of information about the quality of this approach that I am interested in is "how often does the output of the model actually depict the object of the same class as the user intended to draw?" - which could be obtained by running a high-enough quality classifier on the generated data or looking at the closest neighbour in CLIP embedding space among all of the class categories.
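The class-correctness check the reviewer suggests — nearest neighbour in CLIP embedding space over all class categories — can be sketched as follows. This is a hedged illustration assuming precomputed, L2-normalizable CLIP embeddings; `predict_class` and its argument names are hypothetical, not code from the paper.

```python
import numpy as np

def predict_class(image_emb, class_text_embs, class_names):
    # Nearest neighbour in CLIP space: cosine similarity between one image
    # embedding and the text embedding of every candidate class name.
    img = image_emb / np.linalg.norm(image_emb)
    txt = class_text_embs / np.linalg.norm(class_text_embs, axis=1, keepdims=True)
    sims = txt @ img                       # (n_classes,) cosine similarities
    return class_names[int(np.argmax(sims))]
```

Running this over all generated sketches and comparing against the intended class would give the "how often is the depicted object correct?" rate the reviewer asks for.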
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could you please clarify / show whether SDXL that has only been tuned on Quick, Draw classes that are not used for evaluation, could perform equally well?
2. How does the method perform on other sketches that are further away from the training data (ex not the cluster centers for clustering of the training data)?
3. Could you please clarify the exact dataset used for evaluation? Was it only comprised of real videos, or of synthetic training samples as well? What was the train-test split?
I am willing to increase my score if the authors could provide answers to Q 1/2 and planning to decrease it if there is insufficient evidence that the model can perform well beyond the training Quick, Draw classes.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have generally addressed the limitations of their work (with the exception of the data release, see above).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: #### **Weakness 1.1: Finetuning samples leaking**
*-- “While the evaluation is done on a held-out set of Quick, Draw classes (appendix, line 521), it seems that the tuning could allow leakage of the set of classes to the model generation - suggesting that instead of the ability to generalize to unseen classes, the model actually simply memorized the class set.”*
In Figure[3] we run inference on less-common classes that are not seen during either augmentation-based training or LoRA finetuning, and we can clearly observe that the model maintains its performance over these unseen classes.
#### **Weakness 1.2: Regarding K-Means clustering for partitioning seen/unseen classes**
*-- “... the held-out classes were selected from the K-Means clustering of the classes in Quick, Draw - meaning that they are intentionally similar to the ones used during training.”*
We would like to clarify:
* The K-Means clustering is based solely on evaluation data (our collected synthetic/real hand videos), not training data.
* We perform the clustering over the classes' order of chaos – how different the trajectory image is from the ground truth (SSIM/CD), and how similar it is to its corresponding text label (CLIP I2T). We are *not* using it to intentionally split semantically similar classes into seen/unseen classes (e.g. assigning “dog” to seen and “gray dog” to unseen).
* We are *not* sampling unseen classes from one particular cluster that “resembles the training data”, but uniformly from all clusters, such that the averaged baseline statistics between seen/unseen are similar and can thus be compared.
Different classes from the Quick, Draw! dataset have vastly different qualities. Without such a procedure it would be impossible to quantitatively draw comparisons between two sets of classes. In addition, as we show in our ablations, order of chaos has a clear impact on model performance. By drawing unseen classes uniformly from cluster centers, we can perform such ablations (e.g. Figure [8] main paper) at different orders of chaos.
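The uniform-per-cluster sampling described above can be sketched as follows. This is an illustrative assumption, not the authors' code: `sample_unseen` and its arguments are hypothetical names, and the cluster labels are assumed to come from a prior K-Means run over the per-class quality statistics (SSIM/CD, CLIP I2T).

```python
import numpy as np

def sample_unseen(cluster_labels, per_cluster, seed=0):
    # Draw the held-out ("unseen") classes uniformly from every cluster,
    # so that averaged baseline statistics of seen/unseen sets stay comparable.
    rng = np.random.default_rng(seed)
    unseen = []
    for c in sorted(set(cluster_labels.tolist())):
        members = np.flatnonzero(cluster_labels == c)  # class indices in cluster c
        picked = rng.choice(members, size=per_cluster, replace=False)
        unseen.extend(int(i) for i in picked)
    return sorted(unseen)
```

Because every cluster contributes equally, the held-out set spans the full range of class qualities rather than a single "easy" cluster.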
We apologize for any ambiguous wording that caused confusion, and will improve the expression in the final draft of the paper.
#### **Weakness 1.3: Applying to anything out of the 50 classes/more complex objects?**
We show that our model is indeed capable of generalizing beyond the seen classes and generating more complex sketches. For results on classes outside the 50 classes, please see Figure[3] in the rebuttal PDF and the response under Weakness 1.1. For results on more complex objects (the TUBerlin dataset), please see Figure [1] in the rebuttal PDF and general response 1.
#### **Weakness 2: Synthetic dataset has not been made public.**
Our datasets will become available when the paper is accepted to a venue! It is a very common practice for researchers to wait to release code and datasets until their paper is accepted, to avoid having their work misappropriated by other researchers prior to publication; we hope the reviewer understands. We will absolutely release all code and datasets once the paper has been accepted into a venue.
#### **Weakness 3: Evaluation metrics**
*-- “The third weakness is the set of evaluation metrics selected by the authors…I am interested in is "how often does the output of the model actually depict the object of the same class as the user intended to draw?"*
In the paper we indeed report the CLIP image-to-text similarity (CLIP I2T) to measure how likely the output is to depict the intended class. However, if the reviewer believes a different metric is needed, we are happy to provide further evaluation.
#### **Q1: Could you please clarify / show whether SDXL that has only been tuned on Quick, Draw classes that are not used for evaluation, could perform equally well?**
Yes, please see Figure[3] in the rebuttal PDF and response under Weakness 1.1.
#### **Q2: How does the method perform on other sketches that are further away from the training data (ex not the cluster centers for clustering of the training data)?**
We hope that the clarification about the K-Means selection and the evaluation on the unseen classes/datasets answers this question.
#### **Q3: Could you please clarify the exact dataset used for evaluation? Was it only comprised of real videos, or of synthetic training samples as well? What was the train-test split?**
We used the collected synthetic and real hand motion datasets only for evaluation. In the main paper (e.g. Table[1]) we report the testing accuracy separately under synthetic and real dataset. The training samples were all augmented-clean sketch pairs from the Quick, Draw! dataset and do not overlap with the evaluation samples.
---
Rebuttal 2:
Title: Does our reply address your concern?
Comment: Dear reviewer,
Thank you again for your feedback! As the rebuttal period is approaching its end today, could you please check if our reply addresses your concerns?
Thank you for your time and feedback,
Authors | Summary: This paper addresses a new task: sketch generation from marker-less air drawing. The authors trained a spatially conditioned diffusion model to generate sketches from noisy hand tracking results and text prompts. During training, they devised an augmentation-based training procedure.
Strengths: 1. Good paper writing. Easy to follow.
2. This paper addressed a new task: marker-less sketch generation, which is useful in AR/VR applications.
3. Experiments are performed on both real and synthetic datasets.
4. The idea of "soft" spatial constraint is novel, which harnesses the capability of image generation model to de-noise the output of hand tracking algorithms.
Weaknesses: 1. The model may overfit on the sketches from the Quick, Draw! dataset, as the diffusion model is finetuned on this dataset.
2. The ability to generate simple geometries without certain semantics is not explored. For example, can the model generate a simple curve following the user's drawing?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could the authors please provide more results that are not from the Quick, Draw! dataset?
2. Could the authors please provide generation results of simple geometry drawing?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1. The input of this model is a complete drawing of the user, which means the user cannot see their half-way drawing. This is not favorable in many applications.
2. This method only takes the hand tracking algorithm's output as input. It ignores the rich information from the video, which may help to achieve more precise control over drawing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: #### **Model overfitting on Quick, Draw! Dataset**
In Figure[3] in the rebuttal PDF we show inference results on classes that were unseen during both augmentation-based training and LoRA finetuning, where our method still maintains its performance.
On the other hand, we agree with the reviewer that due to the finetuning, the model is “overfit” to the dataset’s style. However, fine-tuning diffusion models to a specific style for consistent outputs is a common practice in the image generation community and is not generally regarded as a limitation.
#### **Results from a different dataset**
In Figure [1] in the rebuttal PDF, we show results from additional experiments on the TUBerlin dataset, which contains more complicated sketches compared to Quick, Draw! Dataset. As can be seen in Figure [1], our method is able to generate faithful and coherent sketches given more complex objects.
#### **Simple geometries**
In Figure[2] in the rebuttal PDF we show examples of simple geometric shapes such as triangles, circles and squares, and the model is able to draw them faithfully.
#### **Can’t see halfway drawing**
*-- "The input of this model is a complete drawing of the user, which means the user cannot see their half-way drawing. This is not favorable in many applications."*
First, we have shown that our model is capable of auto-completing sketches with partial input (Figure[10] Appendix), and therefore the input is not limited to complete drawings. From a research perspective, interactive sketching and editing in a generative manner is indeed interesting, which we will consider in follow-up work. From a practical AR/VR standpoint, certain software engineering designs can also be incorporated to enable users to see their half-way drawing and perform interactive sketching. While we do not focus on the engineering side in this work, we will be happy to engage the reviewer about this during the discussion period.
#### **Lose wealth of information from the video**
*-- “This method only takes the hand tracking algorithm's output as input. It ignores the rich information from the video, which may help to achieve more precise control over drawing.”*
We fully agree with the reviewer’s observation, which in fact aligns with our initial approach: translating video content directly to sketch under an Encoder-Decoder framework. However, we found it very hard to learn meaningful trajectory information from video encoders such as VideoMAE. Additionally, there is no large-scale, high-quality dataset that includes the rich information found in videos and yet has highly detailed sketches as targets. Creating such a dataset is also time-consuming. However, we do agree that by considering additional information such as video features or motion velocities, the generation quality can be improved, which we leave for future work.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their response. I think all my concerns are addressed. I hope that the authors will include all the additional results and discussion in their final manuscript. For now, I will keep my initial score.
---
Reply to Comment 1.1.1:
Title: Thanks Reviewer gzU1
Comment: Thank you for taking the time to go through our paper and rebuttal. We will certainly incorporate all the additional results and discussion in the final version. | Summary: The authors investigate the problem of sketch generation from the finger's motion trajectory, which is interesting. To this end, the authors adopt a standard ControlNet pipeline, while highlighting the importance of data augmentation (adding noise to clean sketches to mimic the hand-tracking image). Hence the major contribution of this paper is the tweak that makes ControlNet applicable to generating clean sketches from the continuous, messy finger trajectories.
Strengths: - The authors repurpose ControlNet to produce a clean sketch from a severely deformed and line-messy input, which is useful for finger motion to sketch generation and could be potentially valuable for other related HCI applications.
- The results look promising, and the application of sketch-completion and text-instructed stroke styling are interesting.
- The experimental results are extensive and validate the effectiveness of the proposed augmentation strategies.
Weaknesses: - The major concern is about the novelty of the technical contribution. Personally, the proposed augmentation methods are more like practical tricks for better cleaning up messy sketches.
- The methods of accepting raster-format input for sketch generation should be also compared by training these models with the same augmented sketches, such as [a][b][c].
- The practical usage seems limited since: (1) the proposed method is only validated on simple sketches, so no way to justify whether any scalability issue when facing complex scenarios; (2) the user probably needs to edit (e.g., deleting lines) while drawing, how to deal with it?
- The collected real air-drawing dataset is very small.
[a] Chen, Yajing, et al. "Sketch-pix2seq: a model to generate sketches of multiple categories."
[b] Yang, Lan, et al. "Sketchaa: Abstract representation for abstract sketches."
[c] Zang, Sicong, Shikui Tu, and Lei Xu. "Controllable stroke-based sketch synthesis from a self-organized latent space."
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weaknesses.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The major limitation may be the limited practical usage as I listed in weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: #### **Concern about the novelty of the paper**
Thank you for your feedback. While augmentation is a well-known method, it has never been applied for sketch generation, and our finding that it can be used as an effective self-supervised pre-training task for airsketch is in our opinion a non-trivial contribution. Further, we would like to reiterate that the contribution of our work is not only about an implementation ‘trick’, but also
1. a novel task of generative motion to sketch, with a dataset. There is no other work, including those cited by the reviewer, that achieves the task nor was the task previously proposed. We believe it will open the doors to many interesting applications by bringing generative modeling into the AR space.
2. A concrete baseline method that has been thoroughly evaluated and analyzed, with performance that we believe will be useful for future research and applications.
3. A novel way of viewing and exploiting controllable diffusion models.
We are happy to discuss this further.
#### **Comparison to raster-format input for sketch generation**
We thank the reviewer for mentioning these works. As requested, we run Sketch-Pix2Seq on a subset of 10 classes. For each class, we train a separate Sketch-Pix2Seq model by first training 50K steps without augmentation, followed by another 50K steps with augmentation. Results are summarized in Table [1] below and Figure [4] (left) in the rebuttal PDF. In particular, we find that while Sketch-Pix2Seq produces semantically coherent sketches, it often cannot identify and follow visual cues from the messy tracking image, as shown in Figure [1] (left). It thus leads to a noticeable improvement in terms of CLIP I2I/I2T, but only a marginal improvement in SSIM/CD.
| | SSIM (↑) | CD (↓) | LPIPS (↓) | CLIP I2I (↑) | CLIP I2T (↑) |
|-------------------|----------|--------------|---------------|--------------|--------------|
| Tracking | 0.5 | 32.36 | 0.42 | 0.76 | 0.21 |
| Sketch-Pix2Seq | 0.58±1.7$e^{-3}$ | 30.19±0.60 | 0.38±1.5$e^{-3}$ | 0.82±0.78$e^{-3}$ | 0.26±0.41$e^{-3}$ |
| Ours | **0.63**±0.21$e^{-3}$ | **25.45**±0.26 | **0.36**±1.1$e^{-3}$ | **0.84**±2.0$e^{-3}$ | **0.29**±0.86$e^{-3}$ |
In addition, we also want to point out that works like [a][c] can only fit a model on one or a few classes at a time and struggle to generalize beyond – while both [a][c] aim at learning across multiple classes, their maximum number of categories is only 5, which is far fewer than what is needed in our scenario. Therefore we did not include them for benchmarking.
#### **Practical usage**
(1) *-- "the proposed method is only validated on simple sketches, so no way to justify whether any scalability issue when facing complex scenarios"*
* We run additional experiments on the TUBerlin dataset. Results are summarized in Figure [1] in the rebuttal PDF, which shows our method is able to generate faithful and coherent sketches given more complex objects.
* All previous works in sketch generation [a][c] also only consider simple sketches from Quick, Draw! dataset.
(2) *-- "the user probably needs to edit (e.g., deleting lines) while drawing, how to deal with it?"*
While sketch editing is indeed an interesting topic, we believe this may be out of scope of this work and could become an interesting future direction.
As our work is the first to study motion to sketch in a generative manner, we acknowledge (Line 41-45, Introduction) there is a large space for exploration and improvement. Meanwhile, to deploy such models into production-ready AR/VR systems, certain software engineering designs are still needed, which is not yet our primary concern in this work.
#### **Size of the dataset**
We acknowledge that the real dataset is small for training tasks. However, these are difficult datasets to collect, especially for a first work in this task. We are still in the process of expanding the dataset to support future work. On the other hand, this underlines the necessity of our augmentation-based training, which is self-supervised and does not require labeled data.
[a] Chen, Yajing, et al. "Sketch-pix2seq: a model to generate sketches of multiple categories."
[b] Yang, Lan, et al. "Sketchaa: Abstract representation for abstract sketches."
[c] Zang, Sicong, Shikui Tu, and Lei Xu. "Controllable stroke-based sketch synthesis from a self-organized latent space."
---
Rebuttal Comment 1.1:
Title: Could you check if the rebuttal addresses your concern or not, please?
Comment: Dear Reviewer V9Hw, thank you for your time in reviewing this paper. It would be great if you could check if the rebuttal addresses your concern or not. Thank you in advance.
---
Rebuttal 2:
Comment: I appreciate the authors for the detailed responses. Most of my concerns have been addressed.
However, I still think it is weak on the technical side, and this work is more like a cute application paper.
That said, I like the angle of this work and I believe it would attract sufficient attention to encourage further research works in this direction. So I am open and will be happy to follow other reviewers' lead if they consistently feel it should be accepted.
---
Rebuttal Comment 2.1:
Comment: Thank you for acknowledging our response and the perspective of our work. Concerning the technical aspects, we would like to emphasize the following points:
1. We believe that model complexity should not be the primary criterion for evaluating the merit of a study. Many existing methods are effective despite their simplicity. Our method: (a) avoids additional complexity, allowing for better reproducibility and easier hyper-parameter tuning (b) is self-supervised, eliminating the need for labeled data, (c) produces high-quality results that are robust to significantly distorted trajectory images and generalizable to unseen classes, and (d) is evaluated through extensive experiments. Given the novelty of our proposed task, we believe a simple yet effective approach shall serve as a solid baseline, upon which more complex designs can be built as future works.
2. The contribution of this work extends beyond the technical implementation. It also introduces a novel task that treats motion-to-sketch from a generative perspective, along with newly collected datasets. As the reviewer noted, this contribution is likely to attract considerable attention and stimulate further research in this area. | Rebuttal 1:
Rebuttal: We thank all reviewers for spending time reading our paper and providing insightful feedback. We appreciate reviewers finding our proposed task and approach novel, interesting, and useful (V9Hw, gzU1, Gs1n), and the experiments being extensive and convincing (V9Hw, Gs1n, THqa). We address each of the reviews separately below but provide some common responses here.
### Generalizability on unseen classes
Figure[3] in the rebuttal PDF shows inference results on less-common classes not seen during either LoRA fine-tuning or augmentation-based training. We can clearly observe that the model maintains its performance over these unseen classes.
### Experiments on TUBerlin dataset
We run additional experiments on the TUBerlin dataset following the same training procedure from the main paper, and results are shown in Figure[1] in the rebuttal PDF. Our method produces faithful and coherent results with much more complicated sketches compared to Quick, Draw!, demonstrating its scalability to more complex scenarios (Reviewer V9Hw, gzU1).
Pdf: /pdf/cdb99f683cc11264eadb484f5f6c0521eb8356f1.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Leveraging Tumor Heterogeneity: Heterogeneous Graph Representation Learning for Cancer Survival Prediction in Whole Slide Images | Accept (poster) | Summary: The authors present ProtoSurv, a model that leverages heterogeneous graph representation learning to predict survival risk. Their model can be decoupled into two basic submodules: 1) a structure view module, which is responsible for injecting the topological information of the WSI into the model, and 2) a histology view module, which incorporates pathological priors reflecting tumor heterogeneity. The authors conducted experiments across five WSI datasets, demonstrating superior performance, supported by a series of ablation studies.
Strengths: - The paper is clear and well-written.
- The methodology that is introduced is novel, tackling the inherent heterogeneity of pathology images
- There is a detailed series of ablation studies justifying the use of each component in the model
Weaknesses: - The prototypes that are used in the histology view are not learnt, but are rather selected before training based on prior knowledge, potentially hindering the adaptability of the model since preselected prototypes may not capture the full variability or complexity of the data. This could also introduce generalisation issues because the model may struggle to generalize to new, unseen data that differ significantly from the prototypes.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Andrew H. Song et al. have recently published PANTHER [Morphological Prototyping for Unsupervised Slide Representation Learning in Computational Pathology](https://openaccess.thecvf.com/content/CVPR2024/papers/Song_Morphological_Prototyping_for_Unsupervised_Slide_Representation_Learning_in_Computational_Pathology_CVPR_2024_paper.pdf) that explores a similar idea of extracting morphological signatures from WSIs using prototypes in an unsupervised way, showcasing superior results. Have you considered comparing PANTHER to your model?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - Apart from the comparisons with SOTA methods, it would be interesting to shift the focus also to the interpretability of the model. The prototypes are instrumental in enhancing the model's performance, offering insight into how prototype-guided decisions contribute to the predicted outcome. There is only one attention map in the supplementary material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive comment. Here are the responses to Weaknesses (W), Questions (Q) and Limitations(L).
**W. The prototypes that are used in histology view are not learnt,
potentially hindering the adaptability of the model since preselected prototypes may not capture the full variability or complexity of the data.**
We fully agree with your comment.
We also consider that prototypes selected before training based on prior knowledge indeed may struggle to capture the full variability or complexity of the data.
Therefore, when designing the prototype extraction module (the Histology View (HV) module),
we rely solely on the node categories provided by prior knowledge to give each prototype an initial preference.
Then we use learnable shifts to obtain diverse prototypes for each category,
enabling them to prefer on different factors within the category,
including both previously known and unknown factors / phenotypes.
Finally, under the guidance of these preferences,
the prototypes aggregate relevant information from the global features via learnable cross-attention.
In summary, our prototype extraction process is learnable and highly flexible,
and has the ability to capture various previously unknown factors / phenotypes that may contribute to patient outcomes.
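A minimal NumPy sketch of the prototype extraction process described above — prior-based category embeddings plus learnable shifts, followed by cross-attention over patch-level features. All names and shapes here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def extract_prototypes(base, shifts, feats):
    # base:   (C, d)    one prior embedding per tissue category
    # shifts: (C, m, d) learnable offsets -> m diverse prototypes per category
    # feats:  (N, d)    patch-level features of the whole slide
    d = base.shape[-1]
    protos = base[:, None, :] + shifts          # (C, m, d): prior + learnable shift
    q = protos.reshape(-1, d)                   # all C*m prototypes act as queries
    attn = softmax(q @ feats.T / np.sqrt(d))    # cross-attention weights over patches
    return attn @ feats                         # (C*m, d) aggregated prototype features
```

Because the shifts and the attention are both learnable, each prototype can drift toward factors within its category that prior knowledge did not anticipate.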
We apologize for any misunderstanding caused by the description in our paper.
We will revise this section in the camera-ready version to more clearly explain our prototype extraction process.
Additionally, we have uploaded a detailed interpretability figure in the PDF attachment in "Author Rebuttal",
which visualizes the attention preferences of multiple prototypes across different categories to further support our claims.
If you are interested in it, please refer to "Author Rebuttal" for more details.
**Q. Compare with PANTHER.**
Thanks for the advice.
We compared PANTHER with our model.
Additionally, we evaluated our model's Histology View (HV) module (the prototype extraction module) on its own, to ensure a fair comparison between prototype extraction modules.
We tested two pooling methods for the HV module,
including the mean pooling method used in the ablation experiments in the main paper (corresponding to the "without (w/o) HV" row in Table 2 in the main paper),
as well as the concat pooling method used by PANTHER (PANTHER performs pooling by concatenating all prototypes along the channel dimension).
The experimental results are as follows:
| | COAD | LGG | LUAD | PAAD | BRCA |
|:------------------:|:-----------------:|:-----------------:|:-----------------:|:-----------------:|:-----------------:|
| ProtoSurv | **0.692 ± 0.045** | **0.774 ± 0.063** | **0.658 ± 0.046** | 0.669 ± 0.049 | **0.720 ± 0.040** |
| PANTHER | 0.635 ± 0.056 | 0.748 ± 0.046 | 0.631 ± 0.029 | **0.673 ± 0.082** | 0.699 ± 0.019 |
| HV(mean pooling) | 0.684 ± 0.044 | 0.706 ± 0.036 | 0.646 ± 0.051 | 0.624 ± 0.032 | 0.657 ± 0.049 |
| HV(concat pooling) | 0.688 ± 0.046 | 0.766 ± 0.044 | 0.641 ± 0.037 | 0.661 ± 0.057 | 0.713 ± 0.039 |
The learnable prototype extraction method (HV module) outperforms PANTHER on most datasets.
In addition, we observed that PANTHER's performance surpassed all comparison models on the PAAD dataset.
We attribute this to the relatively small sample size of the PAAD dataset (208 cases), which may lead to overfitting.
Therefore, PANTHER, which extracts prototypes in an unsupervised way based on priors and has a minimal number of learnable parameters,
demonstrated superior results on the PAAD dataset.
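For reference, a minimal sketch of the two pooling strategies compared in the table above (mean pooling vs. PANTHER-style concatenation of prototypes along the channel dimension), for m prototype vectors of dimension d; this is an illustration only, not code from either paper.

```python
import numpy as np

def mean_pool(protos):
    # (m, d) -> (d,): average all prototype vectors ("mean pooling" row)
    return protos.mean(axis=0)

def concat_pool(protos):
    # (m, d) -> (m*d,): concatenate prototypes along the channel dimension
    # (the pooling used by PANTHER, "concat pooling" row)
    return protos.reshape(-1)
```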
**L. Apart from the comparisons with SOTA methods, it would be interesting to shift the focus also to the interpretability of the model.**
Thank you for your interest in the interpretability of our model.
We have expanded the figure to visualize the preferences of each prototype and the tissues they focus on,
to better demonstrate the interpretability of our model.
If you are interested in it, please refer to "Author Rebuttal" for more details.
Once again, thank you for your careful review and valuable comments on our paper.
If you are satisfied with our response,
would you kindly bump up your score? | Summary: This paper analyzes the limitations of the existing MIL method in survival prediction with WSIs 1) overfitting, 2) numerous redundant and irrelevant instances, and 3) insufficient exploration of the interaction between local, regional features and global contextual features in WSI. To address these issues, the paper proposes Multiple Instance Learning with Hierarchical Graph Transformer over Super-Pixel (HGTSP). HGTSP consists of three novel modules 1) a Pixel-based Pseudo-Bag Division (SPBD), 2) a Super-Pixel Region Sampling (SPRS), and 3) a Hierarchical Graph Transformer (HGT). The experiments on TCGA verify the effectiveness of HGTSP. However, the modules proposed in this paper only partially correspond to the issues it claims to solve. In addition, HGTSP contains a large number of hyper-parameters, which makes it challenging to apply this method to new datasets quickly.
Strengths: The limitations of previous methods have been well analyzed: 1) overfitting, 2) numerous redundant and irrelevant instances, and 3) insufficient exploration of the interaction between local, regional features and global contextual features in WSI. To address these limitations, this paper proposes 1) a Pixel-based Pseudo-Bag Division (SPBD), 2) a Super-Pixel Region Sampling (SPRS), and 3) a Hierarchical Graph Transformer (HGT). The experiments on TCGA verify the effectiveness of HGTSP.
Weaknesses: 1. This paper claims to address the overfitting via SPBD in the abstract. However, according to the introduction, SPBD is mainly used to address the risk of pseudo-bag mislabeling and inconsistency between original bags and pseudo-bags. Therefore, it would be better to refine the claims in the abstract.
2. There are a large number of hyper-parameters. According to the appendix, these hyper-parameters highly influence the performance of HGTSP.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. As HGTSP combines multiple techniques, including K-means clustering, adaptive sampling, and Graph Transformer, comparing the time required for inferring the same number of samples is more appropriate than only comparing FLOPs and the number of parameters.
2. Is there a method to automatically set hyper-parameters?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Title: update review
Comment: I'm sorry for the reviews submitted for another paper. I update the reviews here.
This paper proposes ProtoSurv, a heterogeneous graph model for WSI survival prediction. ProtoSurv is driven by data and incorporates pathological domain knowledge. Specifically, ProtoSurv consists of three modules 1) a multi-layer GCN to learn the structure representation of WSIs, 2) a prototype representation method to learn pathological priors, 3) a prior guided fusion method to aggregate structure view features and pathological multi-prototypes. The experiments on TCGA verify the effectiveness of ProtoSurv. The motivation of this paper sounds good. However, the model lacks novelty and in-depth exploration. In addition, ProtoSurv requires additional labels to train a classifier, leading to unfair comparison.
Soundness: 3: good
Presentation: 3: good
Contribution: 2: fair
Strengths:
This paper is easy to follow.
The motivation sounds good.
This paper proposes ProtoSurv, which deciphers intratumoral tissue heterogeneity using a heterogeneous graph and incorporates prior knowledge of prognostic tissue types into the prediction process.
The experiments on TCGA verify the effectiveness of ProtoSurv.
Weaknesses:
The design of the main modules lacks novelty and in-depth exploration. The Structure View (SV) is a type of multi-layer feature aggregation that has been widely studied and may be highly influenced by the selection of layers to extract features. The Histology View (HV) is a type of feature shifting which has also been widely studied.
ProtoSurv requires additional labels to train a classifier, leading to unfair comparison.
Questions:
More fine-grained ablation experiments and comparisons are helpful: 1. How about only using the last or last two layers to extract features in SV? 2. How about sharing the learnable parameter for each category in HV? 3. How about the comparisons of FLOPs and the number of parameters?
---
Rebuttal 2:
Rebuttal: Thanks for your constructive comment. Here are the responses to Weaknesses (W), Questions (Q), and Limitations (L).
**W1. The design of the main modules lacks novelty and in-depth exploration**
**Novelty.**
We fully agree with your summary of the modules within our model.
However, the crucial aspect is that we construct a framework capable of leveraging intratumoral tissue heterogeneity
and utilizing node types to introduce pathology priors into the model.
To our knowledge, this is novel in the computational pathology literature.
We also introduced a more suitable prototype extraction method tailored for computational pathology tasks.
In existing prototype-based computational pathology networks,
prototypes are often extracted simply based on fixed cluster centers,
which may struggle to capture the full variability or complexity of the features.
By contrast, in our approach,
we design learnable shifts to extend each prior-based prototype into multi-prototypes with different preferences.
These prototypes focus on interactions with different tissues and learn from previously unknown factors or phenotypes that may contribute to patient outcomes.
Additionally, we provide an interpretability figure illustrating the interactions between prototypes from different categories and the global tissue,
which allows for a better exploration of the global tissue interactions discovered by the model.
We have expanded the interpretability figure in the PDF attachment in "Author Rebuttal";
if you are interested in it, please refer to "Author Rebuttal" for more details.
**In-depth exploration.**
For the finer-grained ablation experiments and comparisons you suggested,
we add ablation studies in the following sections.
If you believe there are other areas that require in-depth exploration, we are willing to include additional experiments.
**W2. ProtoSurv requires additional labels to train a classifier, leading to unfair comparison**
We used classifier-obtained node types to inject pathology priors into the model,
aiming to explore the benefits of incorporating domain priors.
The experimental results indicate that the model indeed improved with the assistance of these domain priors.
In the ablation experiments shown in Table 4 of the main paper,
we demonstrated that when using publicly available classifiers or simply using K-means to obtain node categories,
our framework still achieved better results than the compared models.
This indicates that in the fair comparison without the advantage of pretrained classifier node types,
our framework's ability to handle tumor heterogeneity still leads to superior prediction performance compared to baselines.
**Q1. How about only using the last or last two layers to extract features in SV?**
Thanks for the advice.
We tested using only the last and the last two layers to extract features in SV.
The experimental results are as follows:
||COAD|LGG|LUAD|PAAD|BRCA| average |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| ProtoSurv (SV all layers, n=4) | 0.692 ± 0.045| **0.774 ± 0.063** | **0.658 ± 0.046**|0.669 ± 0.049|0.720 ± 0.040| **0.703** |
| ProtoSurv (SV last layer)| 0.678 ± 0.051| 0.764 ± 0.054 | **0.658 ± 0.060**|**0.671 ± 0.042** |0.718 ± 0.049|0.698|
| ProtoSurv (SV last two layers) | **0.693 ± 0.057** |0.762 ± 0.037|0.656 ± 0.058|0.662 ± 0.049|**0.723 ± 0.044** |0.699|
Although the optimal results varied across different datasets, overall,
models that used more layers to extract features achieved better results.
**Q2. How about sharing the learnable parameter for each category in HV?**
We greatly appreciate your advice.
Sharing the learnable parameters for each category can significantly reduce the model's parameter count.
This helps explore the scalability potential of the model.
Here are the results of sharing the learnable parameters for each category.
||COAD|LGG|LUAD|PAAD|BRCA|average|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|ProtoSurv| **0.692 ± 0.045** | **0.774 ± 0.063** | **0.658 ± 0.046** |**0.669 ± 0.049** |**0.720 ± 0.040**| **0.703** |
|ProtoSurv(Sharing parameters)|0.667 ± 0.055|0.765 ± 0.044|0.653 ± 0.042|0.652 ± 0.053|0.707 ± 0.026|0.689|
We show the optimization of computational requirements brought by sharing the learnable parameters in Q3.
**Q3. How about the comparisons of FLOPs and the number of parameters?**
Following your advice,
we evaluate the model's inference time, floating-point operations (FLOPs), model parameters, and maximum GPU memory usage.
We use a WSI containing 32,625 patches as input.
The computation time is measured on an NVIDIA RTX 3090 GPU.
We included PatchGCN for comparison.
We additionally test ProtoSurv-tiny under a reduced parameter configuration (prototype dim = 256, hidden dim of SV and HV = 64, prototypes per category = 4),
to evaluate the performance degradation of our architecture with fewer parameters and its scalability for more limited hardware.
|| PatchGCN | ProtoSurv | ProtoSurv-tiny | ProtoSurv(Sharing parameters) |
|:-:|:-:|:-:|:-:|:-:|
|Time(s)|0.12|0.29|0.21|0.29|
|FLOPs(G)| 30.49|627.3|96.5|627.3|
|Number of Parameters(M)|1.19|39.1|4.77|15.5|
|Maximum GPU memory usage(MB) |1570|5417|1523|5326|
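For reference, the timing protocol used above (warm-up runs followed by averaged measurements) can be sketched as below. This is a CPU-only stand-in with a dummy workload; on GPU, each call would additionally need a device synchronize (e.g. `torch.cuda.synchronize()`) before reading the timer.

```python
import time

def time_inference(fn, x, warmup=3, runs=10):
    """Average wall-clock time of fn(x) after warm-up runs.

    Warm-up absorbs one-time costs (caches, lazy initialization) so the
    averaged runs reflect steady-state inference time.
    """
    for _ in range(warmup):
        fn(x)
    start = time.perf_counter()
    for _ in range(runs):
        fn(x)
    return (time.perf_counter() - start) / runs

# Stand-in "model": sum of squares over a list of patch features.
elapsed = time_inference(lambda v: sum(t * t for t in v), list(range(1000)))
```

`time.perf_counter` is monotonic and high-resolution, which makes it preferable to `time.time` for benchmarking.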
Here are the survival prediction results of ProtoSurv under the reduced parameter configuration.
||COAD|LGG|LUAD|PAAD|BRCA|
|:-:|:-:|:-:|:-:|:-:|:-:|
|Patch-GCN|0.652 ± 0.086|0.713 ± 0.054|0.635 ± 0.027|0.618 ± 0.057 |0.647 ± 0.032|
|ProtoSurv|**0.692 ± 0.045** |**0.774 ± 0.063** |0.658 ± 0.046|0.669 ± 0.049| **0.720 ± 0.040**|
|ProtoSurv-tiny|0.673 ± 0.039|0.756 ± 0.038| **0.664 ± 0.039** | **0.687 ± 0.049**|0.707 ± 0.044|
| ProtoSurv(Sharing parameters) |0.667 ± 0.055|0.765 ± 0.044|0.653 ± 0.042|0.652 ± 0.053|0.707 ± 0.026|
Once again, thank you for your careful review and valuable comments on our paper.
If you are satisfied with our response,
would you kindly bump up your score?
---
Rebuttal Comment 2.1:
Comment: Thanks for the detailed reply from the authors, which addressed most of my concerns.
In line with the other reviewers' opinions, this paper is at a marginal level, and I am inclined to accept it at that level. | Summary: The authors proposed ProtoSurv, a graph model for WSI survival prediction. The key contribution is learning different prototypes for each node type, and aggregating nodes using cross attention and learned prototypes.
Strengths: - Outcome prediction for cancer patients is a very relevant and important problem and has a large impact. Recently, there has been a lot of interest in the field of computational pathology to solve this problem.
- ProtoSurv has a good motivation to identify and aggregate based on prototypes, which provides some inherent interpretability to the model.
- ProtoSurv seems to outperform various baselines reported by the authors.
Weaknesses: - Having a fixed set of hand crafted node types and a fixed number of multi-prototypes is limiting, and restricts the model’s ability to learn from previously unknown factors / phenotypes that may contribute to patient outcome.
- Evaluation benchmarks should include popular aggregation baselines, including ABMIL (used by UNI authors for benchmarking) and TransMIL.
- The ablation studies are not complete. The authors should consider benchmarking:
- Use UNI features directly for prototype learning, without considering node types;
- Removing graph network, or replacing it with a transformer.
Technical Quality: 2
Clarity: 2
Questions for Authors: Please address the points in the weakness section.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Limitations are addressed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive comment. Here are the responses to Weaknesses (W).
**W1. Having a fixed set of hand crafted node types and a fixed number of multi-prototypes is limiting,
and restricts the model’s ability to learn from previously unknown factors / phenotypes that may contribute to patient outcome.**
We fully agree with your comment.
We consider that simply extracting prototypes for each node category based on priors
might indeed restrict the model's ability to learn from previously unknown factors / phenotypes that may contribute to patient outcome.
Therefore, when designing the prototype extraction module (the Histology View (HV) module),
we made the following efforts to ensure the model is not constrained by node types
and to enhance the model's ability to learn from factors/phenotypes beyond the priors:
1. We rely solely on node types provided by prior knowledge to give each prototype an initial preference,
rather than using them as constraints for the prototypes.
2. Then we use learnable shifts to obtain diverse prototypes for each category,
enabling the model to focus on different factors within the category,
including both previously known and unknown factors/phenotypes.
3. Finally, under the guidance of these preferences,
the prototypes aggregate relevant information from the global features via learnable cross-attention.
In summary, our prototype extraction process is learnable and highly flexible.
The node types only provide an initial preference for each prototype, rather than a constraint.
The fixed number of multiple prototypes obtained through offsets increases the diversity of prototypes within each category,
encouraging the model to have more varied preferences. Finally, through learnable cross-attention,
each prototype extracts relevant features from the global context (without being limited by node types),
further enhancing the model’s ability to learn from previously unknown factors/phenotypes that may contribute to patient outcomes.
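The three steps described above can be sketched as follows. This is a minimal NumPy illustration: the shapes, the random stand-ins for the learnable shifts, and the single-head dot-product cross-attention are our illustrative assumptions, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, C, P = 200, 16, 5, 8              # patches, feature dim, categories, prototypes/category
feats = rng.standard_normal((N, d))     # patch features (stand-in for UNI embeddings)
cats = rng.integers(0, C, size=N)       # node types from the pretrained classifier

# 1) Initial preference: per-category mean of patch features.
base = np.stack([feats[cats == c].mean(axis=0) for c in range(C)])   # (C, d)

# 2) Learnable shifts (random stand-ins here) extend each base
#    prototype into P multi-prototypes with different preferences.
shifts = 0.1 * rng.standard_normal((C, P, d))
protos = (base[:, None, :] + shifts).reshape(C * P, d)               # (C*P, d)

# 3) Cross-attention: prototypes act as queries over ALL patch
#    features, so aggregation is not limited by node type.
scores = protos @ feats.T / np.sqrt(d)                               # (C*P, N)
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)                              # row-softmax
refined = attn @ feats                                               # (C*P, d)
```

Note that the node types enter only in step 1 (the initialization), while step 3 attends over the full set of patches, matching the claim that the categories give a preference rather than a constraint.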
We apologize for any misunderstanding caused by the description in our paper.
We will revise this section in the camera-ready version to more clearly explain our prototype extraction process.
Additionally, we have uploaded a detailed interpretability figure in the PDF attachment in "Author Rebuttal",
which visualizes the attention preferences of multiple prototypes across different categories to further support our claims.
If you are interested in it, please refer to "Author Rebuttal" for more details.
**W2. Evaluation benchmarks should include popular aggregation baselines, including ABMIL (used by UNI authors for benchmarking) and TransMIL.**
We include additional experiments with popular aggregation baselines, the results are as follows:
| | COAD | LGG | LUAD | PAAD | BRCA |
|:---------:|:-----------------:|:-----------------:|:-----------------:|:-----------------:|:-----------------:|
| ABMIL | 0.647 ± 0.036 | 0.710 ± 0.048 | 0.653 ± 0.059 | 0.625 ± 0.063 | 0.657 ± 0.064 |
| TransMIL | **0.695 ± 0.051** | 0.739 ± 0.034 | 0.608 ± 0.040 | 0.642 ± 0.037 | 0.694 ± 0.053 |
| ProtoSurv | 0.692 ± 0.045 | **0.774 ± 0.063** | **0.658 ± 0.046** | **0.669 ± 0.049** | **0.720 ± 0.040** |
**W3(1). Ablation study: Use UNI features directly for prototype learning, without considering node types.**
Following your advice, we supplement the ablation experiments here.
In this ablation experiment,
we used 8 cluster centers obtained from the K-means algorithm as the initial values of each prototype (corresponding to the process of eq (4) in the main paper),
and performed prototype learning directly without considering node types.
Here are the experimental results:
| | COAD | LGG | LUAD | PAAD | BRCA |
|:--------------------------------------:|:-----------------:|:-----------------:|:-----------------:|:-----------------:|:-----------------:|
| ProtoSurv | **0.692 ± 0.045** | **0.774 ± 0.063** | **0.658 ± 0.046** | 0.669 ± 0.049 | **0.720 ± 0.040** |
| ProtoSurv(K-means cluster center) | 0.674 ± 0.044 | 0.763 ± 0.065 | **0.658 ± 0.054** | **0.671 ± 0.054** | 0.718 ± 0.041 |
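The K-means-based initialization used in this ablation can be sketched as below, with plain Lloyd iterations in NumPy; the feature dimensions are illustrative stand-ins for the actual UNI embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
feats = rng.standard_normal((500, 16))   # patch features (stand-in for UNI embeddings)
k = 8                                    # 8 cluster centers -> initial prototype values

# Plain Lloyd iterations; the resulting centers replace the
# category-based prototype initialization, ignoring node types.
centers = feats[rng.choice(len(feats), size=k, replace=False)]
for _ in range(10):
    d2 = ((feats[:, None, :] - centers[None]) ** 2).sum(-1)   # (N, k) squared distances
    assign = d2.argmin(1)
    centers = np.stack([feats[assign == j].mean(0) if (assign == j).any()
                        else centers[j] for j in range(k)])   # keep empty clusters fixed
prototypes = centers                     # (8, 16) initial prototypes
```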
**W3(2). Ablation study: Removing graph network, or replacing it with a transformer.**
We performed an ablation experiment on removing the graph network in the "without (w/o) SV" row of Table 2 in the main paper.
However, we acknowledge that this experiment is still insufficient.
Following your advice, we supplement the ablation experiments here.
We remove the graph network (Structure View module (SV)) while retaining the prototype extraction module (Histology View module (HV)),
and test the performance of the prototypes extracted by HV under different pooling methods.
We test mean pooling, which averages all prototypes (HV (mean pooling) row),
and concat pooling, which concatenates all prototypes along the channel dimension (HV (concat pooling) row).
Here are the experimental results:
| | COAD | LGG | LUAD | PAAD | BRCA |
|:------------------:|:-----------------:|:-----------------:|:-----------------:|:-----------------:|:-----------------:|
| ProtoSurv | **0.692 ± 0.045** | **0.774 ± 0.063** | **0.658 ± 0.046** | **0.669 ± 0.049** | **0.720 ± 0.040** |
| HV(mean pooling) | 0.684 ± 0.044 | 0.706 ± 0.036 | 0.646 ± 0.051 | 0.624 ± 0.032 | 0.657 ± 0.049 |
| HV(concat pooling) | 0.688 ± 0.046 | 0.766 ± 0.044 | 0.641 ± 0.037 | 0.661 ± 0.057 | 0.713 ± 0.039 |
Once again, thank you for your careful review and valuable comments on our paper.
If you are satisfied with our response,
would you kindly bump up your score? | Summary: This paper introduces ProtoSurv, an algorithm which performs survival prediction for Whole Slide Images (WSIs) of tumour samples by taking into account the interaction between different tissue types and tumour heterogeneity. ProtoSurv proposes leveraging prior tissue knowledge by constructing a graph where the nodes represent patches and edges represent spatial relationships. The attributes of the nodes correspond to the embedded feature representation of the patch, as well as a tissue category label. This tissue category label is assigned by a finetuned feature extractor (UNI), which was previously modified by adding a classifier head and training it on patches with known tissue type labels. Once the graph is initialised, the pipeline employs a dual-view architecture, composed of a Structure view and a Histology view. In the Structure view, the graph with its embedded feature representation is passed through a GCN (which corresponds to the Patch-GCN architecture) and a final representation H is stored. In the Histology view, multiple prototypes are learned for each tissue category, aiming to capture tissue heterogeneity by using cross-attention between the initial feature vector and the prototypes. Cross-attention is then also employed to guide the fusion of the information from the Structure and the Histology view. The loss function employed during training corresponds to a composition of the Cox regression loss, a modified compatibility loss, and an orthogonality loss. The pipeline is applied to 5 TCGA cancer datasets and compared to several SOTA baseline models. The results are presented using the C-index. There is ablation on different model components, the number of tissue categories, the feature extractor used, and the number of prototypes. The results provided show the algorithm outperforms the baseline models.
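For reference, the C-index used to report results is the fraction of comparable patient pairs (the earlier event must be observed) in which the patient who fails first receives the higher predicted risk. A minimal sketch with illustrative values:

```python
import numpy as np

def c_index(risk, time, event):
    """Concordance index: among comparable pairs, the fraction where
    the patient who fails earlier has the higher predicted risk
    (ties in risk count as half-concordant)."""
    conc = comp = 0.0
    n = len(risk)
    for i in range(n):
        if not event[i]:
            continue                  # pair is comparable only if i's event is observed
        for j in range(n):
            if time[i] < time[j]:     # i fails strictly first
                comp += 1
                if risk[i] > risk[j]:
                    conc += 1
                elif risk[i] == risk[j]:
                    conc += 0.5
    return conc / comp

risk  = np.array([0.9, 0.4, 0.7, 0.1])   # predicted risks
time  = np.array([1.0, 3.0, 2.0, 4.0])   # follow-up times
event = np.array([1, 1, 1, 0])           # 1 = event observed, 0 = censored
score = c_index(risk, time, event)       # perfectly concordant here -> 1.0
```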
Strengths: This paper presents an interesting dual-stream architecture for cancer survival prediction, combining a spatial graph based approach, with a prototype learning module. Overall the paper is well structured, and the incorporation of prior tissue knowledge to guide cross-attention is a nice contribution which grounds the model in clinical understanding. The method is tested comprehensively on 5 benchmark methods and ablation is provided on the different components of the model. The results obtained show improvement in Survival prediction across the datasets.
Weaknesses: Below I list what I perceive to be the main weaknesses in this paper:
1 - This is not a heterogeneous graph neural network: the edges in the graph are created based on spatial proximity of patches, regardless of their tissue type, and while each node has two types of attributes (Feature vector and Tissue category label) the tissue label isn't actually used in the GNN, which operates on a standard homogeneous graph structure. I find this problematic because it's a significant misrepresentation of the core methodology in my opinion. Instead, this is a dual-stream architecture that processes the data in two different ways (Patch-GCN + prototype learning), before combining them using cross-attention. Given this, it would be good to examine how this method compares to a simple ensemble of Patch-GCN and a prototype-based model to verify model performance against this baseline.
2 - As this pipeline relies heavily on cross-attention, both in the histology view and in the fusion module, I think the authors should discuss computational requirements and scalability of the model to large WSIs, which is an important consideration in the healthcare setting.
3 - The ablation on the PGF module is unclear: it seems the PGF module is replaced with an aggregation method from Dong et al., where the order of Q and KV in the cross-attention mechanism is swapped. This doesn't completely remove the fusion between the two views, rather it changes how the fusion is performed and hence does not provide a clear understanding of the individual contribution of the PGF module. Maybe adding a comparison to a simple concatenation and classification could add clarity to this point.
4 - The authors claim improved interpretability due to the use of prototypes and they have an interesting Figure 10 in the Appendix, which I think should be referenced in text. It would be good to clarify the model obtains attention maps per category and per prototype, and if possible include the Figure in the main body of the text as this would back the improved interpretability claim.
5 - One important comment is that the structural view doesn't simulate viewing at multiple magnification or scales as stated in the text, rather it merely extends the receptive field. This provides multi-hop neighbourhood information, but not multi-scale information.
Technical Quality: 2
Clarity: 3
Questions for Authors: I have addressed my questions in the Weaknesses section. Below are some general comments:
- Line 1 - "Tumors are"
- Line 18 - slide
- Line 21 - molecular alterations
- Line 22 - no need to redefine WSIs acronym
- Line 24 - you could cite Ilse et al. 2018 here.
- Line 26 - In MIL aggregation to bag-level representation can be done using non-learnable or learnable aggregation layers. MIL methods which use learnable aggregation do learn about interrelationship among instances, depending on the approach employed.
- Line 29 - I don't agree with this statement, subtyping and staging can absolutely require a holistic view of the tumour micro-environment.
- Line 47 - I would qualify this statement: "it could be affecting" - there is a lot of research and debate about the performance of GNNs on homophilous vs heterophilous graphs. For example see [1].
- Line 56 - 59 - I don't find these sentences very clear, maybe you could reformulate them?
- Line 75 - I imagine most approaches using CNNs would also be employing a MIL based approach.
- Line 80 + 87 - "context-awareness"
- Line 93 - This was done before HEAT, see [2]
- Line 106 - See also [3]
- Line 114 - $e_{i,j}$
- Line 115 - "where $x_i$ is the feature ..." ?
- Line 130 - " by the current consensus as* highly relevant"
- Line 213 - "We employ*"
- Line 225 - "given the morphological alterations found in frozen sections*"
- Table 1 - underlined*
- Line 310 - "In our model, we incorporate five tissue categories based on pathological knowledge of prognosis-related tissues."
1 - [1] Platonov etal., A critical look at evaluation of GNNs under heterophily: Are we really making progress, ICLR, 2023.
2 - [2] Pati et al., Hierarchical graph representations in digital pathology, MIA, 2022.
3 - [3] Yu et al., Prototypical multiple instance learning for predicting lymph node metastasis of breast cancer from whole-slide pathological images, MIA, 2023.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors mention an important limitation, which is that the tissue categories are fixed and obtaining the labels remains an obstacle. I would expand on this and mention the method relies on a pre-trained tissue classifier, but there's limited discussion on how errors in this classification might impact the overall performance. I also think the heavy use of the cross-attention mechanism could introduce problems for scaling to a clinical setting.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive comment. Here are the responses to Weaknesses (W), Questions (Q), and Limitations (L).
Due to the word limit, we omit many details.
If you have any further questions, we would be glad to respond.
**W1. This is not a heterogeneous graph neural network.**
We describe our model as a heterogeneous graph neural network in the paper,
because we were indeed inspired by heterogeneous-graph optimizations that aggregate information from global nodes [1].
Similar to them, each node in our model has a category which participates in guiding the global feature extraction.
Therefore, we cautiously feel that it is appropriate to define the model as a heterogeneous graph-based model.
However, we believe that the dual-stream architecture indeed better captures the model's characteristics.
Each module has its own focus and does not need to be collectively described as a heterogeneous graph.
We will revise the description to highlight the characteristics of its dual-stream architecture.
[1] Li et al. Finding global homophily in graph neural networks when meeting heterophily, ICML2022.
**W1(1). Compare with Patch-GCN and prototype-based models.**
Here, we provide more detailed ablation experiments on prototypes.
Additionally, as a complementary comparison with prototype-based models,
we include PANTHER [2], a SOTA prototype-based unsupervised model.
We test mean pooling and concat pooling of the Histology View (HV).
As in PANTHER, concat pooling concatenates all prototypes along the feature dimension.
Here are the results:
||COAD|LGG|LUAD|PAAD|BRCA|
|:-:|:-:|:-:|:-:|:-:|:-:|
|Patch-GCN| 0.652 ± 0.086|0.713 ± 0.054|0.635 ± 0.027|0.618 ± 0.057|0.647 ± 0.032 |
|ProtoSurv| **0.692 ± 0.045** | **0.774 ± 0.063** | **0.658 ± 0.046** |0.669 ± 0.049| **0.720 ± 0.040** |
|PANTHER| 0.635 ± 0.056 |0.748 ± 0.046|0.631 ± 0.029 | **0.673 ± 0.082** |0.699 ± 0.019 |
| HV(mean pooling) |0.684 ± 0.044|0.706 ± 0.036|0.646 ± 0.051|0.624 ± 0.032|0.657 ± 0.049 |
| HV(concat pooling) |0.688 ± 0.046|0.766 ± 0.044|0.641 ± 0.037|0.661 ± 0.057|0.713 ± 0.039 |
[2] Song et al. Morphological prototyping for unsupervised slide representation learning in computational pathology, CVPR2024.
**W2. Computational requirements and scalability**
**Computational requirements.**
We evaluate the model's inference time, floating-point operations (FLOPs), model parameters, and maximum GPU memory usage.
We use a WSI containing 32,625 patches as input.
The computation time is measured on an NVIDIA RTX 3090 GPU.
We additionally test ProtoSurv-tiny under a reduced parameter configuration
(prototype dim = 256, hidden dim of SV and HV = 64, prototypes per category = 4).
|| ProtoSurv | ProtoSurv-tiny | PatchGCN |
|:-:|:-:|:-:|:-:|
|Time(s)| 0.29|0.21| 0.12 |
|FLOPs(G)| 627.3|96.5| 30.49|
|Model Parameters(M)|39.1|4.77|1.19|
| Maximum GPU memory usage(MB) |5417|1523|1570|
Here are the results of ProtoSurv-tiny.
||COAD|LGG|LUAD|PAAD|BRCA|
|:-:|:-:|:-:|:-:|:-:|:-:|
| ProtoSurv | **0.692 ± 0.045** | **0.774 ± 0.063** |0.658 ± 0.046|0.669 ± 0.049 | **0.720 ± 0.040** |
| ProtoSurv-tiny |0.673 ± 0.039|0.756 ± 0.038| **0.664 ± 0.039** | **0.687 ± 0.049** |0.707 ± 0.044 |
**Scalability.**
The HV and SV modules, as well as the process of extracting prototypes, are completely decoupled,
and can be computed in parallel.
**W3. The ablation on the PGF module.**
Here, we include a supplementary ablation study for the PGF module.
In this ablation study,
we remove the cross-attention fusion between the two views and instead concatenate them directly along the patch dimension.
||COAD|LGG|LUAD|PAAD|BRCA|
|:-:|:-:|:-:|:-:|:-:|:-:|
|ProtoSurv| **0.692 ± 0.045**| **0.774 ± 0.063** |0.658 ± 0.046| **0.669 ± 0.049** | **0.720 ± 0.040** |
|w/o PGF(concat) |0.659 ± 0.058|0.712 ± 0.084| **0.662 ± 0.064**|0.652 ± 0.040|0.719 ± 0.024 |
**W4. An interesting Figure 10 in the Appendix.**
We greatly appreciate your interest in Figure 10.
As per your suggestion,
we will reorganize the camera-ready version to move it to "Experiments" section.
In addition, we have expanded the figure,
please refer to the PDF attachment in "Author Rebuttal" for more details.
**W5. SV doesn't simulate viewing at multiple magnification.**
GNNs extend the receptive field, and each GNN layer provides different neighborhood information.
In SV, the outputs of all GNN layers are concatenated,
which aggregates features with varying numbers of neighborhood hops and different receptive fields.
We fully agree with you that this strictly provides multi-hop neighborhood information rather than multi-scale information.
We will revise the description to make it more precise.
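The layer-output concatenation described above can be sketched as follows; this is a minimal NumPy stand-in where the normalized adjacency and tanh propagation are illustrative, not the exact SV implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, L = 6, 4, 3                        # nodes, feature dim, GNN layers
A = (rng.random((N, N)) < 0.4).astype(float)
A = np.maximum(A, A.T)                   # symmetric spatial adjacency
np.fill_diagonal(A, 1.0)                 # self-loops
A_hat = A / A.sum(1, keepdims=True)      # row-normalized adjacency

X = rng.standard_normal((N, d))

# Each propagation step widens the receptive field by one hop;
# concatenating all layer outputs mixes 1..L-hop neighborhoods
# (multi-hop information, not multi-scale/magnification).
outs, H = [], X
for _ in range(L):
    H = np.tanh(A_hat @ H)               # stand-in for a learned GCN layer
    outs.append(H)
H_sv = np.concatenate(outs, axis=1)      # (N, L*d) per-node representation
```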
**L1. Discussion on how errors in classification might impact the overall performance.**
To minimize the impact of classification errors on the overall performance,
in the HV module, we rely only on the node categories provided by the classifier to delineate an initial range.
We calculate the average value of the patch features within the category range as the initial prototype,
providing an initial preference for the aggregation of each prototype (pathology prior injection).
In the main paper's Table 4,
we demonstrated the robustness of our model using ablation results with publicly available classifiers and clustering methods.
To further illustrate this point, we randomly generated categories for 20% and 30% of the nodes.
The experimental results are as follows:
||COAD|LGG|LUAD|PAAD|BRCA|
|:-:|:-:|:-:|:-:|:-:|:-:|
|ProtoSurv | **0.692 ± 0.045** |0.774 ± 0.063| **0.658 ± 0.046** |0.669 ± 0.049| **0.720 ± 0.040** |
|20% random|0.685 ± 0.051|0.769 ± 0.048| **0.658 ± 0.046** | **0.671 ± 0.044** |0.712 ± 0.045 |
|30% random|0.689 ± 0.053| **0.777 ± 0.047** |0.656 ± 0.043|0.666 ± 0.047|0.717 ± 0.047 |
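The random-category corruption used in this robustness check can be sketched as below (NumPy, with illustrative sizes); since redrawn labels may coincide with the originals, the fraction of labels that actually change is at most the requested fraction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_cats = 1000, 5
cats = rng.integers(0, n_cats, size=n_nodes)   # classifier-provided node types

def corrupt(labels, frac, n_cats, rng):
    """Reassign a random `frac` of node categories to random classes,
    imitating classifier errors (the 20% / 30% rows above)."""
    out = labels.copy()
    idx = rng.choice(len(labels), size=int(frac * len(labels)), replace=False)
    out[idx] = rng.integers(0, n_cats, size=len(idx))
    return out

noisy = corrupt(cats, 0.2, n_cats, rng)
changed = (noisy != cats).mean()               # <= 0.2 (some redraws repeat the label)
```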
Once again, thank you for your careful review and valuable comments.
If you are satisfied with our response,
would you kindly bump up your score?
---
Rebuttal Comment 1.1:
Title: Response to Author's Rebuttal
Comment: I thank the authors for their comprehensive rebuttal. In light of their answers I have updated my rating. | Rebuttal 1:
Rebuttal: We thank all reviewers for their valuable feedback and constructive comments.
We have responded to each reviewer's comments.
We attach a PDF containing an expanded interpretability figure.
From the interpretability figure,
we observed that the attention preferences of multi-prototypes from a category varied.
Some prototypes were responsible for extracting global category information,
while others focused on discovering interactions between other categories.
This indicates that our prototype learning paradigm has the potential to uncover unknown interactions and factors.
Pdf: /pdf/25dbd5ede032eaff4748b6d39d6e6e9dab451793.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Get Rid of Isolation: A Continuous Multi-task Spatio-Temporal Learning Framework | Accept (oral) | Summary: This work proposed a novel spatiotemporal learning framework, CMuST. In CMuST, MSTI is devised to dissect complex multi-dimensional data correlations and reveal disentangled patterns, and RoAda is proposed to extract task-wise consistency and task-specific diversity. In addition, the paper introduces a benchmark of three cities for multi-task spatiotemporal learning and empirically demonstrates the superiority of CMuST via extensive evaluations on these datasets.
Strengths: 1. The paper achieves the best or second-best results in most experiments, validating the feasibility of the proposed method.
2. The paper proposes a continuous learning mechanism that enables the model to continuously learn from tasks. As claimed by the paper, it is "the first continuous multi-task spatiotemporal learning framework, CMuST, to jointly model learning tasks in the same spatiotemporal domain."
Weaknesses: 1. The paper considers one of its major contributions to be the proposal of a benchmark. However, after reviewing the code, it seems that the authors did not provide details on how they processed the data.
2. The proposed MSTI module largely uses the attention module from Transformers, but the authors did not provide any references to Transformers. Additionally, using the attention mechanism to capture relationships is a relatively straightforward design and lacks significant innovation.
3. One of the main problems this paper addresses is the cold start problem for new tasks. However, the paper still involves task-specific refinement, i.e., training is still required. Perhaps the superiority of the proposed module can be validated by comparing the adaptation time to new tasks.
4. In the experiments conducted by the authors, it can be observed that when the number of tasks is one, the model's performance is not superior to many models. This might indicate that the proposed MSTI module is not sufficiently effective.
5. The authors use a simple layer to obtain task summarization $S$. Compared to common designs in many MAEs, using just a linear layer with an activation function seems somewhat simplistic for MAE.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please explain each weaknesses.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 1
Limitations: This paper adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer xkVJ,
Thank you for your meticulous review and insightful comments, which are invaluable in refining our approach and enhancing the quality of our manuscript.
**W1. Data preprocessing.** Thank you for your reminder. We have included the detailed data processing code in our anonymous repository (please check its availability), and we explain the preprocessing steps in **Common Issue 4** of our global response. All datasets and the processed results will be made publicly available on a cloud storage platform after the paper is accepted.
**W2.** **Contribution of MSTI and Transformers references.** In MSTI, our innovation not only lies in designing a new attention mechanism but in how we use it to capture and refine interactions across different dimensions, such as 'spatial aspect - main observation' and 'temporal aspect - main observation' correlations. By coupling with the RoAda process, we enhance capturing these correlations across multiple tasks, leading to more effective and enriched encoding of spatiotemporal representation over main observations. This multidimensional interaction is specifically designed to exploit the inherent complexities and dependencies in multiple urban elements that standard spatiotemporal models may not fully capture. Additionally, we explain our specific design and the differences from other traditional ST with attention in the **Common issue 2**. Regarding the specific references to the foundational work on Transformers and the attention mechanism, we now have rectified this by including pertinent references to the original works on Transformers by Vaswani et al. [1], as well as other seminal papers that have shaped the use of attention in spatiotemporal learning [2-4], which provides a clearer context for our contributions.
[1] Attention Is All You Need, NeurIPS'17
[2] Learning Dynamics and Heterogeneity of Spatial-Temporal Graph Data for Traffic Forecasting, TKDE'22
[3] STAEformer, CIKM'23
[4] PDFormer, AAAI'24
**W3. Cold-start problem.** Generalization capacity has been empirically validated in Sec. 5.2 and Sec. 5.4, where the experiments in Tab. 2 can be viewed as imitating the cold-start issue along the spatial dimension. Thanks for your good suggestion; we have further added cold-start experiments concerning tasks, i.e., training on two tasks and testing on another task. The details and results can be found in **Common issue 3**.
**W4. Performance of single task.** Thanks for your careful reading. We also confirmed this result and found it not seriously inferior to other single-task baselines (the comparison with single-task baselines is made in Tab. 1 of Sec. 5.2). Moreover, since the hyperparameters and experimental settings in Fig. 4(b) are based on multi-task learning, the investigation of the influence of the task number also follows the multi-task setting to guarantee fair comparisons; this ensures that the only variable is the number of tasks. Considering learning on an individual task, we also believe our backbone, MSTI, can outperform other baselines. To confirm this intuition, we conducted additional experiments following the individual-task setting (one model for one task); the testing MAE and MAPE are as follows,
| **Metrics/Dataset** | **①/Ⅰ** | **①/Ⅱ** | **①/Ⅲ** | **①/Ⅳ** | **②/Ⅰ** | **②/Ⅱ** | **③/Ⅰ** | **③/Ⅱ** | **③/Ⅲ** |
| ------------------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |
| MAE | 11.2457 | 13.1284 | 6.9357 | 6.0122 | 11.8684 | 0.6912 | 2.3317 | 2.0223 | 1.1175 |
| MAPE | 0.4623 | 0.4782 | 0.3453 | 0.3531 | 0.2758 | 0.2613 | 0.3983 | 0.4023 | 0.2506 |
(The symbols in the table are explained in the attached **PDF** of global response.)
Even in the single-task setting, compared with the single-task baseline results in Tab. 1, the performance of MSTI is still better than most other single-task models, showing the effectiveness of MSTI. Additionally, the goal of our work is to collect the intelligence across different tasks and enhance individual learning, especially improving resilience in extreme scenarios. The detailed evaluation includes the experiments of Sec. 5.2, the experiments in Sec. 5.4, and the additional cold-start challenge investigated in our global rebuttal (results in **Common issue 3**). All these results demonstrate the success of coupling RoAda and MSTI and that the goal of our work is well achieved. We will also incorporate these discussions and new results into our revised manuscript.
**W5. Justify the data summarization module and potential improvement.** This module aims to extract a snapshot of the data that captures its representative characteristics and typical patterns, so that it can play the role of a task prompt. We utilized a straightforward yet effective MLP with an activation function to capture data snapshots, efficiently reflecting the intrinsic properties of the data. This approach has been empirically validated through ablation studies, demonstrating its capability to capture essential data features effectively. Thanks for your suggestion; it inspires a potential improvement: we can further introduce a contrastive loss and build a constraint mechanism that pulls abstract representations together within a task and pushes them apart across tasks, so as to obtain a higher-quality task description learner. We are committed to further exploring this improvement and will include the results in revisions of our manuscript.
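To make the mechanism concrete, here is a minimal NumPy sketch of such a snapshot extractor (the `summarize` helper, layer sizes, and prompt dimension are illustrative assumptions, not our exact implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def summarize(history, W, b):
    """Compress a (time, nodes) observation history into a task-prompt vector.

    The mean over time yields a typical-pattern snapshot; a single MLP
    layer with a tanh activation then maps it to the prompt space.
    """
    snapshot = history.mean(axis=0)      # (nodes,) typical pattern of the task
    return np.tanh(W @ snapshot + b)     # (prompt_dim,) task prompt

history = rng.normal(size=(48, 206))     # e.g. 48 intervals over 206 grids
W = 0.05 * rng.normal(size=(64, 206))    # hypothetical prompt_dim = 64
b = np.zeros(64)
prompt = summarize(history, W, b)        # fixed-size prompt for this task
```

The fixed-size output is what lets the prompt be stored per task and compared or constrained across tasks.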
Thank you once again for your thoughtful feedback. We are committed to addressing these issues thoroughly and improving our manuscript accordingly, ensuring it meets the standards expected by the NeurIPS community.
Authors of Paper 2077
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. My concerns are addressed, and I have raised my score.
---
Reply to Comment 1.1.1:
Title: Thanks for your positive feedback
Comment: Dear Reviewer xkVJ,
We sincerely appreciate your constructive feedback and valuable comments, which have truly contributed to the enhancement of our manuscript. We will do our best to improve the quality of the manuscript and make both the code and datasets open-sourced. Thanks very much!
Authors of Paper 2077 | Summary: This paper proposes a Continuous Multi-task Spatio-Temporal Learning Framework (CMuST) to facilitate task-level cooperation in spatiotemporal predictions (mainly for traffic-related tasks). The model is composed of three components. The data representation and integration module processes and standardizes diverse urban data into a harmonized format. The MSTI module models the complex interactions within spatiotemporal data. The RoAda module iteratively captures task-wise consistency and task-specific diversity.
This study is generally fine, with a relatively novel method for spatiotemporal multi-task learning through prompting (although prompting studies in this field are developing rapidly). The main contribution is how the prompting is handled. The main problem I found is that the text is not easy to follow, with much jargon and coined (complicated) phrases; e.g., "continuous multi-task spatiotemporal learning" is a bit confusing: is it something related to continual (lifelong) learning?
Strengths: S1: The paper provides open access to data and code.
S2: The paper introduces the first continuous multi-task spatio-temporal learning framework for joint modeling of tasks within the same spatio-temporal domain, which is generally a novel method.
S3: The paper validates the proposed model on multiple datasets and tasks, demonstrating its generalization ability.
Weaknesses: W1: During the model construction process, the paper does not clearly address the inconsistency issue of feature C across different task data. It is recommended to provide detailed explanations either in the data preprocessing stage or within the "Data representation and integration module" of the model.
W2: For continuous task rolling, specific operational details of each task model from training to convergence (such as the number of epochs) are not mentioned in the paper.
W3: Typos such as "Compressedd" (line 235).
W4: Modify proprietary terms that lack detailed explanations, such as "PECPM" on line 124.
Overall, the paper falls short in its presentation. Hope that it could be revised to ease the understanding.
Technical Quality: 3
Clarity: 2
Questions for Authors: Q1: The definition of "domain" in the paper appears somewhat ambiguous. Please clarify whether "domain" refers to different types of tasks (such as "pick up" vs. "drop off") or different geographical regions (such as "NYC" vs. "SIP").
Q2: Please explain the distinction between the symbols "H[. . . ,slice(s)]" and "Es". Currently, these two parts appear to belong to the same content.
Q3. For different tasks in the same city, are the input features the same?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer GaXf,
Thank you for your thoughtful review and acknowledging the potential and contribution of our work. We appreciate your insightful comments, which have provided us with an opportunity to refine our manuscript and address critical aspects that will enhance the clarity and impact of our research.
**W1. Feature inconsistency across Tasks.** Actually, the tasks we collected have consistent features ($C=3$: numerical value, time of day, day of week). The investigated space and interval units are standardized, and context factors are mapped into the same dimension with an MLP to avoid the diverse dimensions of raw contexts. The detailed data preprocessing process can be referred to in **Common issue 4** of the global response. On the other hand, if another task has inconsistent features, we will use a task-specific MLP for observation encoding and a task-specific prediction head to transform features into consistent spatiotemporal interactions (MSTI), corresponding to a unique prompt for each task. This approach ensures that the unique features of each task are handled uniformly and effectively, so the shared spatial and temporal encoders can enhance spatiotemporal representation during rolling training.
**W2.** **Task rolling details.** For the rolling training of each task, we set a maximum of 100 epochs, as this number is based on observations from our training logs, where most tasks typically converge around 90 epochs. Additionally, we maintain a learning rate of 1e-5 during rolling training to prevent catastrophic forgetting. We will include these specific operational details in the revised manuscript to provide a clearer understanding of this process.
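As a toy illustration of this schedule (our sketch, not the actual training code; the loss-decay factor and convergence tolerance are placeholders):

```python
# Rolling-training schedule as described above: each task is trained for at
# most MAX_EPOCHS at a small fixed learning rate, and tasks are visited in
# a rolling order. The 'loss' here is a stand-in for a real training step.
MAX_EPOCHS = 100
LEARNING_RATE = 1e-5  # kept small to limit forgetting of earlier tasks

def train_one_task(initial_loss, max_epochs=MAX_EPOCHS, tol=1e-4):
    loss, epochs = initial_loss, 0
    for _ in range(max_epochs):
        new_loss = loss * 0.95           # placeholder for one training epoch
        epochs += 1
        if abs(loss - new_loss) < tol:   # early stop on convergence
            break
        loss = new_loss
    return loss, epochs

tasks = {"crowd_in": 1.0, "taxi_pick": 0.8}
log = {name: train_one_task(l0) for name, l0 in tasks.items()}
```

The epoch cap and low learning rate correspond to the numbers stated above; everything else is scaffolding for exposition.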
**W3 & W4. Presentation issues.** We appreciate your careful reading; we will thoroughly check and correct the typos throughout the paper and expand proprietary terms with detailed explanations.
Specifically, PECPM refers to Pattern Expansion and Consolidation on Evolving Graphs for Continual Traffic Prediction [1] proposed in SIGKDD 2023.
[1] Pattern Expansion and Consolidation on Evolving Graphs for Continual Traffic Prediction, SIGKDD, 2023
**Q1. Definition of domain.** In our study, various domains correspond to different urban elements collected in different manners in a given city. For instance, an urban system includes diverse elements such as taxi demands, crowd flow, traffic speed, and accidents. We collect and organize various urban elements in a city into one integrated dataset. The goal of our work is to explore the integrated intelligence from various domains and enhance the learning of each individual urban element. To this end, the concept of multi-task here is to forecast various elements from different domains in an integrated model.
**Q2. Explanation of formalization.** Initially, $E_s$ represents the spatial embeddings at the beginning of the model's operations, and $H[..., slice(s)]$ refers to the spatial segment within the tensor $H$, which initially matches $E_s$. As the model processes the data, $H'[..., slice(s)]$ evolves to reflect the updates, making it distinct from the initial embeddings $E_s$; hence we adopt a uniform slice representation. We will clarify this issue and the dynamic nature of these embeddings in our revised manuscript.
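A small NumPy sketch may make the distinction concrete (segment widths are hypothetical): `E_s` keeps the initial spatial embeddings, while the spatial slice of `H` is the part of the working tensor that gets updated:

```python
import numpy as np

d_s, d_t, d_o = 4, 3, 5                        # hypothetical segment widths
E_s = np.ones((10, d_s))                       # initial spatial embeddings
H = np.concatenate([E_s, np.zeros((10, d_t + d_o))], axis=1)

spatial = slice(0, d_s)                        # the "slice(s)" segment of H
assert np.array_equal(H[..., spatial], E_s)    # identical at initialization

H_updated = H.copy()
H_updated[..., spatial] += 0.5                 # the model updates the slice,
assert not np.array_equal(H_updated[..., spatial], E_s)  # E_s stays as it was
```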
**Q3. Input features across tasks.** For different tasks within the same city, the input features can vary specific to each task, though they share the same dimensionality $C$. Currently, the tasks we have collected have features with input dimension $C=3$: numerical values, time of day, and day of week, as answered in **W1**. In our future work, we plan to collect more diverse datasets and conduct further experiments with various types of input features of different dimensions $C$ to accommodate a broader range of urban tasks.
We again express our great appreciation for your valuable efforts on our work. We will comprehensively take yours and all other reviewers' comments into consideration and do our best to polish our manuscript to satisfy the high-level requirements of the NeurIPS community.
Authors of Paper 2077
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for your response. I have raised my assessment.
---
Reply to Comment 1.1.1:
Title: Thanks for your support and thoughtful comments
Comment: Dear Reviewer GaXf,
We sincerely thank you for your support and valuable comments on our work, which have played a significant role in improving the quality of our manuscript. We will make our best efforts to improve the presentation of the manuscript and add the corresponding experimental details, as well as open-source the code and dataset for reference and availability. We would appreciate your kind support during the discussion phase. Thank you very much.
Authors of Paper 2077 | Summary: This paper proposes a Continuous Multi-task Spatio-Temporal learning framework (CMuST) to enhance urban intelligence. CMuST introduces a Multi-dimensional Spatio-Temporal Interaction network (MSTI) for capturing complex data interactions and a Rolling Adaptation training scheme (RoAda) to iteratively update the model, simultaneously maintaining task uniqueness and leveraging shared patterns across tasks. The framework is validated through extensive experiments on datasets from three cities, demonstrating superior performance against existing methods.
Strengths: S1. Well presentation. This paper is well-presented and well-organized, providing a clear and comprehensive overview of the proposed methods and their implications.
S2. Good significance. The proposed CMuST can jointly model different tasks of spatiotemporal forecasting within the same spatiotemporal domain. This approach not only reinforces individual correlated tasks from a collective perspective, but also helps understand the cooperative mechanism within the dynamic spatiotemporal system.
S3. Sufficient and qualified technical contribution. The contribution and innovation of MSTI network lies in that effectively dissecting interactions across multiple data dimensions for improved spatiotemporal representation and commonality extraction, and RoAda training scheme for ensuring model to adapt to new tasks and continuously learn commonality and personalized patterns. The coupling of these two major components can well contribute to the ST learning field.
S4. New benchmark construction and good experiment designs. The construction of benchmark datasets for three cities enriches the research field and provides a solid foundation for evaluating the framework performance. Extensive experiment designs including robustness in data-scarce scenarios, visualized attention scores, and performance variation with task increasing, demonstrate the framework's superiority in enhancing individual tasks with limited data and providing insights into task-wise continuous learning.
Weaknesses: 1. In Section 4.4, there is missing detailed description on how to avoid catastrophic forgetting during task rolling adaptation. It would be beneficial if the authors could provide more experimental details in this regard.
2. Lacking comparison baselines. More baselines which are argued for unified spatiotemporal/time series learning, such as UniST, UniTime should be added for comparisons.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. In your CMuST, how can you avoid catastrophic forgetting during task rolling adaptation?
2. Doing more comparison experiments with SOTA baselines would be better.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer SHU9,
Thank you for your valuable feedback and for recognizing the contributions of our work. Your insights are greatly appreciated and will help us further improve the quality of our manuscript.
**W1&Q1. Avoiding catastrophic forgetting.** Actually, to avoid catastrophic forgetting, we implement several strategies during the Rolling Adaptation (RoAda) phase. Firstly, we set the learning rate for each task to 1e-5. This helps to retain more knowledge from previous tasks and prevents the model from over-adjusting to new tasks. By maintaining a low learning rate, the model can incrementally learn new information while preserving the stability of previously learned tasks. Additionally, we use task-specific weights for each task, such as task prompts. This method allows us to absorb the common features across all tasks while independently preserving and updating task-specific parameters. This approach ensures that when learning new tasks, the model does not forget the knowledge gained from previous tasks, thereby preventing catastrophic forgetting and avoiding overfitting to new tasks.
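The parameter split described above can be sketched as follows (names, sizes, and the gradient inputs are illustrative; this is not the RoAda implementation itself):

```python
import numpy as np

rng = np.random.default_rng(1)

# One set of shared weights serves all tasks; each task additionally keeps
# a small prompt vector, which is the only task-specific state saved and
# restored when the rolling schedule revisits that task.
shared = {"W": 0.1 * rng.normal(size=(8, 8))}
prompts = {}                                   # task name -> prompt vector

def adapt(task, grad_shared, grad_prompt, lr=1e-5):
    """Low-lr update of shared weights; larger update of the task's prompt."""
    prompts.setdefault(task, np.zeros(8))
    shared["W"] -= lr * grad_shared            # tiny step: old tasks retained
    prompts[task] -= 0.01 * grad_prompt        # task-specific, no interference

adapt("crowd_in", np.ones((8, 8)), np.ones(8))
adapt("taxi_pick", np.ones((8, 8)), np.ones(8))
```

Because each task's prompt is updated independently, learning a new task barely perturbs the shared weights and never touches another task's prompt.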
**W2&Q2. More comparison baselines.** We appreciate your feedback and agree that comparing against unified spatiotemporal/time-series learners (such as UniST [1] and UniTime [2]) will better showcase the generalization capabilities of our model for multi-task learning within the same urban system. In response to your suggestion, we have conducted additional experiments comparing our CMuST model with UniST and UniTime. The results of these comparisons are as follows:
| **Model/Dataset** | **①/Ⅰ** | **①/Ⅱ** | **①/Ⅲ** | **①/Ⅳ** | **②/Ⅰ** | **②/Ⅱ** | **③/Ⅰ** | **③/Ⅱ** | **③/Ⅲ** |
| ----------------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |
| UniST/MAE | 11.3865 | 13.0762 | 6.8942 | 5.8804 | 11.7461 | 0.6985 | 2.3551 | 2.0134 | 1.1186 |
| UniST/MAPE | 0.4610 | 0.4478 | 0.3261 | 0.3490 | 0.2465 | 0.2661 | 0.3916 | 0.4162 | 0.2508 |
| UniTime/MAE | 12.2874 | 14.9120 | 7.4723 | 6.4641 | 13.9172 | 0.6993 | 2.4564 | 2.0341 | 1.1292 |
| UniTime/MAPE | 0.4721 | 0.4760 | 0.3671 | 0.3719 | 0.2965 | 0.2713 | 0.3987 | 0.4254 | 0.2511 |
(The symbols in the table are explained in the attached **PDF** of the global response.)
The results of these additional comparisons will be incorporated into the next version of our manuscript, providing a thorough evaluation and demonstrating the practical benefits and advancements introduced by our approach.
Thank you once again for your valuable feedback. Your suggestions and comments will significantly enhance the quality of our manuscript, and we are committed to making the necessary improvements to meet the high standards of the NeurIPS community.
[1] UniST: A Prompt-Empowered Universal Model for Urban Spatio-Temporal Prediction, SIGKDD, 2024
[2] UniTime: A Language-Empowered Unified Model for Cross-Domain Time Series Forecasting, WWW, 2024
Authors of Paper 2077
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I have checked all the content and also my concerns are well-addressed. This work did address a new problem and contributed new techniques in the field of spatiotemporal learning. I would like to raise my score to 7.
---
Reply to Comment 1.1.1:
Title: Thanks for your recognition and insightful suggestions
Comment: Dear Reviewer SHU9,
Thank you for your recognition and valuable suggestions, we will carefully revise our manuscript, including adding more experimental details and baselines to further improve the quality for satisfying the high-level requirement of NeurIPS community. Thanks a lot!
Authors of Paper 2077 | Summary: This work proposes a multi-task spatiotemporal learning framework that helps the model understand the relationships between multiple tasks. The specific contributions lie in proposing MSTI to model the multidimensional spatiotemporal data and RoAda to capture the commonality and personalization among multiple tasks.
Strengths: 1. The author attempts to construct a spatiotemporal model for continuous multi-tasks, which is an attractive motivation with development potential.
2. RoAda provides an executable technical solution for spatiotemporal continuous learning.
Weaknesses: 1. The author mentions multi-task and multi-domain problems several times in the introduction, but these concepts are not intuitively introduced in the paper. Multi-task and multi-domain do not always have a unified consensus in the ST community. For example, does multi-task include regression tasks and classification tasks? Does multi-domain refer to different cities or different modes of transportation? The author should provide more specific scopes for these terms in the introduction.
2. The author's method of handling ST dependencies is not novel. Using cross-attention techniques to model from the perspectives of temporal dimension, spatial dimension, and spatiotemporal relationships separately is a common processing paradigm in the ST community. The contribution of MSTI can be considered overstated.
3. The research on ST prediction and continual learning in the related work section is insufficient, lacking analysis of advanced ST prediction models and continual learning models in recent years.
4. The author argues that continual learning can help spatiotemporal models enhance generalization ability. The theory and experiments in the paper are insufficient to support this argument.
5. The author neglects experiments on cold start problems.
6. The comparative experiments did not achieve the best results on all tasks; reasons for this should be analyzed.
Overall, I believe this work proposes a very attractive challenge, but in the end it only solves a standard, old problem, i.e., the ST multi-task problem. The author proposes Rolling Adaptation to solve this problem, which is a contribution that cannot be ignored. However, this work is incomplete: many issues mentioned in the introduction are not addressed, the definition of tasks is confusing at the beginning, and there is a lack of a pseudocode algorithm to help readers understand the proposed method more accurately.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Is the author using "continuous learning" to replace the commonly used "continual learning" in the community to indicate the uniqueness of this work?
2. Did the author train three models on three datasets, or train only one model and continuously update it on three datasets? In other words, the author provided definitions for temporal increment, spatial increment, and feature increment in Definition 3, but in the actual work, was the increase in spatial nodes ignored?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The limitations have been addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer YiFX,
Thank you for your detailed and insightful feedback. We have carefully addressed each concern below.
**W1. Concept, definition and scope of multi-task learning.** Various domains correspond to different urban elements in a given city, and the concept of multi-task is to forecast various urban elements in the same neural network. (More details in **Common issue 1**.)
**W2. Contribution of MSTI.** Previous attention-based spatiotemporal learners often process spatial and temporal aspects separately. Different from those, MSTI designs cross-dimension attention, allowing flexible decomposition and spatial-temporal cross interactions. (More details in **Common issue 2**.)
**W3. Related work review.** TrafficStream [1], PECPM [2] on ST learning, and CLS-ER [3] on task-level continuous learning are investigated in our paper. We supplement some more related works as below.
1. *ST learning:* DG2RNN [4] designs a dual-graph convolutional module to capture local spatial dependencies from both road-distance and adaptive-correlation perspectives. PDFormer [5] designs a spatial self-attention and introduces two graph masking matrices to highlight the spatial dependencies of short- and long-range views. TESTAM [6] uses time-enhanced ST attention via a mixture-of-experts and models both static and dynamic graphs. These solutions focus on accuracy and generalization but neglect the significance of continuous learning and fail to capture commonalities across different dynamic elements for collective intelligence.
2. *Detailed continuous learners:* C-LoRA [7] performs low-rank adaptation with continuous self-regularization in the cross-attention layers of stable diffusion. Kang et al. [8] develop a customized dirty-label backdoor for online settings while maintaining high accuracy. COPAL [9] continuously adapts to new data through a pruning approach guided by sensitivity analysis. These studies predominantly center on the same task. Even though CLS-ER addresses different tasks, it still focuses on image classification, with little exploration of task-level streaming data. To model various aspects of commonality, MSTI and RoAda are proposed respectively. Therefore, both the proposed task and solution are novel to the ST learning community.
[1] TrafficStream, IJCAI'21
[2] PECPM, KDD'23
[3] Learning Fast, Learning Slow, ICLR'22
[4] DG2RNN, TITS'24
[5] PDFormer, AAAI'24
[6] TESTAM, ICLR'24
[7] C-LoRA, TMLR'24
[8] Poisoning Generative Replay in Continual Learning ...., ICML'23
[9] COPAL, ICML'24
**W4. Generalization experiments and theory.**
1. *Experimental Evidence:* We have conducted generalization tests in Tab. 2 of Sec. 5.2 and Fig. 4(b) of Sec. 5.4. Tab. 2 shows performance variations when the number of nodes is reduced on NYC, indicating the robustness of CMuST. Fig. 4(b) shows how performance changes with the number of input tasks, indicating that task learning benefits from collective intelligence through assimilating common representations and interactive information, supporting the enhanced generalization in continual learning. Additional task-level cold-start experiments were added to validate CMuST (**Common issue 3**).
2. *Theoretical Perspective:* It can be analyzed from uncertainty and information theory. First, introducing more diverse samples and iteratively repeating model training can reduce epistemic uncertainty and enrich the experience of models [10,11]. From an information-theoretic aspect, continual learning allows the model to maintain useful common information and dynamically update itself with new data, increasing the mutual information across task-level observations [12,13]. By learning multiple related tasks, the model's perceived knowledge is expanded via additional patterns and dependencies, leading to enhanced generalization [14,15].
[10] Aleatoric and epistemic uncertainty in machine learning, MachLearn'21
[11] SDE-Net, ICML'20
[12] A Comprehensive Survey of Continual Learning, TPAMI'24
[13] Graph information bottleneck, NeurIPS'20
[14] Improving robustness to model inversion attacks via ..., AAAI'21
[15] Incorporating neuro-inspired adaptability for continual learning ..., NMI'23
**W5. Cold-start issue.** Generalization capacities have been empirically validated in Sec. 5.2 and Sec. 5.4, where Tab. 2 can be viewed as imitating the cold-start issue on the spatial dimension. (Task-level cold-start experiments are added in **Common issue 3**.)
**W6. Analysis of comparative experiments.** 1) CMuST achieves overall good performance with the most best results; only 4 out of 18 are second best, showing its superiority against all baselines. 2) Baseline models are usually designed for specific tasks and datasets, and thus tend to be tailored and tuned for that specific data and those tasks; e.g., PromptST is designed on NYC, thus obtaining the best performances on NYC. As a result, a baseline model may individually achieve the best score on its target data while CMuST achieves the overall best results. We will incorporate such discussions into our manuscript.
**Q1. Continuous & continual learning.** 'Continuous' is equivalent to 'continual'. The uniqueness of our work lies in a novel continuous task learning in the ST community, which collects integrated intelligence and benefits each individual learning task.
**Q2. Model training and working details.** Given each dataset with different urban elements, we trained a separate model for each city (dataset). Regarding the increment on the spatial domain, we conducted the generalization experiments in Tab. 2 by node masking. The results suggest that CMuST can ease the data requirements of a single task by capturing and exploiting commonalities and diversity among tasks.
**Other. The pseudocode of RoAda.** We have added pseudocode of RoAda to global response **PDF**.
Based on your suggestions, we are revising the manuscript to satisfy the high standards of the NeurIPS community. If you have further questions, please feel free to discuss with us.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I have raised the rating.
---
Reply to Comment 1.1.1:
Title: Thanks for your constructive suggestion and positive feedback
Comment: Dear Reviewer YiFX,
We would like to express our deeply gratitude to your professional reviews and useful suggestions for promoting our manuscript. We promise to polish our manuscript, by improving the readability, supplementing more related works, and providing additional experiments. Many thanks! Hope you a nice day!
Authors of Paper 2077 | Rebuttal 1:
Rebuttal: Dear Reviewers,
Thanks to all reviewers for your meticulous review and valuable feedback. We collated several common questions identified by multiple reviewers and have compiled detailed explanations and responses to these concerns as follows:
**Common issue 1.(Reviewer YiFX, GaXf)** **The concept, definition and scope of multi-task learning.** In our study, various domains correspond to different urban elements collected in different manners in a given city. For instance, an integrated urban system includes taxi demands, crowd flow, traffic speed, and accidents. We collect and organize various domain data (urban elements) in a city into one integrated dataset. The goal of our work is to explore the integrated intelligence from various domains and enhance the learning of each individual urban element. To this end, the concept of multi-task here is to forecast various elements from different domains in an integrated model. Therefore, our work does not aim to unify regression and classification problems but proposes an integrated model to iteratively establish common intelligence among different elements and improve generalization for each element learned in succession, thus getting rid of task isolation. Note that our experiments are performed with regression tasks, but the framework can easily generalize to classification tasks with shared representations.
**Common issue 2.(Reviewer YiFX, xkVJ) The design and technical contribution of MSTI.** Conventional attention-based spatiotemporal learners often process the spatial and temporal aspects separately [1-3]. Different from those, our MSTI designs a cross-dimension attention mechanism, where a dimension indicates the data representation on the spatial or temporal aspect. MSTI considers not only the self-correlations within the spatial dimension, the temporal dimension, and the main observations (e.g., taxi demands, flows), but also the interactions from main observations to spatial representations and from main observations to temporal representations. This design allows flexible decomposition and captures spatial-temporal cross interactions. Coupled with RoAda, we can flexibly capture the various commonalities across spatial-temporal dimensions, thus enhancing continuous learning over each task. We believe this strategy is under-explored, especially for continuous multi-task ST learning.
[1] Learning Dynamics and Heterogeneity of Spatial-Temporal Graph Data for Traffic Forecasting, TKDE'22
[2] STAEformer, CIKM' 23
[3] PDFormer, AAAI'24
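To illustrate what "cross-dimension attention" means here, a minimal NumPy sketch of one such interaction ('main observation -> spatial representation') follows; token counts, widths, and projection names are our illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_attention(Q_src, KV_src, Wq, Wk, Wv):
    """Scaled dot-product attention where queries come from one dimension
    (main observations) and keys/values from another (spatial aspect)."""
    Q, K, V = Q_src @ Wq, KV_src @ Wk, KV_src @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # (n_queries, d) output

d = 16
obs = rng.normal(size=(12, d))       # main-observation tokens (e.g. demands)
spa = rng.normal(size=(20, d))       # spatial-aspect tokens
Wq, Wk, Wv = (0.1 * rng.normal(size=(d, d)) for _ in range(3))
out = cross_attention(obs, spa, Wq, Wk, Wv)   # 'main -> spatial' interaction
```

Swapping the key/value source for temporal-aspect tokens gives the 'main -> temporal' interaction; self-correlations use the same source for queries and keys/values.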
**Common issue 3.(Reviewer YiFX, xkVJ)** **Experiments for cold-start and generalization.** We have designed a cold-start experiment. Specifically, for the NYC dataset, we selected three of the four tasks (Crowd In, Crowd Out, Taxi Pick, Taxi Drop) in turn for training, and measured the adaptation time and results for the remaining task on this basis, compared with training that single task alone. A similar design is applied to the SIP and Chicago datasets. The results are shown in Table 2 of the attached **PDF**.
The results show that, in terms of both effect and time, it performs better than single-task training, indicating that our model adapts to a newly arrived task more quickly and effectively, which helps solve the cold-start problem of urban prediction.
**Common issue 4. (Reviewers GaXf, xkVJ) Preparation of datasets and data processing.**
We construct datasets with multiple tasks for each city. For a given city, we first collect the dense main ST data and filter the records by in-range geolocation indicators (e.g., GPS). Different urban elements are then aggregated to their corresponding valid geographical ranges.
The spatial and temporal units are standardized. The contexts are mapped into the same dimension with an MLP to avoid the diverse dimensions of raw contexts.
1. **NYC**: We collect yellow-taxi trip data from January to March 2016 from the NYC Open Data website. Each trip record includes information such as pickup and dropoff times, locations, and the number of passengers. We filter out records with abnormal longitude/latitude values or missing data, then select data within Manhattan and surrounding areas, divide the region into 30x15 grids, count trips per grid, and keep grids with at least 1000 total trips, resulting in 206 grids. Each grid's data is aggregated into 30-minute intervals, yielding taxi pickup counts, taxi dropoff counts, and crowd in/out flows. We also include time of day (tod) and day of week (dow) as context, resulting in four tasks with input features [value, tod, dow].
2. **SIP**: We collect traffic data from Suzhou Industrial Park from January to March 2017, comprising tens of thousands of records. The area is divided into nodes, and data is aggregated into 5-minute intervals. After filtering out grids with sparse data, we obtain 108 nodes, each containing traffic speed and traffic flow. We include time of day and day of week as input context, resulting in two tasks: traffic flow and traffic speed, with input [value, tod, dow].
3. **Chicago**: We collect taxi trip and accident data from the Chicago Open Data platform for June to December 2023. The taxi data includes trip start/end times and locations. We divide the area into 30x20 grids and keep grids with more than 100 total trips, resulting in 220 grids. As with the NYC dataset, data is aggregated into 30-minute intervals, yielding taxi pickup and dropoff counts and thus two tasks with input features [value, tod, dow]. The accident data includes incident locations, times, casualty numbers, and the injury severity of each casualty. We obtain a risk score by weighting each casualty by its injury severity, map it to the 220 grids, and aggregate it over time intervals, resulting in a risk task with input features [risk score, tod, dow].
The detailed process can be found in the code implementation in the anonymous repository.
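As a rough illustration of the filtering and aggregation steps described above (grid binning, interval counting, sparsity thresholds), a minimal sketch follows; all function names, the bounding box, and the toy thresholds are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def aggregate_to_grid(lon, lat, t, bbox, grid=(30, 15), interval_s=1800, min_trips=1000):
    """Bin point records (e.g., taxi pickups) into spatial grid cells and fixed
    time intervals, then keep only cells with enough total trips.
    Hypothetical helper; names and thresholds are illustrative."""
    lon_min, lat_min, lon_max, lat_max = bbox
    # Drop out-of-range records (abnormal coordinates).
    ok = (lon >= lon_min) & (lon <= lon_max) & (lat >= lat_min) & (lat <= lat_max)
    lon, lat, t = lon[ok], lat[ok], t[ok]
    # Map coordinates to grid indices and timestamps to interval indices.
    gx = np.minimum(((lon - lon_min) / (lon_max - lon_min) * grid[0]).astype(int), grid[0] - 1)
    gy = np.minimum(((lat - lat_min) / (lat_max - lat_min) * grid[1]).astype(int), grid[1] - 1)
    ti = (t // interval_s).astype(int)
    cell = gx * grid[1] + gy
    counts = np.zeros((grid[0] * grid[1], ti.max() + 1), dtype=int)
    np.add.at(counts, (cell, ti), 1)  # unbuffered accumulation per (cell, interval)
    # Keep only cells whose total volume passes the sparsity threshold.
    keep = counts.sum(axis=1) >= min_trips
    return counts[keep], keep

# Toy example: 4 records in a unit bounding box, 2x2 grid, threshold of 2 trips.
lon = np.array([0.1, 0.2, 0.15, 0.9])
lat = np.array([0.1, 0.2, 0.15, 0.9])
t = np.array([0, 100, 2000, 0])
counts, keep = aggregate_to_grid(lon, lat, t, (0, 0, 1, 1), grid=(2, 2), min_trips=2)
```

The same binning pattern would apply per task (pickups, dropoffs, flows), with `tod`/`dow` context derived from the interval index.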
Authors of Paper 2077
Pdf: /pdf/e3081456260a47b672eb2769cadbade904ad20f3.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
TransAgent: Transfer Vision-Language Foundation Models with Heterogeneous Agent Collaboration | Accept (poster) | Summary: This paper proposes the TransAgent framework, which unifies and transports knowledge from isolated agents to guide CLIP in generalizing through multi-source knowledge distillation. This framework allows flexible collaboration with 11 heterogeneous agents to enhance vision-language foundation models. Importantly, this approach incurs no additional cost during the inference phase.
Strengths: **[Interesting idea]** Leveraging heterogeneous agent collaboration for generalized vision language models is intriguing.
**[Good presentation]** The paper is well-written and organized, which is easy to follow.
**[Well-illustrated figures]** The figures shown in this paper are clear enough to tell the workflow of the method.
Weaknesses: **[Overclaimed statements]** The paper mentions that “it is the first unified transfer framework for generalizing vision-language foundation models with heterogeneous agent collaboration.” However, CaFo [67] had done similar things by adopting multiple heterogeneous agents. This statement may need a revision to differentiate from CaFo.
**[Some SOTA methods are not compared]** Some works to be compared or discussed in related works are listed as follows: “PromptKD: Unsupervised Prompt Distillation for Vision-Language Models, CVPR2024”, “DePT: Decoupled Prompt Tuning, CVPR2024”, “Beyond Sole Strength: Customized Ensembles for Generalized Vision-Language Models, ICML2024”, “Consistency-guided Prompt Learning for Vision-Language Models, ICLR2024”, “GraphAdapter: Tuning Vision-Language Models with Dual Knowledge Graph, NeurIPS2023” and “Task Residual for Tuning Vision-Language Models, CVPR2023”.
**[Need further explorations]** The knowledge distillation technique used in this work is too straightforward. It would be nice to see an advanced distillation with heterogeneous agents.
**[Inadequate experiments]** There are some experiments to be done for verifying the effectiveness of the proposed method. (i) Training time comparisons with other methods and more comparisons for inference time in Table 10; (ii) It would be nice to see the performance by increasing each agent gradually. This can help better understand which agent is more effective and which one is useless. (iii) Ablation study conducted using VAC, LAC, or MAC individually, as well as in combinations of any two.
**[Disorganized reference format]** Please reformat the references, e.g., “Conference on Computer Vision and Pattern Recognition”, “Proceedings of the IEEE/CVF conference on computer vision and pattern recognition” and “Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)”.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please refer to the weakness section.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Yes, they have provided the limitation section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive comments. We provide our feedback as follows.
**Q1: The paper mentions that “it is the first unified transfer framework for generalizing vision-language foundation models with heterogeneous agent collaboration.” However, CaFo [67] had done similar things by adopting multiple heterogeneous agents. This statement may need a revision to differentiate from CaFo.**
**A1:** Thanks for your suggestion.
CaFo [67] leverages cascades of external models based on their output forms, which necessitates careful sequential arrangement of the models. This design inherently lacks **flexibility** due to the required model arrangement.
In contrast, our TransAgent framework avoids this issue by employing a unified distillation process for heterogeneous agents, eliminating the complexity of sequential model arrangement. Additionally, TransAgent unloads all external models during inference, achieving **deployment efficiency** without the burden of a heavy model ensemble as seen with CaFo.
Based on these distinctions, we will revise the statement to: “it is the first unified *distillation* framework for generalizing vision-language foundation models with *efficient* heterogeneous agent collaboration.”
**Q2: Some works to be compared or discussed in related works are listed as follows:
PromptKD (CVPR2024),
DePT (CVPR2024),
T\\(_{En}\\) (ICML2024),
CoPrompt (ICLR2024),
GraphAdapter (NeurIPS2023),
Task Residual(CVPR2023).**
**A2:** Thanks for this suggestion.
We list the comparison on base-to-novel generalization of 11 benchmarks.
(1) **PromptKD (CVPR2024)** also leverages distillation but uses the full training set of base and novel classes without labels. To align with the mainstream setting, we re-implemented its official code using only 16-shot labeled data. Our TransAgent clearly outperforms PromptKD under these conditions.
(2) **T\\(_{En}\\) (ICML2024)** employs a model ensemble that must be maintained during inference, leading to inefficient deployment. In contrast, our TransAgent unloads all external models after training, with the inference pipeline identical to CLIP. This results in better performance with only **1/3** complexity during inference compared to **T\\(_{En}\\)**.
(3) Compared to other prompt-learning based methods such as **DePT (CVPR2024)** and **CoPrompt (ICLR2024)**, our TransAgent simply achieves better performance.
(4) For adapter-based methods such as **GraphAdapter (NeurIPS2023)** and **Task Residual (CVPR2023)**, we will discuss them in the related work, since their model and data settings differ from ours.
|Method|Training Data|Inference Model|Base Acc.|Novel Acc.|HM|
|:-|:-|:-:|:-:|:-:|:-:|
|PromptKD (CVPR2024)|16-shot + unlabeled|87M|86.96|80.73|83.73|
|PromptKD (CVPR2024)|16-shot|87M|77.75|71.69|74.59|
|T\\(_{En}\\) (ICML2024)|16-shot|268M|85.48|77.17|81.11|
|DePT (CVPR2024)|16-shot|86M|85.19|76.17|80.43|
|CoPrompt (ICLR2024)|16-shot|91M|84.00|77.23|80.48|
|TransAgent (ours)|16-shot|86M|**85.29**|**77.62**|**81.27**|
**Q3: It would be nice to see an advanced distillation with heterogeneous agents.**
**A3:** Thanks for your insightful comments.
Note that our primary goal is to introduce a **generic knowledge integration** method to enhance CLIP using heterogeneous agents. The main contribution lies in how we make these heterogeneous agents collaborate to generate the knowledge vector for distillation, rather than in the distillation operation itself.
Specifically, we propose a novel **mixture-of-agent gating mechanism** (outlined in Equation 3, 5, and 7) for collaborating heterogeneous agents in each modality. This mechanism allows TransAgent to adaptively select and weight the knowledge from various agents, effectively reducing the introduction of irrelevant knowledge during distillation.
The visualization of this process is provided in Figure 5, and the effectiveness of this design is demonstrated in the bottom sections of Tables 2-4 (Page 9). This innovative approach represents the advancement in our collaboration method.
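The gating idea described above can be sketched in a few lines; this is our reading of the mechanism in hedged form (toy shapes, hypothetical names, and a plain L2 distillation loss that may differ from the paper's actual objective in Equations 3, 5, and 7):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over gate logits.
    e = np.exp(x - x.max())
    return e / e.sum()

def gated_knowledge(agent_feats, gate_logits):
    """Mixture-of-agents gating sketch: softly weight per-agent features
    into a single knowledge vector used as the distillation target."""
    w = softmax(gate_logits)      # learnable gate -> per-agent weights
    return w @ agent_feats, w     # weighted sum over the agent axis

def distill_loss(student_feat, knowledge):
    """Simple L2 distillation loss between the student-side (CLIP) feature
    and the gated knowledge vector; illustrative, not the paper's exact loss."""
    return float(np.mean((student_feat - knowledge) ** 2))

# Toy example: 3 agents with 2-d features; the gate favors agent 0.
feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
knowledge, w = gated_knowledge(feats, np.array([2.0, 0.0, 0.0]))
loss = distill_loss(np.array([0.5, 0.5]), knowledge)
```

Because the gate weights are a softmax, irrelevant agents can be softly down-weighted rather than hard-pruned, which matches the adaptive-selection behavior described above.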
**Q4: More Experiments
(i) Training time comparisons with other methods and more comparisons for inference time;
(ii) It would be nice to see the performance by increasing each agent gradually.
(iii) Ablation study conducted using VAC, LAC, or MAC individually, as well as in combinations of any two.**
**A4:** (i) Since PromptKD (CVPR2024) also leverages distillation, we further compare with it. The cost comparison on ImageNet is presented below, where all models are trained for 5 epochs under the 16-shot setting on a single A6000 GPU.
Clearly, our model achieves a better performance with higher efficiency.
|Method|Training Time (min)|Inference Time (sec)|HM|
|:-|:-:|:-:|:-:|
|PromptKD|146|201|71.29|
|TransAgent|72|54|73.93|
(ii) Note that we have already evaluated **whether using** a specific agent helps improve performance, in the second row of Tables 2-4. The visualization of the gating weights in Figure 5 also provides insight into which agent is more useful for a specific dataset. For instance, among the vision agents, DINO excels at recognizing general objects but lags behind on some fine-grained datasets (e.g., EuroSAT), where agents such as SAM, which focus more on details, perform better.
(iii) The results are presented below. As observed, each individual module is beneficial for improving the generalization ability of the foundation model, and combining these modules further boosts the results.
|Module|Base Acc.|Novel Acc.|HM|\\(\Delta\\)|
|:-|:-:|:-:|:-:|:-:|
|baseline|84.21|71.79|77.51|-|
|VAC|84.96|73.90|79.04|+1.53|
|LAC|85.23|75.20|79.90|+2.39|
|MAC|85.04|74.85|79.61|+2.10|
|VAC + LAC|85.31|75.36|80.02|+2.51|
|VAC + MAC|85.11|75.10|79.79|+2.28|
|LAC + MAC|85.56|75.84|80.40|+2.89|
|TransAgent|85.29|77.62|81.27|**+3.76**|
**Q5: Disorganized reference format.**
**A5:** Thanks for your advice. We will check and reformat the references in our final version.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response of authors. It has addressed my concerns, and I would like to raise my score to 6.
---
Reply to Comment 1.1.1:
Comment: Thanks for your feedback. We are glad to have addressed your concerns and receive your acknowledgement. | Summary: The paper introduces TransAgent, a novel framework designed to enhance vision-language foundation models like CLIP through the integration of knowledge from diverse, pre-trained expert models. These experts, which include vision, language, and multi-modal models, possess rich knowledge acquired from different modalities, tasks, networks, and datasets. TransAgent addresses the challenge of generalizing these models to new domains, especially under low-shot conditions, by proposing a unified transfer framework that leverages heterogeneous agent collaboration. The framework employs a mixture-of-agents gating mechanism to adaptively integrate external knowledge and uses multi-source distillation to transfer this knowledge into CLIP, enhancing its generalization ability without additional inference cost. Experiments demonstrate TransAgent's state-of-the-art performance on 11 visual recognition datasets, significantly outperforming methods like CoOp under low-shot settings and showing remarkable results on EuroSAT with large domain shifts. The paper's contributions lie in its innovative approach to knowledge transfer, the flexibility of its framework, and its significant improvements in model generalizability and efficiency.
Strengths: The paper presents a highly original approach to enhancing vision-language foundation models by proposing the TransAgent framework. This framework innovatively integrates knowledge from a diverse set of pre-trained expert models into a unified system.
The use of a mixture-of-agents gating mechanism and multi-source distillation is a unique contribution that sets this work apart from existing methods.
The authors have conducted extensive experiments on 11 visual recognition datasets, demonstrating the effectiveness of TransAgent under various low-shot scenarios. The comparison with state-of-the-art methods like CoOp and the detailed ablation studies provide a solid understanding of the framework's capabilities and the impact of different components.
The paper is well-structured, with a clear abstract and introduction that succinctly summarize the contributions and scope of the work.
Weaknesses: After changing the expert agent selection, the model needs to be retrained, and this structure is not plug-and-play.
The use of integrated features from multiple teacher models as supervision for distillation is widely used in other fields, and the authors have transferred this approach to the domain of this paper, which lacks sufficient novelty
Technical Quality: 3
Clarity: 3
Questions for Authors: Could the authors provide insights into how the TransAgent framework generalizes to domains outside of the tested benchmarks, especially those with significantly different characteristics?
The paper mentions the potential introduction of irrelevant knowledge during distillation. What strategies are in place or could be considered to mitigate this issue?
The role of learnable prompts seems pivotal. How were the prompts engineered, and what impact did they have on the model's performance?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive comments. We provide our feedback as follows.
**Q1: After changing the expert agent selection, the model needs to be retrained, and this structure is not plug-and-play.**
**A1:** We would like to clarify that the training effort required by our framework is minimal. All pre-trained models (including CLIP and the 11 agents) are frozen. Fine-tuning is performed on the learnable prompts and the gating mechanism, which requires only a small amount of computational effort. For example, under the 16-shot setting, training for 20 epochs on a single A6000 GPU takes less than 20 minutes for most benchmarks.
This efficient process means that our framework can be **quickly adapted** to changes in expert agent selection or the testing of new downstream datasets.
Additionally, our method is actually **'plug-and-play'**, i.e., this concise collaboration framework can be easily integrated into various prompt-learning based methods. We will further investigate this in future work.
**Q2: The authors have transferred the distillation approach to the domain of this paper, which lacks sufficient novelty.**
**A2:** Note that performing multi-source distillation is not trivial, since how to **extract and integrate** knowledge from various heterogeneous teachers has not been explored for CLIP-like foundation models.
We would like to emphasize the distinct technical novelty and contributions of our approach, specifically in the context of *Transfer Flexibility* (Introduction, Lines 49-55).
(1) **Generic Knowledge Extraction Method:**
Our method introduces a novel approach for extracting knowledge from heterogeneous agents, particularly the multi-modal ones. We design a unique method to extract prediction score vectors as multi-modal knowledge. This involves elaborate mining of vision-language alignment within these models. The efficacy is demonstrated in Table 4 (Page 9).
(2) **Flexible Knowledge Collaboration Mechanism:**
We propose a mixture-of-agent gating mechanism to integrate external knowledge from different agents. This mechanism allows TransAgent to dynamically select agents via soft weighting, enabling it to adapt effectively to few-shot settings across various target datasets. The effectiveness is detailed in Table 2-4 (Page 9).
Beyond these technical innovations, TransAgent offers two additional significant advantages.
(1) *Knowledge Versatility* (Line 45-48):
To the best of our knowledge, TransAgent is **the first framework** to enhance CLIP-like models comprehensively using 11 heterogeneous agents, covering a wide range of vision, language, and multi-modal research areas.
(2) *Deployment Efficiency* (Line 56-60):
TransAgent allows all external agents to be unloaded after distillation. This guarantees the inference pipeline of the enhanced CLIP to be consistent with the original one, without the need for a cumbersome model ensemble, as evidenced in Table 10 (Page 16).
**Q3: Could the authors provide insights into how the TransAgent framework generalizes to domains outside of the tested benchmarks, especially those with significantly different characteristics?**
**A3:** The generalization capacity of our TransAgent framework is primarily credited to the **integration of diversified knowledge** from a wide range of heterogeneous agents that are pre-trained on different modalities, tasks, networks, and datasets.
To substantiate this claim, we follow [1-5] to verify TransAgent on 11 downstream datasets spanning diverse domains, ranging from image to video tasks, from natural to satellite images, from object to scene understanding, and from regular to fine-grained recognition. The remarkable performance across these datasets demonstrates that TransAgent can **generalize effectively**, even when faced with **significant domain shifts**.
We will investigate more diverse domain data in future work.
**Q4: The paper mentions the potential introduction of irrelevant knowledge during distillation. What strategies are in place or could be considered to mitigate this issue?**
**A4:** Thanks for your concern.
To mitigate the introduction of irrelevant knowledge during distillation, we employ a **mixture-of-agent gating mechanism** (outlined in Equation 3, 5, and 7). This mechanism enables our TransAgent to adaptively select and weight the knowledge from different agents, thereby reducing the impact of irrelevant information.
The visualization of this process is provided in Figure 5, and the effectiveness of this design is demonstrated in the bottom sections of Tables 2-4 (Page 9). This adaptive selection ensures that only the most relevant knowledge is distilled, enhancing the overall performance and robustness of the model.
**Q5: The role of learnable prompts seems pivotal. How were the prompts engineered, and what impact did they have on the model's performance?**
**A5:** To avoid complex prompt engineering, we simply leverage the widely-used CoOp [1] method with a number of learnable prompt vectors, as mentioned in Lines 102-110 of the paper. Our method works well with such a simple prompting method, showing its effectiveness and flexibility. The importance of learnable prompts as well as prompt design has been studied thoroughly by previous works [3, 4, 5], and we simply follow the de-facto standard.
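The CoOp-style learnable prompts mentioned above amount to prepending a small set of trainable context vectors to the frozen class-name token embeddings. A minimal sketch, with toy dimensions and hypothetical names (not the authors' code or CLIP's real embedding sizes):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_ctx = 8, 4  # toy embedding dimension and number of context vectors

# Learnable context vectors, shared across classes (CoOp-style initialization).
ctx = rng.normal(scale=0.02, size=(n_ctx, dim))

def build_prompt(class_token_embeds):
    """Prepend the learnable context to the (frozen) class-name token
    embeddings: [V1][V2]...[Vm][CLASS]. Only `ctx` receives gradients
    during training; shapes here are illustrative."""
    return np.concatenate([ctx, class_token_embeds], axis=0)

class_embed = rng.normal(size=(2, dim))  # e.g., a two-token class name
prompt = build_prompt(class_embed)       # shape (n_ctx + 2, dim)
```

In an actual implementation the resulting sequence would be fed through CLIP's frozen text encoder, with only the context vectors (and here, the gating parameters) updated.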
**References.**
[1] Learning to prompt for vision-language models. (IJCV-22)
[2] Prompt, generate, then cache: Cascade of foundation models makes strong few-shot learners. (CVPR-23)
[3] Maple: Multi-modal prompt learning. (CVPR-23)
[4] Self-regulating prompts: Foundational model adaptation without forgetting. (ICCV-23)
[5] Read-only prompt optimization for vision-language few-shot learning. (ICCV-23) | Summary: The paper focuses on the challenge of vision-language foundation models (e.g., CLIP) struggling to generalize to diverse target domain data in downstream tasks. It highlights the potential of using expert models, which are pre-trained on various modalities, tasks, networks, and datasets, to improve generalization. The proposed TransAgent framework integrates the knowledge of these isolated expert models through a unified approach, enhancing CLIP's performance via multi-source knowledge distillation. TransAgent collaborates with 11 heterogeneous agents to empower vision-language models without adding inference phase costs. The framework achieves good performance on some visual recognition datasets.
Strengths: 1. The problem is of practical importance.
2. The experiments are sufficient.
3. The gains of knowledge distillation seem strong.
Weaknesses: 1. In my opinion, the proposed method leverages multiple pretty strong 'experts' for knowledge distillation, e.g., SAM, MAE, DINO, ViTDet, GPT, and Vicuna. However, almost all the baselines to be compared do not rely on external models. Hence, I think the majority of comparisons (e.g., Table 1 and Figure 4) may be unfair. The proper baselines should be other distillation methods (using the same external models as this paper), which the proposed method is actually orthogonal to most of the current baselines.
2. On top of 1, I think that if some significant technical contributions regarding knowledge distillation are not proposed, simply 'applying knowledge distillation methods to CLIP' may not be an acceptable novel contribution for me.
3. Given that multiple pretty strong 'experts' (e.g., SAM, MAE, DINO, ViTDet, GPT, and Vicuna) for knowledge distillation have been employed, the current gains of performance seem limited.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please refer to 'Weaknesses'.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors have addressed the limitations and potential negative societal impacts of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive comments. We provide our feedback as follows.
**Q1: Almost all the baselines to be compared do not rely on external models. Hence, I think the majority of comparisons (e.g., Table 1 and Figure 4) may be unfair.**
**A1:** Thank you for raising the concern.
(1) Table 1 and Figure 4 are provided for the state-of-the-art (SOTA) comparison, rather than an ablation study focusing solely on the distillation method. Our aim is to identify **the most effective method**, regardless of the specific techniques employed. The results demonstrate that TransAgent achieves superior performance, highlighting its advancements in few-shot generalization.
(2) During the inference phase, we unload all external models, ensuring that the inference process of TransAgent is identical to that of the original CLIP with learnable prompts. Therefore, it is fair to compare our method with others using learnable prompts, as they share **the same inference setting**, as presented in Table 1.
(3) Our TransAgent also outperforms CaFo, which similarly relies on external models, as shown in Figure 4. This comparison further underscores the effectiveness and fairness of our approach.
**Q2: The proper baselines should be other distillation methods (using the same external models as this paper), which the proposed method is actually orthogonal to most of the current baselines.**
**A2:** Thanks for your insightful feedback.
(1) We have included baselines that involve distillation with external models. As indicated in the second row of Tables 2-4 (Page 9), the use of distillation from each external model contributes to performance improvement. Our proposed heterogeneous agent collaboration (the Gating results shown in Tables 2-4) consistently achieves the best results. This fairly demonstrates the effectiveness of our method in leveraging multiple external models.
(2) For other distillation methods in CLIP research, such as PromptKD (CVPR2024), it is not feasible to use the exact same set of external models as in our study due to differences in frameworks and settings. Specifically, PromptKD employs a transductive setting, which differs from our approach. Nonetheless, we have included it as a state-of-the-art (SOTA) baseline for comparison.
For instance, on the EuroSAT benchmark, which involves a **significant domain shift** with satellite images, our method achieves an accuracy of 83.43 on novel classes, compared to 82.08 for PromptKD. Notably, we achieve this with only 16-shot labeled data in the base class, whereas PromptKD utilizes 16-shot labeled data in the base class and the full training set without labels in the base and novel classes. This demonstrates our method's superior performance.
**Q3: If some significant technical contributions regarding knowledge distillation are not proposed, simply 'applying knowledge distillation methods to CLIP' may not be an acceptable novel contribution.**
**A3:** Note that performing multi-source distillation is not trivial, since how to **extract and integrate** knowledge from various heterogeneous teachers has not been explored for CLIP-like foundation models.
We would like to emphasize the distinct technical novelty and contributions of our approach, specifically in the context of *Transfer Flexibility* (Introduction, Lines 49-55).
(1) **Generic Knowledge Extraction Method:**
Our method introduces a novel approach for extracting knowledge from heterogeneous agents, particularly the multi-modal ones. We design a unique method to extract prediction score vectors as multi-modal knowledge. This involves elaborate mining of vision-language alignment within these models. The efficacy is demonstrated in Table 4 (Page 9).
(2) **Flexible Knowledge Collaboration Mechanism:**
We propose a mixture-of-agent gating mechanism to integrate external knowledge from different agents. This mechanism allows TransAgent to dynamically select agents via soft weighting, enabling it to adapt effectively to few-shot settings across various target datasets. The effectiveness is detailed in Table 2-4 (Page 9).
Beyond these technical innovations, TransAgent offers two additional significant advantages.
(1) *Knowledge Versatility* (Line 45-48):
To the best of our knowledge, TransAgent is **the first framework** to enhance CLIP-like models comprehensively using 11 heterogeneous agents, covering a wide range of vision, language, and multi-modal research areas.
(2) *Deployment Efficiency* (Line 56-60):
TransAgent allows all external agents to be unloaded after distillation. This guarantees the inference pipeline of the enhanced CLIP to be consistent with the original one, without the need for a cumbersome model ensemble, as evidenced in Table 10 (Page 16).
**Q4: Given that multiple pretty strong `experts' for knowledge distillation have been employed, the current gains of performance seem limited.**
**A4:** Thank you for raising this concern. The performance gains achieved by our TransAgent are indeed significant, as highlighted by the following observations. *First*, TransAgent operates similarly to CoOp [2] during inference, utilizing learnable prompts while unloading all external models.
With the same inference pipeline, TransAgent significantly outperforms the popular CoOp by around **10%** on average, and by as much as **20%** on the EuroSAT dataset which contains large domain shifts, under the same low-shot setting in Table 1. *Second*, when compared to CaFo [3], which also utilizes external models, TransAgent achieves approximately **5%** higher accuracy on average, while using only **1/10** of the inference time as shown in Table 10.
**References.**
[1] PromptKD: Unsupervised Prompt Distillation for Vision-Language Models (CVPR-24)
[2] Learning to prompt for vision-language models. (IJCV-22)
[3] Prompt, generate, then cache: Cascade of foundation models makes strong few-shot learners. (CVPR-23) | Summary: This paper aims to handle heterogeneous foundation model combination across different pretrained backbones. Instead of using vanilla ensemble, it proposes to use a distillation process to transfer knowledges from different agents. More specifically, it uses a learnable gate module to integrate different knowledge sources. Extensive experiments show the propose method outperforms other related models.
Strengths: 1. Efficiently combining several pretrained foundation models is a valuable research direction.
2. Overall, the writing is clear to read and follow, such as motivation, methodology, and empirical results.
3. Compared with baseline, the proposed model does not need to involve extra inference cost, benefitted by a distillation process.
4. Empirical results show the model superiority.
Weaknesses: 1. Overall, I think this is more like a joint distillation technique than so-called agent collaboration, which may lead to certain misunderstanding.
2. Some figures are a little confusing. A clean and informative figure showing the whole working pipeline would be helpful.
3. Even if the empirical results are good, the proposed method still lacks research novelty.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the weaknesses section above.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive comments. We provide our feedback as follows.
**Q1: The proposed method is more like a joint distillation technique than so-called agent collaboration, which may lead to certain misunderstanding.**
**A1:** We would like to clarify this misunderstanding. Joint distillation is our goal, while agent collaboration is our method to achieve this goal. Specifically, we introduce a generic gating mechanism for each modality (outlined in Equation 3, 5 and 7). This mechanism allows us to flexibly weight the contributions of various teacher models, thereby creating a summarized knowledge vector that is subsequently used for distillation within the given modality. We describe this process as **"agent collaboration"** because it involves all the agents (models) working together adaptively towards a shared objective, i.e., constructing a knowledge vector for effective distillation. This collaborative aspect distinguishes our method.
We will clarify this in the revision.
**Q2: Some figures are a little confused to understand. A clean and informative figures to show the whole working pipeline will be helpful.**
**A2:** Thanks for your suggestions. Figure 1 actually shows the whole pipeline of our TransAgent, where we extract external knowledge from heterogeneous models in each modality and leverage the knowledge of all modalities to boost vision-language foundation models. The current figures mainly focus on "what" knowledge is integrated. We will add more informative figures showing "how" this knowledge is integrated in the revision.
**Q3: Even if the empirical results are good, the proposed method still lacks research novelty.**
**A3:** Note that performing multi-source distillation is not trivial, since how to **extract and collaborate** knowledge from various heterogeneous teachers has not been explored for CLIP-like foundation models.
We would like to emphasize the distinct technical novelty and contributions of our approach, specifically in the context of *Transfer Flexibility* (Introduction, Lines 49-55).
(1) **Generic Knowledge Extraction Method:**
Our method introduces a novel approach for extracting knowledge from heterogeneous agents, particularly the multi-modal ones. We design a unique method to extract prediction score vectors as multi-modal knowledge. This involves elaborate mining of vision-language alignment within these models. The efficacy is demonstrated in Table 4 (Page 9).
(2) **Flexible Knowledge Collaboration Mechanism:**
We propose a mixture-of-agent gating mechanism to integrate external knowledge from different agents. This mechanism allows TransAgent to dynamically select agents via soft weighting, enabling it to adapt effectively to few-shot settings across various target datasets. The effectiveness is detailed in Tables 2-4 (Page 9).
Beyond these technical innovations, TransAgent offers two additional significant advantages. (1) *Knowledge Versatility* (Lines 45-48): To the best of our knowledge, TransAgent is **the first framework** to enhance CLIP-like models comprehensively using 11 heterogeneous agents, covering a wide range of vision, language, and multi-modal research areas. (2) *Deployment Efficiency* (Lines 56-60):
TransAgent allows all external agents to be unloaded after distillation. This guarantees the inference pipeline of the enhanced CLIP to be consistent with the original one, without the need for a cumbersome model ensemble, as evidenced in Table 10 (Page 16). | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for their constructive comments.
We are delighted to receive positive feedback such as
"the idea is intriguing" (**DUuV**),
"valuable research direction" (**gY7M**),
"can be extended flexibly" (**QaNB**),
"unique contribution" (**A3Ab**),
"of practical importance" (**wEnm**),
"figures are clearly presented" (**DUuV, QaNB**),
"easy to follow" (**QaNB, gY7M, DUuV**)
and "well-structured" (**A3Ab, DUuV, gY7M**).
We have carefully addressed all the concerns raised by the reviewers in the individual response section. | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper introduces a TransAgent framework, which guides CLIP to generalize with multi-source knowledge distillation. The framework contains three kinds of collaboration, including vision models, language models and muti-modal models. A Mixture-of-Agents (MoA) gating mechanism is proposed to adaptively integrate the knowledge. SOTA is achieved on 11 datasets under the few-shot scenarios.
Strengths: This method guarantees the deployment efficiency, as knowledge from heterogeneous agents are distillated and injected into CLIP.
This framework can be extended flexibly. Any expert can be introduced as a teacher model.
The paper is easy to follow. Figures and experiments are clearly presented.
Weaknesses: The method applies multi-teacher distillation in CLIP with prompts, and popular large models are exploited as teachers. Thus, the novelty is limited.
Given that the framework comprises three components (VAC, LAC, and MAC), the reviewer insists that results for each module alone (only-VAC, only-LAC, only-MAC) would be more persuasive in demonstrating their respective effectiveness.
CLIP absorbs various knowledge from 11 excellent agents. In addition to few-shot learning on 11 downstream datasets, experiments on general datasets are also encouraged, like WebQA and CIRR.
In the related works, research on multi-teacher distillation should be included. Baselines in the experiments should be introduced in the related work.
Typos in line 259: knowledege -> knowledge
Technical Quality: 4
Clarity: 3
Questions for Authors: The 11 agents contain different domain knowledge. Irrelevant or inconsistent information may be induced. Is there any challenge in the training, and how do you deal with it?
How to prevent overfitting when distilled from 11 pretrained models?
The λ_2 is far greater than the other λ values; does this mean that the LAC loss is more important than the other losses?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors have listed the limitation in the paper. It is encouraged to provide more analysis and solutions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive comments. We provide our feedback as follows.
**Q1: The method applies multi-teacher distillation in CLIP with prompts, and popular large models are exploited as teachers. Thus, the novelty is limited.**
**A1:** Note that performing multi-source distillation is not trivial, since how to **extract and collaborate** knowledge from various heterogeneous teachers has not been explored for CLIP-like foundation models. We would like to emphasize the distinct technical novelty and contributions of our approach, specifically in the context of *Transfer Flexibility* (Introduction, Lines 49-55).
(1) **Generic Knowledge Extraction Method:** Our method introduces a novel approach for extracting knowledge from heterogeneous agents, particularly the multi-modal ones. We design a unique method to extract prediction score vectors as multi-modal knowledge. This involves elaborate mining of vision-language alignment within these models. The efficacy is demonstrated in Table 4 (Page 9).
(2) **Flexible Knowledge Collaboration Mechanism:** We propose a mixture-of-agent gating mechanism to integrate external knowledge from different agents. This mechanism allows TransAgent to dynamically select agents via soft weighting, enabling it to adapt effectively to few-shot settings across various target datasets. The effectiveness is detailed in Tables 2-4 (Page 9).
Beyond these technical innovations, TransAgent offers two additional significant advantages. (1) *Knowledge Versatility* (Lines 45-48): To the best of our knowledge, TransAgent is **the first framework** to enhance CLIP-like models comprehensively using 11 heterogeneous agents, covering a wide range of vision, language, and multi-modal research areas. (2) *Deployment Efficiency* (Lines 56-60):
TransAgent allows all external agents to be unloaded after distillation. This guarantees the inference pipeline of the enhanced CLIP to be consistent with the original one, without the need for a cumbersome model ensemble, as evidenced in Table 10 (Page 16).
**Q2: The results of every module (only-VAC, only-LAC, only-MAC) will be more persuasive to demonstrate their respective effectiveness.**
**A2:** The ablation is conducted below. As observed in the table, each module is effective and their combination further boosts the overall performance on 11 benchmarks (the same setting as Table 1 in the paper).
|Module|Base Acc.|Novel Acc.|HM|
|:-|:-:|:-:|:-:|
|baseline|84.21|71.79|77.51|
|VAC|84.96|73.90|79.04|
|LAC|85.23|75.20|79.90|
|MAC|85.01|74.85|79.61|
|TransAgent|85.29|77.62|81.27|
**Q3: In addition to few-shot learning on 11 downstream datasets, experiments on general dataset are also encouraged, like WebQA and CIRR.**
**A3:** Note that these 11 downstream datasets have been widely used to evaluate the generalization capacity of foundation models [1-4], since they are actually general datasets ranging from image to video tasks, from natural to satellite images, from object to scene understanding, and from regular to fine-grained recognition. Hence, we follow the mainstream setting for evaluation.
**Q4: In the related works, research on multi-teacher distillation should be included. Baselines in the experiments should be introduced in the related work. Typos in line 259: knowledege -> knowledge.**
**A4:** Thanks for your suggestions on related works. We will include baseline methods and research on multi-teacher distillation [5-7] in the related work in our final version. Additionally, we will fix the typos in the revision.
**Q5: 11 agents contain different domain knowledge.
Irrelevant information or inconsistent information may be induced.
(1) Is there any challenge in the training and how to deal with challenges?
(2) How to prevent overfitting when distilled from 11 pretrained models?
(3) The \\(\lambda_2\\) is far greater than other \\(\lambda\\) values, does it mean that the loss of LAC is more important than other losses?**
**A5:** Thanks for raising the concerns.
(1) The primary challenge lies in the varying importance of the 11 agents for different target datasets. Treating all the knowledge equally would indeed hinder generalization. To address this, we implement a **mixture-of-agent gating mechanism** in the few-shot training. This mechanism enables us to adaptively weight the contributions of the agents for different target datasets. The visualization is provided in Figure 5, and its effectiveness is demonstrated at the bottom of Tables 2-4 (Page 9).
(2) We mitigate the risk of overfitting, since all pre-trained models (including CLIP and 11 agents) are frozen during the training process. Fine-tuning primarily works on a few learnable prompts and the gating mechanism, which involves minimal adjustments (conducted over 20 epochs).
(3) The main reason is that the absolute value of the LAC loss is smaller than the other losses. To ensure that the LAC loss contributes effectively to the training process, we choose a greater \\(\lambda_2\\). This adjustment ensures that the weighted values of all the losses are comparable and effective when training our TransAgent. We ablate the value of \\(\lambda_2\\) below.
|\\(\lambda_2\\)|Base|Novel|HM|
|:-:|:-:|:-:|:-:|
|1.0|84.89|74.36|79.28|
|10.0|84.75|75.47|79.84|
|20.0|84.97|76.50|80.51|
|25.0|85.29|77.62|81.27|
|30.0|85.15|77.31|81.04|
**References.**
[1] Learning to prompt for vision-language models. (IJCV-22)
[2] Prompt, generate, then cache: Cascade of foundation models makes strong few-shot learners. (CVPR-23)
[3] Maple: Multi-modal prompt learning. (CVPR-23)
[4] Self-regulating prompts: Foundational model adaptation without forgetting. (ICCV-23)
[5] Mitigating Accuracy-Robustness Trade-Off Via Balanced Multi-Teacher Adversarial Distillation. (TPAMI-24)
[6] Let All be Whitened: Multi-teacher Distillation for Efficient Visual Retrieval. (AAAI-24)
[7] AM-RADIO: Agglomerative Vision Foundation Model Reduce All Domains Into One. (CVPR-24) | null | null | null | null | null | null |
Evaluation of Text-to-Video Generation Models: A Dynamics Perspective | Accept (poster) | Summary: This paper proposes a new evaluation metric for the text-to-video model, and this metric in particular focus on the dynamics on the generated video. Their metric is based on three sub-scores: inter-frame dynamics score, inter-segment dynamics score, and video-level dynamics. They did
Strengths: It explores a more fine-grained protocol to evaluate the dynamics of the T2V generation.
They have done extensive evaluation on the existing T2V models and obtained some insights.
They propose the "improved metric", which gives higher weight to the large-dynamics range compared to the "existing metric". It would be useful for detecting whether the generated videos of a model cover all dynamic ranges.
Weaknesses: I feel this paper over-emphasizes that existing evaluation works ignore dynamics; this is not quite what I saw. I think most papers already paid attention to dynamics evaluation; they are just not as detailed and fine-grained as in this paper. For example, EvalCrafter has a Motion Quality section for considering dynamics.
I think there are already some existing metrics to quantify the dynamics in a video, and they're simpler, such as Motion Quality in EvalCrafter. I think there should be a direct comparison between the proposed metric and the existing dynamics metrics. Maybe the authors can consider using the average of the three motion quality metrics in EvalCrafter (after normalization) and show which one aligns with human evaluation better?
Another problem concerns the Inter-segment Dynamics Score: I think the way segments are taken is too brute-force and sketchy, for example, just taking every 8 frames. This may be totally misaligned with the actual semantic segments, in which case I feel the Inter-segment Dynamics Score does not make much sense.
For the naturalness metric's definition, why can we trust Gemini-1.5pro? Authors may want to justify this.
Technical Quality: 2
Clarity: 2
Questions for Authors: - Why is this protocol called DEVIL?
- What are the cross and circles in the Dynamics Evaluation plot in Figure 2? I'm confused what this plot wants to convey.
- Figure 3 may need to be improved. (b) is only the word cloud for DEVIL instead of for all three benchmarks.
- The reference in 136 line seems to be incorrect, it is not a paper for optical flow.
- What's SIM in equation (2)?
- What does I function mean in equation (14)?
Typo:
Line 223: overall h??
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: .
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: #### **Q1: Compare to existing metrics, such as Motion Quality in EvalCrafter. Consider using the average of the three motion quality metrics in EvalCrafter (after normalization) and show which one aligns with human evaluation better?**
The differences between our method and the existing metrics, e.g., Motion Quality in EvalCrafter, are as follows:
1. Dynamics Assessment
- Vbench and EvalCrafter rely solely on optical flow, limiting analysis to inter-frame pixel motion dynamics. They fail to capture non-pixel-motion dynamics, such as variations in illumination, style, and color.
- Our method employs seven dynamics indicators across three temporal scales (inter-frame, inter-segment, and full-video), comprehensively assessing video dynamics. This approach significantly improves correlation with human evaluation: when evaluating inter-frame dynamics it achieves an 84% win ratio, surpassing the 73% of optical flow.
2. Evaluation of Models' Capabilities on Dynamics
- Vbench and EvalCrafter created Dynamic Degree and Flow Score metrics based on optical flow, both favoring videos with greater dynamics. However, they overlook the need for videos to match varying dynamics in text prompts, such as low dynamics for low-dynamics descriptions and high dynamics for high-dynamics descriptions.
- Although EvalCrafter's Motion AC-Score assesses a model's ability to generate videos with high or low dynamics, it is a binary metric, which is a coarse measurement and cannot reflect true dynamics controllability.
- Instead, our Dynamics Range, $M_{range}$, measures a model's ability to generate videos with various dynamics, and Dynamics Alignment, $M_{align}$ measures how well models can align video dynamics to text prompts across five grades, enabling a more reasonable and precise assessment of a model's capabilities on dynamics.
3. Quality Metrics
- Vbench and EvalCrafter assess video dynamics and quality separately, neglecting their correlation. As a result, models tend to generate low-dynamics videos to ensure high quality, limiting the ability to reflect model quality across different dynamics.
- EvalCrafter's Action Recognition is limited to evaluating human actions using recognition scores and doesn't directly evaluate video quality across different dynamics.
- By incorporating dynamics into existing quality metrics, we quantitatively assess a model's ability to generate high-quality videos across dynamic ranges, achieving a more comprehensive evaluation.
Following your suggestion, we use the average of the motion quality metrics in EvalCrafter (after normalization) and calculate the correlation of this averaged metric with human evaluation. It obtains an 83% Pearson correlation and a 74% win ratio, which are significantly lower than those of our proposed dynamics metric (95% Pearson correlation and an 84% win ratio).
#### **Q2: Inter-segment dynamics Score**
We would like to clarify that the length of each segment is not 8 frames. Instead, segments are divided proportionally according to the video's duration when evaluating global aperiodicity. Experimental results indicate that the global aperiodicity in inter-segment dynamics is robust with respect to the proportion, maintaining a high correlation with human evaluations (>90%) when assessed at proportions of 1/8, 1/4, and 1/2 of the video's length.
| Proportion | PC | KC |
|----------|----|----|
| 1/8 | 0.92 | 0.90 |
| 1/4 | 0.94 | 0.91 |
| 1/2 | 0.93 | 0.90 |
Based on the observation that different patches within the video take different lengths of time to change, we assess the correlation of changes over different time intervals (from 1 to the full video length) based on the temporal autocorrelation factor in this paper. This, combined with the global aperiodicity score, constructs the Inter-segment Dynamics feature, which achieves a 95% correlation with human judgments.
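As a rough illustration of a temporal autocorrelation factor (a standard lag-k autocorrelation, not necessarily the paper's exact formulation):

```python
def autocorr(signal, lag):
    """Lag-k autocorrelation of a 1-D signal (generic textbook form)."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((v - mean) ** 2 for v in signal)
    # Covariance between the signal and its lagged copy
    cov = sum((signal[i] - mean) * (signal[i + lag] - mean)
              for i in range(n - lag))
    return cov / var

# A strictly periodic patch signal correlates highly at its period (lag 2 here)
periodic = [0, 1, 0, 1, 0, 1, 0, 1]
```

High autocorrelation at some lag indicates periodic change, while uniformly low values across lags suggest aperiodic dynamics, which is the intuition behind the aperiodicity scores discussed above.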
#### **Q3: Why trust Gemini 1.5 Pro for Naturalness Evaluation**
1. Gemini 1.5 Pro has been validated to have the best video comprehension capabilities among Multimodal Large Language Models (MLLM) currently available [a].
2. Our validation indicates that Gemini 1.5 Pro achieves a 78% correlation with human ratings in terms of naturalness as shown in Table 3 in the paper.
[a] Fu C, Dai Y, Luo Y, et al. Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis[J]. arXiv preprint arXiv:2405.21075, 2024.
#### **Q4: Why is this protocol called DEVIL?**
"DEVIL" stands for “**D**ynamics-based **E**valuation of **VI**deo generation mode**L**s”. The name also hints at the tough challenges that dynamics bring to video generation models.
#### **Q5: What are Cross and circles in Figure 2**
We apologize for any confusion. The rightmost plot in Figure 2 illustrates the calculation process for Dynamics Alignment and Dynamics Range. A circle represents videos where the dynamics are correctly aligned with the prompt, while a cross indicates videos where the dynamics are misaligned.
#### **Q6: Figure 3 may need to be improved. (b) is only the word cloud for DEVIL instead of for all three benchmarks.**
Thanks for pointing this out. It is true that Figure 3 (b) is the DEVIL's word cloud instead of all three benchmarks. We will rectify the caption.
#### **Q7: Wrong reference in line 136**
Thank you for pointing out the error. We have updated the reference to:
Teed, Zachary and Jia Deng. "RAFT: Recurrent All-Pairs Field Transforms for Optical Flow." European Conference on Computer Vision (2020).
#### **Q8: What's SIM in equation (2)?**
SIM in equation (2) represents the cosine similarity between two feature vectors.
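For concreteness, a minimal sketch of cosine similarity between two feature vectors (assuming plain Python lists; not the paper's released code):

```python
import math

def sim(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```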
#### **Q9: What does I function mean in equation (14)?**
The I in Equation (14) refers to the indicator function.
#### **Q10: Line 223: overall h???**
Thank you for pointing out the typo. The correct sentence should read:
"The overall naturalness is then determined by averaging the scores of all videos."
---
Rebuttal Comment 1.1:
Comment: I'll raise my score but I'm still a bit uncertain on the segment way of Inter-segment Dynamics.
---
Reply to Comment 1.1.1:
Comment: Thank you for recognizing our work and rebuttal.
Inter-segment Dynamics measures the dynamics of temporal patterns by assessing the correlation/similarity between video segments.
Proportional video segmentation allows for the simultaneous comparisons of videos with varying lengths while taking into account the frequency of pattern changes.
In our rebuttal, we have demonstrated that Inter-segment Dynamics is robust to segment ratios.
Additionally, we compared it with keyframe-based segmentation methods, as shown below:
| Segment method| Pearson Correlation | Kendall Correlation | Win Ratio |
|---------------|:-------:|:-------:|:-------:|
| Keyframe-based| 0.963 | 0.940 | 0.848 |
| Ratio | 0.954 | 0.934 | 0.865 |
Our method is comparable to keyframe-based segmentation.
In the future, we will continue to explore better methods for calculating inter-segment dynamics.
We appreciate your consideration and hope for a favorable review.
Thank you so much!
---
Reply to Comment 1.1.2:
Comment: Before the end of the rebuttal, we welcome further discussion and hope for further raising of the rating. We would like to appreciate your further encouragement. Thank you very much.
---
Rebuttal 2:
Title: Final thoughts
Comment: Hi Reviewer Kt6U,
The discussion period is ending soon. Please take a moment to review the author's rebuttal and share any remaining concerns or questions.
Thank you,
AC | Summary: Effective evaluation protocols are essential for developing advanced text-to-video (T2V) generation models. Current protocols primarily address temporal consistency and content continuity but often neglect the dynamics of video content, which are crucial for visual vividness and fidelity to text prompts. This study introduces DEVIL, a protocol that emphasizes dynamics by defining metrics across multiple temporal granularities and creating a benchmark of text prompts graded by dynamics levels. DEVIL evaluates T2V models using metrics of dynamics ranges and T2V alignment, enhancing existing metrics from a dynamics perspective. Experimental results show that DEVIL achieves up to 90% consistency with human evaluations, demonstrating its potential as a powerful tool for advancing T2V generation models.
Strengths: 1. **Introduction of New Dynamics Metrics**: The study presents a novel set of dynamics metrics that enhance the evaluation of text-to-video (T2V) generation models. These metrics focus on the dynamics dimension, which assesses the visual vividness and fidelity of video content to text prompts across multiple temporal granularities.
2. **Comparative Analysis with Existing Methods**: The research includes a thorough comparative analysis of existing evaluation methods, highlighting their limitations in addressing video dynamics. By juxtaposing these methods with the newly proposed dynamics metrics, the study demonstrates how the latter offers a more detailed assessment of T2V models.
3. **Code Submission for Reproducibility**: The authors have submitted the code used in their study. This openness is crucial for validating the proposed methods and fostering innovation in the community.
Weaknesses: 1. **Difficult to Assess Dynamics from Images Alone**: Evaluating the impact of dynamics solely through images is challenging. To better understand the effectiveness of the proposed dynamics metrics, it would be beneficial to provide several real videos along with their corresponding scores. This would allow for a more accurate assessment of how well the metrics reflect the dynamics and visual vividness of real video content. Without these examples, it is hard to gauge the practical utility and accuracy of the dynamics scores.
2. **Comparison of Existing T2V Methods**: Similarly, assessing the dynamics of generated videos from existing T2V methods using only images is insufficient. To comprehensively evaluate the effectiveness of the proposed dynamics metrics, the authors are recommended to include several examples of fake videos generated by existing methods along with their corresponding dynamics scores. This comparison would help in understanding how the new metrics perform relative to current standards and whether they offer a significant improvement in evaluating video quality.
3. **Computational Efficiency**: The computational efficiency of the proposed method is not well addressed. Understanding the resource requirements and processing time for the new dynamics metrics is crucial for practical applications. If the method is computationally intensive and very slow, it might limit its applicability in real-world scenarios where quick and efficient evaluations are necessary.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This paper discusses the limitations in Section 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: #### **Q1: Demonstrations of Dynamics Score of Real Videos (weakness1) and Fake Videos from Different T2V Methods (weakness2).**
Thank you for your valuable suggestion. We have collected some real videos and fake videos from different T2V models with dynamic scores for demonstration. However, due to NeurIPS guidelines, we **CAN'T** include links in the rebuttal. Therefore, we present them as images in the PDF instead. Please refer to the attached files. We will also create a project page containing the demo videos in the future for a more intuitive understanding, as you suggested.
As demonstrated in the paper, our dynamics scores correlate with human scores at a level exceeding 90%.
The improved metrics can better reflect the quality of generation across different levels of dynamics, as measured by root mean square bias.
As shown in the table below, the improved metrics achieve a **lower root mean square bias** in assessing video quality across various dynamic levels for Motion Smoothness, Subject Consistency, Background Consistency, and Naturalness.
| Metric | Motion Smoothness | Subject Consistency | Background Consistency | Naturalness |
|--------|-------------------|---------------------|------------------------|-------------|
| Improved | 0.44 | 0.41 | 0.43 | 0.29 |
| Original | 0.56 | 0.55 | 0.56 | 0.39 |
#### **Q2: Computational Efficiency**
The evaluation of the proposed dynamics metrics incurs a low computational cost, processing approximately 10 FPS on a single NVIDIA A100 GPU with support for multiple GPUs. Evaluating models like VideoCrafter 2 takes about 20 minutes, reducible to minutes with parallel GPU processing, a common practice when developing video generation systems.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. I appreciate that the authors will also create a project page containing the demo videos in the future for a more intuitive understanding; this will help everyone better grasp the effectiveness of the new evaluation metrics. I would suggest that the authors include the information regarding efficiency in the camera-ready version, as it will aid in understanding the practical utility of the evaluation. I have no further questions. Best of luck to the authors.
---
Rebuttal 2:
Title: Final thoughts
Comment: Hi Reviewer VYfN
The discussion period is ending soon. Please take a moment to review the author's rebuttal and share any remaining concerns or questions.
Thank you,
AC
---
Rebuttal 3:
Comment: Thank you for your positive feedback and score. We will ensure to include detailed efficiency information in camera-ready version. Your support is greatly important to us, and we look forward to carrying your good wishes forward. Thanks again. | Summary: This paper proposes DEVIL, an evaluation suite of metrics for evaluating text-to-video generation, focusing on dynamics (the authors note that many previous works on video generation focus on other aspects but ignore dynamics). Proposed metrics are computed using a number of automatically extracted values based on e.g. optical flow or autocorrelations and then a linear combination of these values are learned such that they align with human perceptions of dynamics.
The authors also rate prompts (automatically using gpt4) to assign a dynamics “grade” which allows them to evaluate alignment with the level of dynamics asked for by a text prompt. They also report a “dynamic range” of a given model across the set of prompts. Finally the authors use their proposed metrics to evaluate a number of recent video generation models.
Strengths: This paper attacks a useful problem in video generation that researchers talk about / are aware of, but don’t have good metrics to measure. Overall the proposed metrics seem quite thorough and as shown in the results, the proposed measures do seem to be quite correlated with human judgements. I think this would be a good contribution to the community particularly if made publicly available.
Weaknesses: Conceptually the novelty of the paper is mostly incremental but since good measures of dynamism in video generation do not currently exist, the work should be of good value to the community.
One potential weakness is that I am not sure if I agree with the decision to segment videos based on length relative to the total number of frames, since this ignores the fact that different models can generate widely different video lengths or quite different framerates (e.g. 2 seconds at 8 fps for VideoPoet videos and some examples from Sora that go to 1 min at 24 fps).
There is also sloppy writing in some parts of the paper. Examples:
* Some tables are never referenced in the body of the text
* Authors mention that humans "refine" the dynamics grades but no details are given other than to say that they did this for three months (which is mostly meaningless).
* The word cloud in Figure 3 is not very interpretable — is this for just the high dynamics prompts or everything? What is the take-away?
* Eqn 3 (which mentions 5%) does not agree with the paragraph above
* The function SIM is never defined
* Orphaned footnote in line 248
* Plots in Fig 5 are attractive but very hard to actually compare across the different methods
Technical Quality: 4
Clarity: 3
Questions for Authors: * I believe that the proposed measures are not necessarily able to separate camera motions from object motions. Similarly, it is not able to distinguish between fast moving “video textures” (e.g. roaring flame) which tend to be easy to generate using these models vs more specific motions (e.g. ninja doing a backflip). I would ask the authors to comment on whether these are real limitations or not.
* Suggestion: Please add examples of prompts with high dynamics and low dynamics in the main body of the paper.
* Suggestion: Report linear regression weights (given that there are just a few weights and would allow the reader to evaluate the relative importance of each factor)
* Question: Will the code/prompts be made public?
* Are the “improved metrics” better correlated with human judgements too? I am not sure if I saw this somewhere in the paper
* Suggestion (but lower priority): I believe Runway has a control that lets you control the dynamics (by changing CFG weight) — it would be interesting to see the effect of this control on measures proposed in this paper.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: #### **Q1: Concern about segmenting videos based on length relative to total number of frames.**
1. **Why relative length.** Segmenting videos by relative length enables standardized comparison regardless of video length. The following table shows consistently high correlation values with different ratios, demonstrating our method's robustness.
| ratio | PC | KC |
|-------|------|------|
| 1/8 | 0.92 | 0.90 |
| 1/4 | 0.94 | 0.91 |
| 1/2 | 0.93 | 0.90 |
2. **Influence of frame rate**: We standardized videos to 8 FPS, following Vbench protocol, to eliminate frame rate variability effects.
3. **Influence of video length**: We group videos based on video length (max is 8s in the tested models) and study the relation between dynamics scores and human scores. DEVIL robustly achieves over 90% correlation regardless of video length.
| Video length | PC | KC |
|----------------|------|------|
| 2s | 0.96 | 0.94 |
| 4s | 0.93 | 0.91 |
| 8s | 0.94 | 0.90 |
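The idea of relative-length segmentation can be sketched as follows (an illustrative sketch; the function and names are ours, not the paper's implementation): each video is split into consecutive segments whose length is a fixed fraction of its total frame count, so short and long videos yield the same number of segments.

```python
def segments(num_frames, ratio):
    """Consecutive (start, end) segments covering the video, each spanning
    a fixed fraction `ratio` of the total frame count."""
    seg_len = max(1, int(num_frames * ratio))
    return [(start, min(start + seg_len, num_frames))
            for start in range(0, num_frames, seg_len)]

print(segments(16, 1 / 4))  # [(0, 4), (4, 8), (8, 12), (12, 16)]
print(segments(64, 1 / 4))  # still 4 segments for a longer video
```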
#### **Q2: Necessity of Distinguishing Motion Types**
Thanks for highlighting this. Our paper focuses on creating a comprehensive dynamic scoring system for various motions, rather than differentiating specific motion types. In video generation, factors such as object categories, contexts, and actions affect efficacy. Therefore, we designed a benchmark with 19 object categories, 4 scenes, and 10 dynamic categories, covering over 40 subtypes. These subtypes include camera movements, actions, complex effects, and environmental dynamics, ensuring our evaluation system accurately measures video generation performance across diverse scenarios.
#### **Q3: Examples of prompts**
The following are examples of prompts with different dynamics grades; we will add them to the main paper.
**Static:**
a man is laying on the ground.
**Low dynamics:**
A male fencer adjusts his epee mask and prepares to duel with his sparring partner in slow motion.
**Medium dynamics:**
Tilt up of shirtless sportsman doing pull-ups on bars during cross-training workout at gym.
**High dynamics:**
A runner explodes out of the starting blocks, racing down the track.
**Very High dynamics:**
A medieval siege with catapults launching, walls breaking, soldiers charging, and arrows raining down.
#### **Q4: Report linear regression weight**
The linear regression weights of the dynamics scores are as follows; we will add them to the main paper.
| Temporal Scale | Dynamics Score | Typical Value | Weight |
|----------------|----------------|---------------|--------|
| Inter-frame | $D_{ofs}$ | 62.00 | 6.70E-04 |
| | $D_{sd}$ | 1.00 | 0.17 |
| | $D_{pd}$ | 33.00 | 0.03 |
| Inter-segment| $D_{pa}$ | 0.80 | 0.63 |
| | $D_{ga}$ | 0.20 | 2.20 |
| Video | $D_{te}$ | 7.00E+04 | 1.00E-05 |
| | $D_{tsd}$ | 0.20 | 1.46 |
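Combining the scores with these weights amounts to a simple weighted sum (a hedged sketch reproducing the table above; any intercept term of the fitted regression is omitted, and the "typical" values are only illustrative inputs):

```python
# Weights and typical values taken from the table above; a fitted linear
# regression combines the per-scale dynamics scores as a weighted sum.
weights = {"D_ofs": 6.70e-04, "D_sd": 0.17, "D_pd": 0.03,
           "D_pa": 0.63, "D_ga": 2.20, "D_te": 1.00e-05, "D_tsd": 1.46}
typical = {"D_ofs": 62.00, "D_sd": 1.00, "D_pd": 33.00,
           "D_pa": 0.80, "D_ga": 0.20, "D_te": 7.00e+04, "D_tsd": 0.20}

def combined_dynamics(scores, weights):
    """Weighted sum of the individual dynamics scores."""
    return sum(weights[k] * scores[k] for k in weights)

print(round(combined_dynamics(typical, weights), 3))  # 3.138
```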
#### **Q5: Code release**
Yes, we will release the code, prompts, and weights.
#### **Q6: Effectiveness of improved metrics**
Improved Metrics enhance model performance evaluation through grouped assessment based on video dynamics. Their effectiveness is demonstrated from two perspectives:
1. **Representativeness**.
Improved Metrics better represent a model's ability to generate quality videos with varying dynamics, as shown by lower root mean squared bias compared to Original Metrics. The table below demonstrates this improvement.
| Metric | Motion Smoothness | Subject Consistency | Background Consistency | Naturalness |
|--------|-------------------|---------------------|------------------------|-------------|
| Improved | 0.44 | 0.41 | 0.43 | 0.29 |
| Original | 0.56 | 0.55 | 0.56 | 0.39 |
2. **Human Correlation**.
Improved metrics evaluate the model as a whole, not individual videos. To evaluate consistency with human scores, annotators grouped videos by dynamics and assessed their quality (focusing on naturalness due to time limitations). We averaged scores for each group to derive a comprehensive human score per model. The table below compares the human correlation of original and improved metrics.
| Metric | Pearson's Correlation | Kendall's Correlation |
|--------|----------------------|----------------------|
| Improved | 0.70 | 0.60 |
| Original | -0.60 | -0.33 |
Our improved metric shows a 0.70 Pearson correlation with human scores, confirming its effectiveness. The original metric, dominated by low-dynamic videos, overlooks dynamics and shows a negative correlation with human scores, despite accurate individual video ratings.
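For reference, the two correlation measures reported above can be computed as follows (a pure-Python sketch with made-up scores; the real inputs would be per-model metric scores and human scores):

```python
import math

def pearson(x, y):
    """Pearson's linear correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def kendall(x, y):
    """Kendall's tau-a: (concordant - discordant pairs) / total pairs."""
    sign = lambda v: (v > 0) - (v < 0)
    n = len(x)
    s = sum(sign(x[i] - x[j]) * sign(y[i] - y[j])
            for i in range(n) for j in range(i + 1, n))
    return s / (n * (n - 1) / 2)

metric_scores = [0.2, 0.5, 0.7, 0.9]   # hypothetical per-model metric scores
human_scores = [0.1, 0.4, 0.8, 0.85]   # hypothetical per-model human scores
print(round(pearson(metric_scores, human_scores), 2))  # 0.97
print(kendall(metric_scores, human_scores))            # 1.0
```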
#### **Q7: Effectiveness of Runway CFG**
During Runway GEN-2's evaluation, we accounted for the CFG functionality by assigning different CFG weights based on the prompts' dynamics grade. With this strategy, the model achieves a high degree of dynamics control capability, as evidenced by its high Dynamics Alignment metric.
#### **Q8: Some tables are never referred to**
We will reference the previously uncited Table 1 and Table 2 in Section 3 and Section 4 respectively.
#### **Q9: Details of Human Refinement**
We have designed detailed criteria and examples for each dynamics grade, used by both human annotators and GPT-4. See Fig. 7 in the Appendix for more information. This will be clarified in the paper.
#### **Q10: Figure 3 is not very interpretable**
1. Fig. 3 is a word cloud of all prompts in the DEVIL prompt benchmark. It shows that users often use similar terms like "rapid" and "extremely fast" for high-dynamics scenarios, making these words more prominent.
2. The number in the preceding paragraph of Eqn 3 should be corrected to 5% to match Equation 3.
#### **Q11: The function SIM is never defined**
SIM represents the cosine similarity between two feature vectors.
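For concreteness, a minimal pure-Python sketch of that similarity (variable names are ours, purely illustrative):

```python
import math

def sim(a, b):
    """Cosine similarity between two feature vectors a and b."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

print(sim([1.0, 0.0], [1.0, 0.0]))  # identical direction -> 1.0
print(sim([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0
```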
#### **Q12: Orphaned footnote in line 248**
Thank you for pointing this out. We will make the necessary corrections.
#### **Q13: Plots in Fig 5 are attractive but very hard to actually compare across the different methods**
We'll improve Figure 5 to make it more clear. | Summary: The paper presents a comprehensive study on the evaluation of Text-to-Video (T2V) generation models, with a particular focus on the dynamics of video content. The authors introduce a novel evaluation protocol named DEVIL, which aims to address the often overlooked aspect of dynamics in existing evaluation metrics. This protocol defines a set of dynamics scores that correspond to multiple temporal granularities and introduces a new benchmark of text prompts under various dynamics grades.
In addition, this paper verifies a significant issue with some existing evaluation metrics: they allow models to attain high scores by generating low-dynamic videos. The authors demonstrate that the metrics provided by DEVIL align closely with human judgment in rating dynamics, indicating its potential to significantly advance T2V generation models. The paper also highlights the current limitations in T2V models and datasets, and provides valuable suggestions for future research in this field.
Strengths: 1. The paper introduces an innovative T2V model metric, DEVIL, which evaluates the content dynamics of generated videos—an aspect often overlooked by current evaluation methods but essential for realistic video generation. DEVIL assesses dynamics at multiple granularities, achieving high correlation with human evaluations. By revealing the negative correlation between existing metrics and dynamic scores, the paper challenges the current evaluation standards.
2. The study employs robust experiments to validate the dynamics metric (DEVIL), achieving up to 90% consistency with human evaluations. Additionally, the paper identifies issues with existing T2V evaluation metrics, revealing their negative correlation with dynamics metrics and the tendency to favor low-dynamic content, which misrepresents model performance.
3. The theoretical framework is detailed and thorough, covering the formulation of dynamics scores across multiple temporal granularities. The paper also highlights a reliable human rating collection process, amassing 50,000 prompt dynamic level classifications for training the linear regression model used in the metrics and evaluations of 4,800 generated videos from six T2V models for proving the consistency between the metric and human ratings.
4. The paper is overall logically structured and easy to follow.
Weaknesses: 1. How can one differentiate dynamic videos from videos with low-quality motion, for example, flickering or temporal inconsistency? We desire videos with not only large dynamics but also high-quality dynamics, but some models that generate low-quality motion such as flickering frames and temporally inconsistent videos may also achieve high dynamic scores under the proposed evaluation scheme.
2. Some previous works also propose metrics to evaluate dynamic levels of generated videos. For example, VBench [23] proposes dynamic degree evaluated with RAFT, and EvalCrafter [31] proposes motion quality evaluated with RAFT. The proposed metric should be compared with existing metrics from those works.
3. The frame rate of videos is an important factor that may affect the metrics. For example, a video with low fps may get a higher score than a video with high fps if the evaluation is conducted on all frames without aligning their fps. Did the authors take this into consideration when designing the metrics? Are the models evaluated at their original fps, or with frames sampled at the same fps?
4. In Section 3.3, how many videos are used to fit the human alignment dynamic scores? I am concerned that the metric is overfitting the selected videos for human alignment, but may not generalize well to other videos. Are the human evaluation results in Table 3 computed from the same videos as the videos used to fit the dynamic scores in Section 3.3? There is a risk of cheating if using the same set of videos to fit the human alignment dynamic score (in Section 3.3) and to calculate the human correlation (in Table 3).
5. The effectiveness of the improved metrics proposed in Section 4 is not validated with human correlation.
6. The relationship between the naturalness score defined in Section 4 and the rest of the paper is not clear.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please refer to weakness section.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors addressed the limitations and societal impact in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: #### **Q1: How to differentiate dynamic video from videos with low-quality motions?**
We aim to motivate models to produce videos with a wide range of dynamics while maintaining high quality. We recognize that some videos may exhibit high dynamics but poor quality.
To address this, we have enhanced the quality metrics by integrating dynamics scores, enabling a more comprehensive evaluation of models' generative capabilities rather than assessing quality and dynamics independently.
When filtering individual videos based on both quality and dynamics, our dynamics scores are complementary to existing video quality scores.
#### **Q2: Compare with dynamics degree in Vbench and motion quality in EvalCrafter**
Thank you for this insightful comment. DEVIL is superior to existing T2V evaluation protocols in three key aspects.
1. Dynamics Assessment
- Vbench and EvalCrafter are built solely on optical flow, limiting analysis to inter-frame pixel-motion dynamics. They fail to capture non-pixel-motion dynamics, such as variations in illumination, style, and color.
- DEVIL employs seven dynamics scores across three temporal scales (inter-frame, inter-segment, and video level) to assess video dynamics comprehensively. It significantly improves correlation with human evaluation, achieving an 84% win ratio for inter-frame dynamics, substantially outperforming optical flow's 73%.
2. Evaluation of Models' Capabilities on Dynamics
- Vbench and EvalCrafter created Dynamic Degree and Flow Score metrics based on optical flow, both favoring videos with greater dynamics. However, they neglect the importance of matching video dynamics to prompt requirements, which can range from low to high.
- EvalCrafter's Motion AC-Score assesses a model's ability to generate videos with low/high dynamics according to prompts. However, as a binary metric, it only roughly measures dynamic controllability and doesn't capture the full range of a model's dynamic capabilities.
- We propose the Dynamics Range, $M_{range}$, to encourage generating videos with diverse dynamics, and Dynamics Alignment, $M_{align}$, to evaluate how well models can control dynamics based on text prompts. Our approach enables a more reasonable and precise assessment of a model's dynamic capabilities.
3. Quality Metrics
- Vbench and EvalCrafter assess video dynamics and quality independently, neglecting their correlation. As models tend to generate low-dynamics videos, quality metrics are skewed towards these, failing to accurately represent model performance across the full range of dynamics.
- EvalCrafter's Action Recognition assesses human actions using recognition scores, but it's limited to human activities and fails to evaluate model quality across various dynamics ranges.
- We combine dynamics with existing quality metrics to quantitatively assess models' ability to generate high-quality videos across dynamic ranges, enabling more comprehensive evaluation.
#### **Q3: Influence of Frame Rate**
Thanks for the constructive comment. In the paper, we DO standardize the frame rate of each video to 8 FPS, following Vbench. Experiments show our dynamics evaluation maintains high correlation (>0.9) with human ratings across various frame rates. The table below shows Pearson correlation between our dynamics scores and human ratings:
| Dynamics | 4FPS | 8FPS | 16FPS | Origin FPS |
|-------------|-------|-------|-------|------------|
| Inter-frame | 0.952 | 0.950 | 0.946 | 0.951 |
| Inter-Segm | 0.952 | 0.954 | 0.954 | 0.953 |
| Video | 0.967 | 0.967 | 0.967 | 0.967 |
#### **Q4: Details of human alignment**
We used a 75/25 split of the annotated data: 75% (3,600 videos) for fitting the human-aligned dynamic scores, with the remaining 25% (1,200 videos) reserved as a test set. Results in Table 3 are computed exclusively from the 25% test set.
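A minimal sketch of such a split (hypothetical video IDs; a fixed seed keeps the fit and test sets disjoint and reproducible):

```python
import random

def split(ids, train_frac=0.75, seed=0):
    """Shuffle a list of video IDs and split it into fit/test subsets."""
    ids = list(ids)
    random.Random(seed).shuffle(ids)
    cut = int(len(ids) * train_frac)
    return ids[:cut], ids[cut:]

fit_set, test_set = split(range(4800))
print(len(fit_set), len(test_set))  # 3600 1200
```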
#### **Q5: Effectiveness of the improved metrics**
Improved Metrics aim to achieve a more comprehensive and accurate evaluation of model performance by employing grouped assessment based on video dynamics. Their effectiveness can be demonstrated from two perspectives:
1. Representativeness
Improved Metrics can better represent models' capability to generate high-quality videos with varying levels of dynamics, as measured by the root mean squared bias. As the table below demonstrates, our Improved Metrics achieve a **lower root mean squared bias** than the Original Metrics.
| Metric | Motion Smoothness | Subject Consistency | Background Consistency | Naturalness |
|--------|-------------------|---------------------|------------------------|-------------|
| Improved | 0.44 | 0.41 | 0.43 | 0.29 |
| Original | 0.56 | 0.55 | 0.56 | 0.39 |
2. Human Correlation
Our metrics are defined for the entire model rather than individual videos. Annotators grouped videos by their dynamics and assessed their quality, focusing on naturalness due to time limitations. We then averaged the scores for each dynamic group to obtain a comprehensive human score for each model. The table below shows the human correlation evaluation of the original metrics and the improved metrics.
| Metric | Pearson's Correlation | Kendall's Correlation |
|--------|----------------------|----------------------|
| Improved | 0.70 | 0.60 |
| Original | -0.60 | -0.33 |
Our improved metric achieved a Pearson correlation of 0.70 with human scores, confirming its effectiveness. The original metric, which averages video quality without considering dynamics, is dominated by low-dynamic videos. This oversight leads to a negative correlation with human scores, despite accurate individual video ratings.
#### **Q6: Naturalness**
Naturalness is inspired by the observation that high dynamics can also accompany unnatural scenarios, e.g., car wheels spinning rapidly while the vehicle remains stationary. We thus propose naturalness to reflect how closely the generated videos resemble camera-captured ones. Note that it achieved a correlation of 79% with human ratings.
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer 7oNg
Comment: Thank the authors for the rebuttal. The authors addressed most of my concerns so I raised my rating to borderline accept.
---
Reply to Comment 1.1.1:
Comment: Thank you so much for your positive rating!
---
Rebuttal 2:
Title: Final thoughts
Comment: Hi Reviewer 7oNg,
The discussion period is ending soon. Please take a moment to review the author's rebuttal and share any remaining concerns or questions.
Thank you,
AC
---
Rebuttal 3:
Comment: Before the end of the rebuttal, we look forward to your valuable feedback and further suggestions. We will take full efforts to address your concerns. Looking forward to your reply. Thank you very much. | Rebuttal 1:
Rebuttal: Thanks to all reviewers and ACs for the valuable comments and suggestions. In the original review, all the reviewers acknowledged the contributions of the proposed evaluation protocol (DEVIL). The strengths are summarized as follows:
1. Reviewer R-7oNg: "The paper introduces an **innovative** T2V model metric", "the theoretical framework is detailed and thorough", "employs **robust experiments**... achieving up to **90% consistency** with human evaluations", "The paper is overall logically structured and easy to follow."
2. Reviewer R-HC61: "This paper attacks a **useful problem** in video generation that **researchers talk about / are aware of**, but don’t have good metrics to measure", "quite **correlated with human judgements**", "**good contribution** to the community".
3. Reviewer R-VYfN: "The study presents a **novel** set of dynamics metrics", "offers a **more detailed assessment** of T2V models".
4. Reviewer R-Kt6U: "It explores a **more fine-grained protocol**", "propose the 'improved metric'... would be **useful** "
The major concerns and suggestions are mostly about novelty (R-7oNg, R-Kt6U), effectiveness of improved metrics (R-7oNg, R-HC61), computation efficiency (R-VYfN), technical details (R-7oNg, R-HC61), and paper writing (R-HC61, R-Kt6U).
During the rebuttal, we carefully considered the reviewers' feedback and provided results of more experiments. We believe our responses can address reviewers' concerns and enhance our work.
Pdf: /pdf/897d7d384427a62a4012a14dd83a6a56539acd51.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Adaptive Variance Reduction for Stochastic Optimization under Weaker Assumptions | Accept (poster) | Summary: This paper introduces novel adaptive variance reduction methods for stochastic optimization, building on the STORM technique. The proposed Ada-STORM method closes the $O(\log T)$ gap and achieves an optimal convergence rate of $O(T^{-1/3})$ for non-convex functions under assumptions weaker than previous approaches. Furthermore, the paper extends the method to stochastic compositional optimization, maintaining a similar $O(T^{-1/3})$ convergence rate. For non-convex finite-sum problems, the authors develop another innovative adaptive algorithm that attains the optimal $O(n^{1/4}T^{-1/2})$ rate. Numerical experiments effectively demonstrate the effectiveness of the proposed method.
Strengths: 1. The proposed Ada-STORM method overcomes major limitations in existing adaptive STORM methods. It does not require bounded gradient and function value assumptions, and achieves the optimal convergence rates without the additional $O(\log T)$ term. The problem investigated in the paper is challenging and the results are substantial.
2. The theoretical analysis is novel and easy to follow. Extending the method to stochastic compositional and finite-sum optimization problems illustrates its flexibility and potential impact across a wide range of optimization tasks.
3. The numerical experiments on various tasks (e.g., image classification, language modeling) validate the theoretical results and highlight the method's superior performance compared to other algorithms.
Weaknesses: 1. In Theorem 4, while the authors clearly state the convergence rate of the proposed method in terms of $T$, $\Delta_F$, and $L$, they only present the convergence of previous adaptive finite-sum methods in terms of $T$. A comparison of other important constants such as $\Delta_F$ and $L$ with existing methods could enhance the discussion.
2. The content in Lines 240 to 242, especially Algorithm 4, should be included in the main body of the paper rather than being delayed to the Appendix.
3. Some typos in the paper:
- Line 145: $(t>2)$ should be $(t \geq 2)$.
- Line 516, the first equality in Proof 7: $L^2$ should be $L$.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Regarding Line 401, could the authors explain why the first inequality holds?
2. In Theorem 4, is the dependency on $\Delta_F$ and $L$ better or worse than that of previous methods?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: This paper does not present any negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your constructive comments!
---
**Q1:** In Theorem 4, while the authors clearly state the convergence rate of the proposed method in terms of $T$, $\Delta_F$, and $L$, they only present the convergence of previous adaptive finite-sum methods in terms of $T$. A comparison of other important constants such as $\Delta_F$ and $L$ with existing methods would enhance the discussion.
**A1:** According to Theorem 1 of the original paper [Kavis et al., 2022], the convergence rate of previous adaptive finite-sum methods is $\mathcal{O}\left(n^{1 / 4} T^{-1/2} \left(L^2 + \Delta_F \right) \cdot \log \left(1+n T L\right)\right)$. In contrast, our convergence rate obtained in Theorem 4 is $\mathcal{O}\left(n^{1 / 4} T^{-1/2} \left(L^{\frac{1}{2\alpha}} + \Delta_F^{\frac{1}{2(1-\alpha)}} \right) \right)$, where $\alpha$ can be any constant within $(0,1/3)$. For example, if we set $\alpha = 1/4$, the rate reduces to $\mathcal{O}\left(n^{1 / 4} T^{-1/2} \left(L^2 + \Delta_F^{{2}/{3}} \right) \right)$, which is better than the previous convergence rate.
---
**Q2:** The content in Lines 240 to 242, especially Algorithm 4, should be included in the main body of the paper rather than being delayed to the Appendix.
**A2:** Thank you for your suggestion! We will move Algorithm 4 to the main body of the paper and add more discussion about this algorithm.
---
**Q3:** There are some typos in the paper.
**A3:** Thank you for catching the typos. We will correct them in the revised version.
---
**Q4:** Regarding Line 401, could the authors explain why the first inequality holds?
**A4:** To address your question, we provide a more detailed analysis here:
\begin{align}
& \mathbb{E}\_{\xi_{t+1}} \left[ ||(1-\beta)(\mathbf{v}_t - \nabla f(\mathbf{x}_t)) + \left(\nabla f(\mathbf{x}_t)-\nabla f(\mathbf{x}\_{t+1}) + \nabla f(\mathbf{x}\_{t+1};\xi\_{t+1}) - \nabla f(\mathbf{x}\_{t};\xi\_{t+1}) \right) + \beta\left(\nabla f(\mathbf{x}\_{t};\xi\_{t+1}) - \nabla f(\mathbf{x}\_{t}) \right) ||^2 \right]\\\\
\leq & \mathbb{E}\_{\xi\_{t+1}}\left[(1-\beta)^2||\mathbf{v}\_{t} - \nabla f(\mathbf{x}\_{t})||^2\right] + \mathbb{E}\_{\xi\_{t+1}}\left[|| \left(\nabla f(\mathbf{x}\_{t})-\nabla f(\mathbf{x}\_{t+1}) + \nabla f(\mathbf{x}\_{t+1};\xi\_{t+1}) - \nabla f(\mathbf{x}\_{t};\xi\_{t+1}) \right) + \beta\left(\nabla f(\mathbf{x}\_{t};\xi\_{t+1}) - \nabla f(\mathbf{x}\_{t}) \right) ||^2 \right] \\\\
\leq & \mathbb{E}\_{\xi\_{t+1}}\left[(1-\beta)^2||\mathbf{v}\_{t} - \nabla f(\mathbf{x}\_{t})||^2\right] + \mathbb{E}\_{\xi\_{t+1}}\left[2|| \nabla f(\mathbf{x}\_{t})-\nabla f(\mathbf{x}\_{t+1}) + \nabla f(\mathbf{x}\_{t+1};\xi\_{t+1}) - \nabla f(\mathbf{x}\_{t};\xi\_{t+1}) ||^2 + 2|| \beta\left(\nabla f(\mathbf{x}\_{t};\xi\_{t+1}) - \nabla f(\mathbf{x}\_{t}) \right)||^2\right] \\\\
\leq & (1-\beta)^2 \mathbb{E}\_{\xi\_{t+1}}\left[||\mathbf{v}\_{t} - \nabla f(\mathbf{x}\_{t})||^2\right] +2 \mathbb{E}\_{\xi\_{t+1}}\left[||\nabla f(\mathbf{x}\_{t+1};\xi\_{t+1}) - \nabla f(\mathbf{x}\_{t};\xi\_{t+1}) ||^2\right] + 2\beta^2 \mathbb{E}\_{\xi\_{t+1}}\left[||\nabla f(\mathbf{x}\_{t};\xi\_{t+1}) - \nabla f(\mathbf{x}\_{t})||^2\right]
\end{align}
where the first inequality is due to the fact that
\begin{align}
\mathbb{E}\_{\xi\_{t+1}}\left[ \nabla f(\mathbf{x}\_{t})-\nabla f(\mathbf{x}\_{t+1}) + \nabla f(\mathbf{x}\_{t+1};\xi\_{t+1}) - \nabla f(\mathbf{x}\_{t};\xi\_{t+1}) + \beta\left(\nabla f(\mathbf{x}\_{t};\xi\_{t+1}) - \nabla f(\mathbf{x}\_{t}) \right) \right] = 0,
\end{align}
and the last inequality is due to:
\begin{align}
& \mathbb{E}\_{\xi\_{t+1}}\left[|| \nabla f(\mathbf{x}\_{t})-\nabla f(\mathbf{x}\_{t+1}) + \nabla f(\mathbf{x}\_{t+1};\xi\_{t+1}) - \nabla f(\mathbf{x}\_{t};\xi\_{t+1}) ||^2\right] \\\\
\leq & \mathbb{E}\_{\xi\_{t+1}}\left[|| \nabla f(\mathbf{x}\_{t})-\nabla f(\mathbf{x}\_{t+1}) ||^2\right] + \mathbb{E}\_{\xi\_{t+1}}\left[||\nabla f(\mathbf{x}\_{t+1};\xi\_{t+1}) - \nabla f(\mathbf{x}\_{t};\xi\_{t+1}) ||^2\right] +2 \mathbb{E}\_{\xi\_{t+1}} \left\langle \nabla f(\mathbf{x}\_{t})-\nabla f(\mathbf{x}\_{t+1}), \nabla f(\mathbf{x}\_{t+1};\xi\_{t+1}) - \nabla f(\mathbf{x}\_{t};\xi\_{t+1}) \right\rangle\\\\
\leq & \mathbb{E}\_{\xi\_{t+1}}\left[|| \nabla f(\mathbf{x}\_{t+1};\xi\_{t+1}) - \nabla f(\mathbf{x}\_{t};\xi\_{t+1}) ||^2\right] - \mathbb{E}\_{\xi\_{t+1}}\left[|| \nabla f(\mathbf{x}\_{t})-\nabla f(\mathbf{x}\_{t+1}) ||^2\right]\\\\
\leq & \mathbb{E}\_{\xi\_{t+1}}\left[||\nabla f(\mathbf{x}\_{t+1};\xi\_{t+1}) - \nabla f(\mathbf{x}\_{t};\xi\_{t+1})||^2\right]
\end{align}
---
**Q5:** In Theorem 4, is the dependency on $\Delta_F$ and $L$ better or worse than that of previous methods?
**A5:** As discussed in A1, our dependency on $\Delta_F$ and $L$ is better than previous methods.
---
**Reference:**
Kavis et al. Adaptive stochastic variance reduction for non-convex finite-sum minimization. NeurIPS, 2022.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the reply. It addresses all my questions and concerns. I would keep my score. | Summary: This paper proposes a novel adaptive STORM method that achieves an optimal convergence rate of $O(T^{-1/3})$ for nonconvex stochastic optimization, which requires weaker assumptions and attains the optimal convergence rate without the additional $O(\log T)$ term.
Strengths: For stochastic non-convex optimization, this paper achieves the optimal convergence rate under more relaxed assumptions, which do not require bounded function values or bounded gradients and do not include the additional $O(\log T)$ term in the convergence rate. The proposed technique has also been extended to stochastic compositional optimization with the same optimal rate. For non-convex finite-sum optimization, this paper further improves the adaptive algorithm to attain an optimal convergence rate, which outperforms the previous result by an $O(\log(nT))$ factor.
Weaknesses: Although this paper is theoretically sound in general, there are still some questions need to be discussed:
1. The experimental part is relatively limited. The proposed algorithm is parameter free, so it is best to provide some experimental results to demonstrate whether these compared algorithms are sensitive to parameter changes.
2. In theory, the output $x_{\tau}$ of the proposed algorithms is randomly selected from $\{1, \dots, T\}$. Is the output in the experiment randomly selected?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The experimental part is relatively limited. The proposed algorithm is parameter free, so it is best to provide some experimental results to demonstrate whether these compared algorithms are sensitive to parameter changes.
2. In theory, the output $x_{\tau}$ of the proposed algorithms is randomly selected from $\{1, \dots, T\}$. Is the output in the experiment randomly selected?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The authors discussed the limitations and there is no negative societal impact about this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your constructive comments and suggestions.
---
**Q1:** The experimental part is relatively limited. The proposed algorithm is parameter-free, so it is best to provide some experimental results to demonstrate whether these compared algorithms are sensitive to parameter changes.
**A1:** Following your suggestion, we have included additional experiments to assess whether the original STORM method is sensitive to changes in hyper-parameters. Specifically, we tested the initial learning rate from the set {$0.01, 0.1, 1, 10, 50$}. The results are reported in the **Global Response (Figure 1)**, and we observe that the STORM method is sensitive to changes in hyper-parameters.
---
**Q2:** In theory, the output $x_{\tau}$ of the proposed algorithms is randomly selected from $\{1, \dots, T\}$. Is the output in the experiment randomly selected?
**A2:** In the experiment, we simply use the output of the last iteration, i.e., $\mathbf{x}_{\tau} = \mathbf{x}_T$. It is common practice to select the output randomly from $\{1, \cdots, T\}$ in theoretical analysis but use the output of the last iteration in practical experiments [Johnson et al., 2013, Cutkosky et al., 2019]. This is also the case for the original STORM paper. Specifically, in Line 14 of the STORM algorithm [Cutkosky et al., 2019], they state: "Choose $\hat{\mathbf{x}}$ uniformly at random from $\{\mathbf{x}_1, \cdots, \mathbf{x}_T\}$. (In practice, set $\hat{\mathbf{x}} = \mathbf{x}_T$)."
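The two selection rules can be contrasted in a short sketch (illustrative only; `iterates` stands for the stored iterates $\mathbf{x}_1, \dots, \mathbf{x}_T$):

```python
import random

def select_output(iterates, mode="last", seed=0):
    """Return the algorithm's output iterate.

    mode="random": uniform draw from {x_1, ..., x_T} (theoretical analysis).
    mode="last":   the final iterate x_T (common in experiments).
    """
    if mode == "random":
        return random.Random(seed).choice(iterates)
    return iterates[-1]

iterates = ["x1", "x2", "x3", "x4"]
print(select_output(iterates))  # x4
```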
---
**References:**
Johnson et al. Accelerating stochastic gradient descent using predictive variance reduction. NeurIPS, 2013
Cutkosky et al. Momentum-based variance reduction in non-convex SGD. NeurIPS, 2019.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response and I have no further questions. | Summary: This paper studies non-convex stochastic optimization under the assumption of mean-square smoothness. It introduces Ada-STORM, a variant of the STORM algorithm, which achieves the optimal rate $O(T^{-1/3})$. Unlike vanilla STORM, Ada-STORM eliminates the $O(\log T)$ factor and does not require the Lipschitz assumption. The algorithm's success hinges on a carefully designed adaptive learning rate schedule. Instead of the standard AdaGrad learning rate $\eta_t \approx (\sum_{i=1}^t \\|g_i\\|^2)^{-1/3}$ used in STORM, Ada-STORM employs $\eta_t = \min\\{T^{-1/3}, T^{-(1-\alpha)/3}(\sum_{i=1}^t \\|v_i\\|^2)^{-\alpha}\\}$ where $\alpha < 1/3$ and $v_t$ is the STORM update.
Strengths: The proposed algorithm, Ada-STORM, achieves the optimal rate $O(T^{-1/3})$ without the log factor. Unlike STORM+ which also eliminates the log factor, this algorithm does not assume $f$ to be Lipschitz and bounded above. This improvement represents a significant contribution. Moreover, the required modification is straightforward: it involves merely adjusting the learning rate scheduler $\eta_t$ and the momentum factor $\beta_t$. I particularly enjoy the constant momentum constant $\beta_t\equiv T^{-2/3}$ rather than time-varying and dependent on $\eta_t$.
In addition, the proposed learning rate scheduler is of interest in its own right. The paper extends the application of this learning rate technique to related problems, achieving optimal rates in composite optimization and finite-sum optimization (ERM). Finally, empirical experiments also validate Ada-STORM's performance, which matches or outperforms existing STORM variants and other state-of-the-art optimizers.
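The update rule described above can be sketched on a toy problem as follows (a hedged reconstruction from the formulas quoted in this review, not the authors' code; the quadratic objective, Gaussian noise model, and default constants are illustrative assumptions):

```python
import random

def ada_storm(x0, T, alpha=0.25, sigma=0.01, seed=0):
    """Toy 1-D Ada-STORM run on f(x) = x^2 with additive gradient noise.

    v is the variance-reduced gradient estimate, beta = T^(-2/3) is the
    constant momentum, and eta follows the adaptive schedule
    min{T^(-1/3), T^(-(1-alpha)/3) * (sum_i v_i^2)^(-alpha)}.
    """
    rng = random.Random(seed)
    grad = lambda x, e: 2 * x + e          # stochastic gradient oracle
    beta = T ** (-2 / 3)
    x = x0
    v = grad(x, rng.gauss(0.0, sigma))     # v_1: a plain stochastic gradient
    sum_v2 = 0.0
    for _ in range(T):
        sum_v2 += v * v
        eta = min(T ** (-1 / 3),
                  T ** (-(1 - alpha) / 3) * sum_v2 ** (-alpha))
        x_new = x - eta * v
        e = rng.gauss(0.0, sigma)          # one shared sample per step
        v = grad(x_new, e) + (1 - beta) * (v - grad(x, e))
        x = x_new
    return x

print(abs(ada_storm(x0=1.0, T=2000)) < 0.1)  # True: converges toward 0
```

Note that the shared sample in each step makes the additive noise cancel inside the correction term $v - \nabla f(x_t;\xi_{t+1})$ on this toy problem, which mirrors the mean-square-smoothness mechanism STORM exploits.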
Weaknesses: One major concern pertains to the technical correctness of the proof of Theorem 1, in particular from line 440-444. I don't immediately see how $\mathbb{E}[\sum_{t=1}^{s-1} \\|v_t\\|^2] \le C_0T^{1/3}$ implies $\mathbb{E}[\frac{1}{T}\sum_{t=1}^{s-1}\\|v_t\\|] \le \sqrt{C_0}T^{-1/3}$, and similarly how the equation in line 441 implies line 442. My concern arises from the fact that $s$ is a random variable dependent on the entire history $\mathcal{H}_T$. As a result, applying Jensen's inequality like $\mathbb{E}[\frac{1}{T}\sum\_{t=1}^s\\|v_t\\|] \le \sqrt{\mathbb{E}[\frac{1}{T}\sum\_{t=1}^s\\|v_t\\|^2]}$ seems to be inappropriate when $s$ is a random variable dependent on the $v_t$'s. It's likely that I am missing something, and I encourage the authors to elaborate on this part of the proof to make it clear.
Another concern is about the doubling trick. In each stage $k$, the guarantee is
$$\frac{1}{2^{k-1}} \sum_{t=2^{k-1}}^{2^k} \mathbb{E}\\|\nabla f(x_t)\\| \lesssim \frac{\Delta_f^{3/4}+O(1)}{(2^{k-1})^{1/3}}.$$
Since the algorithm is not resetting $x_t=x_0$ in the new stage, the parameter $\Delta_f$ is not constant. In particular, in stage $k$, $\Delta_f = \mathbb{E}[f(x_{2^{k-1}}) - \inf f(x)]$. Consequently, the doubling trick implicitly assumes $f$ has bounded function value in order to bound $\Delta_f$ in all stages.
If the authors can address my major concerns, I am inclined to revise my score upwards, given the otherwise strong results of the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Is there a specific reason why the authors prefer $\alpha$ as large as possible in the trade-off of $\Delta^{1/2(1-\alpha)} + \sigma^{1/(1-\alpha)} + L^{1/2\alpha}$. Is it due to the assumption (or some empirical observation) that the smoothness constant $L$ is usually the dominating parameter?
- In the experiments, did the authors apply any learning rate scheduler (e.g., linear decay or cosine decay) on top of the adaptive learning rate? Although somewhat tangential, it would be interesting to compare the new adaptive scheduler with other popular schedulers.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive comments, and we will revise our paper accordingly! We have addressed the major concerns about the proof as outlined below. We sincerely hope that the reviewer can examine them and reevaluate our results.
---
**Q1:** One major concern pertains to the technical correctness of the proof of Theorem 1, in particular lines 440-444. I don't immediately see how $\mathbb{E} [\sum_{t=1}^{s-1} ||v_t||^2] \leq C_0 T^{1/3}$ implies $\mathbb{E}[\frac{1}{T}\sum_{t=1}^{s-1}||v_t||] \leq \sqrt{C_0}T^{-1/3}$, and similarly how the equation in line 441 implies line 442. My concern arises from the fact that $s$ is a random variable dependent on the entire history $\mathcal{H}\_T$. As a result, applying Jensen's inequality in the form $\mathbb{E}[\frac{1}{T}\sum_{t=1}^s||v_t||] \le \sqrt{\mathbb{E}[\frac{1}{T}\sum_{t=1}^s||v_t||^2]}$ seems inappropriate when $s$ is a random variable dependent on the $v_t$'s.
**A1:** Sorry for omitting the details. We provide a more detailed analysis as follows, which will be added to our revised paper. First, we use the inequality $\mathbb{E} [X] \leq \sqrt{\mathbb{E}[X^2]}$, which is due to Jensen's inequality. By setting $X = \frac{1}{T} \sum_{t=1}^s||v_t||$, we obtain $\mathbb{E} \left[ \frac{1}{T}\sum_{t=1}^s||v_t|| \right] \leq \sqrt{ \mathbb{E}\left[\left(\frac{1}{T}\sum_{t=1}^s||v_t||\right)^2\right]}$. We think the above inequalities hold regardless of the fact that $s$ is a random variable dependent on $v_t$'s. Furthermore, even without using Jensen's inequality, we can still prove the same result as follows:
\begin{align}
0 \leq& \mathbb{E} \left[ \left( \frac{1}{T}\sum_{t=1}^s||v_t|| - \mathbb{E}\left[\frac{1}{T}\sum_{t=1}^s||v_t||\right] \right)^2\right]\\\\
=& \mathbb{E} \left[ \left(\frac{1}{T}\sum_{t=1}^s||v_t||\right)^2 \right] +\left( \mathbb{E}\left[\frac{1}{T}\sum_{t=1}^s||v_t||\right] \right)^2 - 2\,\mathbb{E}\left[\frac{1}{T}\sum_{t=1}^s||v_t||\right] \mathbb{E}\left[\frac{1}{T}\sum_{t=1}^s||v_t||\right] \\\\
=& \mathbb{E} \left[ \left(\frac{1}{T}\sum_{t=1}^s||v_t||\right)^2 \right] -\left( \mathbb{E}\left[\frac{1}{T}\sum_{t=1}^s||v_t||\right] \right)^2,
\end{align}
which indicates that $\mathbb{E}\left[\frac{1}{T}\sum_{t=1}^s||v_t||\right] \leq \sqrt{\mathbb{E} \left[ \left(\frac{1}{T}\sum_{t=1}^s||v_t||\right)^2 \right]} $. We can complete the proof by further applying the Cauchy-Schwarz inequality together with the fact that $s \leq T$:
\begin{align*}
\mathbb{E}\left[\frac{1}{T}\sum_{t=1}^s||v_t||\right] &\leq \sqrt{\mathbb{E} \left[ \left(\frac{1}{T}\sum_{t=1}^s||v_t||\right)^2 \right]} = \sqrt{\mathbb{E} \left[ \frac{1}{T^2} \left(\sum_{t=1}^s||v_t||\right)^2 \right]} \\\\
&\leq \sqrt{\mathbb{E} \left[ \frac{s}{T^2} \sum_{t=1}^s||v_t||^2 \right]} \leq \sqrt{\mathbb{E} \left[ \frac{1}{T} \sum_{t=1}^s||v_t||^2 \right]} \\\\
&\leq \sqrt{ \frac{1}{T} C_0 T^{1/3}} = \sqrt{C_0 }T^{-1/3}.
\end{align*}
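As an illustrative numerical sanity check (not part of the proof), one can simulate random $v_t$'s with a history-dependent stopping time $s$ and compare the two sides; since the bound holds pathwise by Cauchy-Schwarz and $s \leq T$, the check passes for any realization:

```python
import numpy as np

rng = np.random.default_rng(1)
T, trials = 200, 2000
lhs_terms, rhs_terms = [], []
for _ in range(trials):
    v = np.abs(rng.normal(size=T))                  # stand-ins for ||v_t||
    crossings = np.nonzero(np.cumsum(v ** 2) > 50.0)[0]
    s = crossings[0] + 1 if crossings.size else T   # s depends on the v_t history
    lhs_terms.append(v[:s].sum() / T)               # (1/T) * sum_{t<=s} ||v_t||
    rhs_terms.append((v[:s] ** 2).sum() / T)        # (1/T) * sum_{t<=s} ||v_t||^2

lhs = np.mean(lhs_terms)             # estimates E[(1/T) sum ||v_t||]
rhs = np.sqrt(np.mean(rhs_terms))    # estimates sqrt(E[(1/T) sum ||v_t||^2])
assert lhs <= rhs                    # holds despite s being a stopping time
```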
---
**Q2:** Another concern is about the doubling trick. In each stage $k$, the guarantee is $\frac{1}{2^{k-1}} \sum_{t=2^{k-1}}^{2^k} \mathbb{E}||\nabla f(x_t)|| \lesssim \frac{\Delta_f^{3/4}+O(1)}{(2^{k-1})^{1/3}}$. Since the algorithm is not resetting $x_t=x_0$ in the new stage, the parameter $\Delta_f$ is not constant.
**A2:** We apologize for the confusion. In the analysis, we actually reset $x_t=x_0$ at the beginning of each new stage, and we will state this clearly in the revised paper. This reset ensures that the initial gap can always be bounded. Since the number of iterations doubles with each new stage, the last complete stage is guaranteed to contain at least $T/4$ iterations. According to the analysis of Theorem 1, running Algorithm 1 for $T/4$ iterations leads to the following guarantee:
\begin{align}
\mathbb{E}\left[ ||\nabla f(\textbf{x}_{\tau})||^2 \right] \leq \mathcal{O}\left(\frac{\Delta_f^{\frac{1}{2(1-\alpha)}}+\sigma^{\frac{1}{1-\alpha}} + L^{\frac{1}{2\alpha}}}{{(T/4)}^{1/3}}\right)=\mathcal{O}\left(\frac{\Delta_f^{\frac{1}{2(1-\alpha)}}+\sigma^{\frac{1}{1-\alpha}} + L^{\frac{1}{2\alpha}}}{{T}^{1/3}}\right).
\end{align}
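The counting claim that the last complete stage contains at least $T/4$ iterations can be checked mechanically for doubling stage lengths $1, 2, 4, \ldots$ (an illustrative check, independent of the algorithm itself):

```python
def stage_lengths(T):
    """Lengths of the complete doubling-trick stages within a budget of T iterations."""
    lengths, used, k = [], 0, 0
    while used + 2 ** k <= T:
        lengths.append(2 ** k)
        used += 2 ** k
        k += 1
    return lengths

for T in range(1, 10000):
    last = stage_lengths(T)[-1]
    assert last >= T / 4  # last complete stage covers at least a quarter of the budget
```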
---
**Q3:** Is there a specific reason why the authors prefer $\alpha$ as large as possible in the trade-off of $\Delta^{1/2(1-\alpha)} + \sigma^{1/(1-\alpha)} + L^{1/2\alpha}$. Is it due to the assumption (or some empirical observation) that the smoothness constant is usually the dominating parameter?
**A3:** No, there is no specific reason to prefer a larger $\alpha$ in the trade-off $\Delta^{1/2(1-\alpha)} + \sigma^{1/(1-\alpha)} + L^{1/2\alpha}$. Whether the smoothness constant $L$ or the parameters $\Delta$ and $\sigma$ dominate usually depends on the specific problem. To avoid misunderstandings, we will change our statement from “larger $\alpha$ leads to better dependence on parameter $L$” to “larger $\alpha$ leads to better dependence on the parameter $L$ and worse dependence on the parameters $\Delta$ and $\sigma$”.
---
**Q4:** In the experiments, did the authors apply any learning rate scheduler (e.g., linear decay or cosine decay) on top of the adaptive learning rate? Although somewhat tangential, it would be interesting to compare the new adaptive scheduler with other popular schedulers.
**A4:** We did not apply any learning rate scheduler on top of the adaptive learning rate in the experiments. Based on your suggestion, we conducted additional experiments comparing the STORM method with linear decay and cosine decay schedulers. The results are shown in **Global Response (Figure 2)** and indicate that STORM with linear decay or cosine decay still performs worse than our proposed method in terms of testing accuracy.
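For reference, the two baseline schedulers used in this new comparison are the standard multiplicative decays, sketched below with generic parameter names (an illustrative sketch, not our experiment code):

```python
import math

def linear_decay(t, T, lr0):
    """Learning rate annealed linearly from lr0 to 0 over T steps."""
    return lr0 * (1.0 - t / T)

def cosine_decay(t, T, lr0):
    """Learning rate annealed along a half cosine from lr0 to 0 over T steps."""
    return lr0 * 0.5 * (1.0 + math.cos(math.pi * t / T))

# Both start at lr0 and decay to 0; cosine stays higher early and drops faster late.
```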
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the response.
Regarding the technical details in Theorem 1, sorry for not making my point clear initially. I wasn't questioning the validity of Jensen's inequality. Rather, I was unclear about how to apply Jensen's inequality properly in this context. I appreciate that the authors have now clarified this.
Regarding the doubling trick, the authors resolved my misunderstanding. Please clarify this in later drafts.
Regarding the additional experiments, I thank the authors for the extra effort. Although tested on the rather simple dataset CIFAR-10, the additional result that the new adaptive rate still outperforms STORM with other learning rate schedulers (e.g., linear decay, cosine decay) strongly supports the validity of Ada-STORM. If time allows, I'd suggest that the authors repeat the experiments on larger tasks such as LLMs in a future version; it would be an encouraging result if Ada-STORM also outperforms other optimizers on more complicated tasks.
Overall, the authors have addressed my major concerns. Given its concrete theoretical results and encouraging empirical results, I revised my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your kind reply! We will improve our paper according to your constructive reviews.
Best regards,
Authors | Summary: This paper studies adaptive variants of STORM, a variance reduction technique proposed by Cutkosky and Orabona (2019), for nonconvex stochastic minimization problems. Through introducing a novel adaptive parameter and step size tuning method, the authors aim to remove the bounded gradients and bounded function values assumptions in existing literatures while achieving the optimal convergence rate without an additional penalty of $O(\log T)$ term due to adaptiveness. Based on the same techniques, they further propose Compositional STORM for compositional functions and STORM for finite-sum problems. They support their theoretical claims with numerical experiments on image classification and small-scale language modelling tasks.
Strengths: 1. Their proposed method can achieve optimal convergence rate of $O(T^{-1/3})$ under more relaxed assumptions, namely by removing the need for bounded function values and bounded gradients. Their method does not incur a $\log(T)$ term penalty like in other adaptive methods (Liu et al., 2022).
2. They show that their learning rate design and analysis technique can be extended to stochastic compositional problems under further assumptions, namely Lipschitz continuity.
3. Although rather unsurprising, they show that when the finite-sum structure is present, their method can be adapted to and showed improved convergence rates of a log factor over existing methods.
Weaknesses: 1. My understanding is that the original non-adaptive STORM proposed by Cutkosky and Orabona (2019) does not have a log factor in its convergence rate when noise is present. Specifically, their convergence rate in expectation is $O(\log T / T^{1/2} + \sigma^{1/3} / T^{1/3})$; thus the $\log T$ term is dominated and should not appear in your big-O convergence rates in many places of the paper (e.g., Line 28 and Table 1). Can you please explain and check your comparisons?
2. I am not sure about the significance of removing the bounded gradients and bounded function values assumption in practice. Cutkosky and Orabona (2019) proposed an adaptive $G_t$ approach to remove the bounded gradients assumption in their original paper, while bounded function values seems to be a very mild assumption under smoothness conditions when the final bounds all depend on the initial function value gap. Thus, I do not agree with the authors claiming them to be "strong assumptions". It would be useful if the authors can provide examples to justify why these assumptions can be problematic in practice.
3. The authors should be clearer that their extension to stochastic compositional optimization is not exactly under weaker assumptions. They require standard Lipschitz continuity assumptions which have been standard in the literature, which in essence is same as bounded gradients.
Overall I think this paper has potential but may benefit from addressing the above points.
Technical Quality: 3
Clarity: 2
Questions for Authors: See weaknesses.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the constructive review!
---
**Q1:** My understanding is that the original non-adaptive STORM proposed by Cutkosky and Orabona (2019) does not have a log factor in its convergence rate when noise is present. Specifically, their convergence rate in expectation is $O(\log T / T^{1/2} + \sigma^{1/3} / T^{1/3})$; thus the $\log T$ term should not appear in your big-O convergence rates.
**A1:** Although the convergence rate presented in [Cutkosky and Orabona, 2019] is indeed $O(\log T / T^{1/2} + \sigma^{1/3} / T^{1/3})$, after checking their proofs in detail we find that it is not accurate. Specifically, at the end of Section 5 of their paper, they claim that
\begin{align}
\mathbb{E}\left[\sum\_{t=1}^{T} \frac{||\nabla F\left(\mathbf{x}\_{t}\right)||}{T}\right] \leq \frac{\sqrt{2 M}\left(w+2 T \sigma^{2}\right)^{1 / 6}+2 M^{3 / 4}}{\sqrt{T}} \leq \frac{w^{1 / 6} \sqrt{2 M}+2 M^{3 / 4}}{\sqrt{T}}+\frac{2 \sigma^{1 / 3}}{T^{1 / 3}} ,
\end{align}
where they utilize $(a+b)^{1 / 3} \leq a^{1 / 3}+b^{1 / 3}$ in the last inequality and $M$ is a complex term containing the $\log T$ term. However, they neglect the $\sqrt{M}$ factor in the last term, which should be:
\begin{align*}
\mathbb{E}\left[\sum_{t=1}^T \frac{||\nabla F\left(\boldsymbol{x}_{t}\right)||}{T}\right] \leq \frac{\sqrt{2 M}\left(w+2 T \sigma^{2}\right)^{1 / 6}+2 M^{3 / 4}}{\sqrt{T}}\leq \frac{w^{1 / 6} \sqrt{2 M}+2 M^{3/4}}{\sqrt{T}}+\frac{2 \sigma^{1 / 3} \color{red}{\sqrt{M}}}{T^{1 / 3}} .
\end{align*}
As a result, the $\log T$ term should appear in the big-O convergence rates.
---
**Q2:** I am not sure about the significance of removing the bounded gradients and bounded function values assumption in practice. Cutkosky and Orabona (2019) proposed an approach to remove the bounded gradients assumption in their original paper, while bounded function values seems to be a very mild assumption under smoothness conditions when the final bounds all depend on the initial function value gap. Thus, I do not agree with the authors claiming them to be "strong assumptions". It would be useful if the authors can provide examples to justify why these assumptions can be problematic in practice.
**A2:** Thank you for your valuable comment! First, although Cutkosky and Orabona (2019) remove the bounded gradient assumption, their newly proposed algorithm introduces two new drawbacks:
(1) As stated in their paper, this method requires knowledge of the true value of gradient variance $\sigma$, which is impractical; (2) The learning rate of this new algorithm is deterministic, i.e., $\eta_t = \frac{k}{(w+\sigma^2 t)^{1/3}}$, which cannot adjust according to the stochastic gradient anymore. Second, the bounded gradient assumption is still required in the STORM+ method [Levy et al., 2021], and it cannot be removed using the same technique as Cutkosky and Orabona (2019) due to the more complex analysis involved in STORM+.
Regarding the bounded function value assumption, we believe it is indeed a strong assumption. The most commonly used linear and quadratic functions are both smooth but have unbounded function values. Furthermore, this assumption introduces an extra term related to the function value upper bound in the convergence rate. Specifically, STORM+ [Levy et al., 2021] achieves a convergence rate of $\mathcal{O}\left( \left(B^{3/4} + L^{3/2} \right) \sigma^{1/3} T^{-1/3}\right)$, where $B$ is the upper bound of the function value. The overall convergence rate can be significantly impacted when $B$ is large.
---
**Q3:** The authors should be clearer that their extension to stochastic compositional optimization is not exactly under weaker assumptions. They require standard Lipschitz continuity assumptions which have been standard in the literature, which in essence is same as bounded gradients.
**A3:** We agree that in stochastic compositional optimization, the required Lipschitz continuity is equivalent to the bounded gradient assumption. However, this assumption is inherently introduced by the compositional optimization itself rather than by our adaptive techniques. Note that the bounded gradient assumption is essential and widely required in the literature for stochastic compositional optimization [Wang et al., 2017, Yuan et al., 2019, Zhang et al., 2019]. Additionally, we would like to emphasize that our method does not require the bounded function value assumption, which would be needed if we attempt to apply the same techniques from STORM+ [Levy et al., 2021] to stochastic compositional optimization. We will incorporate these clarifications into our revised paper to make it more clear.
---
**References:**
A. Cutkosky and F. Orabona. Momentum-based variance reduction in non-convex SGD. NeurIPS, 2019.
Levy et al. STORM+: Fully adaptive SGD with recursive momentum for nonconvex optimization. NeurIPS, 2021.
Wang et al. Accelerating stochastic composition optimization. JMLR, 2017.
Yuan et al. Efficient smooth non-convex stochastic compositional optimization via stochastic recursive gradient descent. NeurIPS, 2019.
Zhang et al. A stochastic composite gradient method with incremental variance reduction. NeurIPS, 2019.
---
Rebuttal 2:
Comment: Thank you for the detailed explanation; upon further checking, I agree with the authors' claims regarding my W1 and W2. As such, I am increasing my score from 5 to 6.
---
Rebuttal Comment 2.1:
Comment: Many thanks for your kind response and support! We will revise our paper according to your constructive suggestions.
Rebuttal: ## **Global Response** ##
---
In response to the request of reviewers, we provide additional experimental results in this part.
**Figure 1:** According to the suggestion of Reviewer 2239, we provide the performance of the STORM method with different initial learning rates. Specifically, we tested learning rates from the set $\{0.01, 0.1, 1, 10, 50\}$. The results demonstrate that the STORM method is sensitive to the choice of hyper-parameters.
**Figure 2:** As requested by Reviewer qB9i, we include experiments on the STORM method with linear decay and cosine decay schedulers for comparison. The results indicate that the performances of STORM with linear decay and cosine decay are still worse than our Ada-STORM method in terms of testing accuracy.
Pdf: /pdf/4ae39834368ce964832dab539ce001c22ae86ef3.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Dynamic Conditional Optimal Transport through Simulation-Free Flows | Accept (poster) | Summary: This paper introduces COT-FM, a generalization of the Flow Matching model for conditional generation. Specifically, this paper investigates the Conditional Wasserstein Space, a space of joint probability measures on $Y \times U$ with fixed $Y$-marginal $\mu$. This paper proves that an absolutely continuous path in the Conditional Wasserstein Space can be generated by a triangular vector field. Based on this characterization, COT-FM is proposed as a Flow Matching model that employs a triangular vector field on $Y \times U$.
Strengths: - This paper presents theoretical analysis of the Conditional Wasserstein Space, such as the characterization of the absolutely continuous path and the conditional generalization of the Benamou-Brenier Theorem.
- This paper proposes a Flow Matching model for conditional generation.
- This paper is easy to follow.
Weaknesses: - Whether the COT-FM can recover the dynamic optimal transport requires further clarification.
- Please see the Questions Section below.
Technical Quality: 2
Clarity: 3
Questions for Authors: - I would like to clarify the connection between Section (4, 5) and Section 6. It appears that Section (4,5) establish the existence of an absolutely continuous path between two arbitrary measures in the Conditional Wasserstein Space, which can be generated by a triangular vector field. In this context, Section 6 introduces a triangular vector field parametrization to the Flow Matching model. Hence, Section (4,5) justify the triangular parametrization of the standard Flow Matching model within the Conditional Wasserstein Space. Is this correct?
- I am curious whether COT-FM can recover the dynamic optimal transport within the Conditional Wasserstein Space, as mentioned in Line 256. Proposition 3.4 in [Tong et al., 2023] addresses the standard Wasserstein Space case. Could you provide clarification on how this applies to the Conditional Wasserstein Space?
- Table 1 presents the W2 and MMD distances between the joint distributions. Could you also provide the W2 and MMD results between the conditional distributions?
- I am curious about the significance of minibatch optimal coupling for COT-FM, in Lines 257-265. Could you provide the COT-FM results using independent coupling?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: - The authors addressed the potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for their constructive feedback and suggestions.
> I would like to clarify the connection between Section (4, 5) and Section 6 [...] Section (4,5) justify the triangular parametrization of the standard Flow Matching model within the Conditional Wasserstein Space. Is this correct?
Yes, your understanding is correct. We agree that the relationship between our theoretical and methodological contributions could be better clarified, and we will update our paper to make this more clear.
Here, we provide some additional context. Inspired by the empirical success of OT couplings in unconditional generative modeling [Tong 23, Pooladian 23], we sought to employ similar techniques for conditional generative modeling. However, prior to our work, the theory of conditional optimal transport was not sufficiently developed to justify this approach. Thus, a major contribution of our work (Sections 4, 5) is the development of such a theory which provides a principled foundation for COT-FM (Section 6). However, we believe that our theoretical contributions provide a foundation for COT in general, beyond only our proposed COT-FM method.
We additionally would like to clarify that COT-FM requires more than simply using triangular vector fields in the standard flow matching model. In particular, COT-FM requires one to additionally compute the static COT couplings, as discussed in lines 257-265.
> I am curious whether COT-FM can recover the dynamic optimal transport within the Conditional Wasserstein Space, as mentioned in Line 256. Proposition 3.4 in [Tong et al., 2023] addresses the standard Wasserstein Space case. Could you provide clarification on how this applies to the Conditional Wasserstein Space?
Our COT-FM indeed recovers the dynamic COT paths in the limit of zero smoothing. We write in Line 256 that this follows from a “pointwise application of [Tong et al., 2023, Prop. 3.4]”. Here, by pointwise, we mean that we may apply the result of [Tong 23] for a single, fixed $y$ to recover the optimal path conditioned on $y$ -- and thus, over all possible values of $y$.
Somewhat more precisely, Prop 3.4 of [Tong 23] shows in the unconditional setting that $v_t^\sigma(u) \to v_t(u)$ as $\sigma \to 0$, where $v_t^\sigma(u)$ is the smoothed flow matching vector field, and $v_t(u)$ is the dynamic OT vector field. In the *conditional* setting, our vector fields are of the form $v_t^\sigma(y, u)$. However, if we fix $y$, this vector field can be viewed as a vector field purely over the $U$ component, thanks to the triangularity assumption. This allows us to directly apply Prop 3.4 of [Tong 23] -- in particular, because the COT problem is essentially a collection of unconditional OT problems, one for each fixed $y$. See e.g. [Hosseini 23].
We agree that this point could have been explained more clearly, and we will update our paper with a complete, formal proof of this result.
In our global response, we additionally conduct an experiment to measure the degree to which the COT paths are optimal. Overall, we find that the paths learned by COT-FM are close to optimal, even when using the minibatch approximation.
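The "pointwise" reading above can be made concrete: a triangular vector field has zero $Y$-component, so integrating it with $y$ held fixed is an ordinary unconditional flow over $U$. A minimal illustrative sketch (a toy hand-written field, not the learned model):

```python
import numpy as np

def triangular_field(t, y, u):
    """Illustrative triangular vector field: zero Y-component,
    U-component allowed to depend on (t, y, u)."""
    v_y = np.zeros_like(y)
    v_u = -(u - y)  # toy drift pulling u toward y
    return v_y, v_u

def integrate(y, u, steps=100):
    """Euler integration on [0, 1]; y stays fixed along the path by triangularity."""
    dt = 1.0 / steps
    for i in range(steps):
        v_y, v_u = triangular_field(i * dt, y, u)
        y = y + dt * v_y
        u = u + dt * v_u
    return y, u

y0, u0 = np.array([1.0]), np.array([5.0])
y1, u1 = integrate(y0, u0)
assert np.allclose(y1, y0)  # the Y component is preserved pointwise
```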
> Table 1 presents the W2 and MMD distances between the joint distributions. Could you also provide the W2 and MMD results between the conditional distributions?
We would like to clarify that we measure the joint metrics for practical reasons. Namely, to measure the conditional MMD or Wasserstein distances, we would require the ability to sample many values of $u \sim \nu (u \mid y)$ from the data distribution, conditioned on a given $y$. However, in many practical instances (including the Lotka-Volterra, Darcy Flow, and many of the 2D datasets), this is not possible, as the data generation mechanism either directly produces sample $(y, u) \sim \nu(y, u)$ from the joint distribution (as in the 2D data) or produces samples $(y, u) \sim \nu(u) \nu(y \mid u)$ from a prior over $u$ and a forward model $\nu(y \mid u)$ (as in the Lotka-Volterra and Darcy Flow problems).
Moreover, even in cases where we can indeed sample $u \sim \nu(u \mid y)$, we found that estimators of the conditional distances have prohibitively high variance, as it is in essence a nested sampling problem.
Thus, some approximate measure of model fit is necessary. We believe the joint metrics are a reasonable proxy since, by Proposition 1, any triangular mapping which couples the joint distributions necessarily couples the conditional distributions.
We thank the reviewer for pointing out this difference, and we will use the additional space afforded by a camera-ready version of our paper to discuss these challenges and justifications.
> I am curious about the significance of minibatch optimal coupling for COT-FM, in Lines 257-265. Could you provide the COT-FM results using independent coupling?
Please see our global response for a discussion regarding the role of minibatches in our method. We would additionally like to point out that we do indeed compare our method against flow matching with an independent coupling. This is “FM” in Table 1 and Table 2, and “FFM” in Table 3. Overall, there is a clear gap between COT-FM and FM with independent couplings across all experiments considered in our work.
We agree, though, that this could have been better explained in our submission. We will update our paper with a more clear explanation of the baseline flow matching model.
1. Improving and Generalizing Flow-Based Generative Models with Minibatch Optimal Transport. Tong et al., 2023.
2. Multisample Flow Matching: Straightening Flows with Minibatch Couplings. Pooladian et al., 2023.
3. Conditional Optimal Transport on Function Spaces. Hosseini et al., 2023.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their clarifications. However, I still believe that additional quantitative evaluation as a conditional generative model would provide more solid support for this work. Hence, I will maintain my current score.
Strengths: This paper has many strengths! It is well-written, fits nicely in the conference format without omitting many details, and has appropriate experiments. The proposed methodology is also quite elegant, and circumvents many issues other methods face.
Weaknesses: N/A :)
Technical Quality: 4
Clarity: 4
Questions for Authors: I am using this space for comments and suggestions as well as questions.
- Is there a clear way to choose the $\epsilon$ parameter for the COT Flow Matching? Any heuristics whatsoever? This appears to be a bottleneck to making this methodology fully practical
- Convergence of the $\epsilon\to 0$ limit of the proposed OT map for the twisted cost is originally due to Carlier et al. (2010) --- I would argue that the recent results by Hosseini et al. (2023) are extensions of this older result.
- When citing flow matching throughout the draft, it would be equitable to also cite Liu et al. (2023) alongside Lipman et al. 2023 and Albergo et al. (2023). Same goes for Pooladian et al. (2023) --- should be cited alongside Tong et al. (2023) (in e.g., Section 6)
- Is equation (9) not due to the original flow matching papers?
- For equation (6): I have never heard anyone say the equation should be "understood distributionally". Maybe consider "in the sense of distributions"
- Stylistic comment: Maybe omit "unconditional" from the title of Appendix A? This is not really used
@article{carlier2010knothe,
title={From Knothe's transport to Brenier's map and a continuation method for optimal transport},
author={Carlier, Guillaume and Galichon, Alfred and Santambrogio, Filippo},
journal={SIAM Journal on Mathematical Analysis},
volume={41},
number={6},
pages={2554--2576},
year={2010},
publisher={SIAM}
}
@article{liu2022flow,
title={Flow straight and fast: Learning to generate and transfer data with rectified flow},
author={Liu, Xingchao and Gong, Chengyue and Liu, Qiang},
journal={arXiv preprint arXiv:2209.03003},
year={2022}
}
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed feedback and positive review! We have updated our paper to incorporate the various suggested references and stylistic edits.
> Is there a clear way to choose the $\epsilon$ parameter for the COT Flow Matching? Any heuristics whatsoever?
This is an interesting question -- while we do not have any rigorous results in this direction, we can perhaps give some intuition. Ideally, one would choose a small value of $\epsilon$ -- as suggested by the results of Carlier [2010], with unlimited data this would indeed recover a triangular map. However, we expect there to be dependence between $\epsilon$ and the size of the dataset (or batch size, if using minibatch couplings). Loosely speaking, $\epsilon$ controls the tradeoff between triangularity and coupling nearby points; small values of $\epsilon$ encourage the maps to be triangular at the cost of a potentially large distance in the $U$ component, whereas large values of $\epsilon$ allow greater violations of triangularity while coupling points that are nearby in $U$ distance.
Moreover, we would expect this $U$ distance to be larger for small batches of data, since there may not be a source point with the same (or nearby) $Y$ values as a target point. Thus we would expect that $\epsilon$ and the batch size should be negatively correlated.
However, in practice, $\epsilon$ is a nuisance parameter which can be tuned just as any other parameter in the learning process. In practice, we tuned $\epsilon$ through a grid search, measuring the loss on held-out validation data.
We will include a discussion of these notions in an updated camera-ready version of our paper.
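To illustrate the tradeoff numerically, a minibatch coupling under a twisted squared-Euclidean cost of the form $\frac{1}{\epsilon}\|y-y'\|^2 + \|u-u'\|^2$ (our reading of the setup; not the exact implementation) can be solved exactly with the Hungarian algorithm; the $1/\epsilon$ weight on the $Y$ mismatch pushes the pairing toward triangularity:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def minibatch_cot_coupling(y_src, u_src, y_tgt, u_tgt, eps):
    """Illustrative minibatch coupling under cost ||dy||^2 / eps + ||du||^2.

    Smaller eps penalizes Y-mismatch more heavily, favoring triangular pairings."""
    dy = ((y_src[:, None, :] - y_tgt[None, :, :]) ** 2).sum(-1)
    du = ((u_src[:, None, :] - u_tgt[None, :, :]) ** 2).sum(-1)
    return linear_sum_assignment(dy / eps + du)

rng = np.random.default_rng(2)
n = 32
y = np.arange(n, dtype=float).reshape(-1, 1)  # shared, well-separated Y values
u_src, u_tgt = rng.normal(size=(n, 2)), rng.normal(size=(n, 2))
# With a tiny eps, each source point is matched to the target with the same y.
rows, cols = minibatch_cot_coupling(y, u_src, y, u_tgt, eps=1e-6)
```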
> Is equation (9) not due to the original flow matching papers?
Yes, essentially -- a special case of Equation (9) appears in the early work of [Theorem 2, Lipman 23], where the coupling depends only on the target variable $x_1$. An equivalent loss appears in Proposition 1 of [Albergo 23A], but still using an independent coupling. Theorem 3.2 of [Tong et al., 2023] extends this loss to general couplings, and [Albergo 23B, Pooladian 23] propose a similar use of general couplings. We will update the citation in our paper to better reflect this.
1. Flow Matching for Generative Modeling. Lipman et al., 2023.
2. Building Normalizing Flows with Stochastic Interpolants. Albergo et al., 2023(A).
3. Stochastic Interpolants with Data-Dependent Couplings. Albergo et al., 2023(B).
4. Improving and Generalizing Flow-Based Generative Models with Minibatch Optimal Transport. Tong et al., 2023.
5. Multisample Flow Matching: Straightening Flows with Minibatch Couplings. Pooladian et al., 2023. | Summary: This paper characterizes dynamical conditional optimal transport (COT). It generalizes the Benamou-Brenier theorem to dynamical COT. The authors then propose conditional flow matching and apply it to synthetic data.
Strengths: - The paper successfully extends the Benamou-Brenier theorem to the context of dynamical conditional optimal transport.
- The paper is easy to follow.
Weaknesses: - The paper appears to be extremely similar to [1], both theoretically and empirically. In particular, Theorem 18 of [1] discusses a Benamou-Brenier-like formula, and [1] applies COT to flow matching just as this work does. I would like to ask the authors to discuss the differences with the results in [1].
- The paper discusses dynamical COT in Sections 4-5. However, in Section 6, the authors propose the Conditional OT Flow Matching (COT-FM) method. This method solves dynamic COT only when the given joint coupling is the solution of COT; in other words, an optimal coupling must be given for the COT-FM algorithm to solve the dynamic COT problem. In the OT literature, most applications aim to find the optimal coupling (rather than being given it); hence, this algorithm can be applied only in a very restricted situation. Thus, its application as an OT method is extremely limited.
- Moreover, in the conducted experiments, the given pairs are not the solutions to COT (only in a mini-batch sense). Therefore, the experiments seem to address conditional generation rather than conditional optimal transport. It is unclear whether the experimental settings are appropriate for the subject of the paper. Moreover, the experiments were conducted on very small datasets.
[1] Conditional Wasserstein Distances with Applications in Bayesian OT Flow Matching (arxiv, v1 released in March, 2024)
Technical Quality: 3
Clarity: 2
Questions for Authors: - In line 51, it is stated that "COT-FM ... interpolates between an arbitrary source and target distribution via a geodesic in the conditional Wasserstein space". Does this mean that the FM model learns geodesics?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The limitation is discussed in Weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback!
> The paper appears to be extremely similar to [1] [...] I would like to ask authors to discuss the difference with result in [1].
We first would like to remind the reviewer of the [NeurIPS concurrent work policy](https://neurips.cc/Conferences/2024/PaperInformation/NeurIPS-FAQ). Given that [Chemseddine 24] first appeared on 27 March 2024 and the NeurIPS submission deadline was 22 May 2024, it counts as concurrent work. It was developed independently and concurrently with our submission, which we briefly mention in lines 78-80. However, there are several differences, which an updated version of our paper will elaborate on.
1. [Chemseddine 24] work in finite-dimensional Euclidean spaces. Our theory is applicable in more general spaces -- including many infinite-dimensional function spaces. This necessitates additional techniques in our proofs (particularly Lemma 1 and Theorem 4). Theorem 18 of [Chemseddine 24] can be viewed as a special case of our Theorem 5 under a finite-dimensional assumption.
2. Proposition 6 of [Chemseddine 24] shows the existence of vector fields generating a path of measures when the path is induced by an optimal plan. We show a stronger result in Theorem 3, where we characterize **all** absolutely continuous curves, rather than only those induced by an optimal plan. We also prove a converse of this statement in Theorem 4, which does not appear in [Chemseddine 24].
> The paper discusses dynamical COT in Sections 4-5. [...] the experiments seem to address conditional generation rather than conditional optimal transport.
As discussed in our abstract and introduction, we are indeed primarily interested in applications in conditional generative modeling. Inspired by the success of OT couplings in unconditional generative modeling [Tong 23, Pooladian 23], we sought to employ similar techniques. However, prior to our work, the theory of conditional optimal transport was not sufficiently developed to justify this approach. Thus, a major contribution of our work is the development of such a theory. We believe that our theoretical contributions provide a foundation for COT in general, beyond only our proposed COT-FM method.
We agree with the reviewer that this point could have been made more clear, and we will update our paper to discuss the relationship between Sections 4-5 and Section 6.
> This method solves dynamic COT only when the given joint coupling is the solution of COT. [...] the application as an OT method is extremely limited.
Your understanding is correct -- however, we respectfully disagree that our method has limited applicability.
First, approximating a static COT coupling is relatively straightforward [Carlier 10]. We discuss this in lines 257-265. However, **such a coupling is necessarily empirical** -- that is, the coupling is only between the observed data, and it is not straightforward to generalize to unseen data. Although our method requires these empirical static couplings at training time, we may apply our method to simulate the transport for unseen data. This is particularly useful when one is interested in modeling the transport for large datasets, for which the standard approaches to COT would be prohibitively expensive in terms of memory.
Second, as mentioned previously, we are primarily interested in applications in generative modeling. As demonstrated by our experiments, computing the static COT couplings is not a prohibitive step in solving conditional generation tasks. This is further emphasized by the success of OT based flow-matching algorithms for unconditional generation [Tong 23, Pooladian 23].
> It is unclear if the experimental settings are appropriate
As discussed above, our methodological contributions are focused on conditional generative modeling. Our experiments are aligned with this aim -- in Section 7, we demonstrate that our proposed method, COT-FM, obtains strong empirical performance across a range of conditional generation tasks. We would be happy to hear suggestions for additional experiments that would further improve the paper.
> the experiments were conducted on very small datasets.
In Appendix F, we discuss the size of our datasets -- namely, we train on 20,000 datapoints for our 2D experiments and 10,000 datapoints for our Lotka-Volterra and Darcy flow experiments. We respectfully disagree that these datasets are too small, as many real-world applications of inverse problems have datasets of roughly this magnitude. Moreover, we believe our method will scale to larger datasets, as the additional overhead compared to, e.g., standard flow matching is not prohibitive.
> Does it mean that FM model learn geodesics?
Yes -- the path of distributions modeled by COT-FM is a geodesic in the conditional Wasserstein space, in the sense of Theorem 1.
Informally, Theorem 5 shows these geodesics are induced by triangular vector fields. Moreover, we show in Theorem 2(a) that an optimal COT mapping induces a geodesic in this space, with the vector field producing this geodesic given in Theorem 2(b). The vector fields in Theorem 2(b) are precisely those we use to learn COT-FM in Equation (7). While some additional smoothing is necessary, in the limit of zero covariance we recover the true geodesics in theory.
We briefly discuss this in Lines 254-256 of our submission, but we agree that this could have been explained more clearly. If accepted, the camera-ready version of our paper will include a more extensive discussion.
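As an informal illustration of the geodesic property in the simplest (unconditional, 1-D) case -- this toy check is ours, not from the paper -- displacement interpolation along the monotone OT map traces a constant-speed curve in Wasserstein-2 space:

```python
import numpy as np

# 1-D displacement interpolation: the OT map between sorted samples is
# the monotone rearrangement, and moving each point along a straight
# line yields a constant-speed geodesic in Wasserstein-2 space,
# i.e. W2(mu_0, mu_t) = t * W2(mu_0, mu_1).
rng = np.random.default_rng(0)
x = np.sort(rng.normal(size=500))
y = np.sort(rng.normal(loc=3.0, scale=2.0, size=500))

def w2(a, b):
    # Empirical W2 in 1-D via the quantile (sorted-sample) coupling.
    return float(np.sqrt(np.mean((np.sort(a) - np.sort(b)) ** 2)))

full = w2(x, y)
for t in (0.25, 0.5, 0.75):
    xt = (1 - t) * x + t * y  # each point flows along a straight line
    assert abs(w2(x, xt) - t * full) < 1e-9
```

The conditional case in the paper replaces this monotone map with a triangular one, but the constant-speed picture is the same.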
1. Conditional Wasserstein Distances with Applications in Bayesian OT Flow Matching. Chemseddine et al., 2024
2. Improving and generalizing flow-based generative models with minibatch optimal transport. Tong et al., 2023.
3. Multisample Flow Matching: Straightening Flows with Minibatch Couplings. Pooladian et al., 2023.
4. From Knothe's transport to Brenier's map and a continuation method for optimal transport. Carlier et al., 2010
---
Rebuttal Comment 1.1:
Comment: I appreciate the author for the clarification. I agree that the main contribution of this work is the development of the dynamical conditional optimal transport theory, and it is quite novel. I also agree that the COT map can be obtained as the mini-batch size approaches infinity (as discussed in [Tong, 2023]). The additional experiments, which demonstrate that it is possible to approximate the actual COT with large batch sizes, enhance the soundness of the approach. Although there are some aspects of the methodology that might be open to discussion, considering the theoretical contributions and the additional experiments, I would like to raise my score to 5.
---
Rebuttal 2:
Title: Comparison to previous work
Comment: Dear reviewer hqSP,
Thank you for your work evaluating this submission.
The internal policy ([link](https://neurips.cc/Conferences/2024/PaperInformation/NeurIPS-FAQ)) states that work appearing two months or less before the deadline - which is the case of the work you refer to - is considered concurrent, and that authors should not be expected to compare to such work.
If part of your review was based on this, it will be possible to update it during the discussion phase. | Summary: This work first extends conditional optimal transport theory to the dynamical setting. Then a flow-matching model is proposed to approximate these flows with a simulation free training objective. This is then applied in several conditional generation tasks including two Bayesian inverse problems. Triangular optimal transport maps are used in combination with the dynamic Brenier-Benamou formulation of standard optimal transport.
Strengths: - This work combines ideas from conditional optimal transport and optimal transport flow matching to create a new approach for Bayesian inverse problems and likelihood-free inference.
- The results effectively demonstrate how this approach can be used in a variety of settings.
- The work is well written and fairly clear
Weaknesses: - Only applied to relatively low dimensional problems.
- It might be useful to clearly highlight the power and generalization of this work over a “simple” conditional optimal transport formulation which simply conditions on a single-class variable. e.g. https://github.com/atong01/conditional-flow-matching/blob/main/examples/images/conditional_mnist.ipynb
While it is clear from a deep enough reading I suggest the authors might want to highlight these differences for potentially broader appeal. This might be done through an algorithm box or other presentation.
Technical Quality: 4
Clarity: 3
Questions for Authors: It would be good to know empirically how far from optimal the learned transport maps are, as this is a known limitation of minibatch-based approaches.
Comment: I would suggest citing work on rectified flows as a concurrent invention of flow matching.
https://arxiv.org/abs/2209.03003
Confidence: 2
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback!
> It would be good to know empirically how far from optimal the learned transport maps are, as this is a known limitation of minibatch-based approaches.
We thank the reviewer for pointing this out. Please see our global response for a discussion regarding minibatches and COT. Overall, we find that our method COT-FM can recover nearly optimal transport distances even when trained at small batch sizes.
> Only applied to relatively low dimensional problems.
This is a fair point. However, we would like to emphasize that our most significant contributions are to the theory of conditional optimal transport, laying the foundations for future applications.
Moreover, many of the applications we have in mind are inverse problems, which are often fairly low-dimensional, e.g. recovering the parameters of an ODE as in our Lotka-Volterra experiment in Section 7. We also demonstrate our method on image data for the Darcy Flow inverse problem. While this is still relatively low-dimensional (at a grid size of 40x40), this was largely due to the computational cost of solving PDEs at high resolutions, which is necessary for both producing the training data as well as running MCMC as a point of comparison.
These results suggest that the method should scale to higher dimensional problems, which is an interesting avenue for future work.
> It might be useful to clearly highlight the power and generalization of this work over a “simple” conditional optimal transport formulation which simply conditions on a single-class variable.
Thank you for this suggestion -- we agree that this could be more clearly explained. We elaborate here on the key differences.
When the conditioning variable takes values in a finite set, like a class label, the conditional optimal transport problem becomes much simpler. This is because, in this scenario, we typically have many observations (in the $U$ space) for any single, given conditioning variable $y$ -- e.g., many images all coming from the same class. In this case, one can simply solve the usual optimal transport problem for each class independently.
However, we are largely interested in problems where we only have a single observation $u$ for each given $y$ -- this is the case, for instance, in inverse problems. Here, it is impossible to solve the optimal transport problem directly for each $y$ independently as there is simply not enough data.
We describe this briefly in lines 98-106, but we will make this point more clear. We will include pseudocode in our camera-ready submission to highlight the differences between our method and existing work.
> I would suggest citing work on rectified flows as a concurrent invention of flow matching.
We thank the reviewer for the reference -- we are aware of rectified flows, and we have updated our paper with a short discussion of this work and how it relates to our submission and flow matching more broadly.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their clarifications and additional experiments. I still believe this makes a potentially useful theoretical contribution but with limited empirical evaluation and therefore maintain my score. | Rebuttal 1:
Rebuttal: # Summary
We would like to thank the reviewers for their detailed and valuable feedback. We are encouraged to hear that the reviewers found our theoretical contributions to be a strength (QX6V, hqSP, i1c7, yVF4) and that the submission is well-written (QX6V, hqSP, yVF4, i1c7). We are also glad the reviewers found our proposed method “quite elegant” (yVF4), with experimental results that clearly demonstrate the strengths of the approach (QX6V, yVF4).
# Minibatch COT
Several reviewers had questions regarding the role of minibatches in our setup. While the use of minibatches is needed to scale the method to large datasets, we believe that this limitation is not prohibitive in practical adaptations of these methods. Our experiments (Tables 1, 2, 3) demonstrate that even with a minibatch approximation, COT-FM outperforms standard flow matching (without the use of COT).
The precise relationship between minibatch OT and the standard OT problem is an area of active research. However, there have been some promising results, at least in the unconditional setting. For instance, [Bernton 19] show that minimizers of the batched Wasserstein distance converge to minimizers of the unbatched Wasserstein distance (as the batch size grows), and [Sommerfeld 19] establish non-asymptotic bounds between the Wasserstein distance and its minibatch approximation. Studying these relationships for the conditional OT problem is an open, challenging problem and an exciting direction for future work.
To further investigate the role of minibatches, we conducted an additional experiment. In this experiment, we use a standard Gaussian as our source distribution, and a Gaussian with covariance $\rho = 0.75$ as the target. We choose these distributions as the COT distance is available in closed form, as we prove in Section 4 / Appendix C. We then train our proposed COT-FM model on a range of different batch sizes (leaving all other hyperparameters fixed except for the learning rate). We then measure the degree to which the transport learned by COT-FM is optimal. Figures for our preliminary results are contained in the attached response.
In Figure 1, we plot the difference between the true value of the squared COT distance $W_2^{\mu, 2}$, and the squared transport cost resulting from the trained COT-FM model. This learned cost is estimated by sampling 10,000 points from the source distribution, flowing each point along the model’s learned vector field, and computing the Euclidean distance between the initial and terminal location of each point. We see that even for relatively small batch sizes, the transport costs from our learned COT-FM model closely match the ground-truth cost.
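The cost estimator described here can be sketched as follows (an assumed constant vector field stands in for the trained COT-FM model; with Euler integration, the average squared source-to-terminal displacement recovers the known squared cost):

```python
import numpy as np

def transport_cost(v_field, x0, n_steps=100):
    # Flow each source sample along the vector field with Euler steps,
    # then average the squared source-to-terminal displacement as a
    # plug-in estimate of the squared transport cost.
    x, dt = x0.copy(), 1.0 / n_steps
    for k in range(n_steps):
        x = x + dt * v_field(k * dt, x)
    return float(np.mean(np.sum((x - x0) ** 2, axis=-1)))

# Toy stand-in: a constant field v(t, x) = c shifts every point by c,
# so the estimate should recover ||c||^2 = 25.
c = np.array([3.0, 4.0])
x0 = np.random.default_rng(0).normal(size=(10_000, 2))
cost = transport_cost(lambda t, x: np.broadcast_to(c, x.shape), x0)
assert abs(cost - 25.0) < 1e-6
```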
In Figure 2, we additionally measure the degree to which the learned dynamic COT paths are optimal. To do so, we measure the difference between the path energy of the model $\int_0^1 \int \|v_t^\theta\|^2 \, dp_t \, dt$ and the known ground-truth squared distance $W_2^{\mu, 2}$. By Theorem 5 of our work, these two quantities should be equal if the learned dynamic paths are optimal. We find that the deviation decreases for larger batch sizes, but that the magnitude of the deviation is fairly small relative to the distance.
Overall, these two results indicate that COT-FM is able to obtain good approximations of both the dynamic and static COT solution, even when using minibatches. These results are in agreement with similar results in the unconditional case [Tong 23, Fatras 21]. We will conduct additional experiments with the datasets considered in the main submission to assess the degree of optimality in an updated version of our paper.
1. On parameter estimation with the Wasserstein distance. Bernton et al., 2019.
2. Improving Mini-batch Optimal Transport via Partial Transportation. Nguyen et al., 2021.
3. Optimal Transport: Fast Probabilistic Approximation with Exact Solvers. Sommerfeld et al., 2019.
4. Unbalanced minibatch Optimal Transport; applications to Domain Adaptation. Fatras et al., 2021.
5. Improving and generalizing flow-based generative models with minibatch optimal transport. Tong et al., 2023.
Pdf: /pdf/3a9078cec08620441ffc2e1a72b9bd93272c3c36.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
From News to Forecast: Integrating Event Analysis in LLM-Based Time Series Forecasting with Reflection | Accept (poster) | Summary: This paper introduces a time series forecasting framework, where LLM-based agents are employed to sift out relevant news to time series of interests and the news are utilized to enhance the accuracy of time series forecasting models.
Strengths: 1. The idea of filtering and utilizing news to enhance time series forecasting is innovative and interesting.
2. The whole framework is well designed. Each component is with reasonable motivation.
3. The experimental result is impressive, showing great superiority of the proposed framework on time series forecasting.
Weaknesses: 1. Time consumption of the framework should be discussed.
2. The components in the framework are inherited from existing works, making the model itself not as innovative as the idea of the paper.
3. Collecting-then-filtering mechanism of the proposed framework might have negative effects on the real-time performance of the model.
Technical Quality: 3
Clarity: 3
Questions for Authors: For real-world application, we may have to collect up-to-date news related to time series of our interests, do the authors have any suggestion of the amount of news?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are well discussed in the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments and for recognizing the innovation of our idea.
```Q1```: **''Time consumption''**
```A1```: Thanks for the question. The time cost of our method can be divided into training time and inference time. For a dataset with 3,500 time series, training requires approximately 10-15 A100 GPU hours, translating to 1-3 hours in a typical multi-GPU setup. Inference follows the normal language model’s reasoning time, with each instance taking from a few seconds to over ten seconds, depending on the output sequence length. With efficient information integration channels, obtaining real-time news can be very rapid, making real-time prediction feasible. We will include this information in the revised paper.
```Q2```: **''Model itself not as innovative as the idea of the paper''**
```A2```: Thanks for the question and for recognizing the innovation of our idea. We want to emphasize that our method is equally innovative. Our approach goes beyond model training to include **data construction methods, LLM agent construction methods, and a framework for integrating the agent with the model**. We innovatively combine these elements to utilize LLMs for time series forecasting, incorporating supplementary information and news. We introduced an LLM agent for reasoning and assistance within the LLM fine-tuning framework for time series prediction. We also proposed a Reflection mechanism to evaluate the LLM output of time series forecasting. These practices are novel in the literature and are crucial to our work. We believe these innovations will provide valuable insights and contribute to other projects leveraging LLMs to address domain-specific challenges.
```Q3```: **''Real-world application''**
```A3```: Thanks for your question. The amount of recent news required for practical applications varies based on the frequency, duration, domain, and geographical coverage of the specific time series forecasting tasks. Here are some general suggestions:
**Amount of News**: For national-level events, we typically use an average of 100,000 news articles covering one year for model training. For week-ahead forecasting, about 1,000 articles covering one week provide a robust dataset, while day-ahead forecasting requires around 100 articles per day to cover a broad range of events. Our news sources include the GDELT [30], Yahoo Finance, and News AU.
- Australia’s Half-Hourly Electricity Demand and Hourly Exchange Rate: We collected 380,560 raw articles/events over five years.
- California’s Hourly Traffic Volume: We gathered 14,543 raw articles/events over two years.
- Daily Bitcoin Price Prediction: We collected 19,392 raw articles/events from around the world over three years.
**Real-Time News Collection**: We recommend using automated tools and APIs for real-time news collection. For example, GDELT 2.0 [30] updates every 15 minutes, capturing global news events in 65 languages, providing insights into various themes, emotions, and activities. This approach ensures an up-to-date dataset without manual intervention. It’s crucial to ensure the reliability of news sources by selecting official media channels and maintaining a sufficient number of high-quality, relevant, and diverse news items to improve the accuracy and reliability of time series forecasting.
```Q4```: **''Collecting-then-filtering mechanism might have negative effects on the real-time performance''**
```A4```: Thank you for your valuable feedback. The collecting-then-filtering mechanism is essential for ensuring the accuracy and reliability of the model's predictions. By initially collecting a broad range of news data to train the model, we use this mechanism to remove irrelevant news, thereby enhancing the overall quality of the input data for time series forecasting.
While the training phase may require considerable time to analyze all raw news covering long periods in the datasets, the data collection process during the real-world testing phase can be automated, thanks to the availability of real-time news datasets, such as the GDELT Dataset [30] and Yahoo Finance API. On average, analyzing and filtering daily news requires 30 seconds to 1 minute, with each model's prediction instance taking from a few seconds to over ten seconds. This overall processing time is reasonable.
In practical applications, there is often a trade-off between speed and accuracy. Although the collecting-then-filtering mechanism may introduce a slight delay, it significantly improves the model's accuracy and reliability. We believe that this trade-off is justified, especially in scenarios where precise and dependable predictions are critical. Our framework is also extensible and can be adjusted to improve real-time capabilities, and we will continuously explore further improvements to enhance both performance and accuracy.
---
Rebuttal Comment 1.1:
Comment: The response has addressed my concerns well and I decide to raise my score to 6. | Summary: This paper proposes a new framework for time series forecasting. This framework fine-tunes a generative large language model (LLM) to improve forecasting accuracy by integrating news and supplementary information with numerical data and introducing iterative self-evaluation through LLM-based agents.
Strengths: Strength 1: This paper shows good originality in identifying a unique challenge in the time series forecasting task: the lack of effective modeling to address the distortions induced by additional random events as time goes by.
Strength 2: Through Figures 1, 2, 3, and 4, this paper clearly describes the proposed framework and the detailed procedures for completing the time series forecasting task.
Strength 3: This paper conducts comprehensive experiments, using time series datasets across multiple domains, to demonstrate the effectiveness of the proposed framework.
Weaknesses: Weakness 1: There could be some statistical analysis of random events compared with normal events, which represent a universal knowledge distribution as time goes by.
Weakness 2: Methods 1) and 2) seem a bit redundant and contradictory. The description of the three-phase prompting could be better organized.
Weakness 3: Ablation studies and sensitivity analyses are encouraged. Based on the description of the four scenarios from line 290 to line 297, the news and the supplementary information are always integrated throughout the experiments.
Weakness 4: There could be more detailed analysis for Table 2, especially regarding the roles of the evaluation agent.
Technical Quality: 3
Clarity: 2
Questions for Authors: Question 1: Apart from prediction accuracy, how to demonstrate the improved reliability? From line 144 to line 145, do sudden shifts embedded in random events from news context help the framework improve prediction reliability?
Question 2: From line 216 to line 217, is the understanding of time series influencers, or the sorting based on impact and duration, developed manually by people or automatedly by the LLM agent? What is the difference between such an understanding and a given reasoning logic mentioned from line 218 to line 219?
Question 3: From line 277 to line 278, apart from news articles, are there any ablation studies studying the accuracy differences between keeping and removing partial components of or the entire supplementary information?
Question 4: According to Table 2, why would introducing more rounds of reasoning selection decrease the forecasting accuracy?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have adequately addressed the limitations and the potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks to the reviewer for the insightful feedback and for highlighting the originality of our work.
```Q1```: **"Statistical analysis on random events compared with normal events"**
```A1```: Thanks for your question. To address it, let me first define random and normal events. Random events are unpredictable and unplanned, such as natural disasters, accidents, health crises, and criminal acts. Normal events are planned or anticipated based on patterns, like political activities, sports & cultural events, economic reports, and public holidays. We used the LLM agent to categorize and detect all random and normal events from our raw news dataset spanning January 1st to August 5th, 2019. The analysis revealed that, on average, 27.7% of all events are random. Figure 1 and Figure 2, presented in the rebuttal PDF, show the daily distribution of these random events.
```Q2```: **"The description of the three-phase prompting could be better organized"**
```A2```: Thank you for the feedback. We’ll clarify the distinction between Method 1, which covers the theory and mathematical modeling of how news affects time series predictions, and Method 2, which outlines practical steps for implementation, including dataset preparation, fine-tuning language models, and agent design. To improve clarity, we’ll present the three-phase prompting more clearly and concisely, moving from the conceptual idea to practical implementation, showing how these elements work together to enhance predictions. This reorganization will eliminate redundancy and improve the presentation.
```Q3```: **About ''prediction reliability''**
```A3```: Thank you for your insightful question. While sudden shifts in random events introduce volatility, our integration of agent reasoning helps the model effectively manage these changes. The agent does not simply translate individual events; it intelligently understands and processes news into concise summaries and rationales in standardized formats, focusing on the impact direction, impact duration, and impact scale of different events. This approach ensures consistent categorization of various random events as positive or negative, maintaining the model’s reliability. Additionally, we ensure prediction consistency through iterative reflections, demonstrating that the model produces stable results. Correlating prediction errors with missed news helps mitigate unexpected prediction deviations, further highlighting the model’s reliability.
```Q4```: **About line 216 -- line 219: "...difference between an understanding and a given reasoning logic..."**
```A4```: Thank you for your question. The LLM agent can automatically form the understanding of time series influencers, and providing a given reasoning logic in our models is optional. In the automated process, the agent forms its logic through prompts designed to help it determine how different types of news affect a specific domain. For example, we use open-ended questions to allow the agent to independently summarize and develop filtering logic. User knowledge can also be incorporated into these prompts as a given reasoning to help the agent generate more comprehensive logic. The agent then filters news based on the generated logic, either fully automatic or including user-provided input. We will revise the paper to clarify these points.
```Q5```: **"Removing partial components of or the entire supplementary information"**
```A5```: Thanks for the question. We add more experiments with different training datasets (1. removing partial supplementary information; 2. removing the entire supplementary information). The results are shown in Table 2 in the rebuttal file.
```Q6```: **"Why more rounds decrease the accuracy?"**
```A6```: Thanks for raising this important question. Optimizing the number of iteration rounds is crucial for achieving the best prediction accuracy. Our findings in Table 2 of the paper show that multiple rounds generally enhance logic and provide more comprehensive insights compared to a single round, without inherently reducing accuracy. Typically, two or three rounds are sufficient for significant improvements. However, beyond this, additional rounds may introduce more complexity and noise, potentially affecting accuracy. In our study, we performed multiple iterations primarily to explore the potential for further enhancements and to understand the impact of iterative refinement. While we demonstrated the effectiveness of multiple rounds, determining the optimal number remains an ongoing challenge. This involves refining the evaluation agent’s workflow to balance reasoning depth with the risk of noise accumulation. Our future work will focus on finding this balance to ensure optimal predictive performance.
```Q7```: **"More detailed analysis regarding the evaluation agent"**
```A7```: Thanks for the question. The evaluation agent enhances news filtering by analyzing and relating prediction errors to potentially missed news during training. It examines ground truth data, prediction errors, selected news, all raw news, and the forecasting task type to identify overlooked news. For example, if there is a significant discrepancy between the predictions and the actual values during a certain time window, it is necessary to closely examine the recent news from that time frame and look for any relevant events that may have been missed. This refines the model's understanding of how missing news affects predictions. Insights from this analysis are then used to generate updated logic for subsequent news selection rounds, which is consolidated into a final version after processing all iterations for validation sets. Through this iterative evaluation, the model continuously improves its understanding of relevance. More details are in Sections 3.2 and 3.3 and Appendices A.6 and A.7.
---
Rebuttal Comment 1.1:
Comment: Thank you for providing more detailed explanations on prediction reliability and conducting more experiments as ablation studies. I acknowledge that I have read the rebuttal and have no further questions. | Summary: This paper introduces a novel approach to enhance time series forecasting using Large Language Models (LLMs) and Generative Agents. By integrating news content with time series data, the method aims to align social events with fluctuations in time series to provide enriched insights. The approach involves filtering irrelevant news and employing human-like reasoning to evaluate predictions, continuously refining the logic of news selection and the robustness of the model's output. The results show significant improvements in forecasting accuracy by effectively harnessing unstructured news data.
Strengths: - The proposed framework integrates unstructured news data with numerical time series inputs, enhancing the contextual understanding and responsiveness to real-world events.
- The use of LLM-based agents for dynamic news selection and analysis is interesting. The agents effectively filter and analyze news content, continuously improving their logic based on forecasting results.
- The incorporation of news data significantly improves prediction accuracy across various domains such as finance, energy, traffic, and bitcoin, demonstrating the model's ability to navigate complex real-world dynamics.
Weaknesses: - The performance heavily relies on the relevance of the news data selected. I'm worried that inaccurate or irrelevant news can degrade the forecasting accuracy.
- The method may not perform as well in domains requiring highly localized or specific news data that is not available in general news sources.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How does the model ensure the relevance of the selected news items? Could the authors provide more details on the filtering criteria and logic used by the LLM agents?
- How does the model mitigate the impact of irrelevant or misleading news on the forecasting results? Are there any mechanisms in place to detect and exclude such news?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed some limitations, such as the dependency on the relevance and quality of news data and the complexity of integrating textual and numerical data.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments and for recognizing the potential of our work.
```Q1```: **''Inaccurate or irrelevant news can degrade the accuracy.''**
```A1```: Thanks for the question. Inaccurate or irrelevant news does reduce prediction accuracy, as demonstrated in our paper. Reviewers can check Table 1 of our paper, where _the Case (Textual Prompt Non-Filtered News section)_ contains a lot of irrelevant news, significantly reducing prediction accuracy and highlighting the importance of introducing the LLM Agent. Ensuring the relevance of news items is crucial for the effectiveness of our model. Our paper proposes LLM agent workflows (Section 3.2) to select and filter the most relevant news for predictions, thereby improving overall accuracy.
```Q2```: **"How does the model mitigate the impact of irrelevant or misleading news"**
```A2```: Thank you for your question. There are three ways to mitigate the impact of irrelevant or misleading news:
1. **News pre-process**: Before the agents analyze the news, we enhance relevance by ensuring source credibility and the temporal and spatial consistency between the selected news and the time series to be predicted. During the raw news collection process, we prioritize reliable and authoritative sources over less reliable ones. Additionally, relevant news is filtered from the open-source news dataset, considering spatial and temporal connections. For example, for traffic domain predictions, we primarily gather local news from California. We align news data with time series data by matching time frequencies, horizons, and geographical areas. This process selects the first round of raw news specific to each domain, ensuring the general relevance of the news to the time series being forecasted.
2. **LLM Agents**: We use LLM-based agents to filter and analyze news content, refining raw news into relevant events. Detailed methods are in Section 3.2, with specific prompts in Appendix A.6. Our agents autonomously filter and categorize news based on its impact (positive/negative) and duration (short-term/long-term). The selected news is formatted into structured JSON, detailing the affected area, time, and rationale. An evaluation agent continuously assesses and improves the filtering process by analyzing prediction errors, refining the model’s news selection, and excluding irrelevant or misleading news.
3. **News Verification**: Our extensible agent workflow can integrate fake news detection techniques to enhance reliability and accuracy. For instance, the agent can use fake news detection algorithms to assess the consistency and credibility of news content, effectively filtering out misleading information.
```Q3```: **"Filtering criteria and logic"**
```A3```: Thanks for your question. Our filtering logic for selecting news involves a multi-step reasoning process. First, our agent uses several Chain of Thought (CoT) prompts to understand the details of forecasting tasks and automatically identify time series influencers within the specific domain. This forms the initial filtering logic, which instructs the agent to sort news by impact (positive/negative) and duration (short/long-term), considering factors such as economic, policy, seasonal, and technological elements. The reasoning agent then filters and categorizes news based on this logic, focusing on its relevance to the time series and classifying the impact as long-term, short-term, or real-time. Additionally, the prediction time span is considered for further filtering of news. For instance, if day-ahead or real-time predictions are needed, long-term influencers will be filtered out.
To refine this logic, we also deploy an evaluation agent that assesses prediction accuracy and uses logical reasoning to identify and correct inaccuracies from missing or irrelevant news. The evaluation agent works in three phases:
1. Input forecasting task type, time horizon, and background information to generate evaluation steps.
2. Analyze ground truth data, prediction errors, and selected news to identify overlooked news.
3. Update the logic based on its findings, refining it for future news selection.
The refined filtering logic ensures the news data selected for time series forecasting is relevant and reliable, enhancing the model's predictive accuracy. For more details on the filtering logic examples and prompts, please refer to Section 3.2, Appendix A.6 and A.9.
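The three-phase loop above can be sketched in Python. This is a hypothetical illustration, not the paper's implementation: `llm` stands in for any chat-completion callable, and all function and variable names here are invented for the sketch.

```python
# Hypothetical sketch of the three-phase evaluation-agent loop described above.
# `llm` is a stand-in for any prompt-in, text-out callable; none of these
# names come from the paper.
def refine_filtering_logic(llm, task_info, ground_truth, predictions,
                           selected_news, raw_news, logic, n_rounds=2):
    for _ in range(n_rounds):
        # Phase 1: derive evaluation steps from the forecasting task description
        steps = llm(f"Given the task {task_info}, list evaluation steps.")
        # Phase 2: relate prediction errors to potentially overlooked news
        errors = [abs(p - g) for p, g in zip(predictions, ground_truth)]
        missed = llm(f"Steps: {steps}\nErrors: {errors}\n"
                     f"Selected: {selected_news}\nRaw: {raw_news}\n"
                     "Which relevant news items were overlooked?")
        # Phase 3: update the filtering logic with the findings
        logic = llm(f"Current logic: {logic}\nFindings: {missed}\n"
                    "Rewrite the filtering logic.")
    return logic
```

After the final round, the returned logic would be the consolidated version used for subsequent news selection.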
```Q4```: **''Not perform as well in domains requiring highly localized or specific news data''**
```A4```: Thank you for your question. Relying solely on open-source datasets may not always be optimal for domains requiring highly localized or specific, unpublished news data. However, our paper demonstrates that including more relevant news can significantly enhance prediction accuracy, potentially improving traditional methods. While localized news may be hard to obtain, public events can still impact time series data related to human activities. Our results show that as the quality and relevance of news data increase, so does prediction accuracy. The proposed framework can also be adapted to incorporate localized and specific news sources, enhancing performance in specialized domains. Our paper demonstrates this potential of incorporating textual news into time series forecasting.
---
Rebuttal Comment 1.1:
Comment: While I appreciate the thorough responses provided in your rebuttal, I have no further questions at this time and will maintain my positive rating. | Summary: The paper proposes a novel method to integrate event news as external information into the time series forecasting system.
Strengths: 1. An important problem is studied in this paper.
2. An innovative idea of an automatic relevant news extraction mechanism is proposed.
3. Overall, the presentation is clear and good.
Weaknesses: 1. Some questions regarding the experiments need clarification.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Though the idea that relevant event news may positively benefit time series forecasting is intuitively correct, this is not well demonstrated in the paper regarding the datasets used. Specifically, for many datasets tested, it’s hard to imagine what kind of news could dramatically affect them. Of course, it’s impractical to manually evaluate the news considered relevant. Besides the examples already provided, the authors could also test the average number of relevant news items per time window. This could give a rough idea of the distribution of relevant news, which can be used to approximate if the LLM works as expected.
2. Conflicting news could exist within a time window. For example, Elon Musk may praise or criticize a cryptocurrency within a short time frame. It seems that the model doesn’t consider this situation. Can the authors elaborate more on this issue?
3. The iterative analysis results show that the performance after each iteration is somewhat random. Though, in general, the final results are better than the initial iteration, it’s actually hard to predict if one iteration will be better or worse between two adjacent iterations. This raises a concern since the huge computational cost may seem unnecessary.
4. Another concern is that only LLaMA 2’s behavior is tested. Therefore, the sample set of LLMs tested is quite small, making it difficult to predict the behavior of other LLMs using the proposed method.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the reviewer's insightful comments and for recognizing the novelty in our work.
```Q1```: **"Demonstrate the datasets used"**
```A1```: Thanks for your question. In Appendix A.4, we presented the source details of datasets, and they may answer part of the question. Our news datasets are collected from the GDELT Dataset [30], Yahoo Finance, and News AU. GDELT monitors a wide range of real-time worldwide web news. Specifically, (1) to predict electricity demand and exchange rates in Australia, we gathered 380,560 raw articles covering diverse national and international topics over five years; (2) to predict California's traffic volume, we obtained 14,543 raw articles covering two years in California; (3) For Bitcoin price predictions, we collected 19,392 worldwide articles covering three years. This collection of raw news data provides a foundation for analyzing the impact of various events.
```Q2```: **"What kind of news could dramatically affect"**
```A2```: In our framework, the agent analyzes and selects the most relevant news. According to our results, the selected news mainly consists of five categories: economic or political events, health crises, natural disasters, technology development, and social sentiment. For example, fiscal policy changes impact exchange rates, health crises like COVID-19 influence traffic volume and electricity load, AI breakthroughs can affect Bitcoin prices, and political events like elections or new legislation impact exchange rates and electricity demand.
Additionally, the reflection agent, which analyzes prediction errors in the training dataset and missed news, helps identify unexpected and counterintuitive events buried in the raw news. These might include local incidents causing significant impacts or various events with indirect effects on time series. For example, news about Saudi Arabia’s net zero carbon emissions goal could impact global oil prices, indirectly affecting Australia’s economy and exchange rate. Although this news is not directly related to economic policy or typical keywords monitored, it indirectly reduces oil production for carbon mitigation, which can influence oil prices and exchange rates. Some analysis and examples can be found in Appendix A.7 and A.8. The reviewer can refer to these sections for more details.
```Q3```: **"Distribution of relevant news"**
```A3```: Thanks for the question. In the rebuttal file, we provide more statistical distribution details of selected news and their keywords in Table 1, Figure 1, and Figure 2. In our settings (mostly day-ahead prediction), the average number of relevant news influencing a specific domain generally includes 2 to 8 critical events per day. Several examples of selected news contents are provided in Appendix A.8.
```Q4```: **"Conflicting news could exist within a time window"**
```A4```: Thanks for raising this critical point. We respond to this question from the following aspects:
1. Our model does not simply understand individual events in isolation; instead, it considers the aggregation of various events within a specific timeframe. The model can synthesize conflicting information to better understand the overall trend. Different events cause immediate reactions in their respective timeframes. For example, different statements by Elon Musk on Bitcoin will have different effects on its price over time. Our model considers these factors (which are part of our input).
2. Our reflection agent identifies and adjusts conflicting news by analyzing prediction errors and all the raw news. The agent refines the model’s understanding of how conflicting news affects predictions, ensuring better adaptation to future events and improved accuracy.
3. When contradictory news appears, we may also need to judge its source and information authenticity. Our agent can flexibly integrate analysis and reasoning steps to determine the authenticity of the news. Reliable sources of news include official media and newspapers.
4. In the future, we plan to assign different weights to news based on source credibility, relevance, and context. For instance, statements from highly influential individuals will be weighted appropriately. If such conflicts appear, the reflection agent will adjust the weights accordingly.
```Q5```: **"It's hard to predict if one iteration will be better"**
```A5```: Thanks for the question. Our findings suggest that, generally, two iterations are sufficient to see significant improvements. Multiple iterations consistently provide better results than only one iteration due to the reflection mechanisms. In our study, we performed multiple iterations primarily to explore the potential for further enhancements and to understand the impact of iterative refinement. This process is crucial during the model training for refining the filtering logic. However, it is not required during the testing phase. For practical applications, the maximum number of iterations should be adjusted based on the prediction timeframe and the types of news being analyzed. For instance, if more categories of news or new trends emerge globally within the timeframe, additional iterations (three or four times) may be needed. Our paper demonstrates one possible approach for better prediction performance. In practical applications or engineering practices, the optimal iteration number can be determined by balancing performance gains against computational costs.
```Q6```: **"Difficult to predict the behavior of other LLMs using the proposed method"**
```A6```: Thanks for the question. We add more experiments with other LLMs, such as Mistral 7B [R1] and Gemma 2B [R2]. The results are shown in Table 3 of the rebuttal file, demonstrating the potential of these LLMs.
[R1] Jiang, Albert Q., et al. "Mistral 7B." arXiv preprint arXiv:2310.06825 (2023).
[R2] Team, Gemma, et al. "Gemma: Open models based on gemini research and technology." arXiv preprint arXiv:2403.08295 (2024). | Rebuttal 1:
Rebuttal: We thank all reviewers and area chairs for their valuable comments. We are pleased that all reviewers have responded positively to our paper. They acknowledge that our work addresses an important problem (Reviewers QJWf, UwQn), introduces an innovative idea (Reviewers QJWf, UwQn), and demonstrates good results (Reviewer UwQn, pDM7). Additionally, they appreciate the clear presentation (Reviewer QJWf) and find the work both interesting and meaningful (Reviewers MLnY, UwQn, pDM7).
Reviewer QJWf (denoted as Reviewer 1), Reviewer MLnY (denoted as Reviewer 2), Reviewer UwQn (denoted as Reviewer 3), and Reviewer pDM7 (denoted as Reviewer 4) all give insightful comments. To answer their questions, we provide corresponding clarifications and analysis. Besides, we provide more experimental results. We summarize this rebuttal as follows:
1. According to Reviewer 1 and Reviewer 3’s questions, we add a statistical analysis of both selected news distribution and random event distribution during a specific time window. The results are illustrated in Table 1, Figure 1, and Figure 2 in the rebuttal PDF file.
2. According to Reviewer 1’s concerns, we add new experiments which use other language models with our proposed method. The results are in Table 3 of the rebuttal PDF file.
3. According to Reviewer 2 and Reviewer 3’s questions, we add more descriptions of our proposed method. We give a more detailed analysis of agent workflow for filtering irrelevant news.
4. According to Reviewer 3’s concerns, we add new experiments about the particular case of removing all supplementary information and partial supplementary information. The results are in Table 2 of the rebuttal PDF file.
5. Inspired by Reviewer 4’s comments, we discuss the time consumption and suggestions for the real-world application of our work in more detail.
In the final version, we will improve other minor points of Reviewer 1, Reviewer 2, Reviewer 3, and Reviewer 4. Thank you all for the valuable suggestions.
Best,
Authors
Pdf: /pdf/179b7898f2779f04be7e9d848c98a1b2fee54136.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
EigenVI: score-based variational inference with orthogonal function expansions | Accept (spotlight) | Summary: This work proposes a new variational family based on orthogonal function expansions from the quantum mechanics literature. Furthermore, the paper proposes to find the basis coefficients by solving a score matching problem, where the problem cleverly reduces to an eigenvalue problem. The overall approach, called EigenVI, is compared to recent score-matching-based Gaussian VI algorithms and BBVI.
Strengths: * The idea of using orthogonal function expansions is new, although it reminds me of some other non-parametric/almost parametric variational families.
* The fact that the score-matching problems reduce to an eigenvalue problem is beautiful in its own right.
* The writing of the article is the best in my batch.
Weaknesses: * While the authors are upfront about the major limitation of the work, scaling with respect to dimensionality, this, unfortunately, strongly limits the practicality of the work. To me, for a VI algorithm to be useful, it should at least be scalable with respect to $d$, the dimensionality of the posterior, or $n$, the number of datapoints. Unfortunately, the proposed approach seems to achieve neither. In particular, the eigenvalue problem formulation is very cute but it isn't clear if subsampling results in an unbiased objective to be scalable with respect to $n$. (Please correct me if this is not the case!)
* The evaluation is not adequate for a work proposing a new variational family. The two baseline variational families considered here are the full-rank Gaussian variational family and normalizing flows. There is a rather long history of developing less-parametric variational families in VI. For instance, boosting VI [1], copulas [2,3], splines [4], semi-implicit families [5], mixtures [6], and many more. These works should be the main focus of the evaluation, not Gaussian VI.
* Unless I am mistaken, the method suffers in the case the posterior is not *a priori* standardized. It doesn't seem like such standardization would be possible unless some preliminary inference has already been performed. This would defeat the point of using a VI algorithm. If one has already obtained preconditioners, why not just provide them to an MCMC algorithm instead?
* The paper doesn't discuss how to solve the eigenvalue problem. Given that we only need to solve a specific eigenvalue problem, there might be clever ways to solve it in a scalable way, maybe by using randomized linear algebra methods, which could technically be interesting.
Overall, I hope to see at least some evidence that the proposed methodology will eventually lead to a practical algorithm in the future. For instance, whether subsampling results in an unbiased/consistent algorithm. If this is the case, I would be happy to re-assess the paper.
## Additional Comments
* The Fisher divergence and the score-matching problem were introduced to the machine-learning community through the work of Hyvarinen [7], which isn't cited.
* Related works: A discussion on the literature of non-parametric/"less-parametric" VI algorithms seems necessary.
* Line 30 "One thread of BBVI research focuses on Gaussian variational approximations. (ADVI, BBVI)": I wouldn't quite agree with this statement since ADVI is applicable to any family where the reparameterization gradient is applicable. In fact, when normalizing flows are used with ELBO maximization, this is pretty much the same as ADVI. Furthermore, the original BBVI paper [8] requires conjugacy, meaning that one cannot simply use Gaussian variational families at all in a lot of cases.
* A lot of the references are citing the arXiv version. Berg et al. 2018 was published at UAI; Dinh et al. 2016 was published at ICLR 2017; Giordano et al. 2023 was published in JMLR; and etc. Please thoroughly review the reference section.
* Line 39, Section 2.1: Citing something when mentioning orthogonal functions would be better so that readers interested in the topic could have a look into more in-depth material.
## References
1. Campbell, T., & Li, X. (2019). Universal boosting variational inference. NeurIPS
2. Smith, M. S., & Loaiza-Maya, R. (2023). Implicit copula variational inference. Journal of Computational and Graphical Statistics, 32(3), 769-781.
3. Han, S., Liao, X., Dunson, D., & Carin, L. (2016, May). Variational Gaussian copula inference. In AISTATS.
4. Shao, Y., Shan, N. Y., & Feng, T. (2024, April). Nonparametric Automatic Differentiation Variational Inference with Spline Approximation. In AISTATS.
5. Yu, L., & Zhang, C. (2023). Semi-Implicit Variational Inference via Score Matching. arXiv preprint arXiv:2308.10014.
6. Lambert, M., Chewi, S., Bach, F., Bonnabel, S., & Rigollet, P. (2022). Variational inference via Wasserstein gradient flows. NeurIPS.
7. Hyvärinen, A., & Dayan, P. (2005). Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6(4).
8. Ranganath, R., Gerrish, S., & Blei, D. (2014, April). Black box variational inference. In AISTATS.
Technical Quality: 3
Clarity: 4
Questions for Authors: * In Lines 103 to 105, it is stated that one can circumvent distributions with constrained supports. Given bijectors (ADVI), constrained supports are rarely a problem for VI algorithms these days. But dealing with *discrete support* is still a major challenge. Would there be a way to use the proposed method for problems with discrete support?
* Eq (10): Why is importance sampling used here when one can sample directly from $q$?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: The paper would benefit from a dedicated limitation section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your feedback on the paper. We discuss several of the points below.
> 1. It isn't clear if subsampling results in an unbiased objective
Any unbiased estimate of the score yields an unbiased estimate of our objective (up to a constant that does not depend on $q$), and so EigenVI can scale to large $n$ with subsampling.
Let $p(z,x)$ be the target joint distribution between data $x$ and latent variables $z$.
Let $g(z)$ be an unbiased estimate of the score $\nabla \log p(z,x)$, i.e., $\mathbb{E}[g(z) \mid z] = \nabla \log p(z,x)$.
Expanding the Fisher divergence and conditioning on $z$,
$\mathbb{E}[\|\nabla \log q(z) - g(z)\|^2 \mid z] = \|\nabla \log q(z)\|^2 - 2 \langle \nabla \log q(z), \mathbb{E}[g(z) \mid z] \rangle + \mathbb{E}[\|g(z)\|^2 \mid z]$.
Using the unbiased property, this becomes
$\|\nabla \log q(z)\|^2 - 2 \langle \nabla \log q(z), \nabla \log p(z,x) \rangle + \mathbb{E}[\|g(z)\|^2 \mid z]$.
So we get
$\|\nabla \log q(z) - \nabla \log p(z,x)\|^2 + \mathbb{E}[\|g(z)\|^2 \mid z] - \|\nabla \log p(z,x)\|^2$.
Thus, $\|\nabla \log q(z) - g(z)\|^2$ is an unbiased estimate of $\|\nabla \log q(z) - \nabla \log p(z,x)\|^2$ up to terms that do not depend on $q$. Because of this, we can use $g(z)$ as a drop-in replacement for the score within the Fisher divergence, since
$\mathbb{E}\_{q(z)} [\|\nabla \log q(z) - g(z)\|^2] = \mathbb{E}\_{q(z)}[\|\nabla \log q(z) - \nabla \log p(z,x)\|^2] + C,$
so both objectives share the same minimizer over $q \in \mathcal{Q}$.
To build this unbiased estimator of the score, we can use data subsampling as follows: suppose the data are sampled i.i.d., so that the log joint decomposes as a sum over data points and hence
$\nabla \log p(z,x) = \sum_{i=1}^n \nabla \log p(z,x_i)$.
We can use this to get unbiased estimates of the full score by subsampling the data. For instance, drawing an index $i$ uniformly at random from $\{1, \dots, n\}$, we have
$\mathbb{E}[n \nabla \log p(z, x_i)] = \frac{1}{n} \sum_{i=1}^n n \nabla \log p(z, x_i) = \nabla \log p(z, x)$.
We could also use any other form of mini-batching over the data to build an unbiased estimate of the score.
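The unbiasedness argument above can be checked numerically. The following sketch uses a toy Gaussian model invented for illustration (it is not from the paper): rescaling a minibatch likelihood gradient by $n/b$ gives an unbiased estimate of the full-data score.

```python
import numpy as np

# Toy model (illustrative only): z has a N(0,1) prior and x_i | z ~ N(z, 1),
# so the full score is -z + sum_i (x_i - z).
rng = np.random.default_rng(0)
n = 50
x = rng.normal(loc=1.0, scale=1.0, size=n)

def full_score(z):
    # d/dz [log p(z) + sum_i log p(x_i | z)] for the Gaussian toy model
    return -z + np.sum(x - z)

def minibatch_score(z, batch_size):
    # Subsample a batch and rescale the likelihood term by n / batch_size
    idx = rng.choice(n, size=batch_size, replace=False)
    return -z + (n / batch_size) * np.sum(x[idx] - z)

z = 0.3
estimates = [minibatch_score(z, batch_size=5) for _ in range(200_000)]
print(full_score(z), np.mean(estimates))  # the two values should be close
```

Averaging many minibatch estimates recovers the full-data score, which is the property the rebuttal relies on.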
> 2. Scaling with dimension
There are many directions for future work to help scale to higher dimensions. For instance, in many situations, one might expect non-Gaussianity on a low-dimensional subspace, and that the remaining dimensions can be suitably modeled with a Gaussian. This family is a special case of the orthonormal Hermite family considered in the paper and so is still an eigenvalue problem. Another direction is a low-rank approximation of the tensor – this family could be optimized with gradient-based methods. Finally, one may consider refining an existing basis to develop better basis functions.
> 3. Why not use VI fit as a preconditioner for MCMC?
As noted above, MCMC struggles with targets with varying curvature (using a preconditioner does not help with multiscale distributions where curvature varies over different regions of the distribution, e.g., Neal's funnel). In the absence of pathological geometries, HMC can work very well in high dimensions, and so dimension itself doesn't strike us as the relevant criterion to choose between VI or MCMC. Using VI with MCMC is an active research area, and this motivates developing new VI methods to benefit from their synergies.
> 4. Comparing to non-parametric VI families
We thank the referee for pointing to additional literature on non-parametric BBVI families. We will refer to these in the related works in the revised manuscript. However given the breadth of this literature, individual comparison with each of these is beyond the scope of this work.
There are a few common elements to all these approaches, e.g., they all use gradient based optimization and hence are similarly sensitive as the baselines that we have considered. In addition, there are some individual challenges, e.g., mixtures are prone to mode collapse, and spline VI assumes a factorized posterior distribution.
Given these considerations, we consider our baselines of full-rank Gaussian VI and normalizing flows (NFs) to be adequate. Together, these are the most widely used variational families for BBVI, they occupy the two extremes on the spectrum of parametric vs. non-parametric families, and NFs in particular, being universal approximators, should typically encompass the other variational families suggested by the reviewer.
Finally, note that we do not suggest that EigenVI results in a better final approximation than NFs; instead, our focus is introducing a new variational family and approach to inference which does not rely on the typical iterative gradient-based optimization and associated hyperparameter tuning (which is common to all the aforementioned methods).
> 5. The paper doesn't discuss how to solve the eigenvalue problem.
Without further information on the scores of p, the resulting matrix in our eigenvalue problem is dense and unstructured, thus there is little we can leverage to design a specialized algorithm. In our experiments, we used off-the-shelf eigenvalue solvers. For typical problems, this is not a bottleneck (see the general response on the discussion for computational cost). We will add a discussion of this.
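For readers unfamiliar with the reduction, the following minimal sketch illustrates what the off-the-shelf solvers do here. The matrix `M` is a random stand-in, not the paper's actual score-matching matrix: minimizing a quadratic form over unit vectors amounts to taking the eigenvector with the smallest eigenvalue.

```python
import numpy as np

# Illustrative stand-in: a dense, unstructured symmetric PSD matrix, as
# arises when nothing about the scores of p can be exploited.
rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
M = A @ A.T

# Off-the-shelf dense symmetric eigensolver; eigenvalues come back ascending.
eigvals, eigvecs = np.linalg.eigh(M)
alpha = eigvecs[:, 0]  # unit vector minimizing a^T M a over ||a|| = 1

# Sanity check: no other unit vector achieves a smaller quadratic form.
v = rng.normal(size=6)
v /= np.linalg.norm(v)
assert alpha @ M @ alpha <= v @ M @ v + 1e-10
```

For larger or sparser problems, iterative solvers that target only the smallest eigenpair (e.g., Lanczos-type methods) could replace the dense factorization, which is one direction the randomized linear algebra suggestion points toward.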
> 6. References and related work
We will expand our discussion of related nonparametric families, and we will add references to additional literature on orthogonal families and score matching.
> 7. Given bijectors, constrained supports are rarely a problem. Would there be a way to use the proposed method for problems with discrete support?
Transforming supports to be real-valued can sometimes make the problem more challenging (and adds an additional level of tuning to the problem). If we know that the variables are bounded, it may be more natural to model them in the original space. Regarding discrete support, this is an interesting direction for future work. With an appropriate basis set and replacing the scores with log ratios, one would still get an eigenvalue problem.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for the detailed response.
For the comment on unbiasedness, I should have phrased my original comment a little better. It is well known the score-matching objective itself is valid with subsampling. I was more concerned about the fact that it is unclear how to incorporate subsampling within the proposed algorithm in a consistent way.
Nevertheless, I am convinced that the proposed methodology is technically interesting and will raise my score. However, I am still concerned that there isn't enough evidence that it would be practically applicable to larger-scale real-world problems. Thus, the paper feels somewhat less complete in its current state. For instance:
> There are many directions for future work to help scale to higher dimensions. For instance, in many situations, one might expect non-Gaussianity on a low-dimensional subspace, and that the remaining dimensions can be suitably modeled with a Gaussian. This family is a special case of the orthonormal Hermite family considered in the paper and so is still an eigenvalue problem.
Demonstrating this would have been much more convincing and made the work more complete.
---
Reply to Comment 1.1.1:
Comment: Thank you for the follow up – we now understand your question.
Developing algorithms that support subsampling is an important line of future work, and we believe EigenVI develops some foundations that can be adapted to support it. This may be achieved, for instance, with an iterative approach that updates the variational parameters $\alpha_t$ using the score matching objective, where each iteration solves for the minimum eigenvalue of a particular matrix; each iteration could also take in scores from a subsampled batch of data points. However, a proper exploration of such a method (even before incorporating subsampling) and its properties warrants its own dedicated paper, and we are planning to investigate these directions in future work. In the current work, we have maintained focus on introducing a novel variational family and a non-iterative VI algorithm that we believe has its own merits.
We think the question of subsampling will be of broad interest to other readers, and we will add a discussion to this end in the discussion & future work section of the revised paper. We thank the reviewer for bringing up this interesting point. | Summary: The authors propose EigenVI, a new method for black-box variational inference (BBVI). The method uses the Fisher divergence (rather than the more typical reverse KL divergence) and a variational family based on orthogonal function expansions (which I've never seen used before). Advantages of the approach include the ability to model complex distributions (multimodal, skewed, heavy-tailed, etc.) while also giving closed-form calculations for lower-order moments as well as easy posterior sampling.
Strengths: The paper is exceptionally strong in originality, clarity, and significance.
For originality, I have not previously seen a paper in machine learning using orthogonal function expansions to define a variational family. The authors state that the approach comes from quantum mechanics, an atypical source domain for a machine learning paper. The construction is interesting, and normalizes elegantly. Given the importance of orthogonal bases throughout mathematics, it is likely that this approach to variational inference will connect with many fields of academic inquiry. Thus, this approach has the potential to greatly enrich research in variational inference.
For clarity, the exposition is exceptionally clear nearly everywhere (although Sec 2.3 could use a couple iterations of rewriting to streamline the narrative). In particular, orthogonal function expansions and score-based divergences were introduced in an exceedingly clear (and interesting) manner. The high-level advantages of their method vs. others in the literature were also exceedingly clearly presented.
For significance, the approach is broadly useful (at least for low-dimensional posteriors); this is inherited from the fact that BBVI is constructed so as to be applicable to a wide set of applications without requiring any pen-and-paper calculations. Moreover, by avoiding gradient-based inference in favor of solving an eigenvalue problem, their method would additionally remove the common headache of having to specify or learn a learning rate.
Weaknesses: Some weaknesses:
(1) As mentioned by the authors, the number of basis functions scales exponentially in the dimensionality of the posterior, and therefore so does inference time. Given the restriction to low-dimensional posteriors, why use VI at all, rather than MCMC sampling? It would have been nice to see this addressed in the exposition. It would have also been nice to see an experimental comparison showing some advantage over using MCMC samplers. For instance, do the authors expect a computational advantage of their approach over MCMC methods? If so, it's hard to trust this since computation times are not given.
(2) Following up on the preceding point, it is completely unclear how long it takes to run this method, since no computation times are given relative to competitors. On pp.9, the authors say, "In these examples, we standardize the target using either GSM or BaM before applying EigenVI; for this reason, we do not compare the costs of these methods to EigenVI." I found this sentence quite puzzling. Presumably the authors intend to argue that since EigenVI must take strictly longer than GSM or BaM, there's no point in reporting the computation times. If so, I don't see how the conclusion follows. A practitioner would want to know how MUCH additional compute is required to obtain any advantages. Moreover, how does the overall (combined) computation time compare to other approaches, such as VI with a normalizing flow family or standard ADVI (Kucukelbir et al. 2017)?
(3) The method seems fairly complex to run, since a competitor BBVI method must be run simply as a preprocessing step to standardize the distribution. (They use either Gaussian Score Matching or Batch-and-Match VI, per pp. 8.)
(4) A major selling point emphasized by the authors is that unlike gradient-based approaches, EigenVI does not require tunable hyper-parameters, such as learning rates or termination criteria. However, EigenVI is clearly sensitive to (a) the number of basis functions and (b) the number of importance samples. Indeed, Figure 4 (c) demonstrates how inference can go catastrophically bad when the number of importance samples is not sufficiently large relative to the number of basis functions. This problem is reminiscent of the problem of a poorly chosen learning rate in standard ADVI (Kucukelbir et al., 2017). Can the authors provide any guidance on how to choose a good number of basis functions for a given problem, and then, given that, how to choose a good number of importance samples? A theoretical result would be nice, although admittedly it's unreasonable to expect that during the rebuttal period. Do they at least have any practical insight to share?
(5) I feel that the authors undersell their own method by neglecting to reinforce the primary selling points in the experiments section. For example, the authors mention that their approach allows sampling, closed-form moment calculations, and modeling of distributions with mixed support. However, none of these things appear explicitly in the experimental section. Similarly, the authors emphasize that the normalizing flow competitor is "often sensitive to the hyperparameters of the flow architecture, optimization algorithm, and parameters of the base distribution" (pp.6). Can the authors demonstrate that the posterior approximation deteriorates along these dimensions more easily than their own approach deteriorates as a function of number of importance samples and basis functions?
Technical Quality: 3
Clarity: 4
Questions for Authors: Clarifications:
(1) pp.6: What is the proposed advantage of EigenVI over Zhang et al.'s energy-based approach with a closed-form solution?
(2) pp.7, Figure 4: If the normalized Hermite polynomials contain Gaussian distributions as a "base" (i.e. when the number of basis functions is K=1), then why does EigenVI still beat ADVI when K=1? I expected them to perform identically in this setting.
(3) pp.8: The authors state that they "standardize the target using either GSM or BaM before applying EigenVI". Any reason that they pick those methods relative to others? Would they expect any arbitrary BBVI method (e.g. ADVI of Kucukelbir et al. 2017) to work just as well for this purpose?
(4) pp.9: Should "this family is limited to target functions that are close to Gaussian" read "... that are close to mixtures of Gaussians"? For example, the cross-distribution (Figure 3, row 3) does not look close to Gaussian to me, although it does look close to a mixture of Gaussians.
Typos:
(1) Abstract: "novel class variational approximations" should read "novel class of variational approximations".
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: Yes, authors describe limitations well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your feedback; in the revised manuscript, we will add or expand our discussion to address several of the points you bring up, as detailed below and in the main rebuttal comment.
> 1. A practitioner would want to know how MUCH additional compute is required to obtain any advantages.
While the actual overhead in terms of computational time depends on the efficiency of the implementation, we provide qualitative reasons why this is competitive with other approaches:
Standardization can use BBVI approaches (like ADVI, BaM or GSM), and hence the cost of standardization is the same as the cost of doing VI inference with these methods. Following this, we need to evaluate the scores at the importance samples; this step can be fully parallelized and hence, depending on the implementation, adds minimal overhead in terms of computational time. The final step is to solve the eigenvalue problem, which is independent of the dimension and is fast in the regimes we consider (see the general comment and attached pdf for details).
> 2. EigenVI is clearly sensitive to (a) the number of basis functions and (b) the number of importance samples. [...] This problem is reminiscent of the problem of a poorly chosen learning rate in standard ADVI
We will modify the text to clarify that we do have two hyper-parameters: the number of basis functions and the number of importance samples. But we do find there is an important difference between our two hyperparameters and the learning rate in ADVI and other gradient-based methods. As we use more basis functions and more samples, the resulting fitted $q$ is a better approximation. So, we can increase the number of basis functions and importance samples until a budget is reached, or until the resulting q is a good enough fit. Tuning the learning rate is much more sensitive because it cannot be too large or too small. If it is too large, ADVI may diverge. If it is too small, it may take too long to converge.
Another fundamental difference in setting the # of bases, as compared to the learning rate or batch size of gradient-based optimization, is that once we have evaluated the score of the target distribution at the samples, these same samples can be reused for solving the eigenvalue problem with any choice of the number of basis functions, as these tasks are independent. By contrast, in iterative BBVI, the optimization problem needs to be solved from scratch for every choice of hyperparameters, and the samples from different runs cannot be mixed together. Furthermore, solving the eigenvalue problem is fast, and scores can be computed in parallel.
Finally, for choosing the proposal distribution in the case of the Hermite family, we have reasonable defaults because we standardize the target, and hence can use, e.g., an isotropic Gaussian with a larger variance to obtain stable importance weights.
> 3. Can the authors provide any guidance on how to choose a good number of basis functions for a given problem, and then, given that, how to choose a good number of importance samples? [...] Do they at least have any practical insight to share?
In terms of practical insights, if the target p is in the variational family Q, then the number of samples need only equal the number of basis functions. If p is very different from Q, we need more samples, and we use a multiple of the number of basis functions (say of order 10). As discussed before, once we have evaluated a set of scores, these can be reused to fit a larger number of basis functions.
> 4. Similarly, the authors emphasize that the normalizing flow competitor is "often sensitive to the hyperparameters of the flow architecture, optimization algorithm, and parameters of the base distribution"
In the rebuttal pdf Fig. 1, we focus on a particular model considered in the paper, and we show the sensitivity of posterior estimates to the learning rate and the batch size for the normalizing flow and standard ADVI.
> 5. What is the proposed advantage of EigenVI over Zhang et al.'s energy-based approach?
The energy-based model is not normalized, and thus it cannot be evaluated or sampled from (without using MCMC). Indeed, their paper details an approach that uses HMC with the energy-based VI approach.
> 6. If the normalized Hermite polynomials contain Gaussian distributions as a "base" (i.e. when the number of basis functions is K=1), then why does EigenVI still beat ADVI when K=1?
In this case, the Gaussian fit by BaM is used as the standardizing distribution. Thus, K=1 can beat ADVI if the standardizing distribution converges faster / to a better distribution.
> 7. The authors state that they "standardize the target using either GSM or BaM before applying EigenVI". Any reason that they pick those methods relative to others? Would they expect any arbitrary BBVI method (e.g. ADVI of Kucukelbir et al. 2017) to work just as well for this purpose?
We discuss the motivation behind the standardization in Section 2.3; here one can pick any preferred Gaussian approximation, e.g., a Laplace approximation, ADVI, or directly estimating the target mean and covariance (we chose the score-based VI methods in the experiments because they require less tuning and often converge faster). In general, we expect a better estimate of the target mean and covariance to help, since the base distribution of the Hermite family is a standard Gaussian (and thus lower-order expansions will be more accurate).
> 8. Should "this family is limited to target functions that are close to Gaussian" read "... that are close to mixtures of Gaussians"?
We agree this sentence could be clearer, and we’ll modify it. What we meant was that to scale to higher-dimensional spaces, we are limited to target functions that are close to Gaussian.
For lower dimensional models (e.g., the 2D targets), we can afford to take higher-order expansions, and are able to model highly non-Gaussian distributions; we generally found this to be true for up to 3 or 4 dimensions.
---
Rebuttal Comment 1.1:
Comment: Thank you authors for your detailed response. After reading the rebuttal and other reviews, I continue to find the paper both technically solid and innovative, and will maintain my positive score. | Summary: This paper proposes EigenVI, which uses orthonormal distribution functions as the basis of the variational distribution family $q$. Then, minimizing the difference between $q$ and the target distribution $p$ via minimizing the Fisher divergence (2-norm score distance) is turned into an eigen-decomposition problem. Authors show that EigenVI is very competitive in distribution approximation compared with alternatives.
Strengths: * The presentation is clear. The math is solid.
* The proposed EigenVI is very intuitive and reasonable.
* The low-order basis and higher-order bases have their own interpretation of the distribution's feature, which is desirable.
* Experiments are extensive, including synthetic and real-world data.
Weaknesses: See Questions.
Technical Quality: 3
Clarity: 3
Questions for Authors: * Eq. 9, should be $\mathrm{d} z$? Line 130, the score of the target is not equal to the gradient of the log joint, but up to a scaling?
* Could the authors explain a bit more why centering the distribution is very important for better approximation?
* Is there any guidance about how to choose a good proposal distribution? How do authors choose their proposal distributions used in the experiments? Is the choice of the proposal distribution very significant or quite robust?
* Eq. 16 seems a bit weird, especially the term $L_z(x_{1:N})$. Besides, it is better to clearly state whether $x_{1:N}$ are independent samples or a sequence, and how it relates to a single $z$.
* Why is forward KL used in the whole paper rather than the commonly used backward KL as in ELBO?
* How about the actual running time comparison on synthetic and real-world experiments? I understand that a theoretical comparison is hard because EigenVI's complexity is w.r.t. the number of basis functions (order) while the stochastic algorithm's complexity is mainly w.r.t. the # of epochs. However, solving large eigen-decomposition problems in practice might be very time-consuming. So it would be very helpful to have an empirical understanding of the running time of the algorithm.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: /
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your feedback on the paper. In the revised manuscript, we will work on clarifying the following points.
> 1. Should Equation 9 be dz?
We could alternatively write $q(z) dz$ here (with the appropriate adjustments of $q(z)$ and $p(z)$).
> 2. Line 130: the score of the target is not equal to the gradient of the log joint but up to a scaling?
The log target is equal to the log joint + constant, and so the constant disappears when we take the gradient. Thus, $\nabla \log p(z) = \nabla \log \rho(z)$, where $\rho$ represents the unnormalized model (i.e., the joint in the context of Bayesian inference).
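This invariance is easy to verify numerically. The sketch below (our illustrative example; the Gaussian log joint and constant are arbitrary choices of ours) compares finite-difference scores of the normalized and unnormalized log densities.

```python
import numpy as np

# Numerical check (our example): subtracting a constant log Z from the log
# density does not change the score, so the unnormalized log joint suffices.
mu, sigma2, logZ = 0.7, 2.0, 12.34                  # arbitrary illustrative values
log_rho = lambda z: -0.5 * (z - mu) ** 2 / sigma2   # unnormalized log joint
log_p = lambda z: log_rho(z) - logZ                 # normalized log density

grad = lambda f, z, h=1e-5: (f(z + h) - f(z - h)) / (2 * h)  # central difference

z = 1.3
assert np.isclose(grad(log_p, z), grad(log_rho, z))  # identical scores
```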
> 3. On Equation 16 and the likelihood:
This equation describes a general set of Bayesian problems from a set of benchmark models, and so we don’t a priori make assumptions on the conditional independence structure of the likelihood. We are happy to write $p(x_{1:N} | z)$ instead.
> 4. Could the authors explain a bit more why centering the distribution is very important for better approximation?
The standardization is used with the Hermite polynomial basis, which has a centered Gaussian as the base distribution. As we show in Figure 2, higher order expansions are needed to model distributions whose largest mode is shifted away from the origin. While this is fine in low dimensions, in order to handle higher dimensional targets, we apply the standardization technique as a way to reduce the order needed to model the target.
> 5. Why is forward KL used rather than the commonly used backward KL as in ELBO?
The ELBO is often used as an evaluation metric when all of the VI algorithms being compared use the ELBO / reverse KL as their objective. Here we are comparing VI algorithms with _different_ objectives, and so we chose an evaluation metric that is agnostic to the algorithms’ objectives.
> 6. How to choose a good proposal distribution? How were they chosen in experiments?
Please see our general response above for intuition behind choosing the proposal distribution. See Appendix E for the exact proposal distribution used for each experiment.
> 7. How about the actual running time comparison on synthetic and real-world experiments? [...] Solving large eigen-decomposition problems in practice might be very time-consuming.
We do not need to solve the full eigendecomposition problem, but only need the smallest eigenvalue pair of a K x K symmetric matrix, which can be a much easier problem to solve. As we mention in the general rebuttal comment, for 3000 basis functions, which is around the largest number of basis functions considered in our experiments, this takes under a second. For qualitative context, in the rebuttal pdf we report the time it takes to run iterative BBVI methods alongside the time needed for the eigenvalue solve. For instance, our implementation of ADVI takes over 12 seconds to converge in this example.
Thus overall, we expect the overhead in terms of computational time due to solving the eigenvalue problem to be minimal.
There are two other steps in (Hermite) EigenVI: standardization and evaluating scores at the importance samples. While the actual computational time for these depends on the efficiency of the implementation, we expect this to be competitive with iterative BBVI approaches for two reasons: 1) Standardization is done using BBVI approaches themselves (like ADVI, BaM or GSM), and hence the cost of standardization is the same as the cost of doing VI inference using these. 2) The scores at the importance samples can be evaluated in a fully parallelized manner and hence, depending on the implementation, add minimal overhead in terms of computational time.
Finally, to present a quantitative comparison, we previously included a running time comparison for the synthetic 2D experiments, see Appendix E (Figure 6 shows both # of gradient evaluations and wallclock time without parallelization). Here we did not need to use standardization, and so we indicate when EigenVI and the iterative method for the Gaussian fit use the same number of gradient evaluations.
---
Rebuttal Comment 1.1:
Comment: Thanks for the responses. I will keep my positive score. | Summary: The paper proposes Eigen-VI, a black-box variational inference method that uses orthogonal function expansions to parameterize the variational distribution and uses Fisher divergence as an objective. The method does not require gradient-based optimization method and behaves well in a set of synthetic and real application experiments.
Strengths: Strengths
* The paper is well-written and easy to follow.
* The proposed Eigen-VI is simple and easy to implement since the optimization problem can be solved by computing the minimum eigenvalue of a given matrix, and it also achieves comparable or better performances on hierarchical modeling benchmarks compared to other black-box variational inference methods.
Weaknesses: Weaknesses
* Compared to other BBVI methods, EigenVI restricts the variational family to the $K^{\text{th}}$-order variational family (defined in eq. 2). When generalized to other variational families, the gradient-free optimization in this paper may fail. Similarly, if the variational distribution is set to be a linear combination of basis functions, the optimization problem in other BBVI methods would also be much simpler. Therefore, the contribution of this paper may not be very significant.
* Different proposal distributions in importance sampling may affect the property of the matrix $M$, and further affect the numerical efficiency or stability of computing $\lambda_{\text{min}}(M)$. Adding some analysis regarding this matter would enhance the paper.
Technical Quality: 3
Clarity: 4
Questions for Authors: * In the experiment part of this paper, why does the author only use the Hermite polynomials as the variational family? Is this because other variational family performs badly or have intractable issues?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: See Weaknesses above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your feedback on the paper. We discuss several of the comments and questions below.
> 1. Compared to other BBVI methods, EigenVI restricts the variational family to the $K$th-order variational family (defined in eq. 2). When generalized to other variational families, the gradient-free optimization in this paper may fail.
As the reviewer points out, the EigenVI approach works for any orthogonal basis set. Importantly, this approach does not work for just a single variational family but for a very large class of variational families. In particular, there are many orthogonal basis sets beyond the ones listed in Table 1 of the paper. For instance, one may apply a procedure such as Gram-Schmidt to obtain such a set. Finally, new variational families can be formed even by mixing different basis sets. Thus, this approach opens the door to applying VI to many new types of families.
> 2. Similarly, if variational distribution is set to be a linear combination of function basis, the optimization problem in other BBVI methods would also be much simpler.
Classical BBVI methods also make assumptions that restrict the variational families. For instance, it is commonly required that the family supports reparameterization (which is not true here).
> 3. Different proposal distributions in importance sampling may affect the property of the matrix M, and further affect the numerical efficiency or stability of computing λmin(M). Adding some analysis regarding this matter would enhance the paper.
In principle we agree that a poor proposal distribution might cause such numerical issues; however, in the current experiments, we do not observe this. Standardizing the distribution as we currently do it also helps here, as it provides a good proposal distribution which can easily be adjusted to have heavier tails if necessary. Furthermore, with enough samples, and since solving for minimum eigenvalues is typically so fast even for large matrices, we expect these issues to be less important. Experimenting with different proposal distributions is beyond the scope of this rebuttal, but we acknowledge the good point raised and will add a remark to this end in the revised manuscript on how, e.g., this could affect the spectral gap.
> 4. In the experiment part of this paper, why does the author only use the Hermite polynomials as the variational family? Is this because other variational family performs badly or have intractable issues?
The parameters for most experiments considered lie on an unconstrained scale, where Hermite polynomials form a natural basis; the Hermite family can also be seen as a natural extension of the Gaussian family to non-Gaussian targets.
The EigenVI method itself applies to _any orthogonal basis set_; in Figure 1, we additionally show 1D target distributions and fitted distributions arising from Legendre polynomial and Fourier series expansions.
---
Rebuttal Comment 1.1:
Comment: I thank the author for the detailed response. Despite this, I think the weaknesses of this paper in my review still remain. Therefore, I would keep this score. | Rebuttal 1:
Rebuttal: We thank the reviewers for their time and engagement – we are pleased that they found the work to be highly original, well-written, and of interest to the variational inference community.
In this work, we propose a novel variational family and an efficient algorithm to fit it via score matching. Here variational distributions are obtained by _squaring_ a linear combination of orthogonal basis functions – we demonstrate the flexibility of this family using several basis sets (e.g., Hermite, Fourier series, and Legendre polynomials) in Figure 1, but note that our approach applies to _any orthogonal basis set_, paving the way for many other possible families. We show that minimizing an estimate of the Fisher divergence (which matches the scores of q and p) is equivalent to solving for the minimum eigenvalue of some matrix. EigenVI avoids gradient-based optimization, and the scores and the matrix of interest can be evaluated and formed in parallel.
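To make this pipeline concrete, here is a schematic one-dimensional sketch (our reconstruction for illustration, not the authors' implementation). With an orthonormal basis and $q = f^2$ for $f = \sum_k \alpha_k \phi_k$, substituting into the Fisher divergence gives $\int q\,|\nabla \log q - \nabla \log p|^2 dz = \int |2f' - f s_p|^2 dz$, whose importance-sampled estimate is a quadratic form $\alpha^\top M \alpha$ minimized over $\|\alpha\| = 1$ by the minimum eigenvector. The Fourier basis, von Mises target, and uniform proposal below are our choices for the example:

```python
import numpy as np
from scipy.linalg import eigh

# Schematic 1D EigenVI-style fit (our reconstruction, for illustration only).
# Variational family: q(z) = f(z)^2 with f = sum_k alpha_k phi_k, where the
# phi_k are orthonormal on [-pi, pi]; then int q dz = ||alpha||^2 = 1.
def fourier_basis(z, n):
    """Values and derivatives of the orthonormal basis {1, cos kz, sin kz}."""
    phi = [np.ones_like(z) / np.sqrt(2 * np.pi)]
    dphi = [np.zeros_like(z)]
    for k in range(1, n + 1):
        phi += [np.cos(k * z) / np.sqrt(np.pi), np.sin(k * z) / np.sqrt(np.pi)]
        dphi += [-k * np.sin(k * z) / np.sqrt(np.pi), k * np.cos(k * z) / np.sqrt(np.pi)]
    return np.array(phi), np.array(dphi)

kappa = 2.0                             # target: von Mises, p(z) ~ exp(kappa cos z)
score_p = lambda z: -kappa * np.sin(z)  # only the unnormalized score is needed

rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, 2000)    # importance samples from a uniform proposal
w = 2 * np.pi * np.ones_like(x)         # importance weights = 1 / proposal density

n = 8
phi, dphi = fourier_basis(x, n)
# Fisher divergence of q = f^2 is int |2 f' - f s_p|^2 dz = alpha^T M alpha.
B = 2 * dphi - phi * score_p(x)         # rows: basis functions, columns: samples
M = (B * w) @ B.T / len(x)
lam, vecs = eigh(M)                     # dense solve; a min-eigenpair solver suffices
alpha = vecs[:, 0]                      # minimizer, unit norm by construction

q = lambda z: (alpha @ fourier_basis(z, n)[0]) ** 2   # fitted, normalized density
```

Because the basis is orthonormal and the eigenvector has unit norm, the fitted density integrates to one by construction; the same samples and scores could also be reused with a different basis size.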
_**After thorough consideration of reviewers’ comments, we will expand our discussion of several points in the manuscript. Our response includes additional plots of hyperparameters and times for baseline methods.**_
## Score-based VI vs sampling & MCMC [Reviewers 35nb, JZVK]
First, a score-based approach has potential advantages over a purely sampling-based approach that does not exploit the availability of scores. These advantages are perhaps most easily understood by considering the simple problem of estimating the mean and variance of a one-dimensional Gaussian distribution, $p(z)$. By drawing $n$ samples from $p$, one can estimate the mean and variance to accuracy $O(1/\sqrt{n})$. However, one can determine the mean and variance _exactly_ from the scores $d\log(p)/dz$ at just two samples $z_1$ and $z_2$, provided that $z_1\neq z_2$. Most settings are not this contrived, but the broader point remains: the scores of a target distribution $p$ provide a great deal of information to constrain the search for the best variational approximation $q$ in some parameterized family of tractable distributions. Methods that do not exploit this information will generally be at a considerable disadvantage to those that do.
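The two-point claim can be checked directly: for a Gaussian, the score $s(z) = -(z-\mu)/\sigma^2$ is linear in $z$, so two evaluations at distinct points determine it exactly. A minimal sketch (the helper name is ours):

```python
import numpy as np

# Illustration (helper name ours): the score of a 1D Gaussian is linear in z,
# so two evaluations at distinct points recover mu and sigma^2 exactly.
def gaussian_from_scores(z1, s1, z2, s2):
    slope = (s2 - s1) / (z2 - z1)   # slope of the score is -1 / sigma^2
    var = -1.0 / slope
    mu = z1 + var * s1              # solve s1 = -(z1 - mu) / var for mu
    return mu, var

mu, var = 1.5, 4.0
score = lambda z: -(z - mu) / var
print(gaussian_from_scores(0.0, score(0.0), 1.0, score(1.0)))  # → (1.5, 4.0)
```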
In addition, reviewers have asked specifically about MCMC vs. EigenVI. We agree MCMC may be applied to several of the examples in the paper, and a comprehensive comparison with MCMC strikes us as an important direction for future work. However, most off-the-shelf algorithms, such as Stan’s adaptive Hamiltonian Monte Carlo, do not work well on targets with varying curvature (e.g., the funnel) or well-separated modes (e.g., the cross), even with preconditioners, in low or high dimensions. In all cases, tuning MCMC, in particular (i) determining the length of the warmup and sampling phases of the Markov chain and (ii) deciding on the number of chains, involves non-trivial choices that severely impact the reported performance. Therefore, benchmarking against MCMC requires careful study.
## Cost of EigenVI & solving the eigenvalue problem [Reviewers 35nb, MKWn, JZVK]
EigenVI needs to compute only the smallest eigenpair of a K x K symmetric matrix, where K is the # of basis functions, and not the full eigendecomposition. We use up to several thousand basis functions. In this regime, standard solvers for the minimum eigenvalue take under a second; e.g., ARPACK, a popular backend wrapped in Python, Julia, and Matlab that implements the restarted Lanczos method, takes 510 ms to find the smallest eigenvalue of a 3000 x 3000 randomly generated positive definite matrix. When the dimension K is very large, too large to store the matrix M, we can use iterative solvers for computing the smallest eigenvalue, e.g., gradient descent applied to the Rayleigh quotient $\alpha^\top M \alpha/\alpha^\top\alpha$. Each iteration of gradient descent requires a matrix-vector product, which costs O(K^2). Furthermore, we do not need to explicitly form the matrix M, but need only implement the operation $\alpha \mapsto M \alpha$, and thus never need to store the K x K matrix.
Finally, we note that in many problems with complicated targets, the main cost comes from gradient evaluations and not from the eigenvalue solver.
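For illustration, here is a sketch (ours, using SciPy's ARPACK wrapper rather than the rebuttal's exact setup, with a smaller K and a synthetic spectrum for brevity) of computing only the smallest eigenpair, together with the matrix-free $\alpha \mapsto M\alpha$ variant:

```python
import numpy as np
from scipy.linalg import qr
from scipy.sparse.linalg import LinearOperator, eigsh

# Illustration (ours): compute only the smallest eigenpair, not a full
# eigendecomposition. We use a synthetic symmetric matrix with a known
# spectrum (smallest eigenvalue 1, the rest in [5, 10]).
K = 300
rng = np.random.default_rng(0)
Q, _ = qr(rng.standard_normal((K, K)))                  # random orthogonal matrix
evals = np.concatenate(([1.0], np.linspace(5.0, 10.0, K - 1)))
M = (Q * evals) @ Q.T                                   # M = Q diag(evals) Q^T

# Restarted Lanczos (ARPACK) for the smallest algebraic eigenpair.
lam, alpha = eigsh(M, k=1, which='SA')

# Matrix-free variant: implement only v -> M v; M itself need never be stored.
op = LinearOperator((K, K), dtype=np.float64,
                    matvec=lambda v: Q @ (evals * (Q.T @ v).ravel()))
lam_free, _ = eigsh(op, k=1, which='SA')
assert np.isclose(lam[0], lam_free[0])
```

When a factorization of M is affordable, shift-invert mode (`eigsh(M, k=1, sigma=0.0)`) typically converges in fewer iterations for the smallest eigenvalue of a positive definite matrix.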
## Why use importance sampling if we can sample directly from q? [Reviewers XdZT, JZVK]
Because we are optimizing the empirical Fisher divergence with respect to q, we need the samples to be independent of q (we cannot simultaneously sample from and optimize over q, and it’s unclear how to apply reparameterization). In addition, the importance-sampled estimate leads to the quadratic form and thus the eigenvalue problem.
## Proposal dist. for EigenVI [Reviewers XdZT, jsA6, 35nb]
In the particular case of the Hermite variational family, we can first standardize the target distribution using the mean and covariance of a Gaussian fit. Thus, intuitively, we want a proposal that has heavier tails than a standard Gaussian. We found reasonable defaults to be centered distributions such as a uniform (if most of the target’s mass is between certain bounds) or multivariate Gaussian with long enough tails.
## EigenVI does not require standardization, but it helps [Reviewers 35nb, JZVK]
If the variables are bounded, a uniform distribution is a reasonable base distribution, and no standardization is needed.
For the Hermite family, we do not need standardization in lower dimensions (no standardization is used for the 2D targets in the experiments). Standardization is used specifically with the Hermite family to help scale the method to larger dimensions, as the base distribution of the Hermite family is a standard Gaussian. (As we show in Fig. 2 for a 1D example, uncentered distributions may require more basis functions, so standardization helps to reduce the # of basis functions needed.) Given standardization with a Gaussian distribution, one can also view EigenVI as a post-processing step of a Gaussian BBVI (with any BBVI technique of choice, e.g., ADVI, BaM, or GSM).
Pdf: /pdf/818e37c9c0a07b1ce3b1cda828c43ebb25b46138.pdf | NeurIPS_2024_submissions_huggingface | 2024 | Summary: The authors introduce a novel family of variational distributions based on orthogonal function expansions which is optimized by minimizing a Fisher divergence. They show that an unbiased importance sampling estimate of the divergence results in a quadratic form, which can be minimized by finding its smallest eigenvector.
Strengths: The work is original and of high overall quality. The proposed variational family and training methodology is novel and of interest to the variational inference community. The empirical evaluation is fairly extensive and shows that the method works well for low dimensional problems. At the same time the authors are very forward with discussing the current limitations of their method which opens up interesting questions for future research.
Weaknesses: As discussed by the authors, the work seems to be mostly limited by the fact that higher-order function expansions, which are needed to model non-Gaussianity, become exceedingly expensive in high dimensions as a result of the blow up of the size of the eigenvalue problem.
Technical Quality: 4
Clarity: 3
Questions for Authors: - The authors describe a procedure to sample from the variational approximation, so I am not sure why importance sampling needed to estimate the Fisher divergence? Is it purely for computational reasons or am I missing something?
- How is the proposal distribution chosen that is used for importance sampling?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your interest in the paper. We will revise the final version of the paper to make the following points more clear.
> 1. The authors describe a procedure to sample from the variational approximation, so I am not sure why importance sampling is needed to estimate the Fisher divergence? Is it purely for computational reasons or am I missing something?
First, because we are optimizing the empirical Fisher divergence with respect to $q$, we need the samples to be independent of $q$ (we cannot simultaneously sample from and optimize over $q$). In addition, as we show in Appendix D, the importance weights are important and lead to the form of the eigenvalue problem.
The procedure to sample from $q$ is primarily used after the variational distribution has been fit to, for instance, approximate functionals of $p$.
We will revise the paragraph starting from Line 133 to explain these points more thoroughly.
> 2. How is the proposal distribution chosen that is used for importance sampling?
Please see our general response above for intuition behind choosing the proposal distribution. See Appendix E for the exact proposal distribution used for each experiment.
We will add a paragraph in the Section 2.3 that details this intuition behind choosing the proposal distribution.
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal by Authors
Comment: Thanks for addressing my questions!
After reading all reviews and corresponding rebuttals, I will maintain my positive score. I believe this is a solid paper! | null | null | null | null | null | null |
Prediction with Action: Visual Policy Learning via Joint Denoising Process | Accept (poster) | Summary: The submission aims to enhance policy learning through co-training with image prediction. The intuition is that image prediction and robot action are highly correlated since they share the same underlying physical dynamics. Thus, a diffusion model that generates both future images and actions may generalize better to the task distribution.
As a solution, the paper proposes a diffusion transformer-based solution that takes as input a noisy sequence of future actions and observations and predicts the corresponding noise, in order to return a clean action trajectory and observation sequence. This enables co-training with actionless videos, in which case the action denoising loss is deactivated.
The results show improved language-conditioned multi-task learning on MetaWorld, as well as generalization to different real-world scenarios. Notably, in the real world, the proposed policy is able to generalize to more difficult task variations than the ones seen during training.
Strengths: 1. (Presentation) The paper is very clearly written and is very easy to follow and understand. I want to additionally endorse the tone of this paper, which is humble and moderate, without overpresenting or overclaiming.
2. (Contribution) As a technical contribution, the paper shares interesting architectural details on how to repurpose DiT for joint action and future image prediction.
3. (Soundness) The ablations show the importance of co-training with image prediction and the additional benefit of training on video corpora.
I believe that overall the paper contributes to the idea that predicting the future and acting is a promising path for generalizable robot learning, as opposed to action-centric policy learning. Although the idea is not original per se, the paper provides more evidence for it.
Weaknesses: 1. (Contribution) The motivation of the paper is that first, the ability to predict future images is coupled with the ability to predict actions (idea); second, diffusion models have shown good results on both tasks, so it is reasonable to combine them for joint prediction (implementation). To the best of my knowledge, the implementation part is novel, but the idea part is not.
Specifically, recent works such as [GR-1] have also argued and supported that jointly predicting future frames and actions is helpful, however there is no discussion and comparison against such an approach.
2. (Soundness) A major concern about this submission is the inadequate evaluation protocol. Specifically, the benchmark used is very simple and important baselines are missing. In more detail:
* The baselines considered in this paper are not carefully selected and important strong baselines are missing. On the first part, it’s not surprising that 2D Diffusion Policy is not weak, but probably the point is to show that using DiT is an improvement over it. RT-1 is also a rather weak baseline without the large data support, as shown in [GR-1]. Moreover, comparing against RT-1 and RT-2 has little to offer as their training paradigm is simply behavior cloning with regression. It would be more useful to compare against other diffusion policies or video prediction models, since the paper's main argument does not stem from its architectural improvements on Transformer-based policies.
On the second part, the main argument of the paper is that predicting both actions and future images helps control. A similar argument is also presented in [GR-1], although their architecture is not diffusion-based but autoregressive. It could be that diffusion (or the proposed architecture in general) is indeed a better solution, but to conclude that, a direct comparison with [GR-1] should be conducted.
* While the end goal is to have robots operating in the real world, it is unsafe to compare and draw conclusions about models’ performance with real-world experiments only. Simulation benchmarks offer some proxy to evaluate performance and compare methods in identical setups. The paper does a good job of presenting both simulation and real-world results. While the community hasn’t concluded on a robotics benchmark that covers many cases and tasks, MetaWorld is definitely one of the simplest benchmarks, without distractors, visual variations or the necessity of modeling geometry. Multi-tasking results on MetaWorld are not very impressive since the task can be usually inferred by just looking at the scene. On that aspect, training on videos from the web or external data (such as BridgeData-v2) to test on MetaWorld feels rather disconnected. It would be more valuable to show results on benchmarks like CALVIN, LIBERO or RLBench, that test more generalization factors and offer more challenging setups with stronger baselines. Specifically, SuSIE, which is a useful baseline as it also predicts future images, as well as [GR-1], which should be the main baseline, both report results on CALVIN. Therefore, evaluating on CALVIN would give direct answers on these comparisons.
3. (Presentation) Although the clarity of presentation is remarkable, the paper lacks a clear take-home message. If this is that joint action-frame prediction is helpful, then it reduces novelty, since there is already evidence for that. Another suggestion is the following. There are some works (such as the cited Diffuser [48]) that jointly denoise states and actions. This is relevant background for this work, which denoises observations and actions. The proposed approach is more general since states cannot be accessed in the real world. It would be nice to see some discussion on the connection between prior work and this work, as it would provide additional motivation.
[GR-1] Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation, ICLR24
Technical Quality: 2
Clarity: 3
Questions for Authors: Overall, while this submission has merits, its poor evaluation setup, as well as the absence of comparisons and discussions against important prior work makes me reluctant towards acceptance. I would increase my score if:
* The writing changes to include discussion on previous works that had the same high-level idea and focus on the differences with those.
* The evaluation becomes stronger by adding baselines and testing on a more challenging benchmark. On the latter, one suggestion is to evaluate on CALVIN, so as to enable direct comparison to [GR-1] and SuSIE.
--------------
Post-rebuttal: score updated from 2 to 4.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your time and efforts in reviewing our paper! Based on your review, we added a detailed discussion and additional experiments. **Updated Figures can be found in the PDF attached to the global response.**
---
**Q1: About the contribution and novelty: the implementation part is novel, but the idea part is not. Specifically, recent works such as [GR-1] have also argued and supported that jointly predicting future frames and actions is helpful.**
ANS: We fully agree that integrating video prediction with action, and more broadly, learning enhanced representations from videos to improve policy learning, is a well-established area of research. In this paper, we focus on **exploring joint multi-modal predictions and actions within a unified diffusion architecture.** This approach, which we all recognize as novel, differentiates our work from existing methods.
GR1 utilizes an autoregressive model for predicting images and actions. However, recent advancements have shown that **diffusion models outperform autoregressive methods in generating images and videos** [1,2]. Despite these advancements, performing joint denoising processes using the typical U-Net architecture, as seen in prior studies like SUSIE and uniPi, poses significant challenges. Our approach overcomes these challenges by integrating predictions and actions into advanced Diffusion Transformers (DiT), which effectively encode physical knowledge from diverse datasets. **This solution has been recognized as both elegant and nice by all three other reviewers.** A more detailed experimental comparison with GR1 can be found in the subsequent Q&A sections.
---
**Q2: About the meta world benchmark and Calvin benchmarks**
ANS: We would like to say that numerous previous works [] perform experiments on Metaworld and these two benchmarks present their own unique challenges. Metaworld offers a much more diverse array of tasks requiring precise manipulation and the ability to handle multiple tasks simultaneously. In contrast, Calvin focuses primarily on instruction-following skills, involving fewer objects (just three colors of blocks, one door, one drawer, and one light).
We observed that previous methodologies have already achieved success rates nearing or exceeding 90% on Calvin benchmarks, with most failures occurring in corner cases. Given the limited time available for rebuttal, it would be exceedingly difficult for us to establish a new environment and adjust hyperparameters to surpass this 90% success rate. Furthermore, Calvin shares many similarities with **our real-world experiments which involve even more complex generalization tasks** including brand-new objects and backgrounds. Considering these factors and the tight timeline, we opted to compare the official GR1 implementation with PAD on MetaWorld and in the real world. (The real-world experiment is still running, since it takes time to deploy and adjust real systems.)
**Q3: Detailed experimental comparison with GR-1 baseline**
ANS: Following the GR-1 open-sourced code, we initialize the GR-1 model with their pre-trained checkpoint and perform data-efficient imitation learning on the multitask Metaworld. The comparisons are shown below. Firstly, we observed that the **image prediction quality of PAD significantly surpasses that of GR-1**, likely due to the superior capabilities of diffusion models in image generation tasks. As depicted in the global PDF, the visual prediction results from GR-1 appear blurry, while PAD produces detailed and precise imagery. (Notably, the images presented in the original GR-1 paper were also blurry.) We hypothesize that the **high-quality images generated by PAD can more effectively facilitate policy learning**, leading to higher success rates, particularly in tasks requiring precise and accurate operation such as picking small blocks, insertion, basketball, etc., in Metaworld. We hope this detailed analysis provides insight into why our exploration of joint prediction and action under diffusion architecture is both meaningful and valuable.
| MetaWorld-50tasks | GR-1 | PAD |
| ----------------- | ---- | ---- |
| Success Rate | 54.6 | 72.5 |
---
**Q4: The baselines considered in this paper are not carefully selected. It’s not surprising Diffusion Policy is weak. RT-1/RT-2 only utilize a simple BC loss**.
ANS: We respectfully present an alternative perspective. We believe these baselines are both recent and robust, as suggested by Reviewer Hcpx. Diffusion policy is a sample-efficient imitation learning algorithm recognized as a significant breakthrough in robotic policy learning. Although RT1 and RT2 are trained only with BC loss, they are built upon large vision-language models that have already been trained on extensive datasets, demonstrating impressive multi-task capabilities. We compare these baselines to verify the sample efficiency and multi-task learning ability of PAD.
---
**Q5: Training on videos from the web or external data (such as BridgeData-v2) to test on simulated MetaWorld feels disconnected.**
ANS: Training with videos and testing in simulated environments such as Metaworld is widely adopted in prior research. We recognize a substantial gap between real-world videos and simulations, which has led us to validate our methods through real-world tasks, confirming that co-training significantly improves our models' generalization capabilities.
Despite the noted gap between simulations and reality, as detailed in Section 4.4, **internet videos provide valuable priors and features that enhance image prediction quality in simulations**. The high-quality predicted images may significantly improve policy learning, as it becomes easier for the model to identify robot movements from these precise predicted images.
---
We hope our response above can solve your concerns and show the improved quality of our paper. We are always ready to answer your further questions!
---
Rebuttal Comment 1.1:
Title: To reviewer GDHG : Please respond to rebuttal
Comment: Hi reviewer GDHG,
Thank you for your initial review. Please kindly respond to the rebuttal posted by the authors.
Does the rebuttal answer your questions/concerns? If not, why?
Best,
AC
---
Rebuttal 2:
Title: Welcome to see our new experiments on CALVIN benchmark!
Comment: Dear Reviewer GDHG:
After we submit the initial rebuttal message, we continue to conduct experiments in the Calvin benchmark for better comparisons as you suggested. We are pleased to share some promising results. Following the settings in the GR-1 paper, we conducted imitation learning on the whole training datasets and 10% training datasets respectively. When using the full dataset, our results were comparable to those reported for GR-1, given that GR-1's original success rate was already high (approaching 100%) and we had limited time for parameter tuning. However, **in the reduced 10% data regimen, PAD clearly outperforms GR-1**, indicating that PAD is a more sample-efficient learning algorithm with joint prediction and action under a unified diffusion architecture. The detailed comparison of task completion numbers in a row is shown below: (some baseline data is sourced from GR-1)
| **Whole datasets** | 1 | 2 | 3 | 4 | 5 | Avg. Len. |
| ------------------ | ----- | ----- | ----- | ----- | ----- | ------------- |
| RT-1 | 0.844 | 0.617 | 0.438 | 0.323 | 0.227 | 2.45 |
| HULC | 0.889 | 0.733 | 0.587 | 0.475 | 0.383 | 3.06 |
| MT-R3M | 0.752 | 0.527 | 0.375 | 0.258 | 0.163 | 2.08 |
| GR-1 | 0.949 | 0.896 | 0.844 | 0.789 | 0.731 | **4.21** |
| PAD(ours) | 0.965 | 0.904 | 0.842 | 0.768 | 0.707 | **4.19** |
| **10% datasets** | **1** | **2** | **3** | **4** | **5** | **Avg. Len.** |
| ---------------- | ----- | ----- | ----- | ----- | ----- | ------------- |
| RT-1 | 0.249 | 0.069 | 0.015 | 0.006 | 0.000 | 0.34 |
| HULC | 0.668 | 0.295 | 0.103 | 0.032 | 0.013 | 1.11 |
| MT-R3M | 0.408 | 0.146 | 0.043 | 0.014 | 0.002 | 0.61 |
| GR-1 | 0.778 | 0.533 | 0.332 | 0.218 | 0.139 | 2.00 |
| PAD(ours) | 0.813 | 0.606 | 0.412 | 0.294 | 0.189 | **2.31** |
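For readers unfamiliar with the metric: "Avg. Len." is the expected number of consecutively completed tasks in the 5-task chain, which (up to rounding of the published rates) equals the sum of the per-horizon success rates. A quick check against the 10% rows above:

```python
def avg_len(success_rates):
    """Expected number of tasks completed in a row: sum over horizons
    of P(at least i tasks completed)."""
    return sum(success_rates)

pad_10 = [0.813, 0.606, 0.412, 0.294, 0.189]
gr1_10 = [0.778, 0.533, 0.332, 0.218, 0.139]
# round(avg_len(pad_10), 2) -> 2.31; round(avg_len(gr1_10), 2) -> 2.0
```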
Thank you once again for your time and effort in reviewing our paper. We hope that these additional experiments address your remaining concerns and demonstrate the improved quality of our work. We are always ready to answer your further questions!
---
Rebuttal Comment 2.1:
Title: Thank you for your answers
Comment: Thank you for your effort on the rebuttal.
Regarding my concerns on contribution: I also recognized the technical contribution of this paper. My major concern was the absence of related alternatives, specifically GR1, from the discussion. Of course working on an existing problem and advancing it is a valid contribution, however it should be placed in the context of related work. The rebuttal discussed GR1 and I believe it placed itself better in the space of relevant literature.
Regarding MetaWorld: while arguing on which benchmark is more challenging is of little value, my take on MetaWorld in my original review was that the task can be usually inferred by just looking at the scene, so that makes multi-tasking results less impressive. This is because of the lack of distractions in the scene, mostly the relevant objects per task are there. CALVIN scenes do not offer a wide variety of tasks to be deployed, but the scene is the same and language understanding is necessary for task execution. At the same time, I thought that evaluation on CALVIN would further allow for easier comparisons against stronger related baselines: SuSIE, which is very related, GR1 and RT1's results are already there for free. Since SuSIE was adapted to MetaWorld, I thought it could be as easy to take its data loader and adapt it to support PAD, thus enabling all those comparisons for free. Lastly, CALVIN offers an established train/test setup, where baselines have shown the benefits of using video data (GR1), future prediction (SuSIE), foundation models (RoboFlamingo). In contrary, MetaWorld papers often define their own setup in terms of tasks and demos used and I'm not aware of a standardized and widely adopted protocol there.
I'm eventually glad to see that experiments on CALVIN were ran. I agree that the performance on ABCD->D is already quite high. The SOTA on this split is actually much better than GR1, but showing SOTA results on CALVIN was never my expectation for this submission, the comparison to previous works was. It's good to see that PAD is more sample-efficient. It would be more meaningful to also show results on ABC->D, which is more challenging and could better showcase the advantages of using joint video training.
Regarding the baselines: I still believe that GR1 is the most related baseline, then also SuSIE. It's ok to include RT-1 and 2 as well as training paradigms or transformer-based architectures. For diffusion policies, if CALVIN was adopted from the beginning it would allow comparison with 3D Diffuser Actor, which is a much stronger baseline. But it's ok that this is not included as long as other video prediction approaches are included in a fair comparison, which is better satisfied after seeing the CALVIN results.
Overall, I appreciate the effort for the rebuttal, especially after seeing the CALVIN results. As more and more video prediction methods for robotics show up, the questions that make the difference are what is the right representation and which implementation details are important. This paper bets on pixels and offers a valid technical contribution. I respect that, even if I'm not fully convinced about the strength of the specific implementation. It's better to let multiple voices be heard, so I'm removing my strong objections against this paper, even if I'm not giving it a full acceptance score for now.
Score updated from 2->4.
---
Reply to Comment 2.1.1:
Title: Thank you for the comprehensive feedback!
Comment: Dear Reviewer GDHG:
Thank you for your comprehensive feedback! We are delighted to hear from you.
First of all, we are very glad that we have reached a consensus on the technical contribution of the paper. As you mentioned, there is a growing integration of video prediction methods in robotic learning. We have bet on and explored a unified diffusion architecture which, as we all recognized, is the first of its kind in this context. Thank you for your suggestions, which have helped refine and clarify our contributions!
As for the CALVIN benchmarks, we totally agree with you that CALVIN can better **evaluate the instruction-following ability** and is easier to compare against previous works on, which encouraged us to conduct additional experiments on it. Thank you for your constructive suggestions!
Concerning the MetaWorld benchmarks, we would like to offer the perspective that this benchmark is particularly effective at **evaluating the dexterity of the learned policy** due to its requirement for precise actions with numerous small objects. Sometimes a robot learning policy can still fail even when given a single task to accomplish (as you said, without distractions) due to a lack of precision in execution. We believe that **PAD offers unique advantages in enhancing the dexterity** (or precision) of the policy, as it predicts precise future images and actions through joint denoising. The protocol for this benchmark includes an official scripted policy provided by the authors of MetaWorld to collect demonstrations for each task, which is a method commonly used in previous research.
In summary, while the Calvin benchmarks prioritize language comprehension and do not focus heavily on dexterity and precision, as the tasks involve only large blocks and objects such as drawers or doors, MetaWorld requires precise execution of dexterous tasks without a strong emphasis on following instructions. We believe that **both language comprehension and dexterity** are crucial for developing intelligent robots. We hope this detailed analysis explains why our experiments on MetaWorld are also meaningful, as they assess a different aspect of the learned policy.
Thank you once again for your detailed feedback and the improved score. We will add all the new experiments and discussions to the paper to meet the highest standards. As the rebuttal phase draws to a close, we still remain open to addressing any further queries you may have!
Best regards,
Authors | Summary: - This work proposes a new method for language conditional imitation learning for robotics. The core idea is to combine image diffusion and policy diffusion to simultaneously predict future image observations and actions using a latent diffusion transformer and DDPM.
- The diffusion process is conditioned on the recent image observations, the current robot pose, the language instruction, and for real-word experiments also on a depth map. The output of the diffusion process is the prediction of future image observations, depth observations, if available, and actions for K time steps. The diffusion process is repeated after executing a single action in the environment by conditioning on the updated history.
- The input and output modalities have different dimensionalities which is handled by projecting all modalities into a latent space where the latent diffusion process is applied. A VAE is used for projecting the images, an MLP for the actions, a CLIP encoder for the language instructions, and the depth maps are processed in a similar fashion to the image observations. This setup also allows training the model on video data where no actions are available.
- Experiments are conducted on the simulated Meta-World benchmark and in the real-world with a Panda arm. The proposed method is compared against a multi-task variant of diffusion policy, a modified version of SuSIE, RT-1, and a re-implementation of RT-2 built on top of InstructBlip7B.
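The loss masking described in the summary (deactivating the action denoising loss when training on actionless video data) can be sketched as follows; this is an illustrative reconstruction under stated assumptions, not the authors' implementation:

```python
import numpy as np

def joint_denoising_loss(img_pred, img_target, act_pred, act_target, has_action):
    """Joint denoising objective: the image term always applies; the
    action term is masked out for actionless video samples (has_action=0)."""
    img_loss = np.mean((img_pred - img_target) ** 2)
    act_loss = np.mean((act_pred - act_target) ** 2)
    return img_loss + has_action * act_loss
```

For a batch drawn from an actionless video corpus, `has_action = 0` and only the image prediction term contributes gradients.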
Strengths: Originality:
- The paper proposes an interesting combination of image diffusion and policy diffusion to leverage transfer between the two tasks. Using latent diffusion with a transformer so that all modalities can be processed in a shared latent space while also being able to train on video data without actions is a nice and elegant idea.
Quality:
- Experiments on Meta-World and with a real-world Panda arm on both seen and unseen tasks are reasonable for validating the proposed method.
- The selected baselines (a multi-task variant of diffusion policy, a modified version of SuSIE, RT-1, and a re-implementation of RT-2 built on top of InstructBlip7B) are recent and strong.
- For the diffusion policy baseline, a pretrained CLIP encoder was used as in related work which also matches language encoder used by the proposed method. Similarly, a diffusion transformer is used for SuSIE for a fairer comparison.
- Table 1 includes two important ablations: PAD without image diffusion and PAD without on videos without actions that validate two of the main hypotheses of this work. There is also an ablation that shows that using depth maps as additional observations improves performance in real-world experiments.
Clarity:
- The paper is generally well-written and clear.
- Various hyperparameters are provided in the appendix to facilitate reproducibility.
- The videos on the paper website provide a good sense of qualitative performance.
Significance:
- Considering the good empirical performance and the elegance of this approach, it seems reasonable that other researchers and practitioners might use or build on this work.
Weaknesses: Originality:
- The paper lacks a reference to and discussion of DBC introduced in “Diffusion Model-Augmented Behavioral Cloning” by Chen et al. 2024 which first appeared on arXiv in February 2023. Similar to this work, Chen et al. 2024 learn a diffusion model over state-action pairs. Instead of using this directly for parameterising the policy, however, the diffusion model is used for computing an auxiliary loss to guide the policy. This therefore avoids the need to run the computationally expensive diffusion model at test time.
- Several statements are not supported by references:
- Line 163: “Previous studies that utilized pixel input generally developed separate task-specific policies for each of the 50 tasks“.
- Lines 285-286: “Many studies utilize the reasoning capabilities of pre-trained LLMs and VLMs to create high-level plans followed by motion primitives.
- Lines 286-288: “Additionally, some approaches adapt pre-trained models to emit low-level motor control signals by adding an action head.”
Quality:
- Considering the similarities of the proposed method to DBC, it would be good to have DBC as an additional baseline to differentiate this work from DBC more clearly.
- The paper lacks a discussion of previous works that benchmark imitation learning methods on Meta-World and why this particular evaluation methodology was chosen. For example, “PRISE: LLM-Style Sequence Compression for Learning Temporal Action Abstractions in Control” also benchmarks imitation learning approaches on Meta-World, reporting 5-shot performance on five previously unseen tasks. Similarly, the method could also be evaluated on “LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning” which provides tele-operated human demonstrations and several baselines.
- The appendix provides a breakdown of performance across individual tasks and ablations for different model sizes, but all experiments are run with only a single random seed and no error bars or similar are reported.
- It is unclear how the hyperparameters were tuned for the proposed method, for the ablations, and for the baselines. Similarly, it would be useful to have a discussion how the architectures of the proposed method and the baselines compare to better understand the fairness of the comparisons. Table 1 indicates that the proposed method without image diffusion already outperforms three of the four baselines, including diffusion policy. It seems like there are other differences between the proposed method and diffusion policy that account for some of the difference in performance.
- The paper lacks a discussion of how this method compares to the baselines in terms of latency. It is also not clear whether the videos at the provided URL run in real-time. Running image diffusion at every time step seems computationally expensive which might have practical implications.
Clarity:
- None beyond what is mentioned above.
Significance:
- The latency of running image diffusion at every time step is a concern. This might be quite slow and limit the adoption of this work.
- It is not quite clear from the NeurIPS paper checklist whether the code and the datasets are going to be publicly released which would increase the impact of this work.
Technical Quality: 2
Clarity: 3
Questions for Authors: - Line 166: How were the Meta-World training trajectories collected? How does this compare to existing literature? Are there existing public datasets of demonstration trajectories for Meta-World?
- How were the hyperparameters tuned for the proposed method, for the ablations, and for the baselines?
- How do the architectures of the proposed method and the baselines compare? Why does the proposed method already significantly outperform diffusion policy even without image diffusion?
- Line 219: How were the tasks that are shown in Table 1 selected?
- Line 21: Do the videos on the paper website show the robot at real-time speed or are the videos accelerated?
- How do the proposed methods and the baselines compare in terms of latency?
- Are code and training data going to be released publicly?
UPDATES AFTER REBUTTAL:
- Presentation: 2 -> 3
- Contribution: 2 -> 3
- Score: 4 -> 6
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: As mentioned above, the paper lacks a discussion of latency considerations which might limit the practical usefulness of this approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your time and efforts in reviewing our paper! Based on your review, we added a detailed discussion and additional experiments.
---
**Q1: Discussion and comparisons with DBC paper which learn a diffusion model over state-action pairs as auxiliary loss**
ANS: Thank you for your constructive suggestion! DBC adds a state-action distribution diffusion loss to help policy learning, while we use a future prediction loss to enhance the policy. Despite the lack of open-sourced code from the DBC authors, and their focus on low-dimensional single-task settings using smaller neural networks, we implemented their state-action modeling auxiliary loss within our DiT framework, ensuring comparable parameter sizes. (A detailed implementation framework can be found in the global PDF.) In general, we have much larger experiment scales in terms of task numbers, task difficulty, input complexity, and model size. Our results demonstrate that PAD outperforms DBC by a clear margin. We hypothesize this is because the future prediction loss guides policy learning better than DBC's state-action distribution loss.
| | PAD w/o image prediction | DBC | PAD(ours) |
| ------------ | ------------------------ | -------------- | -------------- |
| Success rate | $$43.4\pm1.7$$ | $$47.6\pm3.0$$ | $$72.3\pm1.8$$ |
---
**Q2: Several statements are not supported by references.**
ANS: Thank you for pointing out the missing references! We cite [1,2,3,6] for Line 163, which describes single-task learning algorithms; cite [7,8] for Lines 285-286, which mention the VLM high-level plan; and cite [9] for Lines 286-288, which mention the pre-trained model with an action head.
---
**Q3: Discussion of previous works that benchmark imitation learning methods on Meta-World and why this particular evaluation was chosen. e.g., PRISE reported 5-shot performance on 5 unseen tasks.**
ANS: Numerous works in many categories evaluate imitation algorithms on the Metaworld benchmark with limited demonstrations, such as the diffusion-based algorithms diffusion policy [1] and 3D-DP [2], the video representation learning algorithms R3M [3] and VC-1 [4], and the language-modeling-style policies Embodied-GPT [5] and ACT [6]. Similar to these works, we evaluate PAD's imitation capability under limited robotic data and multi-task settings.
PRISE tests the model's few-shot adaptation capability, as you said. We follow the exact same pipeline to test the adaptation ability of our PAD model. The results, shown below, further verify the sample efficiency of our method thanks to joint prediction.
| | PRISE | PAD |
| --------------------------------------- | -------------- | -------------- |
| 5-shot success rate on 5 specific tasks | $$66.8\pm1.8$$ | $$74.2\pm2.3$$ |
---
**Q4: How were the Meta-World training trajectories collected? Are there existing public datasets of demonstration?**
ANS: We utilized the scripted expert policies provided by the official Meta-World to collect 50 demonstrations per task, in line with prior studies [1,2,6,12].
---
**Q5: How were the tasks that are shown in Table 1 selected?**
ANS: The tasks in Meta-World are categorized into different difficulty levels according to [1]. Due to space constraints, we selected representative tasks from each level to display, along with the average success rate.
---
**Q6: About random seeds and error bars of experiments.**
ANS: Limited by our computational resources, and given the breadth of ablation studies conducted (ablations on image prediction/co-training, depth input, and various model sizes), we were unable to run many seeds at the time of submission. We promise to run three seeds for each experiment and update the results in the paper. Moreover, we observed that the diffusion training process is extremely stable with low variance, corroborating findings from the diffusion policy paper [1].
---
**Q7: How the hyperparameters were tuned. The architectural differences between the proposed method and diffusion policy**
ANS: Yes, there do exist architectural differences. Unlike the official diffusion policy, which utilizes CNN-based image encoders and cross-attention conditioning, our PAD is built upon DiT [10], which patchifies images into tokens and uses feature-wise modulation for conditioning. We have integrated additional modules into DiT to support language and multi-modal inputs/outputs, using the same training hyperparameters as the DiT paper. As discussed in previous works [11], DiT is a highly optimized architecture that may also contribute to our performance gain.
---
**Q8: Do the videos on the paper website show the robot at real-time speed?**
ANS: The old videos were recorded inside the algorithm pipeline, which introduces some bias. We have updated some videos recorded by third-view mobile phones, which are truly real-time.
---
**Q9: The latency of running image diffusion at every time step is a concern. How do the proposed methods and the baselines compare in terms of latency?**
ANS: We fully agree with you that diffusion-based policies typically exhibit lower control frequencies due to the multiple denoising steps involved. In real-world robotic control, our model operates at 1-1.5 Hz using 50-75 denoising steps, compared to the RT-2 model's 3 Hz and the diffusion policy's 3-4 Hz, which works fine in our tested tasks. To further improve the diffusion speed, we can use recent, more advanced accelerated diffusion samplers [13] and execute multiple predicted action steps per inference, as in [1].
---
**Q10: Are code and training data going to be released publicly?**
ANS: We have included an initial code version in the supplementary materials (though it is a little unpolished). We will open-source the refined code, dataset, and checkpoints upon acceptance. We are very glad to contribute to the community!
---
Thank you again for your time and efforts! We are always ready to answer your further questions!
---
Rebuttal Comment 1.1:
Title: References for rebuttal
Comment: [1] Chi C, Feng S, Du Y, et al. Diffusion policy: Visuomotor policy learning via action diffusion[J]. RSS2023
[2] Ze Y, Zhang G, Zhang K, et al. 3d diffusion policy[J]. RSS2024
[3] Nair S, Rajeswaran A, Kumar V, et al. R3m: A universal visual representation for robot manipulation[J]. CoRL2022
[4] Majumdar A, Yadav K, Arnaud S, et al. Where are we in the search for an artificial visual cortex for embodied intelligence?[J]. Neurips2023
[5] Mu Y, Zhang Q, Hu M, et al. Embodiedgpt: Vision-language pre-training via embodied chain of thought[J]. Neurips2023
[6] Zhao T Z, Kumar V, Levine S, et al. Learning fine-grained bimanual manipulation with low-cost hardware[J]. RSS2023
[7] Ahn M, Brohan A, Brown N, et al. Do as i can, not as i say: Grounding language in robotic affordances[J]. CoRL 2022
[8] Driess D, Xia F, Sajjadi M S M, et al. Palm-e: An embodied multimodal language model[J]. arXiv preprint arXiv:2303.03378, 2023.
[9] Chen W, Mees O, Kumar A, et al. Vision-language models provide promptable representations for reinforcement learning[J]. arXiv preprint arXiv:2402.02651, 2024.
[10] Peebles W, Xie S. Scalable diffusion models with transformers[C]//ICCV 2023: 4195-4205.
[11] Esser P, Kulal S, Blattmann A, et al. Scaling rectified flow transformers for high-resolution image synthesis[C]//ICML 2024.
[12] Wang H C, Chen S F, Hsu M H, et al. Diffusion model-augmented behavioral cloning[J]. ICML 2024.
[13] Song Y, Dhariwal P, Chen M, et al. Consistency models[J]. arXiv preprint arXiv:2303.01469, 2023.
---
Rebuttal Comment 1.2:
Comment: Thank you for the rebuttal.
Q1: Did you sweep the lambda hyperparameter for weighting the two loss terms in DBC or did you use the value used in the original paper?
Q7: For the baselines, did you tune the hyperparameters on the benchmarks that are being used here or did you use the same hyperparameters as in the original papers? For the ablations, did you re-tune other hyperparameters or are they kept the same?
---
Rebuttal 2:
Title: Response to Reviewer Hcpx's further questions
Comment: Dear Reviewer Hcpx:
We are very delighted to hear from you! Here are our replies:
Q1: Did you sweep the lambda hyperparameter for weighting the two loss terms in DBC or did you use the value used in the original paper?
Ans: We used the value of lambda from the original paper. As shown in Figure 6 of the DBC paper, the authors performed a hyperparameter sweep and found that a lambda value around 1 yields the best performance. Consequently, we adopted lambda = 1 for our experiments.
Q7: For the baselines, did you tune the hyperparameters on the benchmarks that are being used here or did you use the same hyperparameters as in the original papers? For the ablations, did you re-tune other hyperparameters or are they kept the same?
Ans: For diffusion policy, we use their config for simulation envs (https://diffusion-policy.cs.columbia.edu/data/experiments/image/lift_ph/diffusion_policy_transformer/config.yaml). We tried to tune the prediction-length and transformer-layer parameters in their config but found that their original config performs the best. For RT-1 and SuSIE, we utilized the parameters from their open-source implementations. As for RT-2, since the code was not publicly available, we carefully implemented it based on the descriptions provided in the paper.
Regarding our ablation study, we maintained the same hyperparameters and only removed the specific components being ablated.
---
We hope these answers address your questions. Thank you again for your valuable time! We are always ready to answer any of your new questions!
Best regards,
Authors
---
Rebuttal Comment 2.1:
Comment: > "Q5: How were the tasks that are shown in Table 1 selected? ANS: The tasks in Meta-World are categorized into different difficulty levels according to [1]."
I think this might be the wrong reference as [1] does not evaluate on Meta-World?
---
Rebuttal 3:
Title: About the reference
Comment: Dear Reviewer Hcpx:
Sorry for the wrong reference. The correct reference should be [14]. On page 18 of [14] (experiment details), there is a table that categorizes tasks into easy, medium, hard, and very hard. In our table, the "easier tasks" row includes some of the easy tasks, while the "harder tasks" row encompasses medium, hard, and very hard tasks. This categorization is also employed in the 3D Diffusion Policy paper [15], specifically in Section IV.A, line 17.
We have checked all other citations for accuracy.
Thank you for your time, and we greatly appreciate your detailed feedback!
Best regards,
Authors
[14] Seo, Y., Hafner, D., Liu, H., Liu, F., James, S., Lee, K., & Abbeel, P. Masked world models for visual control. In Conference on Robot Learning (pp. 1332-1344). PMLR.
[15] Ze Y, Zhang G, Zhang K, et al. 3d diffusion policy[J]. RSS 2024
---
Rebuttal 4:
Title: Dear Reviewer Hcpx: Thank you for your time and effort!
Comment: Dear Reviewer Hcpx,
Thank you for your interest in our paper and for your detailed feedback over the past few days!
We are extremely grateful for your patient and thoughtful insights, which have significantly improved our manuscript. We will release the code upon acceptance, enabling you to examine the specifics more thoroughly. As it approach the end of the rebuttal phase, if you find the content of our paper and our responses to your questions satisfactory, we would sincerely appreciate any consideration for a score improvement.
Once again, we deeply appreciate your efforts in reviewing our work and for the insightful questions that have been invaluable in enhancing our research. We still remain open to addressing any further queries you may have!
Best regards,
The Authors | Summary: This submission presents a new learning framework called PAD, which utilizes a diffusion transformer (DiT) to jointly denoise both future image frames (RGB/Depth) and generate actions together. This joint learning process yields a scalable model that achieves higher success rates compared to various other robotics transformer model baselines. The paper shows co-training with internet scale data is not only possible thanks to DiT but also brings about much higher success rates and shows strong visual generalization.
Strengths: - An interesting joint denoising process is presented to generate future frames and actions. The paper further shows the possibility that PAD can be extended to more modalities if the engineering efforts are taken to do so which is great to see.
- Nice use of the diffusion transformer architecture which enables co-training on large video datasets that do not come with action labels, providing internet-scale knowledge that improves success rate. It would be nice to see how PAD without co-training on internet data performs on the generalization benchmark. I would imagine it won't do as well as PAD with internet data but is worth adding in my opinion (not a huge issue though).
- Strong results on generalization benchmarks and solid ablation studies on co-training show great promise about the proposed method.
- PAD also shows good scaling results where larger models/more training yields higher success rates.
Weaknesses: - How well does PAD predict future frames? Only a few next frames are shown in fig 7, but I am curious to see how far it can predict. In some sense, tasks with a lot more ambiguity (e.g. objects that fall and bounce, or finding an object hidden behind something) would be helpful for understanding how PAD handles uncertainty. The current set of tested tasks is fairly straightforward in terms of predicting the next frames.
- What are the failure modes in the real world of PAD? Why might it not succeed? This is not an easy problem to answer regardless but if possible it would be nice to address somewhere.
- What is the data efficiency of PAD compared to past methods? How many demonstrations are needed to get the results? Can PAD use fewer demos (perhaps thanks to future frame predictions)?
Technical Quality: 3
Clarity: 3
Questions for Authors: Questions:
- Some discussion around how this relates to world models like Dreamer or TDMPC for robotics might be important. A joint denoising process that predicts future frames and actions resembles a lot of world models. The world model like nature could be the reason why performance is higher.
- While PAD generalizes, it is more of a generalization on the visual data side instead of manipulation right? The unseen objects probably are graspable by the same actions you would take to grab the seen objects. Regardless the visual generalization looks good.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are very lightly mentioned at the end, around how PAD only uses a few modalities. Some limitations around the data efficiency of PAD might be good to mention, especially given the expensive nature of real-world robotics demonstrations (especially if you want high success rates).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your time and efforts in reviewing our paper! Based on your review, we added a detailed discussion and additional experiments.
---
**Q1: How well does PAD predict future frames? How far can PAD predict? Tasks with a lot more ambiguity (e.g. objects that fall and bounce, or finding an object hidden behind something) would be helpful for understanding how PAD handles uncertainty.**
ANS: Thank you for your insightful comments! For better understanding, we have visualized additional results in the attached PDF file of the global response. With joint training on the internet video dataset, PAD shows impressive image prediction capabilities.
---
**Q2: What are the failure modes in the real world of PAD? Why might it not succeed? This is not an easy problem to answer regardless but if possible it would be nice to address somewhere.**
ANS: This is an interesting question! We summarize some failure modes based on our hundreds of rollouts as follows: (1) unexpected collisions and deformable irregular objects add difficulty to tasks, (2) using a wrist-mounted camera and randomly placing the objects can sometimes cause objects to go out of the frame during execution, leading to failure, and (3) PAD may occasionally mix up similar objects (e.g., a red apple and a red block).
---
**Q3: What is the data efficacy of PAD compared to past methods? How many demonstrations are needed to get the results? Can PAD use less demos (perhaps thanks to future frame predictions?).**
ANS: As detailed in the experiment setups, similar to [x], we collected 50 demonstrations for each task using Metaworld's official scripted policy. To assess the sample efficiency of PAD, we performed ablations with only 20 demonstrations per task.
| | RT2 | PAD(ours) |
| :------------ | ---- | --------- |
| 20 demos/task | 0.44 | **0.62** |
| 50 demos/task | 0.52 | **0.72** |
In the low data regime, PAD also surpasses the baseline.
---
**Q4: Discussion with world models like Dreamer or TDMPC. A joint denoising process that predicts future frames and actions resembles a world model, which could be the reason why performance is higher.**
ANS: Thank you for the suggestion! We completely agree that PAD is closely related to world models, as both encode physical knowledge. Dreamer and TDMPC are online model-based RL algorithms that learn world models with online collected data (without large-scale internet datasets). Previous works have also suggested that scaling up the model size under an online RL setting can be unstable. In contrast, we integrate a world model into a unified DiT model, which is readily scalable and co-trained with internet datasets. In experiments, we observed good scaling performance with the proposed PAD model.
---
**Q5: While PAD generalizes, it is more of a generalization on the visual data side instead of manipulation right? The unseen objects probably are graspable by the same actions you would take to grab the seen objects. Regardless the visual generalization looks good.**
ANS: Yes, you are correct! The generalization is more on the visual side, such as with many distractions, unseen objects, and unseen backgrounds. We acknowledge that the current model cannot perform brand-new skills that are not present in the training data. We will add this to the limitations and future work section. Scaling the robotic datasets may lead to the emergence of new skills, which are interesting research directions.
---
Thank you again for your insightful and constructive comments! We hope our response above can help you better understand our paper and we are always ready to answer your further questions!
---
Rebuttal Comment 1.1:
Title: To reviewer yjvj : Please respond to rebuttal
Comment: Hi reviewer yjvj,
Thank you for your initial review. Please kindly respond to the rebuttal posted by the authors.
Does the rebuttal answer your questions/concerns? If not, why?
Best,
AC
---
Rebuttal 2:
Title: Response
Comment: Thanks for the additional visualizations. I don't think MetaWorld is hard enough or really has any ambiguous tasks, so it's hard to tell how PAD handles situations where objects may unpredictably move around or become occluded.
I currently do not have any other concerns and will bump up my score to 8, as I don't really share the concerns of some of the other reviewers upon reviewing their comments. I do agree with other reviewers that this idea is not really novel; it's a mix of established techniques and, in my opinion, very much an "expected idea". But given that it already performs much better than related work, it is well worth at minimum a score of 8. I can see other papers citing this paper and using it as a baseline, which is impactful enough for me.
I leave my confidence at 4 since I am experienced with robot learning (small/large scale) and simulation, although not experienced with diffusion models specifically.
---
Rebuttal 3:
Title: Thank you for your support!
Comment: Dear Reviewer yjvj,
We sincerely appreciate your support and engagement with our work!
Regarding the visualizations, Metaworld's requirement for precise movements means it cannot adequately demonstrate how PAD handles uncertainty. As we also co-train on bridge datasets, we have included visualizations of our model's predictions on Bridge at the top of our website (as noted in the abstract), **which better illustrate how PAD manages uncertainty in long-horizon prediction**. Visualizations include the following scenarios: (1) After opening the cabinet door, a blue bowl and banana are imagined inside the cabinet. (2) A carrot, previously obscured, appears in the sink, though the ground truth is a pepper; PAD infers the identity of the object based on its exposed portion.
Regarding the novelty, we acknowledge that the high-level idea of robot learning involving prediction and action (more broadly, deriving better representations from video datasets) is a well-explored area. Our contribution and novelty lie in proposing a unified diffusion-based architecture that jointly predicts future frames and actions. This approach is inspired by the impressive recent performance of DiT in image/video generation [1,2]. Our findings suggest that DiT can also handle multi-modal predictions in robotic tasks with appropriate technical design.
Thank you once again for your time and effort in reviewing our work! We are committed to continually refining our research to meet the highest standards.
Best regards,
The Authors
---
[1] Esser P, Kulal S, Blattmann A, et al. Scaling rectified flow transformers for high-resolution image synthesis[C]//ICML 2024.
[2] Ma X, Wang Y, Jia G, et al. Latte: Latent diffusion transformer for video generation[J]. arXiv preprint arXiv:2401.03048, 2024. | Summary: The paper presents a novel framework called PAD. This framework unifies image prediction and robot action within a joint denoising process, leveraging Diffusion Transformers to integrate images and robot states. PAD supports co-training on both robotic demonstrations and large-scale video datasets and can be extended to other robotic modalities like depth images. The framework significantly improves performance on the Metaworld benchmark and real-world robot manipulation tasks, demonstrating superior generalization to unseen tasks.
Strengths: - The motivation of this paper is clear. The paper is easy to follow and the proposed framework makes sense to me. The idea is simple and straightforward but effective.
- The performance looks good compared to the RT-1/RT-2 baselines.
- The experimental section is comprehensive, verifying performance on both sim and real world tasks. The authors also tested the model with various computation costs.
- Implementation details are clearly described and code is provided, which makes it easy for the community to reproduce.
Weaknesses: - The performance improvement on real data is somewhat marginal, which makes it questionable about its effectiveness on real data. It would also be great to showcase the ablation studies for real-world tasks to understand whether this is caused by limited model size and pretrained data (compared to RT-2).
Technical Quality: 3
Clarity: 3
Questions for Authors: - It would be interesting to ablate the co-training data in detail with different sources to understand its effectiveness.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors briefly discussed the limitation in the last section, which looks good to me.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your time and efforts in reviewing our paper! Based on your review, we added a detailed discussion and additional experiments.
---
**Q1: The performance improvement on real data is somewhat marginal. Is it caused by limited model size and pre-trained data compared to RT-2?**
ANS: Thank you for the insightful question! We would like to clarify that although PAD only marginally outperforms RT-2 on real-world seen tasks, PAD demonstrates a significant advantage on unseen tasks, outstripping RT-2 with an average success rate improvement of 28%. This suggests that PAD's superior generalization capabilities stem from its integration of physical knowledge via the future prediction loss. Additionally, RT-2 is already a very strong baseline with its huge 7B parameters, while the largest PAD model contains ~670M parameters. Although PAD shows great scaling results in our ablations, we could not afford to train a diffusion model with billions of parameters from scratch. Instead, we tried a smaller version of RT-2 finetuned on the BLIP2-2.7B model, which does not perform as well as its larger 7B counterpart.
| Real-world Task Success Rate | RT2-Blip2-2.7B | RT2-InstructBlip-7B | PAD(ours) |
| ---------------------------- | -------------- | ------------------- | --------- |
| Seen Task | 0.61 | 0.69 | **0.72** |
| Unseen task | 0.24 | 0.31 | **0.58** |
---
**Q2: It would be interesting to ablate the co-training data in detail with different sources to understand its effectiveness.**
ANS: This is an interesting question! We conducted an ablation study focusing on the scale of the video dataset to assess the impact of co-training with Bridge. Our findings suggest that co-training mainly enhances performance on unseen tasks. A possible explanation is that unseen objects from unseen tasks, such as strawberries, plates, and bananas, already appear in the extensive Bridge datasets. Therefore, co-training significantly benefits these unseen tasks.
| Real-world Task Success Rate | PAD w/o bridge-v2 | PAD with 10% bridge-v2 | PAD(ours) |
| ---------------------------- | ----------------- | ---------------------- | --------- |
| Seen Task | 0.68 | 0.68 | **0.72** |
| Unseen task | 0.40 | 0.48 | **0.58** |
---
Thank you again for your time and efforts! We show our deepest appreciation for your support of our work. We are always ready to answer your further questions!
---
Rebuttal Comment 1.1:
Title: To reviewer EGNn : Please respond to rebuttal
Comment: Hi reviewer EGNn,
Thank you for your initial review. Please kindly respond to the rebuttal posted by the authors.
Does the rebuttal answer your questions/concerns? If not, why?
Best,
AC | Rebuttal 1:
Rebuttal: We sincerely appreciate the time and efforts of all reviewers and the AC in evaluating our paper. We are grateful for the insightful and constructive suggestions, which have helped us improve our work. Below, we summarize our contributions and updates. (Updated Figures are in the attached PDF.)
**1. Contributions:**
- **Motivation:** The motivation of this paper is clear and interesting [EGNn,yjvj,Hcpx].
- **Method:** The joint prediction and action framework is nice and elegant [yjvj,Hcpx], with novel implementation[GDHG]. It seems reasonable that other researchers and practitioners might use or build on this work [Hcpx].
- **Experiments:** The experimental section is comprehensive, including strong and recent baselines, thorough ablations, and good scaling results [EGNn,yjvj,Hcpx]. Implementation details are clearly described and code is provided, which makes it easy for the community to reproduce [EGNn].
- **Presentation:** The method is effective and easy to follow [EGNn,yjvj,Hcpx,GDHG]. The tone of this paper is humble and moderate, without over-presenting or overclaiming[GDHG].
**2. Modifications:**
- Following reviewer EGNn's suggestion, we have ablated the RT-2 model size and co-train dataset to better verify the effectiveness of our methods in real-world tasks.
- Following reviewer yjvj's suggestion, we conducted ablation studies with even fewer demos (20 demos per task). PAD continues to outperform the baseline in lower data regimes.
- Following reviewer Hcpx's suggestion, we reimplemented the DBC baseline within the DiT framework (Detailed in attached pdf). The results show that future prediction loss more effectively guides policy learning compared to DBC's state-action modeling loss.
- Following reviewer yjvj's suggestions, we tested PAD's 5-shot adaptation abilities using PRISE's settings. PAD surpasses the baseline in this scenario as well.
- Following reviewer GDHG's suggestions, we compared our approach with the official GR-1 baseline. Despite both employing joint future and action prediction, our diffusion-based PAD generates images of much higher quality (visualized in attached pdf) and results in a better policy success rate.
- We have corrected all writing typos and added missing references, thanks to the detailed advice provided.
Considering the limited rebuttal time and numerous experiments proposed by all reviewers, we have tried our best to conduct as many experiments as possible. We believe our comprehensive experiments in both simulated and real-world tasks adequately support our claims.
Once again, we sincerely appreciate the reviewers' time and feedback. We remain ready to address any further concerns!
Pdf: /pdf/228472c73a6428c9d896afbd9ca3a4ef7eca1beb.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Learning to Reason via Program Generation, Emulation, and Search | Accept (poster) | Summary: In this paper, the authors present a fine-tuned LLM that reasons by generating pythonic code and executes it through the model rather than passing it to an external interpreter (CoGEX). They also use the model to generate a set of candidate programs from a training set, among which the top-k performing programs are selected and subsequently used to evaluate the test set (CoTACS). The model shows improved performance on several datasets against benchmark models without the pretraining, including with in-context learning and chain-of-thought reasoning. Several additional analyses are provided for model efficacy, including the number of training items and sampled programs.
Strengths: 1. Paper is well-written and easy to follow. All the analyses and results are clearly presented.
2. The CoGEX model provides a useful datapoint on understanding how well LLMs can benefit from program-like planning without relying on an external program interpreter.
3. Allowing the model to use ‘primitive’ functions that are difficult to define programmatically offers greater flexibility than relying on interpreters and the ability to query for factual information.
Weaknesses: 1. The ideas do not strike me as particularly novel. As noted in the paper, using LLMs to generate code is not new, and neither is using language models to simulate program executors. CoTACS is simply a top-k search.
2. It isn’t clear to me what advantage this method has over just passing the code to a Python interpreter. Moreover, what advantage is there to using CoGEX over just using GPT-4, given that the dataset was created using GPT-4 in the first place?
3. While I appreciate the completeness of the presented results (for the sake of good science), including experiments where the improvements are marginal, the results are often marginal at best, and it is unclear in what circumstances this approach is favorable and when it is not. Likewise, the authors use 0-shot CoT as their comparison model (Line 236), which is the weakest CoT baseline possible. Given that using an actual program interpreter is a viable alternative, I would be hesitant to adopt CoGEX without having a clearer understanding of when it outperforms simpler alternatives. In my opinion, the paper would strongly benefit from the following:
A. A fair comparison with CoT, either controlling for the number of examples or showing at how many examples the models achieve parity. Since this is the simplest thing to try for most ML practitioners, even parity would have me favor CoT over most alternatives.
B. A comparison, if applicable, to actual interpreters (e.g. Python). If this is not possible (e.g. due to calls to undefined functions), how often is that the case? Alternatively, if the model was forced to use predefined functions, how much better is it to use undefined functions?
C. I really was hoping for a more extensive Qualitative Analysis section (S 3.4) to get a better understanding of when CoGEX strongly outperforms the baselines. The results in Table 1 demand too much prior knowledge about the individual tasks/datasets from the readers without some guidance on how to think about them.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. (Table 1): Could you define what BM25 means? It wasn’t clear to me from the paper exactly how you performed the benchmarks and how you chose the benchmark settings to ensure fair comparison.
2. (Line 91): What does silver-quality mean?
3. (Table 3 and Figure 2): It wasn’t clear from the paper whether the model generated meaningfully different programs. Footnote 3 (from Line 231) suggests that the model likely generated very similar outputs (assuming lower temperature means more deterministic outputs).
4. (Line 103): A few questions about ‘fuzzy’ primitives.
A. Does the model come up with its own fuzzy primitive operations, or do they all need to be predefined by the engineer or GPT4?
B. How did you determine what level of granularity was appropriate for the dataset?
C. Does the model ever produce fuzzy primitives that were not included in the dataset by GPT4 and if so, how does the model deal with it?
5. (Line 187) Could you provide the actual SD or the 95% interval?
6. (Figure 2) Could you include baselines for these tasks, e.g. results from the off-the-shelf Llama-2 13B on these problems. (It’s hard to tell if the linear trends are misguiding for me or not because the x-axis is in log-scale while the y-axis is in raw percentages. Logit-scale for y-axis could potentially be more informative.)
7. (S3.4) Do you observe instances where the generated program is incorrect but the model still gets the problems correct? In other words, does the model actually follow the incorrect program or does the model ignore it?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 2
Limitations: The authors provide no guarantee of safety; this is acknowledged in the supplements but not addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your feedback on our draft. We are excited that you have found our approach useful and flexible. We aim to address your concerns below:
> W1: The ideas do not strike me as particularly novel. As noted in the paper, using LLMs to generate code is not new, and neither is using language models to simulate program executors. CoTACS is simply a top-k search.
This work shows how a code execution approach can be applied to a significantly broader class of tasks than previously considered. Generating code with LMs is indeed not novel, but to date has been limited to algorithmic tasks that can be easily expressed programmatically. **Our key contribution is to extend this approach to more complex tasks where a full, programmatic implementation is difficult or infeasible** (e.g., requires commonsense). Our novel solution is to generate partial (pseudo-)programs: while these cannot be directly executed, we find that LLMs can simulate their execution, including filling in gaps for calls to complex functions that would have been difficult to implement in code.
> W2: It isn’t clear to me what advantage this method has over just passing the code to Python Interpreter.
Since the generated code represents pseudo-programs that include undefined functions, **passing the code to an interpreter will not work**. Therefore, we rely on the LLM to emulate code execution. As for GPT-4, it can indeed be used to synthesize pseudo-programs at test time, but so can smaller, less expensive LLMs, as we show. Whether GPT-4 can produce more expressive pseudo-programs at test time than our fine-tuned models is unclear and would require additional experiments to verify. The relatively cheap inference of smaller models enables us to perform our program search to find an optimal pseudo-program for a given dataset/task.
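The exact implementation of the program search is not spelled out in this exchange; the following Python sketch only illustrates the general idea of sampling candidate pseudo-programs and keeping the one that scores best on a small set of task examples. `sample_program` and `run_with_program` are hypothetical stand-ins for the fine-tuned model's generation and emulation calls.

```python
def search_best_program(sample_program, run_with_program, examples, n_candidates=10):
    """Sample candidate pseudo-programs and keep the one with the best
    accuracy on a small set of (input, label) task examples."""
    best_program, best_acc = None, -1.0
    for _ in range(n_candidates):
        program = sample_program()  # e.g., model generation at some temperature
        correct = sum(run_with_program(program, x) == y for x, y in examples)
        acc = correct / len(examples)
        if acc > best_acc:
            best_program, best_acc = program, acc
    return best_program, best_acc
```

Because each candidate only needs `len(examples)` emulation calls, cheaper models make this search affordable, which is the point the rebuttal makes.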
> W3.1: the results are often marginal at best.
CoGEX shows substantial improvements over the baselines on most of the tasks we consider, e.g., CoLA (+8.6), CoinFlip (+12.4), WSorting (+6.1), CSQA (+18.2), etc. Our results highlight how CoGEX can provide an effective alternative to reasoning baselines such as vanilla CoT.
> W3.2 the paper would strongly benefit from a fair comparison with CoT..
The main obstacle here is that none of the datasets we experiment with provide ground-truth CoT in their training data. However, our 2-shot BM25 baseline is a fair comparison since it has access to the same number of examples as CoGEX.
> W3.3 . B. A comparison, if applicable, to actual interpreters.
It would not be possible to use actual interpreters, since almost all the generated pseudo-programs include at least one call to an undefined function. In addition, many of the tasks we evaluate on, such as Social QA and Commonsense QA, involve soft reasoning that cannot be described in code.
> W3.4 if the model was forced to use predefined functions, how much better is it to use undefined functions?
Predefined functions require precise reasoning, while our work tackles soft reasoning that cannot be easily described in code.
> W3.5 C. A more extensive Qualitative Analysis section (S 3.4) to get a better understanding of when CoGEX strongly outperforms the baselines.
We include a high-level overview of the tasks/datasets we consider in L147-160. We will add extra information and examples from each dataset to the Appendix in the revision. Also, based on your suggestion, we will include a more detailed qualitative analysis of CoGEX vs. the baselines.
> Q1: (Table 1): Could you define what BM25 means?
BM25 is a ranking algorithm that we use to retrieve similar examples from the training data to use as in-context examples. Retrieval is commonly used to boost in-context learning by using examples that are similar to the input as demonstrations [[1](https://arxiv.org/abs/2101.06804)].
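For readers unfamiliar with BM25, here is a minimal self-contained scoring sketch (the toy corpus and query below are purely illustrative; the paper's actual retrieval setup, tokenization, and parameters may differ):

```python
import math
from collections import Counter

def bm25_scores(query, corpus, k1=1.5, b=0.75):
    """Okapi BM25 score of each tokenized document in `corpus` against `query`."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    # Document frequency of each query term across the corpus.
    df = {t: sum(1 for d in corpus if t in d) for t in set(query)}
    scores = []
    for doc in corpus:
        tf = Counter(doc)
        score = 0.0
        for t in query:
            if t not in tf:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            score += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(score)
    return scores

# Retrieve the 2 most similar training examples to use as in-context demonstrations.
corpus = [["flip", "a", "coin"], ["sum", "the", "numbers"], ["flip", "two", "coins"]]
query = ["flip", "the", "coin"]
scores = bm25_scores(query, corpus)
top2 = sorted(range(len(corpus)), key=lambda i: scores[i], reverse=True)[:2]
```

The `k1` and `b` values are the common defaults; `k1` controls term-frequency saturation and `b` controls length normalization.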
> Q2: Line 91): silver-quality
"Silver quality" typically refers to data that is of good but not perfect quality. It is a step below "gold quality" data, which is considered to be the highest quality and typically manually annotated by human experts.
> Q3: Footnote 3 (from Line 231) suggests that the model likely generated very similar outputs (assuming lower temperature means more deterministic outputs).
In L231, we say that we sample with temperature=0.05 only with the end-to-end model. With search, we use T=0.7 as indicated in L165.
> Q4.1: Does the model come up with its own fuzzy primitive operations?
Indeed, after training, the model learns to generate its own primitives based on the input.
> Q4.2: How did you determine what level of granularity was appropriate for the dataset?
When constructing our training data, GPT-4 handles determining the appropriate granularity for the Alpaca data. Our trained models then learn to generate programs of appropriate granularity based on the input instance. We further apply search to find an optimal program (and granularity) for a given task.
> Q4.3: C. Does the model ever produce fuzzy primitives that were not included in the dataset by GPT4?
Certainly, the model learns to generate appropriate primitives based on the provided task instance. Based on our observations and analysis, the LM can generate and emulate novel primitives not seen in the training data.
> Q5: (Line 187) Could you provide the actual SD or the 95% interval?
We will add this information in the revision.
> Q6: (Figure 2) Could you include baselines for these tasks, e.g. results from the off-the-shelf Llama-2 13B?
We will add a line to the plots to show the baseline performance for each task.
> Q7: (S3.4) Do you observe instances where the generated program is incorrect but the model still gets the problems correct?
Yes, this sometimes happens, especially for tasks with binary output such as Coin Flip. However, we observe that the model mostly follows the program logic, which we verified by looking at the generated intermediate outputs. | Summary: The proposed approach, Code Generation and Emulated Execution (COGEX), involves training LMs to generate pseudo-programs with some undefined functions and then emulate the execution of these programs. This allows the LM's knowledge to fill in the gaps while maintaining a program like reasoning framework. The COGEX model is adapted to new tasks through program search, optimizing performance across dataset instances. The approach shows improvements over standard in-context learning methods, demonstrating that code synthesis can be applied to a wider range of problems.
Strengths: - The concept of pseudo-programs and LLM emulation of programs is interesting.
- The authors evaluated the methodology on many different datasets and tasks.
Weaknesses: I would consider raising my score if a fair number of the following weaknesses can be fixed by the authors. Many of them should be doable without extra experiments.
- Motivation: While in general I like the idea of pseudo-programs, I don’t think it is very well motivated or explained in the paper. For this relatively new concept to be introduced, I would like to see a more precise definition that describes each type of reasoning, whether it is intuition-based with no reasoning, “soft reasoning”, or exact reasoning. State clearly which tasks fall into the camp of “soft reasoning”.
- Examples: I think the motivation is insufficient partly because the motivating examples (Fig. 1, 3, 10) do not seem to require any “complex reasoning”. By “complex reasoning” I specifically mean things like composition of multiple sources of information, long-range dependency of information, the use of logical operations like “and”, “or”, “not”, and “implies”, etc.
- Task/Dataset selection: (also related to previous points) I would want some more clarification on why the authors pick the presented tasks and datasets. What makes these dataset special so that COGEX can be applied to them? What additional conceptual benefits do COGEX bring to them? This will also help motivate the methodology you proposed.
- Formalization: the symbol $f$ for the COGEX model is unnecessarily abused. It can be invoked with either 2 or 3 arguments, which does not type-check as a well-defined formalization. You should at least create two separate symbols, such as $f_{\text{reasoner}}$ and $f_{\text{emulator}}$.
- Algorithm: The while loop in lines 7-10 of Algorithm 1 does not seem to have a termination guarantee. This is a potential flaw in the algorithm. What if the task and dataset are malformed and there exists no proper pseudo-program? Your algorithm should account for this.
- Comparison to GPT-4: I don’t want to suggest doing new experiments with GPT-4 as a baseline, but I do want clarification on why GPT-4 is not used as a zero-shot baseline. I see that the authors use GPT-4 to generate the datasets used for training, so it is definitely not the case that the authors lack access or face hardware/financial limitations. Then why isn’t GPT-4 used as a baseline? I would assume that GPT-4 without training can still be used to synthesize pseudo-programs, right?
- Claim in the experimental results: the authors describe Fig. 4 as “Figure 4 (blue vs gold) shows that CoT… can approach COTACS performance on some NLP classification tasks but generally performs substantially worse.” But when looking at Fig. 4, I don’t find this claim accurate; I can only see a significant difference on CoinFlip and Number Summing.
- Experimental setting: (related to the previous point) the direct comparison with zero-shot CoT might not be fair because your model has undergone fine-tuning, right?
Technical Quality: 2
Clarity: 3
Questions for Authors: (I asked the majority of my questions along with the listed weaknesses)
- Typo in Figure 3 caption: “SocialIQa” -> “Social IQa”
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: - The authors listed the limitations in the Appendix, which I would prefer to see in the main text. This also relates to the first weakness I listed: giving a proper overview would help people identify where and when your methodology could be applied.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your elaborate review and feedback on our submission. We aim to address your concerns below:
> Motivation: While in general I like the idea of pseudo-programs, but I don’t think it is very well motivated or explained in the paper. For this relatively new concept to be introduced, I would like to see a more precise definition that describes each type of reasoning, whether its intuition based with no reasoning, “soft reasoning”, and exact reasoning. State clearly what task falls into the camp of “soft reasoning”.
We define pseudo-programs in L7-8 and L29-34, mainly as programs with one or more leaf functions left undefined. While ours is not a formal definition, we believe it should be sufficient for the scope of our paper. As for “soft reasoning”, it refers to tasks whose solution is difficult to fully describe in code. We agree with your feedback here, and we intend to give a more concrete definition of it in future revisions.
> Examples: I think the motivation is insufficient partly because of that the motivating examples (Fig. 1, 3, 10) do not seem to be requiring any “complex reasoning”. By “complex reasoning” I specifically mean things like composition of multiple sources of information, long range dependency of information, the use of logical operations like “and”, “or”, “not”, and “implies”, and etc.
Many of the tasks we work with require a level of multi-hop reasoning. For example, solving SVAMP requires multistep aggregation of math operations, as in the following example:
```
Robin has 28 packages of gum and 14 packages of candy. There are 6 pieces in each package. How many pieces does Robin have?
Solution: ( ( 28.0 + 14.0 ) / 6.0 )
```
Also, CSQA requires understanding multiple commonsense relations between entities, as in the following example:
```
The teacher told all the students that listening was key, it was the main way they would gain what? [ "empathy", "anxiety", "knowledge", "falling down", "hear things"]
```
You are right that we didn’t consider tasks targeting, e.g., discrete logical operations. However, the **key contribution of our work is not to push the boundary of complex reasoning that LLMs can do, but to show that code generation can be utilized to solve NLP tasks whose solution cannot easily be described in code**, and we show this on a variety of standard reasoning benchmarks that are commonly used in the literature.
> Why the authors pick the presented tasks and datasets. What makes these dataset special so that COGEX can be applied to them? What additional conceptual benefits do COGEX bring to them?
We chose these particular datasets to get a representative sample of the different categories of NLP tasks currently of interest to the community (e.g., typical classification like SST/emotion, commonsense QA tasks (CSQA, SIQA), math (SVAMP), etc.). There is nothing particularly “special” about these datasets except for the fact that a shared "plan," expressible as a discrete pseudo-program, can be readily applied to all the different data points for the given task. We drew a number of them from recent work on code generation, e.g., the text classification tasks from [[1](https://aclanthology.org/2024.findings-naacl.259.pdf)].
> Formalization: the symbol $f$ for COGEX model is unnecessarily abused. [...] You should at least create two separate symbols, such as $f_{\text{reasoner}}$ and $f_{\text{emulator}}$
Thanks for pointing this out; we agree that it would be clearer to create two separate symbols and will modify the paper accordingly.
> Algorithm: The while loop in line 7-10 of Algorithm 1 does not seem to have termination guarantee. This is a potential flaw in the algorithm. What if the task and dataset is malformed and there exists no proper pseudo-program? Your algorithm should account for this.
Thank you for this insightful observation. We apologize for overlooking this case and will modify the algorithm accordingly in the revision.
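As one concrete (hypothetical) way to add the termination guarantee the reviewer asks for, the search loop can be capped at a maximum number of attempts and fall back to the best program found so far; the sketch below assumes generic `sample_program` and `score` callables rather than the paper's actual Algorithm 1:

```python
def search_with_termination(sample_program, score, threshold=0.9, max_attempts=50):
    """Bounded variant of a search loop: stop once a candidate scores at or
    above `threshold`, or after `max_attempts` samples, returning the best
    candidate so far. This guarantees termination even when the task/dataset
    is malformed and no adequate pseudo-program exists."""
    best_program, best_score = None, float("-inf")
    for _ in range(max_attempts):
        program = sample_program()
        s = score(program)
        if s > best_score:
            best_program, best_score = program, s
        if best_score >= threshold:
            break  # good enough; early exit
    return best_program, best_score
```

If `max_attempts` is exhausted, the caller still receives the best-scoring program instead of looping forever.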
> why isn’t GPT-4 used as a baseline? I would assume that GPT-4 without training can still be used to synthesize pseudo-programs, right?
Our concept of pseudo-programs is agnostic to the synthesizer model used. Indeed, GPT-4 can be used to synthesize pseudo-programs at test time, but so can smaller, less expensive LLMs, as we show. Whether GPT-4 can produce more expressive pseudo-programs at test time than our fine-tuned models is unclear and would require additional experiments to verify. In addition, the relatively cheap inference of smaller models enables us to perform our program search during training to find an optimal pseudo-program for a given dataset/task.
> When looking at Fig.4, I don’t see this claim accurate. I can only see significant difference on CoinFlip and Number Summing.
Thank you for the observation. We apologize for this mistake, and we will revise to make it clear that CoTACS substantially outperforms CoT on Summing, SVAMP, and CoinFlip.
> Experimental setting: (related to the previous point) the direct comparison with zero-shot CoT might not be fair because your model has undergone fine-tuning, right?
Zero-shot CoT refers to the way the model is prompted, i.e., without in-context examples and using “Let’s think step-by-step” in the instruction. But just like our models are fine-tuned on the CoGEX data, the baseline is fine-tuned on the original Alpaca dataset before prompting. So the comparison is fair.
> The author listed the limitations in the Appendix, which I would prefer doing in the main text. This also relates to the first listed weakness that I wrote: giving a proper overview would help people identify where and when your methodology could be applied.
Thank you for this observation. This was due to the page limit, but we will make sure to move the limitations section to the main text as you suggested.
We hope we have addressed all your concerns. Please let us know if you have any more concerns or questions.
---
Rebuttal Comment 1.1:
Title: Thanks for the Rebuttal
Comment: Thank you for your rebuttal. While the response cleared up some of my concerns, there is still much confusion left, such as the motivation of pseudo-programs and the scientific definition of exact and soft reasoning. I’m keeping my current rating as is.
---
Reply to Comment 1.1.1:
Title: Thanks for the response
Comment: Thank you for your response!
Could you give a few more specifics about your remaining points of confusion following our responses in the rebuttal? Hopefully we can clarify them during the remaining time of discussion period.
---
Reply to Comment 1.1.2:
Title: Clarification about pseudo-programs and soft reasoning
Comment: If it is helpful, here is a more specific comparison between the tasks we consider with pseudo-programs vs what previous works have done with regular programs.
The PAL (Program-aided Language Models) paper ([Gao et al 2023](https://arxiv.org/pdf/2211.10435)) uses Codex to generate fully executable Python programs to solve tasks in the following categories: Math reasoning, Symbolic reasoning, Algorithmic reasoning.
These involve tasks like GSM, where the program looks like this:
```
Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?
money_initial = 23
bagels = 5
bagel_cost = 3
money_spent = bagels * bagel_cost
money_left = money_initial - money_spent
answer = money_left
```
The task thus boils down to 1. Parsing the problem into symbolic variable assignments, then 2. applying python-implemented internal functions like addition and subtraction.
This is the case for all the kinds of tasks they consider for PAL. E.g. for their algorithmic tasks like OBJECT COUNTING:
```
# Q: I have a chair, two potatoes, a cauliflower, a lettuce head, two tables, a cabbage, two onions, and three fridges. How many vegetables do I have?
vegetables_to_count = {
'potato': 2,
'cauliflower': 1,
'lettuce head': 1,
'cabbage': 1,
'onion': 2
}
answer = sum(vegetables_to_count.values())
```
The LM thus serves as a parser of the problem into a setting in which the solution falls out based on symbolic manipulation operations that Python easily handles. Thus, our definition of “exact reasoning” tasks is: those for which a simple solution recipe contains only definitions and well-formed Pythonic operations to aggregate the definitions into an answer.
Now let’s consider some of the tasks we look at in our paper, like the CSQA problem from our rebuttal:
```
The teacher told all the students that listening was key, it was the main way they would gain what? (A) "empathy" (B) "anxiety" (C) "knowledge" (D) "falling down" (E) "hear things"
```
What is the corresponding PAL program that could solve this question? It requires considering each option and the possible common sense relations between them and the situation in the question. One could code up a query to a symbolic ontology, but this is a much more complex solution than just generating code from an LM and feeding it to a Python executor. Instead, our tactic is to let the LM generate a pseudo-program that doesn’t actually get executed; it suggests to the LM a programmatic means to solve the CSQA instance, but circumvents the hard-to-code parts of the solution via underspecified function calls. Since it is underspecified, we use the LM to emulate the program execution because Python would not be able to.
```
# Step 1: Extract the main statement and options from the question.
statement, options = extract_statement_and_options(question)
# Step 2: Identify the key action and its purpose in the statement.
key_action, purpose = identify_key_action_and_purpose(statement)
# Step 3: Match the purpose with the most relevant option.
answer = match_purpose_with_option(purpose, options)
return {'statement': statement, 'key_action': key_action, 'purpose': purpose, 'answer': answer}
```
Functions like ‘identify_key_action_and_purpose’ are undefined; they would be very hard to actually code up in pure Python. We face the same sort of challenge for tasks like sentiment analysis.
Thus, by “soft reasoning” we refer to NLP tasks that require nontrivial reasoning steps that you can’t write simple Python snippets to handle, such as ‘identify_key_action_and_purpose’.
We described this idea in L29-34 of the submission, but will expand it to make this more clear to future readers. As we state in the paper and in our rebuttal, ours is the first paper to handle these sorts of soft reasoning tasks using a code-based approach.
We hope this helps clear up your confusion. Please let us know if not, we are happy to do so while discussion period is still open. | Summary: This paper proposes a means of training language models to perform semi-programmatic reasoning, along with an adaptation method based on program search to specialize the resulting models for particular downstream tasks without updating their parameters.
The authors use GPT-4 to generate programmatic reasoning strategies from Alpaca instruction-tuning examples, allowing GPT-4 to include undefined function calls as the resulting code does not actually need to be executable, only used to guide the downstream language model's inference process.
The authors evaluate their method by fine-tuning Llama 2 models on their version of the Alpaca dataset, then comparing benchmark performance against few-shot, instruction-tuned zero-shot, and chain-of-thought baselines on several tasks picked to represent both algorithmic and non-algorithmic reasoning.
Experimental results indicate that the proposed method outperforms the considered baselines across most tasks by a good margin.
Strengths: - The proposed method appears to strike a nice balance between the rigidity of code-structured reasoning and freeform CoT, as it performs well across domains where CoT excels and ones where it struggles.
- Being able to specialize reasoning for a particular task by selecting a pseudo-program is neat. It also appears to work well with a much smaller number of examples than are required to effectively fine-tune a model.
- The authors do a very good job of proactively answering natural questions with their ablations in section 3.3.
- As for clarity of exposition, all the writing is easy to follow and the paper is laid out intuitively.
Weaknesses: 1. The authors don't report statistical significance (e.g. through bootstrapping) or variance across runs with different subsamplings of training data.
2. As far as I can tell, the reported experiments don't really include domains where the correct solution can be reached by actually executing a fully specified program, besides Number Summing (accordingly, a program-of-thought baseline is also missing). In CoGEX the model is responsible for simulating program execution, so it seems likely that it would underperform in these settings. While actually integrating an interpreter into a system that can still handle underspecified functions would be out of scope, it would be good to address this issue at the conceptual level somewhere in the text.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Did you try few-shot with the Alpaca (instruction-tuned) models that you used zero-shot in the paper?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors include a short statement to the effect that their models may make reasoning mistakes and the model artifacts are provided without any guarantees of safety. It might be worth including the fact that models may fail to execute generated code consistently, and thus generated code may be seen as an unfaithful explanation of model reasoning.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for the constructive feedback on our submission. We are glad that you found our approach neat, our ablations useful, and our writing clear. We aim to address your concerns below.
> W1: The authors don't report statistical significance (e.g. through bootstrapping) or variance across runs with different subsamplings of training data.
Thank you for pointing this out. Due to the limited time of the rebuttal period, we will compute the statistical significance of CoGEX against the baselines and add variance information to Figure 2 in the revision.
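For reference, a minimal paired-bootstrap sketch of the kind the reviewer suggests (the per-example correctness arrays below are illustrative, not the paper's actual results):

```python
import random

def bootstrap_diff_ci(correct_a, correct_b, n_boot=10000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the accuracy difference between two systems
    scored on the same examples (1 = correct, 0 = incorrect). Resampling the
    same indices for both systems makes the comparison paired."""
    rng = random.Random(seed)
    n = len(correct_a)
    diffs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        acc_a = sum(correct_a[i] for i in idx) / n
        acc_b = sum(correct_b[i] for i in idx) / n
        diffs.append(acc_a - acc_b)
    diffs.sort()
    lo = diffs[int(alpha / 2 * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

If the resulting interval excludes 0, the accuracy difference is significant at the chosen level.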
> W2: As far as I can tell, the reported experiments don't really include domains where the correct solution can be reached by actually executing a fully specified program, besides Number Summing (accordingly, a program-of-thought baseline is also missing).
Thank you for this insight. As there is already much literature about generating correct programs (e.g., program-of-thought, PAL, etc.), we want to focus on tasks where this is less feasible. We therefore focus on tasks where it is hard to describe the reasoning in code (see line number L29-L32) and which will not benefit much from an interpreter. We will highlight the pros and cons of CoGEX compared to using an actual interpreter in the revision, as you suggested.
> Q1: Did you try few-shot with the Alpaca (instruction-tuned) models that you used zero-shot in the paper?
We did, but found it to perform significantly worse than zero-shot. This is mainly because instruction tuning is zero-shot, i.e., the model is trained to generate output given the instruction and input without in-context examples. We will include a note describing this in the next version of the paper.
> It might be worth including the fact that models may fail to execute generated code consistently, and thus generated code may be seen as an unfaithful explanation of model reasoning.
Thank you for noting this. We will include this in our limitations in the paper revision. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Reimagining Mutual Information for Enhanced Defense against Data Leakage in Collaborative Inference | Accept (poster) | Summary: The collaborative inference enables resource-constrained IoT devices to support deep learning applications without sharing raw data with the cloud server, but previous research has revealed that this approach still leaks input and prediction information from edge devices. To address this vulnerability, the authors propose InfoScissors, a defense strategy designed to minimize the mutual information between a model's inner states and the device's input and outputs. The effectiveness of InfoScissors is evaluated on common datasets like CIFAR10/100 against various attacks, demonstrating its superiority over existing defense strategies that utilize mutual information. The paper also provides theoretical analysis to support the proposed method.
Strengths: + Study of an important privacy issue in collaborative inference.
Weaknesses: - The novelty of the proposed method is somewhat limited. The proposed defense InfoScissors relies on the Mutual Information (MI) regularization that has been explored in previous studies like MID. The paper directly adopts a new formulation [38] for the upper bound of the MI terms. As a result, the novelty and contribution of this paper appear to be limited. Compared to previous VIB-based approaches, even though the authors manage to identify several advantages and provide a theoretical explanation, the challenges are not clear.
- Limited evaluation of adaptive attacks, making the robustness of the proposed defense InfoScissors doubtful. The only mention of an adaptive attack is the AMC attack, but there are not many details about it. A strong adversary can follow the proposed method to train a surrogate classifier and generator to mimic the victim's data. I suggest the authors clarify the adversary's capacity and thoroughly evaluate the potential privacy breach after the method is known by the adversary.
- Limited evaluation of benchmarks. The paper only evaluates on simple benchmarks like CIFAR10/100 but not on larger-scale datasets like ImageNet. Evaluating different model architectures is also important to demonstrate the scalability of the proposed defense. Also, it seems that the method is only evaluated on vision classification models; is it applicable to generative models or other tasks? If so, what is the cost in efficiency and accuracy?
Technical Quality: 2
Clarity: 2
Questions for Authors: See comments above
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your review and constructive suggestions. We are delighted to answer the questions and address the concerns.
## W1.
The novelty of our theoretical analysis lies in explaining why CLUB, rather than other approximations, should be chosen to approximate the upper bound of the mutual information between the data and the representations. We do not modify the detailed approximation method (i.e., CLUB), and that is not our focus; rather, we theoretically analyze the superiority of applying CLUB over other mutual information approximations in collaborative inference defense. The multi-stage training procedure derived from the multiple training objectives is also a novelty of our paper. We appreciate that you recognize that we provide a theoretical explanation of the superiority of applying CLUB. Once this explanation is understood, choosing CLUB for the approximation is no longer challenging; without our analysis, however, it is difficult to achieve an optimal performance-defense trade-off because there is no clue for selecting the optimal MI approximation for privacy preservation. Previous works select VIB because it is the most widely applied approximation method, but it is not optimal under the collaborative-inference privacy-preservation setting.
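For concreteness, these are the two standard bounds being compared (notation is assumed here rather than taken from the paper: $q_\theta$ is a variational approximation to the conditional $p(z \mid x)$, and $r(z)$ is a fixed prior over representations):

```latex
% CLUB upper bound on I(x; z) (Cheng et al., 2020):
I(x;z) \;\le\; \mathbb{E}_{p(x,z)}\!\left[\log q_\theta(z \mid x)\right]
      \;-\; \mathbb{E}_{p(x)}\,\mathbb{E}_{p(z)}\!\left[\log q_\theta(z \mid x)\right]

% VIB-style variational upper bound on I(x; z):
I(x;z) \;\le\; \mathbb{E}_{p(x)}\!\left[\mathrm{KL}\big(p(z \mid x)\,\|\,r(z)\big)\right]
```

CLUB is a guaranteed upper bound when $q_\theta$ equals the true conditional $p(z \mid x)$; with a learned $q_\theta$ it holds under the conditions stated in the CLUB paper. The rebuttal's claim is precisely about which of these two surrogates yields the better performance-defense trade-off when minimized.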
## W2.
Thanks for the constructive suggestion; we will add more details about the adaptive attack, the AMC attack, and the adversary's capability to Sections 3.2 and 5.1 of the revised draft. In our threat model, the adversary (i.e., the malicious server) is capable of training a surrogate classifier and generator to mimic the victim's data. The primary goal of our defense is to make the collaborative model depend less on the encoder on the server for specific tasks, such that data and prediction information can be filtered during inference. Under the adaptive attack setting, we enable the adversary to modify the training procedure such that it can trick the collaborative model into relying more on its encoder, thereby extracting more private data information from the encoder's features. In this setting, the adversary directly confronts the fundamental principle of our defense method, constituting a highly potent form of adaptive attack. The results show that our defense is capable of withstanding such a strong adaptive attack.
## W3.
Thanks for the constructive advice. To evaluate performance on larger-scale datasets with higher dimensionality, we conducted additional experiments on the mini-ImageNet dataset, a subset of ImageNet. We also ran experiments with the Vision Transformer (ViT-B/16). Due to the time limit, we have so far obtained results only for the model completion attack. Here are the results of our defense under different defense levels.
| Model | $\lambda_l$ | Accuracy($\uparrow$) | Attack Accuracy($\downarrow$) |
|--------|---------|------------|-------------|
| ResNet18 | 0 (no defense) | 58.05% | 35.44% |
| | 0.05 | 57.58% | 23.64% |
| | 0.1 | 57.03% | 11.17% |
| | 0.3 | 56.87% | 1.87% |
| ViT-B/16 | 0 (no defense) | 54.85% | 31.24% |
| | 0.05 | 53.75% | 18.35% |
| | 0.1 | 52.52% | 8.62% |
| | 0.3 | 51.83% | 1.56% |
The comparison with other baselines can be found in the global response PDF file. The results show that our method achieves the best defense-performance trade-off on the large-scale dataset and generalizes to other model architectures. Our defense against model inversion attacks also indicates the method's potential for generative tasks. In addition, we evaluated our method on the credit fraud detection task on the UCI\_Default\_Credit dataset with MLPs; the results are shown below.
| $\lambda_l$ | Task AUC ($\uparrow$, best 1) | Attack AUC ($\downarrow$, best 0.5) |
|--------|---------|------------|
| 0 (no defense) | 0.783 | 0.673 |
| 0.05 | 0.778 | 0.603 |
| 0.1 | 0.755 | 0.536 |
| 0.3 | 0.746 | 0.502 |
These results show that our method also achieves a good performance-privacy trade-off across different tasks.
We appreciate your review and constructive suggestions. Please let us know if you have any new questions or concerns.
---
Rebuttal 2:
Title: Follow-up on Rebuttal
Comment: I hope our rebuttal addressed your concerns effectively. As we approach the end of the discussion period, we wanted to check if you have any remaining questions or points you'd like to clarify. Your insights are very important to us, and we would appreciate any further feedback you can provide before the discussion period ends. | Summary: This paper considers a setting where two parties (Cloud Server and Edge Device) are collaboratively training a deep model. The threat model considers both inputs and outputs to be private data of the Edge Device that should be protected from the Cloud Server. The authors propose a learning algorithm that is theoretically motivated by a recent method called CLUB [38], and performs better than other baselines on two datasets CIFAR10 and CIFAR100. The main contribution of this paper is that the authors show CLUB offers better optimization results compared to Variational Information Bottleneck (VIB).
Strengths: The paper is well written, easy to follow, and relevant to the ML community. The baselines are studied and elaborated properly. The algorithm is supported both empirically and theoretically and the results suggest state-of-the-art performance. Overall, I am happy about this work and I appreciate the author's hard work and novelty.
Weaknesses: The main weaknesses are motivation of the problem and the evaluations of the algorithm. The paper needs some improvements in this regard. Please see below.
Technical Quality: 3
Clarity: 3
Questions for Authors: (1) One important, but totally ignored, aspect of the proposed framework (Figure 1) is that the Cloud Server can run the classifier part (that is \theta^c) and easily discover y without needing any sophisticated attack. Unless, the authors argue that the classifier part is not available to Cloud Server! Which is a strong and unrealistic assumption because usually the server is the entity that trains the model and then splits it among the Edge devices. I am struggling to find a real-world application in which the Edge Device has access to the classifier part but the Cloud Server does not have that access. Authors need to clarify and motivate this.
(2) When reading the paper, it is not clear to distinguish between training vs. inference stage. The title mentions “Collaborative Inference”, but the paper is actually more related to “Collaborative Training”. The fact that two parties (Cloud Server and Edge Device) are collaborating in training a model or they are collaborating in making a prediction (via a trained model in inference mode) makes a huge difference in terms of threat model and evaluation. The authors need to revise the text and make this as clear as possible from the beginning of the paper. It is also suggested to discuss relevance to a work published last year https://arxiv.org/abs/2310.13384 (Salted Inference: Enhancing Privacy while Maintaining Efficiency of Split Inference in Mobile Computing)
(3) The authors should compare and clarify the contribution of this paper compared to Club [38]. They should explain to what extent the theoretical analysis of this paper is novel compared to what has been already presented in [38].
(4) The paper should show the performance of the work on more complex data types. The two chosen datasets are of similar complexity. Similarly, beside ResNet 18, other model architectures should be examined. It is not clear if this method only works for ConvNets or whether it supports other types of layers.
(5) In Figure 3, it is not clear what is the difference between different rows. For each method there are three pairs of images and results, but how different are these and in what sense?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The main limitation is that the generalization of the method to other complex datasets and model architectures is not explored or discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your review and constructive suggestions. We are delighted to answer the questions and address the concerns.
## W1.
In real life, there are applications and settings in which the classifier or other head models are deployed on the edge. For example, in some medical applications, the encoder on the cloud server extracts the features while the diagnosis results (predictions) are computed on the wearable devices, since the diagnosis results might contain the user's private information. Another practical setting is that the user might need to fine-tune the classifier or other head models on the device using their private data, which cannot be shared with the server. In such cases, the classifier has to be deployed on the edge device, and the prediction needs to be protected.
## W2.
Thanks for your constructive suggestion; we are happy to explain the setting of our paper. The goal of our method is to preserve user privacy during collaborative inference, and privacy preservation is achieved by intervening in the training phase (i.e., collaborative training). By applying our defense, the model is regularized to filter out private information when extracting features and representations, such that privacy is preserved during the inference phase. We will revise the draft to clarify this point in the introduction.
Thanks for the reference. The setting of that paper is similar to ours: it also achieves inference privacy by regularizing the training procedure, but it focuses only on prediction protection. Technically, they add noise to the training label by randomly sampling a class label, which is intuitively reasonable. Our training method, in contrast, is theoretically derived from a mutual-information perspective, and we provide a theoretical analysis of the defense performance. Although our methodology is distinct, we appreciate the suggestion and will include this work in the related work section.
## W3.
CLUB is a paper focused on approximating the mutual information upper bound, while our paper utilizes mutual information to defend against privacy leakage. Thus, our contribution centers on privacy preservation, which differs from the goal of CLUB. The novelty of our theoretical analysis lies in explaining why CLUB, rather than other approximations, should be chosen to approximate the upper bound of the mutual information between the data and the representations. When one wants to minimize mutual information, there are many possible approximations. Most existing related works apply the *Variational Information Bottleneck (VIB)* [22], since it is the most widely used approximation of the mutual information upper bound. However, our analysis shows that when defending data privacy, VIB is sub-optimal and can sacrifice too much performance; following this analysis, CLUB emerges as the better approximation. In other words, we do not modify the approximation method itself (i.e., CLUB); that is not our focus or novelty. Our novelty is the theoretical analysis of CLUB's superiority over other mutual information approximations in collaborative inference defense. The multi-stage training procedure derived from the multiple training objectives is a further novelty of our paper.
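As an illustrative aside (not taken from the paper under review or from the CLUB paper itself), the sample-based CLUB estimator can be sketched in a few lines of NumPy. Here `mu_fn` and `logvar_fn` are hypothetical stand-ins for the variational network $q(z|x)$, assumed to be a diagonal Gaussian:

```python
import numpy as np

def club_upper_bound(x, z, mu_fn, logvar_fn):
    """Sample-based CLUB estimate of an upper bound on I(x; z):
    E_{p(x,z)}[log q(z|x)] - E_{p(x)}E_{p(z)}[log q(z|x)],
    with q(z|x) a diagonal Gaussian parameterized by mu_fn / logvar_fn.
    Constant log-normalizer terms cancel between the two expectations.
    """
    mu, logvar = mu_fn(x), logvar_fn(x)                      # (n, d)
    # positive term: log q(z_i | x_i) for matched pairs
    pos = -0.5 * (((z - mu) ** 2) / np.exp(logvar) + logvar).sum(axis=1)
    # negative term: log q(z_j | x_i) averaged over all (i, j) pairs
    diff = z[None, :, :] - mu[:, None, :]                    # (n, n, d)
    neg = -0.5 * ((diff ** 2) / np.exp(logvar)[:, None, :]
                  + logvar[:, None, :]).sum(axis=2)
    return pos.mean() - neg.mean()
```

When x and z are strongly dependent, matched pairs score much higher under q(z|x) than mismatched pairs, so the estimate is large; for independent x and z it is close to zero. Minimizing this quantity during training therefore pushes down an estimate of the mutual information.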
## W4.
Thanks for the constructive advice. To evaluate performance on larger-scale datasets with higher dimensionality, we conducted additional experiments on the mini-ImageNet dataset, a subset of ImageNet. We also ran experiments with the Vision Transformer (ViT-B/16). Due to the time limit, we have so far obtained results only for the model completion attack. Here are the results of our defense under different defense levels.
| Model | $\lambda_l$ | Accuracy($\uparrow$) | Attack Accuracy($\downarrow$) |
|--------|---------|------------|-------------|
| ResNet18 | 0 (no defense) | 58.05% | 35.44% |
| | 0.05 | 57.58% | 23.64% |
| | 0.1 | 57.03% | 11.17% |
| | 0.3 | 56.87% | 1.87% |
| ViT-B/16 | 0 (no defense) | 54.85% | 31.24% |
| | 0.05 | 53.75% | 18.35% |
| | 0.1 | 52.52% | 8.62% |
| | 0.3 | 51.83% | 1.56% |
The comparison with other baselines can be found in the global response PDF file. The results show that our method achieves the best defense-performance trade-off on the large-scale dataset and generalizes to other model architectures.
## W5.
We apologize for the confusion. Different rows present different defense levels: the bottom row applies the highest defense strength, so the reconstructed image is of low quality while some accuracy is sacrificed. We will revise the caption to clarify this.
---
Rebuttal Comment 1.1:
Comment: I’m happy with your answers and would strongly recommend enhancing the presentation of your paper. Including results from additional datasets and different model architectures would be an excellent idea. In particular, including datasets from real-world applications, such as wearables or medical images (as mentioned in your answer on motivation), could show the generalizability of your methodology and its relevance to a wider audience. Overall, I’m satisfied with the work and look forward to seeing these improvements.
---
Reply to Comment 1.1.1:
Title: Response by Authors
Comment: Thanks for your response! We will follow your comments and revise the draft accordingly. | Summary: This paper provides InfoScissors, a learning algorithm that regularizes the model during the training phase. This paper also compares their method with VIB-based methods and evaluates it with multiple attacks.
Strengths: The paper provides the theoretical analysis for the defense method and also compares it with VIB-based methods.
Weaknesses: The author claims that LLMs cannot be handled on edge devices; however, in the evaluation part, they only use CIFAR10 and CIFAR100, both of which can be handled by edge devices. The paper does not measure the method with real large-size datasets.
Technical Quality: 3
Clarity: 3
Questions for Authors: No
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper does not measure the method with real large-size datasets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the constructive advice. To evaluate performance on larger-scale datasets with higher dimensionality, we conducted additional experiments on the mini-ImageNet dataset, a subset of ImageNet. We also ran experiments with the Vision Transformer (ViT-B/16). Due to the time limit, we have so far obtained results only for the model completion attack. Here are the results of our defense under different defense levels.
| Model | $\lambda_l$ | Accuracy($\uparrow$) | Attack Accuracy($\downarrow$) |
|--------|---------|------------|-------------|
| ResNet18 | 0 (no defense) | 58.05% | 35.44% |
| | 0.05 | 57.58% | 23.64% |
| | 0.1 | 57.03% | 11.17% |
| | 0.3 | 56.87% | 1.87% |
| ViT-B/16 | 0 (no defense) | 54.85% | 31.24% |
| | 0.05 | 53.75% | 18.35% |
| | 0.1 | 52.52% | 8.62% |
| | 0.3 | 51.83% | 1.56% |
The comparison with other baselines can be found in the global response PDF file. Our method achieves the best defense-performance trade-off on the large-scale dataset and generalizes to other model architectures.
---
Rebuttal 2:
Title: Follow-up on Rebuttal
Comment: May I kindly inquire if you have any further concerns or questions after reviewing our rebuttal? As the discussion period is nearing its conclusion, your feedback is incredibly valuable to us. I would greatly appreciate it if you could let us know if there are any additional points you'd like to discuss or if our rebuttal satisfactorily addresses your concerns. | null | null | Rebuttal 1:
Rebuttal: Here are the results against PMC attacks on mini-ImageNet with different model architectures.
Pdf: /pdf/37ab59fd41f611d7c3a70fe023d79366bb1eb43e.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
KALM: Knowledgeable Agents by Offline Reinforcement Learning from Large Language Model Rollouts | Accept (poster) | Summary: This paper introduces a novel approach, KALM (Knowledgeable Agents from Language Model Rollouts). KALM extracts knowledge from LLMs in the form of imaginary rollouts, which agents can learn through offline RL. KALM fine-tunes the LLM to bridge the semantic gap between LLMs and RL agents. The paper demonstrates the effectiveness of KALM on robotic manipulation tasks, achieving a higher success rate than baseline methods, especially in unseen tasks.
Strengths: - The idea of extracting knowledge from LLMs in the form of imaginary rollouts is interesting and useful.
- The experimental analyses are sufficient and well-conducted.
Weaknesses: - Constrained by the limited context length, generating a complete trajectory seems unreliable for more complex tasks with longer trajectories. Moreover, there is a lack of details regarding the trajectory length of the experimental tasks.
- The proposed method requires the environment-built-in reward function to obtain and incorporate rewards, which may not be accessible for some other tasks. What if training the LLM to generate the rewards?
Technical Quality: 3
Clarity: 3
Questions for Authors: - How is the dataset collected? What is the quality of the policy used to collect data?
- Why use only 6400 rollout-goal pairs to train offline RL, given there are 100,000 rollout-goal pairs available?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors mentioned that the generation of both state and action increases the burden on the LLM. Additionally, the current version is limited to state in the form of vector.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and valuable comments. Below, we address each of your questions and provide detailed responses to your concerns.
> Q1: Constrained by the limited context length, generating a complete trajectory seems unreliable for more complex tasks with longer trajectories. Moreover, there is a lack of details regarding the trajectory length of the experimental tasks.
**Comment 1:** Constrained by the limited context length, generating a complete trajectory seems unreliable for more complex tasks with longer trajectories.
**Response 1:** This is a good point, and we understand your concern about trajectory generation for long-horizon tasks. We clarify that the current experiments primarily demonstrate the efficacy of our proposed methodology, which extracts knowledge from the LLM to facilitate low-level control. For tasks with longer trajectories, there are established techniques verified to be effective, e.g., uncertainty estimation and branched rollouts in MOPO [1], and the long-range coherence techniques in Sora [2], which can generate video exceeding one minute. While these techniques are compatible with KALM, they were not employed in this study because the average trajectory length was approximately 70 timesteps (as shown below), which did not necessitate their integration. It would be interesting for future research to investigate how these techniques could be incorporated into the KALM framework to handle longer-horizon tasks.
**Comment 2:** there is a lack of details regarding the trajectory length of the experimental tasks.
**Response 2:** Thanks for your suggestion. We have included the details regarding the trajectory length of offline data, as shown in the following table.
| CLEVR-Robot | Min | Mean | Max |
| :------------------: | :--: | :--: | :--: |
| Task in offline data | 2.0 | 30.8 | 50.0 |
| Rephrasing goals | 1.0 | 17.2 | 50.0 |
| Unseen (easy) | 1.0 | 1.0 | 1.0 |
| Unseen (hard) | 1.0 | 45.1 | 50.0 |
| Meta-world | Min | Mean | Max |
| :------------------: | :---: | :---: | :---: |
| Task in offline data | 17.0 | 69.5 | 100.0 |
| Rephrasing goals | 14.0 | 73.7 | 100.0 |
| Unseen (easy) | 17.0 | 54.7 | 100.0 |
| Unseen (hard) | 100.0 | 100.0 | 100.0 |
> Q2: The proposed method requires the environment-built-in reward function to obtain and incorporate rewards, which may not be accessible for some other tasks. What if training the LLM to generate the rewards?
We acknowledge that the availability of a built-in reward function can be a limitation in environments where such functions are not readily accessible or well-defined. However, we emphasize that the main focus of this work is the effective utilization of the knowledge in the LLM for low-level control, and the proposed method is not inherently dependent on the existence of such a reward function. When such functions are absent, training the LLM to generate rewards is a potential solution, though it may add to the burden of LLM training. As an alternative, there are other well-validated methods that utilize LLMs to generate reward functions [3,4] or that handle sparse-reward problems [5,6].
> Q3: Questions about experiment setting.
**Comment 1:** How is the dataset collected? What is the quality of the policy used to collect data?
**Response 1:** We collect the offline dataset using expert policies specifically trained to complete the natural language goal of each task. To simulate the noise in real-world data, we introduce a small portion (10%) of rollouts generated by a random policy, so the dataset reflects a degree of environmental variability. We will incorporate these details into the experiment setting in the revised version.
**Comment 2:** Why use only 6400 rollout-goal pairs to train offline RL, given there are 100,000 rollout-goal pairs available?
**Response 2:** We apologize for the confusion regarding dataset utilization. This choice was due to limited computational resources: generating imaginary rollouts is computationally expensive and time-consuming, yielding approximately 6,400 pairs in a 24-hour period. Therefore, to prevent potential sample bias and maintain a balanced dataset, we chose an equal number of offline rollouts from the available 100,000 pairs.
Nevertheless, we acknowledge that it is worth exploring the use of all 100,000 offline rollouts by balancing imaginary/offline rollouts in each training batch. The experimental results on CLEVR-Robot are shown in the table:
| | BC [6,400] | BC+KALM [6,400] | BC [100,000] | BC+KALM [100,000] |
| :-------------------: | :--------: | :-------------: | :----------: | :---------------: |
| Tasks in offline data | 63.1 | 56.3 | **70.4** | 63.4 |
| Rephrasing goals | 26.3 | 44.6 | 26.3 | **51.1** |
| Unseen (easy) | 17.5 | **33.5** | 16.8 | **33.7** |
| Unseen (hard) | 1.4 | 7.2 | 1.0 | **7.7** |
The result suggests that an increase in the amount of offline rollouts can lead to improvements in method performance. Overall, the KALM still outperforms the baseline under this setting.
## Reference
[1] MOPO: Model-based Offline Policy Optimization. Tianhe Yu, et al. NeurIPS 2020.
[2] Video generation models as world simulators. OpenAI team. 2024.
[3] Text2Reward: Reward Shaping with Language Models for Reinforcement Learning. Tianbao Xie, et al. ICLR 2024.
[4] Eureka: Human-Level Reward Design via Coding Large Language Models. Yecheng Jason Ma, et al. ICLR 2024.
[5] Learning by playing solving sparse reward tasks from scratch. Riedmiller, et al. ICML 2018.
[6] Overcoming exploration in reinforcement learning with demonstrations. Ashvin Nair. ICRA 2018.
---
Rebuttal Comment 1.1:
Comment: Hi reviewer bRuq. We wanted to follow up to see if the response addresses your concerns. If you have any further questions, please let us know. Thank you again!
---
Rebuttal 2:
Comment: Thanks for your response and positive assessment. While current experiments have verified the feasibility of our motivation, we believe it would be valuable supplement by evaluating KALM on more complex tasks. During the rebuttal phase, we have investigated KALM's adaptability on visual tasks. In future work, we would further explore KALM's performance on tasks with longer horizon, employing the established techniques discussed in our response to Q1. | Summary: This paper presents KALM, a novel approach adopting LLMs to generate imiginary rollouts to augment the offline dataset for offline RL. Structure of LLMs is altered to handle the numeric states and actions in decision-making environments. Experiments demonstrate the imaginary rollouts benefits the offline policy learning and effectively help KALM surpass the baselines.
Strengths: 1. Adopting LLMs to generate imaginary rollouts is a promising and interesting direction for decision-making field, as LLMs possess extensive knowledge to help model the environment mechanism.
2. This paper is well-written and clearly describes the proposed method.
3. Empirical results validate the effectiveness of using LLMs to generate rollouts. This may further pave the way for using LLMs as world models.
Weaknesses: 1. Considering the close connection between this paper and model-based offline RL, I'm surprised that they are not even mentioned in the related work. Model-based offline RL methods like MOPO, MoREL and COMBO demonstrate strong performances and are also capable of generating imaginary rollouts just like KALM. They should be discussed and included in the baselines.
2. I wonder whether the evaluation between baselines with/without KALM is fair. The policy in KALM also conditions on $G$, a hidden vector generated by BERT. I think this introduces additional information to the policy. Does the other baseline also use this information?
3. This is an open question. What do you think are the major differences between KALM and world models? Is there any possibilities to extend KALM to more complicated environments (even real-world cases)?
Technical Quality: 3
Clarity: 4
Questions for Authors: See weaknesses.
I will increase my score to 6 upon seeing comparisons between KALM and model-based offline RL. This comparison directly demonstrates the effectiveness of using LLMs instead of traditional neural networks to generate rollouts.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes, limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your insightful comments and finding our idea interesting. To address your concern, we have devoted effort to include discussion and experiment on model-based RL. Please find our response below.
> Q1: Model-based offline RL methods like MOPO, MOReL and COMBO demonstrate strong performances and are also capable of generating imaginary rollouts just like KALM. They should be discussed and included in the baselines.
**Comment 1:** Discussion about model-based RL methods.
**Response 1:** We agree with you at this point. In our initial submission, we touch upon model-based RL (MBRL) methods (MAPLE and RedM) in Section 2.2, referring to them as *environment models*. However, we acknowledge the need for a more comprehensive discussion about prevalent MBRL methods:
MBRL algorithms learn a dynamics model from offline data, which can then be used by any policy learning algorithm to recover the policy. MOPO [1] and MOReL [2] use uncertainty quantification to construct a lower bound, aiming to avoid issues such as model bias and distribution shift. COMBO [3] employs both the offline dataset and model-generated rollouts to train a value function and regularizes it on out-of-support state-action tuples. Although both MBRL methods and KALM utilize generated rollouts for policy training, they differ in motivation: KALM extracts knowledge from a pre-trained LLM to build a knowledgeable agent. Leveraging the LLM's general world knowledge, we demonstrate that the LLM has the potential to generate rollouts for unseen goals, extending beyond the scope of the offline data and enabling the acquisition of novel skills.
We would like to incorporate the discussion into the revised version.
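For concreteness, the uncertainty penalty that distinguishes MOPO-style methods from plain rollout generation can be sketched as below. This is a simplified illustration with a hypothetical `penalized_reward` helper; taking the maximum per-dimension standard deviation across a dynamics ensemble is one common heuristic for the uncertainty u(s, a), not necessarily the exact estimator in [1]:

```python
import numpy as np

def penalized_reward(reward, ensemble_preds, lam=1.0):
    """MOPO-style pessimistic reward r~(s, a) = r(s, a) - lam * u(s, a),
    where u(s, a) is the ensemble disagreement: the max (over state
    dimensions) std of next-state predictions across the ensemble.
    ensemble_preds has shape (n_models, batch, state_dim)."""
    u = ensemble_preds.std(axis=0).max(axis=-1)   # (batch,)
    return reward - lam * u
```

Rollouts whose next states the ensemble disagrees on are thus discouraged during policy optimization; in KALM, by contrast, the imaginary rollouts come from a fine-tuned LLM rather than a learned dynamics ensemble.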
**Comment 2:** I will increase my score to 6 upon seeing comparisons between KALM and model-based offline RL. This comparison directly demonstrates the effectiveness of using LLMs instead of traditional neural networks to generate rollouts.
**Response 2:** Thanks for your suggestions. We have conducted experiments to compare KALM and two representative baselines: COMBO and MOPO, as the results in the table:
| | KALM | COMBO | MOPO
| :-: | :-: | :-: | :-: |
| Task in offline data | 46.1 | 39.3 | 0.2
| Rephrasing goals | 30.8 | 23.3 | 0.2
| Unseen (easy) | 12.5 | 3.4 | 1.0
| Unseen (hard) | 7.2 | 6.9 | 1.6
KALM surpasses the two compared model-based offline RL methods, demonstrating the effectiveness of imaginary rollouts generated by the LLM.
> Q2: The policy in KALM also conditions on G, a hidden vector generated by BERT. I think this introduces additional information to the policy. Does the other baseline also use this information?
In our experiments, all baseline methods utilize BERT to convert textual goals into vector representations, ensuring a fair comparison. We apologize for omitting this detail in the description of the experiment setting. To further improve the comprehensiveness of our experiments, we have now included an additional baseline, DT [4], that utilizes both an LLM and the offline dataset. DT uses Llama-2-7b-chat-hf as the backbone policy model and trains the policy on the offline data. It treats decision-making as a sequence modeling problem, using a transformer architecture to predict actions conditioned on the desired future return. In this experiment, DT trains on the same offline data as the other methods, and the results are as follows:
| | KALM | DT
| :-: | :-: | :-: |
| Task in offline data | 46.1 | 26.4
| Rephrasing goals | 30.8 | 19.1
| Unseen (easy) | 12.5 | 11.4
| Unseen (hard) | 7.2 | 4.5
The results show that KALM outperforms DT on all four types of tasks.
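As a side note for readers unfamiliar with DT, the return signal it conditions on is simply the suffix sums of the episode's rewards (the "returns-to-go"); an illustrative snippet, not code from our experiments:

```python
def returns_to_go(rewards):
    """Return-to-go tokens for Decision Transformer conditioning:
    out[t] = sum of rewards from step t to the end of the episode."""
    out, acc = [], 0.0
    for r in reversed(rewards):
        acc += r
        out.append(acc)
    return out[::-1]
```

At inference time, the first token is typically set to a desired target return and decremented by each observed reward as the episode unfolds.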
> Q3: This is an open question. What do you think are the major differences between KALM and world models? Is there any possibilities to extend KALM to more complicated environments?
**Comment 1:** What do you think are the major differences between KALM and world models?
**Response 1:** Both world models and KALM train model to generate environmental rollouts, but their motivations are different: KALM leverages the embedded knowledge within pre-trained LLM, utilizing this pre-existing information to generate rollouts. Conversely, world models begin with no prior knowledge and learn from extensive datasets, constructing knowledge from foundational principles.
Besides, world models and KALM utilize offline data for different purposes. World models particularly excel at visual tasks, processing high-dimensional observations such as images; they use offline datasets to abstract observation features and learn environmental dynamics. KALM, on the other hand, uses offline data both to train the policy and to bridge the gap between the LLM and the environment.
**Comment 2:** Is there any possibilities to extend KALM to more complicated environments (even real-world cases)?
**Response 2:** As we discuss in Section 5, we have considered extending KALM to more complex tasks, e.g., tasks with visual input. We have made an attempt in this direction on the Meta-world environment with visual observations (visual input is closer to the real world), utilizing a randomly initialized ViT as the vision encoder/decoder to process the visual input. Figure 3 in the attached PDF file shows an example of the generated rollout, given a language goal *unseen during training*. The generated rollout successfully captures the core information of the environment, reflecting reasonable robotic movement tendencies. This result provides evidence of the potential for extending KALM to more complicated environments.
## Reference:
[1] MOPO: Model-based Offline Policy Optimization. Tianhe Yu, et al. NeurIPS 2020.
[2] MOReL: Model-Based Offline Reinforcement Learning. Rahul Kidambi, et al. NeurIPS 2020.
[3] COMBO: Conservative Offline Model-Based Policy Optimization. Tianhe Yu, et al. NeurIPS 2021.
[4] Decision Transformer: Reinforcement Learning via Sequence Modeling. Lili Chen, et al. NeurIPS 2021.
---
Rebuttal 2:
Comment: Thanks for the reply. Your experiment results demonstrate that LLM-based environment models are better than previous models in MBRL, even though the generated rollouts are numeric (which, to the best of my knowledge, is a weakness of current LLMs). Am I getting your conclusions right?
---
Rebuttal Comment 2.1:
Title: Thanks for your response
Comment: Thanks for your response. Your understanding is right. While the LLM is primarily designed for text-based tasks, the experiments show that KALM outperforms MBRL methods after being fine-tuned with numeric environment data. This improvement over traditional neural networks could be attributed to the increased model capacity, which improves representational ability. In addition, the knowledge embedded in the LLM can serve as an implicit regularizer, aiding the model's ability to generalize.
---
Rebuttal 3:
Comment: Thank you for raising your score. We are glad the additional comparisons and discussions address your concerns. We ensure these content would be incorporated in the future version. | Summary: This paper provides KALM (Knowledgeable Agents from Language Model Rollouts) that fine-tunes a LLM (e.g., Llama-2-7B-Chat) with offline RL (e.g., CQL) for robotic manipulation tasks (e.g., CLEVR-Robot and Meta-world).
KALM mainly consists of three steps: (1) LLM grounding, (2) rollout generation, and (3) offline RL. In the LLM grounding step, KALM fine-tunes a LLM in a supervised manner on an offline dataset collected from the environment. More specifically, supervised fine-tuning involves three tasks: dynamics prediction, rollout explanation, and rollout generation. In the rollout generation step, KALM uses the fine-tuned LLM to generate imaginary rollouts with goal-oriented prompt (GOP). Here, the GOP is generated by paraphrasing the goals in the offline dataset or synthesizing new goals. Finally, in the offline RL step, KALM fine-tunes the LLM on the combination of the offline dataset and the imaginary rollout dataset.
This paper evaluates KALM on two robotic manipulation benchmarks: CLEVR-Robot and Meta-world. This paper empirically demonstrates that KALM can outperform offline RL algorithms such as CQL and BC.
Strengths: S1. This paper proposes a method that uses an LLM (e.g., Llama-2-7B-Chat) to generate imaginary rollout data in addition to an offline dataset. This approach can be seen as a form of data augmentation, and it improves the performance of an agent in both in-distribution and out-of-distribution test cases. In particular, this paper demonstrates that KALM performs well in the out-of-distribution cases.
Weaknesses: W1. This paper mainly compares KALM with offline RL algorithms such as CQL. However, I am not sure that this comparison is fair and reasonable. Since KALM uses imaginary rollout data in addition to an offline dataset, it may not be considered an offline method. If that is true, comparing KALM with pure offline algorithms may not be fair.
Technical Quality: 2
Clarity: 2
Questions for Authors: Q1. When generating imaginary rollouts, does KALM interact with the environment?
Q2. Does KALM use rewards from the environment to filter out low-quality rollouts?
Q3. Can we use DPO (Direct Preference Optimization) instead of offline RL algorithms? Are there any pros and cons?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The authors adequately present the limitations of their work in Section 5 (i.e., Conclusion and Limitation).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and time in reviewing this paper. It appears there may be some confusion regarding the fairness of the experimental comparison. Below, we provide a comprehensive response to address these concerns.
> Q1: Concerns about the fairness of comparison.
**Comment 1:** I am not sure that this comparison is fair and reasonable. Since KALM uses imaginary rollout data in addition to an offline dataset, it may not be considered as an offline method.
**Response 1:** We first clarify the fairness of the comparison: all methods in our experiments are offline methods, which *do not require online interaction* with the environment (we discuss environment interaction in detail in Response 2 below). KALM fine-tunes a general-purpose LLM with the offline data and generates imaginary rollouts without online interaction, thus adhering to the offline setting and ensuring a fair comparison. In the initial submission, we compared KALM against an LLM-related baseline (LLM as policy), as shown in Figure 5. To further justify the proposed method and address your concern, we add an additional baseline for comparison, Decision Transformer (DT) [1], which utilizes Llama-2-7b-chat-hf as the backbone policy model and offline data to train the policy. DT treats decision-making as a sequence modeling problem, using a transformer architecture to predict actions conditioned on the desired future return. In this experiment, DT is trained on the same offline data as the other methods, and the results are as follows (also shown in Figure 1 in the attached PDF file):
| | KALM | DT |
| :------------------: | :------: | :--: |
| Task in offline data | **46.1** | 26.4 |
| Rephrasing goals | **30.8** | 19.1 |
| Unseen (easy) | **12.5** | 11.4 |
| Unseen (hard) | **7.2** | 4.5 |
The results show that KALM outperforms DT on all four types of tasks. We would like to update the paper to incorporate the additional result.
**Comment 2:** When generating imaginary rollouts, does KALM interact with the environment?
**Response 2:** KALM does not require interaction with the environment, and the imaginary rollouts can be generated in a purely offline manner. We apologize for any confusion caused by Figure 2, which involves an *optional* online process. For clarity, it should be noted that in our comparative experiments, KALM is implemented ***without the online procedure***. The inclusion of the optional online process in Figure 2 is intended to illustrate the method's extendibility and capability for future integration with real-time environment interactions.
> Q2: Does KALM use rewards from the environment to filter out low-quality rollouts?
KALM does not employ a rollout filter; we use all imaginary rollouts generated by the LLM. We suggest that offline RL algorithms can also learn useful knowledge from failure experiences. Besides, these failure rollouts may contribute to the robustness of the resulting policy by providing a diverse range of scenarios for the model to learn from, even though they are not successful rollouts. This insight aligns with the comment made by Reviewer rv4h.
> Q3: Can we use DPO (Direct Preference Optimization) instead of offline RL algorithms? Are there any pros and cons?
DPO cannot be directly applied to our problem setting, as it requires preference data annotated by humans or AI and was initially designed for aligning LLMs with human values. A potential adaptation of DPO for integration with KALM would be to treat the offline rollouts as the preferred data and the generated rollouts as the less-preferred data. However, this is still infeasible because we do not have offline rollouts (i.e., preferred data for DPO) that correspond to unseen language goals. Note that the primary motivation of this research is to leverage the knowledge of LLMs to develop agents with enhanced knowledge; this goal does not depend on the use of any specific offline RL optimization technique.
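For reference, the standard DPO objective makes the data requirement explicit: it needs a preferred completion $y_w$ and a less-preferred completion $y_l$ for the same prompt $x$, which is exactly what is unavailable for unseen language goals:

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}})
= -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}
\left[ \log \sigma\!\left(
  \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
  - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
\right) \right]
```

Every term of the expectation is taken over preference pairs $(y_w, y_l)$, so the objective is undefined without them.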
------
We hope that our response has addressed your concerns and questions satisfactorily. If you have any further concerns, we are glad to discuss them.
## Reference:
[1] Decision Transformer: Reinforcement Learning via Sequence Modeling. Lili Chen, et al. NeurIPS 2021.
---
Rebuttal 2:
Title: Clarification
Comment: In addition to the above rebuttal, we would like to clarify some misunderstandings.
It seems that the reviewer thought that KALM uses the LLM to learn from interacting with the environment, employing reinforcement learning (or DPO, as the reviewer asked) to maximize the reward from the environment. That would be an ONLINE RL setting, where interactions with the environment are allowed.
However, our whole setting is very close to OFFLINE RL, where no online interactions are allowed during training; only a fixed set of offline trajectories is available, plus a general-purpose LLM. The LLM is fine-tuned using the offline trajectories and is then used to generate trajectories in a purely imaginary way, just the same as language generation. The real and imaginary trajectories are then fed to offline RL methods. Therefore, comparing with previous offline RL methods that can only learn from offline trajectories is not unfair.
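As a hypothetical sketch (the function names and interfaces below are ours, not the authors' code), the offline pipeline described above amounts to:

```python
# Hypothetical sketch of the offline pipeline described above. The
# finetune / generate_rollout / train_offline_rl callables stand in for
# LLM supervised fine-tuning, imaginary rollout generation, and an
# offline RL algorithm such as CQL. No step queries the environment.

def kalm_pipeline(offline_trajectories, language_goals,
                  finetune, generate_rollout, train_offline_rl):
    # Step 1: ground a general-purpose LLM on the fixed offline dataset.
    llm = finetune(offline_trajectories)
    # Step 2: generate imaginary rollouts purely as sequence generation.
    imaginary = [generate_rollout(llm, goal) for goal in language_goals]
    # Step 3: offline RL on the union of real and imaginary trajectories.
    return train_offline_rl(offline_trajectories + imaginary)
```

The point of the sketch is that the environment appears nowhere: the only inputs are the fixed trajectories and the language model.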
We will revise to make our setting easier to understand, and will add more LLM-related baselines as in the responses to other reviewers.
---
Rebuttal 3:
Comment: Hi reviewer 1PB3. As the discussion period comes to an end, we want to follow up to see if the response addresses your concerns. If you have any further questions, please let us know. Thank you again!
---
Rebuttal 4:
Title: After Author Response
Comment: Thank you for providing thoughtful responses to my questions; I now understand KALM better. I especially appreciate the effort the authors have made in providing additional experiments with Decision Transformers (DT). Accordingly, I raise my rating.
---
Rebuttal Comment 4.1:
Comment: We are glad that our response enhances your understanding of KALM method, and makes positive influence on your assessment. Thanks for your time and consideration! | Summary: To bridge the gap between agents that can act and the vast prior knowledge that language models contain, the authors propose KALM (Knolwedgable Agents from Language Model rollouts), a method for training action-taking agents from language models. KALM is a finetuning method that enables a language model to translate between textual descriptions of goals and environment rollouts. Imaginary rollouts are then used for offline RL training. The paper demonstrates the efficacy and generalizability of the method on robotic manipulation tasks.
Strengths: - To the best of my knowledge, the proposed method (translating bidirectionally between environment rollouts and textual goal descriptions, then using generated imaginary rollouts for RL training) is a novel way of grounding language models as agents for low-level control in robotic manipulation.
- The experiments seem thorough (across two environments, range of tasks) and the evaluation sets are curated to measure the intended effects (generalization). The empirical results generally support the claims from the authors that adding KALM helps with policy learning across multiple RL algorithms, and compare against relevant baselines (directly finetuning the LLM as a policy) and method components are ablated, demonstrating their importance.
- The problem setting is of significance to the robotics community, where there has been an increase in recent works looking to connect the capabilities of large foundation models with lower-level control.
Weaknesses: - While the method seems to work in these simple simulated settings, the method itself seems susceptible to language model hallucinations. It would be interesting to see statistics on how often the model imagines objects / obstacles that don’t exist, or implausible rollouts in the simulator. Alternatively, it would be interesting to see that having hallucinations can actually lead to increased robustness.
- Given that the motivation is to improve generalizability via LLMs, it would be helpful to also include a comparison against RL methods that use prior knowledge from LLMs (e.g. the cited LLaRP). While the LLM as a policy method touches on this, it would support the motivations of the paper more if there were direct comparisons against methods that use LLMs for world knowledge & priors.
- There are a few grammatical and writing errors in the paper:
- Line 21: “intelligent agents to acquire such ability” → “intelligent agents to acquire such an ability/such abilities”
- Line 26: “despite they are highly similar tasks” → “despite them being highly similar tasks”
- Line 28: “general tasks in text domain” → “general tasks in text domains”
Technical Quality: 2
Clarity: 2
Questions for Authors: Why do the authors think that the ablation study for Table 1 shows that adding components of the method actually leads to a degradation of performance on rollout accuracy on the unseen (easy) set? The paper presents an explanation that the current objectives don’t address learning action semantics, but this also applies to the unseen (hard) tasks and does not explain a *degradation* in performance.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The authors discuss the limitations of their work in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your highlighting of the method's novelty and the significance of the problem setting. Please find our response to each comment as follows.
> Q1: While the method seems to work in these simple simulated settings, the method itself seems susceptible to language model hallucinations. It would be interesting to see statistics on how often the model imagines objects / obstacles that don’t exist, or implausible rollouts in the simulator. Alternatively, it would be interesting to see that having hallucinations can actually lead to increased robustness.
Hallucination is indeed a central issue to consider when using an LLM. The results in Table 1 provide an overall measure of the matching rate between generated rollouts and the given goal. In response to your comment, we have collected more detailed statistics on the quality of the generated rollouts. Note that the current experiments represent the environment state as a numerical vector, where each dimension has a specific semantic meaning; thus, the LLM cannot generate rollouts containing nonexistent objects. Here, we investigate implausible rollouts under the following four checks: 1. the object is off the workbench; 2. the object floats in the air; 3. implausible robot joint poses (e.g., outside the joint rotation angle bounds); and 4. exceeding dynamics limits between two steps. The statistics are as follows:
| CLEVR-Robot | Out of workbench | Exceed dynamics limits |
| :--------------: | :---------: | :--------------------: |
| Rephrasing goals | 18.4 | 0.1 |
| Unseen (easy) | 0.0 | 0.2 |
| Unseen (hard) | 4.3 | 0.2 |
| Meta-world | Float in the Air | Out of workbench | Implausible pose | Exceed dynamics limits |
| :--------------: | :--------------: | :---------: | :--------------: | :--------------------: |
| Rephrasing goals | 39.5 | 18.3 | 34.0 | 44.6 |
| Unseen (easy) | 35.5 | 10.9 | 39.2 | 47.9 |
| Unseen (hard) | 22.3 | 64.9 | 31.2 | 31.1 |
The results indicate that while there is a certain level of hallucination present in the outputs of the LLM, the majority of the imaginary rollouts remain within a reasonable scope. Moreover, the results on CLEVR-Robot show a lower anomaly ratio. These hallucinations may serve as a form of domain randomization that improves the policy's robustness.
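The four checks above could be implemented as simple rule-based audits along the following lines (a hypothetical sketch; the bounds and thresholds are illustrative, not the values used in our statistics, and the joint-pose check is omitted since it needs the robot's joint limits):

```python
import numpy as np

def audit_rollout(states, xy_bounds=(-1.0, 1.0), z_max=0.3, max_delta=0.5):
    """Flag implausible rollouts given a (T, 3) array of object xyz positions.

    Returns a dict of boolean anomaly flags mirroring the categories above.
    """
    xy, z = states[:, :2], states[:, 2]
    return {
        # Check 1: any position outside the workbench's xy bounds.
        "out_of_workbench": bool(((xy < xy_bounds[0]) | (xy > xy_bounds[1])).any()),
        # Check 2: the object rises above a plausible resting height.
        "floats_in_air": bool((z > z_max).any()),
        # Check 4: per-step displacement exceeds the dynamics limit.
        "exceeds_dynamics": bool((np.abs(np.diff(states, axis=0)) > max_delta).any()),
    }
```

Running such audits over every generated rollout and averaging the flags yields anomaly ratios of the kind reported in the tables above.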
> Q2: Given that the motivation is to improve generalizability via LLMs, it would be helpful to also include a comparison against RL methods that use prior knowledge from LLMS (e.g. the cited LLaRP). While the LLM as a policy method touches on this, it would support the motivations of the paper more if there were direct comparisons against methods that use LLMs for world knowledge & priors.
Good point. We agree that an additional baseline utilizing both an LLM and offline data would be a strong supplement to our experiments. In response to your concern, we conduct a comparison with an additional baseline, Decision Transformer (DT) [1], with Llama-2-7b-chat-hf as the backbone policy model. DT treats decision-making as a sequence modeling problem, using a transformer architecture to predict actions conditioned on the desired future return. In this experiment, DT is trained on the same offline data as the other methods, and the results are as follows:
| | KALM | DT |
| :------------------: | :------: | :--: |
| Task in offline data | **46.1** | 26.4 |
| Rephrasing goals | **30.8** | 19.1 |
| Unseen (easy) | **12.5** | 11.4 |
| Unseen (hard) | **7.2** | 4.5 |
The results show that KALM outperforms DT on all four types of tasks. We would like to update the paper to incorporate the additional result.
> Q3: Why do the authors think that the ablation study for Table 1 shows that adding components of the method actually leads to a degradation of performance on rollout accuracy on the unseen (easy) set? The paper presents an explanation that the current objectives don’t address learning action semantics, but this also applies to the unseen (hard) tasks and does not explain a *degradation* in performance.
Answer: We suggest that the performance degradation is attributable to the specific nature of the unseen (easy) task, whose objective is to predict one-step transitions given unseen language goals. To be more specific, we discuss the two components of LLM fine-tuning in KALM (i.e., dynamics prediction and rollout explanation) separately. The unseen (easy) task objective (predicting $a_t$ and $s_{t+1}$ given $G$ and $s_t$) is similar to, yet diverges from, the dynamics prediction objective (predicting $s_{t+1}$ given $s_t$ and $a_t$). This difference introduces a potential conflict during LLM SFT, as evidenced by the empirical result that KALM w/o dynamics prediction achieves the highest rollout accuracy on the unseen (easy) task. The rollout explanation objective, in turn, focuses on explaining long rollout sequences. While this objective enriches the model's capability to maintain coherence over temporal sequences, it may inadvertently detract from the model's ability to capture the immediate logic of transitions between two adjacent steps.
We would like to update the paper to reflect the above discussion about the experiment results.
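The partial overlap between the two objectives can be made concrete with a hypothetical sketch of the prompt/target structure (the field names and formats are illustrative, not our actual templates):

```python
# Hypothetical prompt/target structure; formats are illustrative only.

def dynamics_prediction_sample(s_t, a_t, s_next):
    # SFT objective: predict s_{t+1} from (s_t, a_t); the goal G is absent.
    return {"prompt": f"state: {s_t} action: {a_t}", "target": f"{s_next}"}

def unseen_easy_query(goal, s_t, a_t, s_next):
    # Evaluation query: predict (a_t, s_{t+1}) from (G, s_t); the goal is
    # present and the action itself must be produced, so the supervision
    # only partially matches the dynamics prediction objective above.
    return {"prompt": f"goal: {goal} state: {s_t}", "target": f"{a_t} {s_next}"}
```

The two prompts condition on different information and demand different targets, which is the source of the conflict discussed above.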
> Q4: There are a few grammatical and writing errors in the paper.
We have fixed your mentioned typos and would make a more thorough check to fix the grammatical and writing errors.
------
We believe the experiments and the result analysis have been clearly enhanced based on your comments. If you have any further concerns, please let us know.
## Reference:
[1] Decision Transformer: Reinforcement Learning via Sequence Modeling. Lili Chen, et al. NeurIPS 2021.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for responding to my questions and concerns. I will be keeping my score.
---
Reply to Comment 1.1.1:
Title: Thanks for your response
Comment: We appreciate the reviewer's acknowledgment of our responses. Thank you for your valuable time and comments. | Rebuttal 1:
Rebuttal: We express our sincere thanks to the reviewers and chairs for their valuable time and constructive feedback on this paper. We are encouraged that the reviewers acknowledge this paper's novelty (Reviewers rv4h, JwPT, bRuq), its contribution to the community (Reviewers rv4h, JwPT, bRuq), and the sufficiency/performance of the experiments (Reviewers rv4h, 1PB3, JwPT, bRuq). Below we summarize the major concerns raised by reviewers and our corresponding responses.
1. Requiring additional baselines (Reviewers rv4h, JwPT): we have incorporated additional baseline methods. Specifically, we have included Decision Transformer, which leverages both an LLM and offline datasets, as well as model-based offline RL approaches.
2. Fairness of the comparison (Reviewer 1PB3): we clarify that all methods in the experiments (including KALM and the baselines) operate under the offline setting, where no online interactions are allowed during training and only a fixed set of offline trajectories is available for learning. Therefore, the comparison is not unfair. We have added more LLM-related baselines (suggested by Reviewer rv4h) to further justify the proposed method.
3. Extending to more complicated tasks (Reviewers JwPT, bRuq): we elaborate on the extendibility of the KALM framework to more challenging settings, such as tasks with longer trajectories, absent reward functions, or visual inputs. We have also verified this extendibility by conducting an experiment on a more challenging task with visual observations. Please refer to Figure 3 in the attached PDF for the results.
We believe that the experiments are thorough and that this paper makes significant contributions to the research community, as there has been an increase in work connecting the capabilities of large foundation models with lower-level control (as noted by Reviewer rv4h). We have provided detailed responses to each reviewer's comments below, and we *are eager to* receive the reviewers' feedback! Please let us know if there are any further concerns; we are available and glad to discuss.
------
Best wishes,
Authors of Submission #502
Pdf: /pdf/1afdc076fa174dc19995cf913faf6f8628ef6808.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
UPS: Unified Projection Sharing for Lightweight Single-Image Super-resolution and Beyond | Accept (poster) | Summary: This paper introduces Unified Projection Sharing (UPS), a novel algorithm for lightweight single-image super-resolution (SISR) that decouples feature extraction from similarity modeling by employing a unified projection space. This approach achieves state-of-the-art performance across various benchmarks while demonstrating robustness for unseen data and promising results for additional image restoration tasks, all with a computationally efficient model.
Strengths: 1. This paper is clearly written and brings new insights.
2. The UPS algorithm demonstrates superior performance compared to existing lightweight SISR methods across multiple benchmarks, showcasing its effectiveness.
3. The method shows promise for extension to other image restoration tasks beyond SISR, suggesting its potential for broader application in image processing.
Weaknesses: 1. The authors did not conduct experiments beyond the lightweight setting [1][2]; from an optimization perspective, the proposed method should be able to improve performance in various settings. The authors need to justify this.
2. Lack of actual latency comparison. Due to some differences in network structure, the params and FLOPs may not accurately reflect the efficiency of the model.
3. Lack of comparison with the SOTA model, e.g., [3]. outperforms the UPS almost on every benchmark.
Ref:
1. Chen, Xiangyu et al. "Activating More Pixels in Image Super-Resolution Transformer." CVPR 2023: 22367-22377.
2. Hsu, Chih-Chung et al. "DRCT: Saving Image Super-resolution away from Information Bottleneck." arXiv:2404.00722 (2024).
3. Wang, Hang et al. "Omni Aggregation Networks for Lightweight Image Super-Resolution." CVPR 2023: 22378-22387.
Technical Quality: 3
Clarity: 3
Questions for Authors: See above.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Conduct experiments outside of the lightweight model...**
Thanks for your thoughtful comments. As you appreciated the potential generalization of UPS to more image restoration tasks, e.g., JPEG compression artifact removal and image denoising, we explore the application of UPS to common SISR and real-world SR tasks and to more baseline frameworks (e.g., HAT/HAT-light, DRCT/DRCT-light), and report the results in Tab. 2(a) and Tab. 3/4 of our global PDF response.
For both common and lightweight single-image super-resolution (SISR), we incorporate UPS into other state-of-the-art SISR models, including SwinIR, HAT/HAT-light [4], and DRCT/DRCT-light [5], based on your suggestion. As shown in Tables 3 and 4 of our response file, UPS consistently improves the performance and inference efficiency of these models (HAT-UPS, DRCT-UPS, SwinIR-UPS, and their lightweight versions HAT-light-UPS, DRCT-light-UPS, and SwinIR-light-UPS). We will include these experiments in our revised paper. We believe your insightful suggestion and our exploration enhance the wide-ranging value of UPS.
Additionally, for real-world super-resolution (SR), we follow our baseline model, SwinIR-GAN (trained with BSRGAN degradation), and train UPS-GAN. We evaluate all real-world SR models using the same benchmark (RealSRSet) and evaluation metrics as SwinIR-GAN. As shown in Table 2a of the PDF file (and the table below), our proposed UPS-GAN outperforms other state-of-the-art GAN-based and even diffusion-based methods (ResShift, NeurIPS 2023 [1], and StableSR, IJCV 2024 [2]) in terms of the NIQE, NRQM, and PI metrics, achieving the best quantitative results (5.09/6.84/4.19). This confirms the effectiveness of UPS for real-world SR tasks.
| Metrics | BSRGAN | RealSR | ResShift | StableSR | SwinIR-GAN | UPS-GAN |
|---------|--------|--------|-----------|-----------|------------|---------|
| NIQE ↓ | 5.66 | 5.83 | 8.37 | 5.24 | 5.49 | **5.09** |
| NRQM ↑ | 6.27 | 6.32 | 4.56 | 6.12 | 6.48 | **6.84** |
| PI ↓ | 4.75 | 4.40| 7.03 | 4.66 | 4.72 | **4.19** |
**Q2. Latency comparison**
Please see the first answer of the global response. We will provide all these discussions in our revised paper.
**Q3. Comparison with the SOTA model, e.g., Omni-SR**
As discussed in our paper (Line 175), we use the widely used DIV2K dataset for training. When compared with Omni-SR [6] (trained on the same DIV2K data), UPS surpasses it in most cases.
To make a fair comparison with Omni-SR+ (trained on DF2K), we re-trained UPS+ from scratch on DF2K. As shown in the table below, UPS+ generally attains superior performance to Omni-SR+. For instance, UPS+ achieves 0.27dB/0.20dB/0.29dB improvements over Omni-SR+ on Manga109 ×{2,3,4}.
| Method | Scale| Set5 | Set14 | BSD100 | Urban100 | Manga109 |
|-----------|---------------|---------------|---------------|---------------|----------------|----------------|
| Omni-SR | ×2 | 38.22 / 0.9613 | 33.98 / 0.9210 | 32.36 / 0.9020 | 33.05 / 0.9363 | 39.28 / 0.9784 |
| UPS | ×2 | **38.26 / 0.9642** | **34.16 / 0.9232** | **32.42 / 0.9031** | **33.08 / 0.9373** | **39.62 / 0.9800** |
| | | | | | |
| Omni-SR+ | ×2 | 38.29 / 0.9617 | 34.27 / 0.9238 | 32.41 / 0.9026 | 33.30 / 0.9386 | 39.53 / 0.9792 |
| UPS+ | ×2 | **38.31 / 0.9643** | **34.37 / 0.9247** | **32.43 / 0.9032** | **33.34 / 0.9388** | **39.80 / 0.9802** |
| | | | | | |
| Omni-SR | ×3 | **34.70** / 0.9294 | 30.57 / 0.8469 | 29.28 / 0.8094 | 28.84 / 0.8656 | 34.22 / 0.9487 |
| UPS | ×3 | 34.66 / **0.9322** | **30.72** / **0.8489** | **29.31** / **0.8114**| **28.98** / **0.8685**| **34.53** / **0.9505**|
| | | | | | |
| Omni-SR+ | ×3 |34.77 / 0.9304 | 30.70 / 0.8489 | 29.33 / 0.8111 | 29.12 / 0.8712 | 34.64 / 0.9507 |
| UPS+ | ×3 | **34.78 / 0.9325** | **30.78 / 0.8492** | **29.36 / 0.8122** | **29.28 / 0.8728** | **34.84** / **0.9517** |
| | | | | |
| Omni-SR | ×4 | 32.49 / 0.8988 | 28.78 / 0.7859 | 27.71 / 0.7415 | 26.64 / 0.8018 | 31.02 / 0.9151 |
| UPS | ×4 | **32.50** / **0.9024**| **28.90 / 0.7892** | **27.79** / **0.7435**| **26.83** / **0.8073**| **31.39** / **0.9194**|
| | | | | | |
| Omni-SR+ | ×4 |32.57 / 0.8993 | 28.95 / 0.7898 | 27.81 / 0.7439 | 26.95 / 0.8105 | 31.50 / 0.9192|
| UPS+ | ×4 | **32.60 / 0.9029** | **28.97 / 0.7896** | **27.83 / 0.7446** | **27.10 / 0.8136** | **31.79 / 0.9223** |
More results can be found in Tab. 1 of our response PDF. We will include this comparison in Tab. 1 of our revised paper and highlight the two different training setups to ensure a clear and fair comparison.
## References
[1] Yue, et al. "ResShift: Efficient Diffusion Model for Image Super-Resolution by Residual Shifting." NeurIPS, 2023.
[2] Wang et al. "Exploiting Diffusion Prior for Real-World Image Super-Resolution." IJCV, 2024.
[3] Liang, et al. "SwinIR: Image Restoration Using Swin Transformer." ICCVW, 2021.
[4] Chen, et al. "Activating More Pixels in Image Super-Resolution Transformer." CVPR, 2023.
[5] Hsu, et al. "DRCT: Saving Image Super-resolution away from Information Bottleneck." arXiv, 2024.
[6] Wang, et al. “Omni Aggregation Networks for Lightweight Image Super-Resolution.” CVPR, 2023.
---
Rebuttal 2:
Comment: Dear Reviewer HE6k:
We have made every effort to address all concerns and provide comprehensive evidence.
For Q1, we explored real-world SR and large models for SISR, and also integrated UPS into DRCT/DRCT-light and HAT/HAT-light to boost their performance.
For Q2, as acknowledged by all other reviewers, we provided inference efficiency results for further comparison.
For Q3, we conducted a detailed comparison with Omni-SR and Omni-SR+, showing that UPS/UPS+ outperforms them under a fair setup (using DIV2K/DF2K for training).
Could you please let us know if our rebuttals and further responses have answered all your questions? We greatly appreciate it. | Summary: This work introduces a novel unified projection sharing algorithm that decouples feature extraction and similarity modeling. A unified projection space defined by a learnable projection matrix is created for similarity computation across all self-concerned layers. Extensive experiments demonstrate that the proposed UPS achieves state-of-the-art performance compared to leading lightweight SISR methods.
Strengths: 1. This paper is clearly written and well organized.
2. The sharing of unified projections can effectively reduce computation and enable performance improvements.
3. Experiments show the promising performance of the proposed method.
Weaknesses: 1. The idea of shared projection is somewhat similar to attention sharing [1,2,3]. Discussion and analysis with these highly relevant studies is needed.
2. The needs in Table 1 include computational quantities for the different models. It is unfair to compare with CNN-based methods only the parameters, which are much less computationally intensive.
3. Inference efficiency must be included in the comparison, including CNN and Transformer based methods. Paramas and FLOPs do not fully reflect the on-device running speed of the model.
4. Lack of experimental evaluation in more complex real-world scenarios.
5. Similarity calculation methods have been discussed in past studies [4, 5, 6].
> 1. ShareFormer: Share Attention for Efficient Image Restoration. arxiv 2023.
> 2. Skip-Attention: Improving Vision Transformers by Paying Less Attention. arxiv 2023.
> 3. You Only Need Less Attention at Each Stage in Vision Transformers. CVPR 2024.
> 4. EfficientViT: Multi-Scale Linear Attention for High-Resolution Dense Prediction. ICCV 2023.
> 5. Swin transformer v2: scaling up capacity and resolution. CVPR 2022.
> 6. Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration. ECCVW 2022.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please address the issues raised in the Weaknesses.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations were discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Discussion and analysis with attention sharing...**
Thanks for your suggestion. ShareFormer [1] presents a local similarity-map sharing scheme for lower latency: it reuses a static similarity map across neighboring attention layers, while UPS computes dynamic similarity maps from layer-refined features in a shared projection space.
Skip-Attention [3] removes some intermediate attention computations to improve efficiency and performance on high-level tasks. LaViT [2] proposes residual-based attention downsampling, which fuses the initially calculated attention scores to guide the aggregation in the following layers, resulting in faster inference and improved classification accuracy.
Therefore, Skip-Attention [3] and LaViT follow the existing coupled optimization scheme (reducing some attention calculations), whereas UPS proposes a decoupled learning strategy to enhance performance. We will cite these insightful studies and add this discussion to our revised paper.
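To make the contrast concrete, here is a minimal NumPy sketch (the shapes and the single-matrix form are illustrative simplifications, not the paper's exact architecture): a standard Transformer learns separate Q/K projections per layer, while a UPS-style design computes every layer's similarity map in one shared projection space.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def similarity_map(x, Wq, Wk):
    """Attention-style similarity between the n tokens of x, shape (n, d)."""
    q, k = x @ Wq, x @ Wk
    return softmax(q @ k.T / np.sqrt(x.shape[1]))

rng = np.random.default_rng(0)
n_layers, n_tokens, d = 4, 16, 32

# Coupled scheme: every layer owns its Q/K projection matrices.
per_layer = [(rng.normal(size=(d, d)), rng.normal(size=(d, d)))
             for _ in range(n_layers)]

# Decoupled (UPS-style) scheme: one learnable projection matrix,
# reused by all layers; features are still refined layer by layer.
shared_P = rng.normal(size=(d, d))

coupled_params = sum(Wq.size + Wk.size for Wq, Wk in per_layer)
shared_params = shared_P.size
# The shared space stores 2 * n_layers times fewer projection weights.
```

The decoupling is what lets similarity modeling stay in one unified space while feature extraction remains layer-specific.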
**Q2. Report the computational quantities of different models**
Thanks. We agree with you on this point, and we will include the FLOPs (as in Fig. 1(c) of our paper) of the different CNNs and Transformers for a comprehensive comparison. Please also refer to the FLOPs results in Tab. 1, 2(b), 3, and 4 of our response PDF file.
**Q3. Inference efficiency of different models**
Please see the first answer of the global response and the additional inference comparisons in Tab. 1, 2(b), 3, and 4 of our response PDF file. We will provide all these discussions in our revised paper.
**Q4. Experimental evaluation in real-world scenarios.**
Thanks for this good comment. Following SwinIR-GAN, our baseline model, we train UPS-GAN with the same configuration using the widely used BSRGAN degradation. We compare UPS-GAN with SwinIR-GAN, BSRGAN, RealSR, ResShift, and StableSR on the real-world SR task. We evaluate all real-world SR models using the same benchmark (RealSRSet) and assessment metrics as SwinIR-GAN. As shown in Table 2a of the PDF file (and the table below), our proposed UPS-GAN outperforms other state-of-the-art GAN-based and even diffusion-based methods (ResShift, NeurIPS 2023 [7], and StableSR, IJCV 2024 [8]) in terms of the NIQE, NRQM, and PI metrics, achieving the best quantitative results (5.09/6.84/4.19). This confirms the effectiveness of UPS for real-world SR tasks.
| Metrics | BSRGAN | RealSR | ResShift | StableSR | SwinIR-GAN | UPS-GAN |
|---------|--------|--------|-----------|-----------|------------|---------|
| NIQE ↓ | 5.66 | 5.83 | 8.37 | 5.24 | 5.49 | **5.09** |
| NRQM ↑ | 6.27 | 6.32 | 4.56 | 6.12 | 6.48 | **6.84** |
| PI ↓ | 4.75 | 4.40| 7.03 | 4.66 | 4.72 | **4.19** |
**Q5. The adopted similarity calculation method in UPS**
As you suggested, in addition to the ReLUFormer discussed in our method section, we will cite and discuss the related works in our revised paper.
### References
[1] ShareFormer: Share Attention for Efficient Image Restoration. arXiv, 2023.
[2] You Only Need Less Attention at Each Stage in Vision Transformers. CVPR, 2024.
[3] Skip-Attention: Improving Vision Transformers by Paying Less Attention. arXiv, 2023.
[4] EfficientViT: Multi-Scale Linear Attention for High-Resolution Dense Prediction. ICCV, 2023.
[5] Swin Transformer V2: Scaling Up Capacity and Resolution. CVPR, 2022.
[6] Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration. ECCVW, 2022.
[7] Yue, et al. "ResShift: Efficient Diffusion Model for Image Super-Resolution by Residual Shifting." NeurIPS, 2023.
[8] Wang, et al. "Exploiting Diffusion Prior for Real-World Image Super-Resolution." IJCV, 2024.
---
Rebuttal Comment 1.1:
Title: Response to author's rebuttal
Comment: Thank you for your response. Concerns about FLOPs and running efficiency were addressed. However, I still have some issues regarding the evaluation of real-world SR. NIQE, NRQM, and PI are not commonly used metrics for evaluating real-world SR, which is typically measured using PSNR, SSIM, LPIPS, CLIPIQA, and MUSIQ. Additionally, the response does not discuss the differences and uniqueness of the similarity calculation strategy from previous work. Therefore, I am inclined to keep the original rating.
---
Reply to Comment 1.1.1:
Title: Dear Reviewer RnSV
Comment: We have made every effort to address all concerns and provide as much evidence as possible. Could you please let us know if our rebuttals and further responses have answered all your questions? We greatly appreciate it.
---
Rebuttal 2:
Title: Response for the further problems
Comment: Thank you for your additional comments. We're pleased that our previous response addressed some of your concerns. Here are our further responses to the issues you've raised:
1. As acknowledged by you and the other reviewers, UPS is a novel and effective lightweight SISR algorithm. Beyond that setting, we explored the potential benefits of applying UPS to this new task. We ensured a fair evaluation by strictly following the same baselines (SwinIR, BSRGAN) and using consistent evaluation metrics (NIQE, NRQM, and PI) for all competing methods on the RealSRSet [1] benchmark.
2. In response to Q1, we have thoroughly examined the contributions of the suggested works and outlined the key differences between them and UPS. To summarize, UPS operates independently from these transformers. For instance, while ShareFormer generates a static similarity map for neighboring layers, UPS conducts dynamic similarity calculations within a shared projection space. Furthermore, unlike these three works, which adhere to coupled layer-specific similarity and feature extraction optimization, UPS introduces a decoupled approach for these aspects, specifically designed for lightweight SISR.
3. Due to the absence of ground-truth data in the RealSRSet [1] benchmark, we cannot calculate reference-based metrics like PSNR, SSIM, and LPIPS. Instead, we present results using CLIPIQA and MUSIQ, which are widely adopted for evaluating diffusion-based models. Although UPS-GAN follows the GAN-based framework of our baseline model (SwinIR-GAN), it consistently achieves top-1 or top-2 performance when compared to other leading methods, as shown in the table below.
| Metrics | BSRGAN | RealSR | ResShift$\dagger$ | StableSR$\dagger$ | SwinIR-GAN | UPS-GAN |
|---------|--------|--------|-----------|-----------|------------|---------|
| CLIPIQA ↑ | 0.6321 | 0.6051 | 0.5834 | 0.5025 | 0.5996 | **0.6577** |
| MUSIQ ↑ | **65.85** | 62.59 | 54.29 | 60.32 | 63.45 | 64.79 |
1. Zhang et al. "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution." ICCV, 2021. | Summary: The paper proposes an effective lightweight decoupled SISR algorithm that simultaneously performs layer-specific optimization for deep feature extraction and similarity modeling. Specifically, the proposed method casts the deep feature extraction as per-layer optimization, while the similarity modeling is achieved by a shared projection space. The proposed method can be extended to many restoration tasks, such as denoising and JPEG image deblocking.
Strengths: 1. The paper is well-written.
2. The authors bring a novel perspective for lightweight SISR that the deep feature extraction and similarity modeling can be decoupled.
Weaknesses: 1. Line 49: "simultaneously" -> "simultaneous".
2. Figure 2: the placement of S1 and Si in Fig. 2(c) is incorrect.
3. Eq. (5): why do you set Vi directly equal to Xi instead of projecting Xi to Vi as done in Swin Transformer?
4. Eq. (6): what does $Q_{i}^D$ denote in this equation? What does D represent?
5. It's recommended to report the quantitative results on the DIV2K dataset.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the Weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: If the authors can address my concerns, I am ready to change my recommendation based on the comments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your appreciation of the novelty and effectiveness of our UPS for lightweight SISR.
**Q1. A typo: Line 49...**
We appreciate your kind comment. We will carefully revise our paper to fix the typo.
**Q2. Figure 2: the misplacement of $S_1$ and $S_i$**
Thanks, we will correct the placement of the $S_1$ and $S_i$ accordingly.
**Q3. The identity mapping of $X_i$ and $V_i$**
For the lightweight scenario, we aim to further reduce the computational cost and model size. Thus, we explore cutting off the linear mapping between $X_i$ and $V_i$, and our early experimental analysis (presented in Tab. 5-C of our PDF file) suggests such a design will not lead to a performance drop. We will add this discussion to our revised paper.
| Settings | w/ V proj. | w/o V proj (Default) |
| :---- | :---- | :---- |
| Urban100 (×4) | 26.80/0.8071 | 26.83/0.8073 |
| Set14 (×4) | 28.91/0.7892 | 28.90/0.7892 |
| Parameters (K)/FLOPs (G) | 895/179 | 843/163 |
**Q4. The denotation of $D$ in $Q_i^D$ in Eq. 6**
$D$ means the projection dimension in our UPS. We are sorry for this typo. The correct Eq.6 should be:
$$
ReLU(Cosine(Q_i,Q_i^T) + B_i),
$$
where $T$ means the transpose operation for matrix multiplication. We will double-check all these equations to avoid any difficulties in understanding our work.
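To make the corrected Eq. 6 concrete, here is a minimal NumPy sketch of a ReLU-based cosine-similarity attention score. The function name, tensor shapes, and bias handling are our own illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def relu_cosine_attention(Q, B):
    """Illustrative sketch of Eq. 6: ReLU(Cosine(Q_i, Q_i^T) + B_i).

    Q: (N, D) query tokens projected into the shared D-dim space.
    B: (N, N) relative position bias.
    """
    # Row-normalize Q so that Q @ Q.T gives pairwise cosine similarities.
    Qn = Q / (np.linalg.norm(Q, axis=-1, keepdims=True) + 1e-8)
    scores = Qn @ Qn.T + B          # cosine similarity plus bias
    return np.maximum(scores, 0.0)  # ReLU in place of the usual softmax

Q = np.random.randn(4, 8)
S = relu_cosine_attention(Q, np.zeros((4, 4)))
```

Note that with a zero bias the diagonal entries are 1 (each token's cosine similarity with itself), and all scores are non-negative after the ReLU.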
**Q5. Quantitative results on the DIV2K dataset...**
In this work, we adopt the DIV2K for training our models as indicated in Line 175 "we utilize the DIV2K image dataset for training". We will highlight this important training setting in our revised paper.
---
Rebuttal Comment 1.1:
Title: Rating
Comment: All concerns have been addressed here, so I'd like to accept this paper (rating: 7).
---
Rebuttal 2:
Comment: Thank you for your kind comments and appreciation of our work. We will do our best to improve the final version of our paper based on your suggestions.
Title: Thanks for your comment | Summary: This paper presents a novel algorithm named UPS designed to enhance the performance of Transformer-based frameworks in single-image super-resolution (SISR), particularly under lightweight scenarios. The authors identify the challenge posed by the simultaneous layer-specific optimization required for deep image feature extraction and similarity modeling in existing methods. To address this, UPS decouples these tasks by establishing a unified projection space via a learnable projection matrix. The proposed UPS method demonstrates state-of-the-art performance. Additionally, UPS shows good performance on broader image restoration applications.
Strengths: 1. This paper presents a simple yet effective lightweight method called UPS. Using this method, the training difficulty of the model is reduced, and the model effect is improved on the premise of reducing the number of parameters and FLOPS.
2. Sufficient quantitative experimental results are given in this paper.
Weaknesses: 1. “Fig. a.(1-3) below shows over 0.95% (0.99%, 0.95%, 0.96%) for ×{2, 3, 4}) (projection layer) pairs get over 0.9 scores (ranging from 0 to 1) ". If there are no errors in this paragraph, the similarity between layers is very low.
2. The contribution point in this paper is the Unified Projection Sharing. However, in the method, the activation function is also modified. I wonder how much of a performance boost I would get if I left the activation function unchanged and just used consistent space sharing. Please give the analytical or experimental results.
3. As a lightweight method, it would be better to compare inference time.
Technical Quality: 2
Clarity: 2
Questions for Authors: In this paper, it is mentioned that each layer of the model carries out image feature extraction and similarity modeling at the same time, which is difficult to train and will affect the performance of the model. Then, can the model performance be improved by improving the training strategy and modifying the loss function? In theory, if you can train the model properly, will you get better performance than UPS?
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The paper has discussed methodological limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. In line 30, "Fig. 1a.(1-3) below shows over 0.95% (0.99%, 0.95%, 0.96%) ..."**
Thank you for pointing this out; it is indeed a writing error. It should be "Fig. 1a.(1-3) below shows over **0.95 (0.99, 0.95, 0.96)** ...". We will revise our manuscript to avoid misunderstanding.
**Q2. Experimental results when the activation function is consistent with SwinIR-light**
That is quite a profound question. We have provided the ablation analysis (Tab. 5-A) in the global response PDF file, and we report the specific quantitative results in the table below. As shown, the main improvement comes from our UPS design rather than the ReLU activation. The performance gap between the two activation choices is only 0.04dB, which represents 11% of the total improvement of 0.36dB; in other words, 89% of the improvement comes from the UPS design. We will include this ablation analysis in our revised paper.
| Activation Function | SwinIR-light (base) | UPS (Softmax) | UPS (ReLU, Default) |
| :---- | :---- | :---- | :---- |
| Urban100 (×4) | 26.47/0.7980 | 26.79 (+0.32)/ 0.8069 (+0.0089) | 26.83 (+0.36)/0.8073 (+0.0093) |
| Parameters (K) | 930 | 843 | 843 |
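The 11%/89% attribution above can be verified with a few lines of arithmetic on the reported Urban100 (×4) PSNR values (numbers taken from the table; the variable names are ours):

```python
base = 26.47         # SwinIR-light (base) PSNR
ups_softmax = 26.79  # UPS with Softmax activation
ups_relu = 26.83     # UPS with ReLU activation (default)

total_gain = ups_relu - base        # 0.36 dB overall improvement
relu_gain = ups_relu - ups_softmax  # 0.04 dB attributable to the activation

share_relu = relu_gain / total_gain  # ~11% from ReLU
share_ups = 1 - share_relu           # ~89% from the UPS design itself
```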
**Q3. To compare the inference time**
Please see the first answer of the global response. We will provide all these discussions in our revised paper.
**Q4. More effective optimization strategy may also enhance the performance**
We agree that an advanced training strategy may also boost the performance of entangled SISR models. As noted by you and the other reviewers, UPS provides a simple yet effective optimization scheme: it requires no special training strategies (e.g., additional training costs or careful model finetuning) while yielding improved results.
We investigate two existing training strategies for the SISR task. RDSR [1] incorporates dropout techniques to achieve better testing results, while DRCT [2] employs a progressive training scheme that involves multi-stage training to enhance final performance. Here, we compare UPS with RDSR [1] and progressive training schemes in DRCT [2]. To do this, we re-train SwinIR-light using the above two training strategies. As shown in Table 5-B of our PDF file (also presented in the Tab. below), UPS delivers superior results compared to both of these optimization methods. Nevertheless, we hope our exploration will inspire future research to develop more effective algorithms to better address this challenge.
| | SwinIR (base) | SwinIR + Dropout | SwinIR + Pro. Train | UPS |
|---|---|---|---|---|
| **PSNR / SSIM** | 26.47 / 0.7980 | 26.52 / 0.7988 | 26.56 / 0.7986 | 26.83 / 0.8073 |
| **Improvement** | - | +0.05 / +0.0008 | +0.09 / +0.0006 | +0.36 / +0.0093 |
| **Parameters (K)** | 930 | 930 | 930 | 843 (-87) |
## References
[1] Kong, et al. "Reflash dropout in image super-resolution." CVPR, 2022.
[2] Hsu, et al. "DRCT: Saving Image Super-resolution away from Information Bottleneck." arXiv, 2024.
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: Thanks for the response. Most of the concerns are addressed. I have increased my rating
---
Rebuttal 2:
Title: Thanks for your comments
Comment: Thank you for your kind comments and for appreciating our work. We will include these valuable discussions in the final version of our paper. | Rebuttal 1:
Rebuttal: Dear AC and Reviewers,
We sincerely thank all the reviewers for their constructive comments and consistent appreciation of the novelty and effectiveness of our UPS for lightweight SISR. Reviewer fbd3, Reviewer RnSV, and Reviewer HE6k have raised concerns about inference efficiency, and both Reviewer RnSV and Reviewer HE6k have shown interest in extending UPS to other SISR scenarios, such as real-world SR or common SISR tasks. Therefore, we address these two issues here.
**Q1. The inference efficiency (latency)**
Thank you for this valuable comment. We report the inference time (ms), FLOPs (G), and GPU memory usage (MB) below. Speed is tested on an NVIDIA GeForce RTX 2080Ti GPU with an input size of 256 × 256 under ×2 lightweight SISR, and we follow other works in calculating FLOPs at an output resolution of 1280 × 720. Moreover, UPS, HAT-light-UPS, and DRCT-light-UPS all improve inference efficiency compared with their counterparts.
| Method | Time (ms) | FLOPs(G) | Memory |
| ----------------| ----------- | ---------------------- | ---------------------- |
| RFND-L | 13 | 146 | 1577 |
| LatticeNet | 18 | 170 | 1639 |
| DLGSA-l | 225 | 170 | 1800 |
| Omni-SR | 112 | 195 | 1842 |
| SwinIR-light | 175 | 244 | 2051 |
| **UPS** | 119 | 163 | 1785 |
| SwinIR-S | 117 | 107 | 1365 |
| **UPS-S** | 71 | 91 | 1039 |
| HAT-light | 153 | 102 | 2039 |
| **HAT-light-UPS** | 136 | 91 | 1763 |
| DRCT-light | 92 | 137 | 2330 |
| **DRCT-light-UPS** | 85 | 125 | 1991 |
More inference cost comparison can be found in Tab. 2(b)/3/4 of the associated rebuttal PDF for both lightweight and large SISR models. When compared to our baseline model SwinIR-light, UPS reduces the overall inference cost by 33% in terms of FLOPs.
Additionally, as shown in Tab. 3/4, UPS can be adapted to other transformers, consistently enhancing their performance while also reducing inference costs. For example, we integrated UPS into HAT, a state-of-the-art SISR model, resulting in HAT-UPS, which significantly reduces computational complexity by 3.52M parameters, 95G FLOPs, and 195ms inference time while yielding improved results. Lastly, we also integrated UPS into the lightweight DRCT-light; the resulting DRCT-light-UPS reduces inference costs by 141K/12G/7ms (parameters/FLOPs/time) compared to its original version DRCT-light.
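As a quick sanity check on the 33% FLOPs reduction quoted above, using the FLOPs numbers from the inference-efficiency table (a rough calculation of ours, not the authors' measurement code):

```python
# FLOPs (G) from the inference-efficiency table above
swinir_light_flops = 244
ups_flops = 163

# Fraction of SwinIR-light's FLOPs saved by UPS
reduction = 1 - ups_flops / swinir_light_flops
print(f"UPS saves about {reduction:.0%} of SwinIR-light's FLOPs")
```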
**Q2.1. UPS for Real-world SR**
Thank you for appreciating the generalization capability of UPS. We agree with you, as UPS is designed to decouple the optimization of feature extraction and similarity modeling, which is orthogonal to specific methods such as SwinIR. Accordingly, we have extensively explored the benefits of UPS for real-world SR and other frameworks, including HAT and DRCT, under both lightweight and parameter-intensive scenarios. Our experiments show that the proposed UPS consistently enhances efficiency and performance across all these settings (real-world SR, lightweight, and SISR tasks).
For real-world super-resolution (SR), as shown in Tab. 2(a) of the PDF file (also the table below), our proposed UPS-GAN outperforms other state-of-the-art GAN-based and even diffusion-based methods (ResShift, NeurIPS 2023 [1], and StableSR, IJCV 2024 [2]) in terms of the NIQE, NRQM, and PI metrics, achieving the best quantitative results (5.09/6.84/4.19). This confirms the effectiveness of UPS for real-world SR tasks.
| Metrics | BSRGAN | RealSR | ResShift | StableSR | SwinIR-GAN | UPS-GAN |
|---------|--------|--------|-----------|-----------|------------|---------|
| NIQE ↓ | 5.66 | 5.83 | 8.37 | 5.24 | 5.49 | **5.09** |
| NRQM ↑ | 6.27 | 6.32 | 4.56 | 6.12 | 6.48 | **6.84** |
| PI ↓ | 4.75 | 4.40| 7.03 | 4.66 | 4.72 | **4.19** |
**Q2.2. UPS for large model**
Additionally, we explored two more transformers (HAT-light [3] and DRCT-light [4]) for lightweight SISR. As shown in Tab. 4, HAT-light-UPS and DRCT-light-UPS consistently improved their results while achieving better inference latency and lower FLOPs. For instance, DRCT-light-UPS enhances DRCT-light by 0.4dB on the Manga109 dataset and HAT-light-UPS improves HAT-light by 0.28dB on the Urban100 benchmark.
Lastly, we incorporated UPS into large SISR models including SwinIR, HAT [3], and DRCT [4]. The results in Tab. 3 suggest that their enhanced versions by UPS can achieve promising performance and significantly lower inference costs. We can observe that DRCT-UPS surpasses its baseline counterpart (DRCT) by 0.26dB on Urban100.
Thank you again for this constructive suggestion. We will include these analyses in our revised paper. Besides extending our work to image JPEG compression removal and image de-noising tasks, we believe all these comprehensive additional analyses (wide-range SR tasks and easy adaption for other scalable SOTA transformer frameworks) will enhance the value of UPS and inspire future research in developing more effective decoupling optimization algorithms.
[1] Yue, et al. "ResShift: Efficient Diffusion Model for Image Super-Resolution by Residual Shifting." NeurIPS, 2023.
[2] Wang et al. "Exploiting Diffusion Prior for Real-World Image Super-Resolution." IJCV, 2024.
[3] Chen, et al. “Activating More Pixels in Image Super-Resolution Transformer.” CVPR, 2022.
[4] Hsu, et al. "DRCT: Saving Image Super-resolution away from Information Bottleneck." arXiv, 2024.
Pdf: /pdf/560118d8fae0fbbdc1c2c3a58304d568ca19daa0.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
When to Act and When to Ask: Policy Learning With Deferral Under Hidden Confounding | Accept (poster) | Summary: The paper proposes a learning-to-defer method for a policy learning problem with binary actions under unobserved confounding. The proposed method uses the MSM to bound the confounding strength and compares the pessimistic bounds of Y1 and Y0 with the identifiable human reward. The final optimization uses the surrogate loss proposed in Mozannar and Sontag [2020]. Overall, the method seems sound. However, the authors overlooked some existing literature on learning to defer for policy learning problems. The authors compare the proposed method with a self-proposed rejection-inference-style baseline, and a policy learning method considering the MSM without humans.
Strengths: The paper proposes a sound solution to policy learning with unobserved confounding by conservatively comparing the lower bound of Y1, Y0 and identifiable human reward. The method has a strong performance compared to baselines used in the paper by leveraging a stronger direct method class. It is interesting to see how direct method can be used in the UC case for learning to defer since with unconfoundness, both Y1 and Y0 are identifiable and humans may not be needed unless the policy class is limited.
Weaknesses: Missing related work and inaccurate claims: For the related work, it seems that learning to defer for policy learning was first proposed in [1], and learning to defer for policy learning with unobserved confounders was first proposed in [2]. These two papers adopt the Inverse-Propensity-Score framework to address the problem with a similar experimental setup and the same problem setting. The authors should compare with existing works, and it seems the claim that it is "the first to learn a policy with the option of deferral to an expert while allowing for hidden confounders" is not very accurate. Also, the assumptions in this paper are stronger since it requires an additional assumption in Definition 1.
Binary action: [1][2] can work with multiple actions, and it seems the proposed method could also be extended to multiple actions, so why do the authors restrict to binary actions?
Theoretical result: For the bound validity assumption/definition 1, this seems to suggest Y is deterministic, which is rather strong. For example, a simple Gaussian distribution would violate this assumption. What if Y is a random variable, how can theorem 1 be adapted? I would expect a regret bound for the total reward in the paper (theorem 2 is a bound for the approximated loss instead of the reward if I understand correctly). Similarly, for a general direct method, when would we expect assumption 1 holds and not holds?
Experiments - baseline: For baselines, I would expect a purely pessimistic AI baseline for a fair comparison. For example, the eq 3 can also use the pessimistic principle (compare Y1_lower and Y0_lower in the otherwise condition) authors used for comparing human and pessimistic bounds of the counterfactuals. The authors should also compare with existing works using IPS method and discuss the trade-offs.
Experiments: the experimental results are relatively weak with a synthetic dataset and the IHDP dataset. It would make the paper stronger if the authors can experiment with real human responses and real-world datasets.
Presentation: I feel the authors should expand on the introduction of Mozannar and Sontag [2020] for the paper to be self-contained. Currently, it seems hard to understand unless readers have read Mozannar and Sontag [2020] separately.
[1] Gao R, Saar-Tsechansky M, De-Arteaga M, et al. Human-AI collaboration with bandit feedback, IJCAI-21.
[2] Gao R, Yin M. Confounding-robust policy improvement with human-ai teams[J]. arXiv preprint arXiv:2310.08824, 2023.
Technical Quality: 3
Clarity: 2
Questions for Authors: See weakness.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Related work:**
Thank you for pointing this paper out! As this very relevant paper has also been brought up by another reviewer, we choose to comment on it in detail in the “general” comment at the top. We should definitely have caught on to it, and we will refer to it, revise our claims, and discuss it in the revised version of our paper as detailed in the aforementioned comment. We will also make sure to refer and discuss the earlier work by Gao et al. (2021) which deals with the unconfounded case.
**Binary action:**
Indeed our method can be readily extended to multiple actions. We found the presentation to be simpler (including visualizing the intervals as we do in the appendix) for the binary case.
**Theoretical results - the bounds validity assumption:**
The Theorem's claims hold for any $x$ for which the bounds include the true potential outcomes. Thus, if the bounds obey this for a fraction $1-\delta$ of the $x$'s, the Theorem will hold for the same fraction of cases. Bounds that include quantile components (as the B-learner bounds do) can be robust to a fair degree of noise in the random variable. Oprescu et al. show (in their Corollary 2) that the B-learner bounds are valid on average. More generally, the goal of the theorem is to show that the costs we derived make sense and would lead a model to choose the best course of action.
**Experiments - baselines and data:**
Thank you for the suggestions. We wish to point out that we did compare with an IPS based method: the method of Kallus & Zhou 2020, denoted CRLogit and CRLogit L1 in our experiments. We further attempted to compare with the method of Gao & Yin; however, as we detail in the general comment to all reviewers at the top, we unfortunately could not obtain an implementation of their code or experimental setup, and could not replicate it fully based on the details in the arxiv paper.
We wish to point out that experimenting with real human responses would require an "active" experiment: since this is a causal problem, we cannot merely use historical human decisions; rather, we would need to recruit real humans acting as decision makers, in a setup where we know the causal ground truth. While we completely agree that this would be an ideal experiment, we believe that most causal inference papers in the community are not held to such a high standard, and we humbly ask the reviewer to take this into account. For example, in the Gao & Yin paper, while data of real human responses is used, they synthesize the risks and the hidden-confounding aspects of the data. Similarly, we use the IHDP dataset, which includes real human subjects, and induce hidden confounding in it.
**Related work presentation - Expanding on Mozannar et al.'s work:**
Thank you for this important comment. We will make sure to improve our presentation of Mozannar and Sontag [2020] and ensure the paper is self-contained.
---
Rebuttal Comment 1.1:
Comment: Thanks authors for the responses.
For the experimental baselines, I refer to a pessimistic baseline with the PI bound from Blearner, not the CRLogit baseline where the bounds are from propensity scores.
Another question is why the B-learner policy is almost not impacted by the confounding strength, even when the assumption is violated.
---
Reply to Comment 1.1.1:
Title: Pessimistic baseline
Comment: Thank you for the baseline suggestion, it is indeed illustrative. Specifically, we have implemented the pessimistic baseline (PB henceforth) as you suggested, where in eq. 3 we compare $Y(1)\_{\text{lower}}$ and $Y(0)\_{\text{lower}}$ in the "otherwise" condition, instead of deferring. We ran it on the IHDP experiment, adding a line to Figure 2a. Unfortunately we do not seem to be able to upload a new figure at this stage, thus we will describe the results qualitatively:
* For values of $\log(\Lambda)$ going up to about 2, the performance of PB is very similar to the B-learner policy (slightly higher mean but CIs strongly overlapping). The performance of PB is significantly lower than that of CARED for these values.
* For values of $\log(\Lambda)$ between 2 and 4.5 PB performance increases, reaching a policy value of 15.2 (nearly as good as CARED, but without deferring) and then decreases back again.
* At the very highest values of $\log(\Lambda)$, where the performance of CARED and B-learner drops as they defer most cases, the performance of the PB also drops but is better than that of CARED for the same high level of $\log(\Lambda)$.
We note that the comparison of this baseline to CARED and B-learner is not exactly like for like, as the latter two are both "encouraged" to defer, especially at high levels of $\Lambda$. Interestingly, this means that at least for this specific dataset, at very high levels of hidden confounding where B-learner and CARED defer most cases, the lower bounds still retain some useful information.
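A rough sketch of the pessimistic baseline (PB) decision rule described above, under the assumption that eq. 3 takes an interval-comparison form; the exact conditions in the paper's eq. 3 may differ, and the function and argument names are ours:

```python
def pessimistic_baseline(y1_lo, y1_hi, y0_lo, y0_hi):
    """Hypothetical sketch of the PB rule: decide from bound intervals alone.

    Assumed form: if one action's lower bound beats the other's upper bound,
    the choice is unambiguous; otherwise (the 'otherwise' condition of eq. 3)
    fall back to comparing lower bounds instead of deferring.
    """
    if y1_lo > y0_hi:
        return 1  # action 1 dominates under any admissible confounding
    if y0_lo > y1_hi:
        return 0  # action 0 dominates
    return 1 if y1_lo > y0_lo else 0  # pessimistic tie-break, never defer

# Overlapping intervals: fall back to comparing lower bounds
a = pessimistic_baseline(y1_lo=0.2, y1_hi=0.6, y0_lo=0.1, y0_hi=0.5)
```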
Regarding the relative stability of B-learner performance across levels of hidden confounding:
In the synthetic data experiment (Figure 1) we do see the performance of the B-learner varying. For the IHDP experimental setup the B-learner indeed shows less sensitivity. One possible explanation is that in the IHDP dataset, decisions where the B-learner bounds are correct (e.g., a positive lower bound on the CATE for a case where $A=1$ is best) are the ones that tend to be deferred as $\Lambda$ increases; this is unlike CARED, which can learn when it is actually beneficial to defer, and thus defer cases where the CATE function is misleading.
Strengths: - The paper is generally well-written.
- Theoretical properties are explored.
- The proposed method is easy to implement and the performance is validated by the empirical studies.
Weaknesses: The performance of the proposed human-AI collaboration system relies on the properly specified cost function. For the expert deferring cost and action assigning costs, the current algorithm sets them in a conservative way. As the author mentioned, there are multiple ways to set these costs. Is the conservative cost always the best? If so, please provide theoretical and empirical support. If not, it would be helpful to clarify the situations when the conservative cost is inferior and evaluate other cost functions in the empirical studies.
For the theoretical analysis, the main condition in Definition 1 requires coverage for all $Y(a)$, which is a strong and maybe infeasible requirement. In practice, the guarantee usually only holds for the expected coverage or the probability of coverage. How would this more practical validity definition influence the Theorem 1 claim?
Lastly, the paper states "there are no previous works that learn a policy with the option of deferral under hidden confounding," but it seems [1] considers a similar scenario of unmeasured confounding in the human-AI system. If so, it would be important to discuss the differences and use this method as a baseline in the simulation studies.
I look forward to the author's responses.
[1] Confounding-Robust Policy Improvement with Human-AI Teams (https://arxiv.org/abs/2310.08824)
Technical Quality: 3
Clarity: 2
Questions for Authors: - In Sec. 4.1, the expert’s action space $\mathcal{Y}$ is not defined. Is it the same as the action space $\mathcal{A}$?
- Are the observed data $(X_i, A_i,Y_i)$ only generated by human experts? Can it be generated by an algorithm?
- In Figs 1 and 2, why the value of CARED policy does not change with a wide range of $\Gamma$ values? And why does it not peak at the true $\Gamma$? It would be better to plot the true $\Gamma$ as a vertical line in these figures.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Cost function alternatives - conservative vs. optimistic costs:** This is an excellent question. Under the validity assumption, the conservative approach is guaranteed to be correct whenever the expert is correct, while the optimistic costs do not have this guarantee. Thus, intuitively the conservative costs would be better in cases where the experts are generally correct. More generally, neither the conservative nor the optimistic case are always superior to each other, as we show now:
Let $(x,a,y)$ be a sample where w.l.o.g. $Y(1)>Y(0)$. We have the following sufficient conditions:
1. When the expert is wrong, the conservative costs are strictly better than the optimistic costs when the following holds:
$2\hat{Y}^{-}(x, 0) < \hat{Y}^{+}(x, 1)+Y(1) < \hat{Y}^{-}(x, 0)+\hat{Y}^{+}(x, 0)$.
2. When the expert is wrong, the optimistic costs are strictly better than the conservative costs, when the following holds:
$2\hat{Y}^{-}(x, 1) < \hat{Y}^{+}(x, 0)+Y(0) < \hat{Y}^{-}(x, 1)+\hat{Y}^{+}(x, 1)$.
If neither of these conditions holds, it is possible that both approaches are wrong or that both are correct.
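The two sufficient conditions above can be evaluated numerically. Below is a toy check with illustrative bound values chosen by us (not from the paper), satisfying $Y(1) > Y(0)$ and bound validity $\hat{Y}^{-}(x,a) \le Y(a) \le \hat{Y}^{+}(x,a)$:

```python
def cond_conservative_better(Y1, Yl0, Yu0, Yu1):
    # Condition 1: 2*Y^-(x,0) < Y^+(x,1) + Y(1) < Y^-(x,0) + Y^+(x,0)
    return 2 * Yl0 < Yu1 + Y1 < Yl0 + Yu0

def cond_optimistic_better(Y0, Yl1, Yu1, Yu0):
    # Condition 2: 2*Y^-(x,1) < Y^+(x,0) + Y(0) < Y^-(x,1) + Y^+(x,1)
    return 2 * Yl1 < Yu0 + Y0 < Yl1 + Yu1

# Illustrative values: Y(1) = 1.0 > Y(0) = 0.9, with valid bounds
Y1, Y0 = 1.0, 0.9
Yl0, Yu0 = 0.5, 2.0   # bounds on Y(0)
Yl1, Yu1 = 0.9, 1.05  # bounds on Y(1)

c1 = cond_conservative_better(Y1, Yl0, Yu0, Yu1)  # conservative costs win
c2 = cond_optimistic_better(Y0, Yl1, Yu1, Yu0)    # optimistic costs win
```

With these particular values only the first condition holds, consistent with the claim that neither choice of costs dominates in general.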
**Bounds validity assumption:**
The Theorem’s claims hold for any $x$ for which the bounds are valid. Thus, if the bounds are valid for a fraction of $1-\delta$ of the $x$’s, the Theorem will hold for the same fraction of cases. The goal of the theorem is to show that the costs we derived indeed make sense and would lead a model to choose the best course of action.
**Related work:**
Thank you for pointing this paper out! As this very relevant paper has also been brought up by another reviewer, we comment on it in detail in the “general” comment at the top. We should certainly have caught it, and we will refer to it, revise our claims, and discuss it in the revised version of our paper, as detailed in the aforementioned comment.
**Expert’s action space:**
You are right, this is a typo. The expert's action space $\mathcal{Y}$ is the same as the action space $\mathcal{A}$.
**The source of observed data:**
In principle, the only requirement we have is that the process generating the actions in the historical data is the same one generating them at test time; and our method is geared toward the case where the process generating these actions has access to extra information that is not reflected in $X$, but which is correlated with the outcome $Y$. Thus, while we are motivated by human experts, the method is not limited to that case.
**CARED policy performance across $\Lambda$ Values:**
In Figure 2b we see that for large enough $\Lambda$ the CARED policy does indeed change and its value decreases; this can also be seen to a lesser degree in Figure 2a. We will add a line corresponding to the true $\Lambda$ value in Figures 1 and 2, following your suggestion. Notably, for the IHDP hidden confounding dataset this $\Lambda$ value will be estimated from the propensity scores, as this dataset was created by hiding a confounder from the original IHDP dataset.
[3] Jesson, Andrew, et al. "Quantifying ignorance in individual-level causal-effect estimates under hidden confounding." International Conference on Machine Learning. PMLR, 2021.
---
Rebuttal Comment 1.1:
Title: Thanks for the author responses.
Comment: The author's response addresses some of my major concerns and I update my evaluation accordingly. | Summary: This work learns a policy that can abstain from predicting an action in the case where actions are binary. Their idea follows previous work in the learning-to-defer literature for supervised learning. They design the cost functions so that their proposed surrogate loss recovers the optimal policy. Their experiments show the effectiveness of the proposed method.
Strengths: - They show the surrogate loss has the same optimal solution as the original expert-machine loss.
- Their experiments on synthetic data and IHDP hidden confounding show their method CARED outperforms baselines consistently in terms of policy value.
Weaknesses: - The authors missed some references on learning to abstain such as [1]
[1] Yin, Tongxin, Jean-Francois Ton, Ruocheng Guo, Yuanshun Yao, Mingyan Liu, and Yang Liu. "Fair Classifiers that Abstain without Harm." In _The Twelfth International Conference on Learning Representations_.
Technical Quality: 3
Clarity: 3
Questions for Authors: - The cost function $C(x_i,a)$ is the worst-case regret. I wonder what the results would be if an alternative such as the mean regret or best-case regret were used.
- L240: I wonder whether the Rademacher complexity varies with the NN architecture, since the paper seems to claim that any NN with weight decay or dropout has the same Rademacher complexity.
- Did the authors consider the fact that the number of samples the expert can process is limited?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: - The cost function relies on bounds of potential outcomes, which is often unknown in real-world applications. This eventually reduces to the dependence on the B-Learner. I wonder if the authors considered alternatives of B-learner to estimate the bounds.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **References on Learning to Abstain:**
Thank you for bringing this paper to our attention. While our paper has been inspired by the work of Mozannar & Sontag on deferral which adopts an end-to-end approach, this work presents an alternative, post-hoc approach to the problem of (non-causal) deferral. We leave it to future work to consider how the approach of Yin et al. above can be adapted to the causal case, where no ground-truth labels are available.
**Alternatives of the cost function:**
This is a very important research direction, which can be extended to a more general question: how to choose the best set of costs for a specific problem? Each choice (e.g., the average cost) would imply different trade-offs which would be interesting to explore.
**Rademacher complexity of NN architectures:**
Indeed the specific complexity would depend on the architecture. Our point was merely to state that this condition is not vacuous and is fulfilled by a range of widely used approaches. We refer the reader to the following papers which discuss this subject with far more nuance. We will mention this point in the paper.
* Bartlett, Peter L., Dylan J. Foster, and Matus J. Telgarsky. "Spectrally-normalized margin bounds for neural networks." Advances in neural information processing systems 30 (2017).
* Neyshabur, Behnam, et al. "Towards understanding the role of over-parametrization in generalization of neural networks." arXiv preprint arXiv:1805.12076 (2018).
* Golowich, Noah, Alexander Rakhlin, and Ohad Shamir. "Size-independent sample complexity of neural networks." Conference On Learning Theory. PMLR, 2018
**Impact of limited expert sample processing:**
This is an important point. We believe that in many cases the deferral rate would be set a priori according to such limits on the experts’ time and capabilities. This motivates the comparison in Figure 2b which asks “given a fixed deferral rate, which policy would perform the best”: for example, assuming we know that experts can only deal with 30% of the samples, we can fix the deferral rate accordingly.
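As an illustrative sketch of fixing the deferral rate a priori (our own example, not code from the paper; `defer_scores` stands for any per-sample measure of how much a sample warrants deferral):

```python
def defer_at_fixed_rate(defer_scores, rate=0.3):
    """Defer the `rate` fraction of samples with the highest deferral scores.

    Returns a boolean list marking which samples are deferred to the expert;
    ties at the threshold may defer slightly more than the requested fraction.
    """
    k = int(round(rate * len(defer_scores)))  # number of samples to defer
    if k == 0:
        return [False] * len(defer_scores)
    threshold = sorted(defer_scores, reverse=True)[k - 1]  # k-th largest score
    return [s >= threshold for s in defer_scores]
```

With, say, 10 samples and `rate=0.3`, the three highest-scoring samples are deferred, matching a capacity constraint of 30%.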
**Reliance on B-Learner for estimating bounds:**
We assume we are working with observational data, which is the case with real-world applications. In this case, for each sample, we have the observed treatment, and the observed corresponding outcome. That is, for each sample, we only observe one treatment, and will need to estimate the other potential outcomes in order to know what is the right treatment for each sample.
Our theoretical guarantees hold for any model that provides bounds on the CAPO and for which Assumptions 2-5 hold. In practice, the algorithm would work with any model that gives bounds on the CAPO even if these assumptions do not hold, though for such a model our theoretical guarantees might not carry over.
Thus, the B-Learner is just one possible model that our method can use. However, our choice of the B-Learner is due to the strong guarantees it provides such as validity, sharpness, robustness, and the ability to perform well on moderate amounts of data.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. I will maintain my score. | Summary: The paper combines two frameworks: optimizing surrogate losses for learning to defer, and bounds on CATE under unobserved confounding under the marginal sensitivity model. The challenge with applying the surrogate loss from Mozannar and Sontag to the causal setting is that the costs of various classification outcomes are unknown and need to be estimated. The paper proposes essentially deferring when the causal effects are not (partially) identified to be strictly positive or negative, and develops conservative/optimistic characterizations of these losses. Deferral consists of reporting the recorded $a_i$. When the unobserved confounding is exactly correlated with potential outcomes (as is the case with simulation specifications of unobserved confounding that are "favorable" to robust bounds methodology), deferral can be adaptive and improve upon both the expert policy and valid bounds.
Strengths: significance:
While the paper connects two prior areas (surrogate losses for learning to defer, and PI bounds), there is some work needed to establish the connection and the paper does a good job of doing so and showing that such an approach can obtain improvements upon current conservativism of robust approaches only (under implicit assumptions on expert-ness of the underlying behavior policy). This is well-illustrated in the empirics.
Weaknesses: There seems to be an implicit assumption that the confounded behavior policy
Technical Quality: 3
Clarity: 3
Questions for Authors: - Could the authors please describe if there is some relationship between implicit assumptions on regimes where this approach "does well", i.e. can _strictly_ improve upon robust PI bounds policies, and recent work on "superoptimal" treatment regimes?
superoptimal treatment regimes: Optimal regimes for algorithm-assisted human decision-making
MJ Stensrud, JD Laurendeau, AL Sarvet - Biometrika, 2024
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: addressed sufficiently, although it would help to study more explicitly 1) when the policy improvement is strict and 2) how assumptions on the expert-ness of the underlying confounded behavior policy relate to the performance of the conservative/optimistic losses given in the paper
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Implicit assumption on the confounded behavior policy:**
We would be grateful for a further clarification of this comment, as it seems to have been cut short.
**Relationship between implicit assumptions on regimes where this approach "does well":**
This is a fascinating connection, as both our work and the work cited consider explicitly the case of human experts with access to information which the model does not. The work by Stensrud et al. considers a different scenario, where using an instrumental variable and some clever identification results one can “override” the human expert. The main difference is the reliance on instrumental variables for identification, which is a serious drawback as such variables do not always exist. However, in future work we are considering merging their approach with MSM-type bounds. We include a mention of their work and this connection in the revised version.
**Exploring policy improvement conditions and expert-ness assumptions impact on loss performance:**
1) In the proof of Theorem 1 we outline conditions for when the policy improvement is strict, which rely on whether inequality (12) holds. Unfortunately this condition, which can easily occur in practice (as we see in the experiments), does not have a straightforward interpretation in terms of CATE bounds. Our best attempt at explaining it is as follows:
Assuming w.l.o.g. that $Y(1)>Y(0)$, this event occurs when the negative of a “narrow” CATE bound (narrow in the sense of taking the lower bound for the higher potential outcome minus the upper bound for the lower potential outcome) is larger than the lower bound minus the actual potential outcome. We will refer to this condition in the main paper.
2) In general we cannot know for sure whether the experts are mostly correct or not, due to the fundamental problem of causal inference. However, if we assume they are generally correct, then one might wish to use conservative approach and defer more often to the experts.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal and clarifications. Apologies for the cut off comment. The main question was about connections between assumptions on the "expert-ness" of the underlying behavior policy and the improvement from deferral. E.g. the DGP used is one example of "adversarial unobs confounding" that is exactly correlated with outcomes, which is also in some sense the best-case scenario for information in the behavior policy. The rebuttal comment about the technical assumptions for strict improvement was helpful here. Outlining these technical conditions in the main text would indeed be helpful.
---
Reply to Comment 1.1.1:
Title: Thank you and clarification in main text
Comment: Thank you for the explanation and for the support! We will make sure to detail the conditions on strict improvement in the main text, following the comments above. We will also give more space to the discussion of the connections between the hidden confounders and the expert behavior. | Rebuttal 1:
Rebuttal: We thank all of our reviewers for their insightful comments and constructive feedback, and we are encouraged by your support. Your comments have helped us clarify and strengthen this work, and we are grateful for them. We will address here a major comment regarding the paper of Gao & Yin, and address additional comments in the individual responses to the reviewers. If there are remaining or new questions please let us know and we will do our best to address them.
We wish to thank reviewers fb21 and r89L for bringing to our attention the paper “Confounding-Robust Policy Improvement with Human-AI Teams” by Gao and Yin, from October 2023 [1]. We were unaware of this paper, which has the same motivation as ours: learning to act in conjunction with a human expert, when the data used for learning has a bounded degree of hidden confounding. Thus, our claim in the Discussion of being the first to study the problem is incorrect, and we apologize for our oversight. We will add a discussion of this very relevant paper, and the similarities and differences of their approach compared to ours.
The main differences between the papers are:
1. Gao & Yin optimize an inverse propensity weighted objective, while we optimize a cost-sensitive objective. This means that unlike our approach, their approach does not model the outcome directly. Moreover, the weighted objective they employ leads to a situation where only cases where the proposed policy agrees to a high degree with the observed policy would be taken into account. Our approach can be more efficient, as we can make use of bounds, such as those of the B-learner, that merge the propensity score and outcome models in a near-optimal manner.
2. Gao & Yin provide a bound on the regret of the policy. Their bound assumes a well-specified model and scales with the square of the smallest inverse propensity score in the historical policy, which could be quite small. Our result does not bound the regret directly: instead, it shows that we can obtain low values of the weighted cost function while taking into account the error in estimating the CAPO bounds. We further show that the pointwise minimum of the loss function is better than both the human and the bounds-policy. Our theoretical results are thus not directly comparable.
3. Gao & Yin also address the case where there are multiple human experts, each with their own policy. This requires the specific experts who generated the historical data to also be the experts that use the system upon deployment. This is not a case we have examined, and we will look into its ramifications in future work.
Regrettably, no code is available online for replicating either the method of Gao & Yin or their experimental setup. We have contacted them, but unfortunately they have said that the code cannot be made available at this time due to constraints on their side. We have attempted to replicate their simulation, which is similar to ours, but several crucial details are not reported in the paper and we could not replicate their numbers.
We believe both papers have their merits and drawbacks, as we approach the same challenge using quite different algorithmic approaches. We hope that in the future a direct experimental comparison will be possible.
In the attached PDF we add, following the suggestion of reviewer Vchf, a version of Figure 2b of our paper with an added random deferral baseline.
[1] Gao, Ruijiang, and Mingzhang Yin. "Confounding-robust policy improvement with human-ai teams." arXiv preprint arXiv:2310.08824 (2023).
Pdf: /pdf/cb0ac7ce688d70bd4f9b0d2d32a9eacea8110ef6.pdf | NeurIPS_2024_submissions_huggingface | 2024 | Summary: This paper extends the work of [Mozannar & Sontag 2020] on "learning to defer" to the causal inference setting with (bounded) hidden confounding. Compared to the original supervised learning setting, the causal setting does not have ground truth labels. Therefore this paper proposes to use estimated bounds on the potential outcomes to drive the deferral decisions, and formulates it as an optimization problem.
Experiments are conducted on synthetic and semi-synthetic (IHDP) datasets.
Strengths: - Studies an interesting and relevant problem
- Writing is generally clear.
Weaknesses: - Although writing is generally clear, the flow could be improved to better explain some of the baselines / obvious / naive approaches, and justify why the proposed approach (arguably more complex) is necessary.
Technical Quality: 2
Clarity: 2
Questions for Authors: Major:
- L227 Eqn (3) B-learner policy: this is used as a baseline approach in the experiments. Currently this is presented in the theory section, and for a moment I thought this is your proposed approach. It is such a straightforward definition, a simple approach that also outperforms the baselines. I think this needs to be presented earlier in the text, and you need to make it clear why it's not good enough and how your approach is different.
Others:
- L37: "human expert typically has access to the hidden confounders" - in what form? Do you mean human decisions are more likely to account for hidden confounders.
- L181-182: in the definition of pessimistic and optimistic deferral costs, what is $\max_{a' \neq a_i}$, aren't there only two actions? L190 I assume it's defining the same thing but different notation is used.
- L202 Algorithm 1 is never referenced in the methods section. Also it seems to simply repeat the expressions already presented in the text above, not sure this algorithm is necessary.
- L224 what does bounds are valid *on average* mean? How does that affect the theoretical results if some bounds are invalid?
- L227 "This policy defers if and only if the CATE upper bound is positive and the CATE lower bound is negative." You may elaborate that intuitively if bounds are both positive then the CATE is positive (and vice versa) so it's clear what to do, and the deferral situation is when it's unclear the true CATE is >0 or <0 and unclear what to do.
- L256 Theorem 2: the generalization bound compares the training loss and the expected loss, which are both $L_{CE}$, but how is the result of this learning process related to Eqn (1), the true optimization objective?
- Theorem 2: What does the generalization mean and how does it compare to other things? Does it tell us about what we should do in practice when using your approach?
- experiments should compare to a random deferral strategy as a baseline.
- Fig 2a "proposed method outperforms both baselines" -> this is not true, for large log(Λ) the proposed approach is overtaken by B-learner.
- Fig 2b as deferral rate increases beyond 60% performance drops, I think it's worth commenting on if for example we allow for *up to* 80% deferral, can the policy learning figure out it only needs to defer 60% of the time. Alternatively, what are practical ways of selecting this deferral ratio?
Minor issues:
- Citation style: there are places where the reference in text should be parenthesized but are not.
- Typo: There's a slash on L13 of the abstract.
- L122-124, L138-139: deferral policy defined twice
- L132 "ho" -> "how"
- L204 "we see than" -> we see that
- L249 says $Y^{+}_1(x) - Y^{+}_1(x)$. I think there is a typo.
- L271 says three variants, but only two are listed
- L347 "We note there when learning to act" -> we note that
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: Main limitations noted include access to expert actions in observational data, and that experts behavior do not change when they receive machine's deferral.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Presenting the baselines:**
Thank you for this useful suggestion; we will make sure to better explain the baselines and justify the merits of our approach. See also our reply to the major question.
**Clarification of baseline and differentiation from proposed method:**
Thank you for this important comment - we indeed should have explained this point better. We will revise and explain in more detail the baseline policy we denote by $\pi_{bounds}^Q$, which defers if the estimated CATE interval crosses 0. This approach is indeed natural and has been used for deferral in many previous papers (including papers which only deal with statistical uncertainty). It was also used in the B-Learner paper.
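As a minimal sketch, the baseline deferral rule described above (defer iff the estimated CATE interval crosses 0) can be written as follows (our own illustration, assuming estimated CATE bounds as inputs; the names are not from the paper's code):

```python
def bounds_policy(cate_lower, cate_upper):
    """Bounds-based baseline: defer iff the estimated CATE interval crosses 0.

    If both bounds are positive, the CATE is identified as positive (treat);
    if both are negative, it is identified as negative (do not treat);
    otherwise the sign is ambiguous and the decision is deferred to the expert.
    """
    if cate_lower > 0:
        return 1          # treat: effect identified as positive
    if cate_upper < 0:
        return 0          # do not treat: effect identified as negative
    return "defer"        # interval crosses 0: sign of the CATE is unknown
```

In contrast to this rule, which classifies a sample as “deferred” solely based on an uncertainty interval, the proposed approach directly optimizes for the deferral classification.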
We will also give a more detailed discussion of the ramifications of Theorem 1: in particular, the Theorem shows that whenever the bounds include the true potential outcome, the action that minimizes our proposed loss function is at least as good as the action implied by the baseline “bounds” policy, as well as the human expert policy. We further show that under very reasonable conditions there will be a non-trivial proportion of cases where the action minimizing our loss function will be strictly better than the one chosen by the bounds policy and expert policy. Finally, our experiments show empirically the added value of our approach compared to these baselines. The reasoning for why our approach might perform better is that it directly optimizes for the deferral classification, as opposed to classifying a sample as “deferred” solely based on some uncertainty measure. This last point has already been made by Mozannar & Sontag 2022 for the non-causal case.
We will make sure to clarify these important points in the revised text.
**Human expert access to hidden confounders clarification:**
When learning how to act based on historical data where the actions in the data were made by human experts, the confounders are by definition factors that affected both the action choice and the outcome. Thus, if at test time we have a human decision maker drawn from the same distribution as those in the historical data, they will by necessity have access to all the confounders. Notably, this is true even if those confounders only affect the decision maker unconsciously: the point is that we have access to actions taken by the “true” policy which generated the historical data. Consider for example the case where a human clinician’s decisions are affected by the demeanor of the patient: this would typically be considered a hidden confounder, as it is not recorded in health records. However, at test time the clinicians can still see the patients’ demeanor and be affected by it - in that sense they have access to a confounder which is hidden from the eyes of any system which is merely based on recorded data.
**Deferral costs notation:**
Indeed, while our approach can be extended to multiple actions using the max operator as in L181, in our paper we focus on the binary case. We will correct this and use the notation for binary actions as in L190.
**Algorithm 1:**
Thank you for your note, we will consider whether we can do without the algorithm, saving the corresponding space.
**Bounds validity:**
This sentence refers to Corollary 2 of Oprescu et al. 2023 [2], which gives a precise statement. Informally it means that with high probability, the average of the estimated upper bounds over the dataset is larger than the average of the true upper bounds given by the MSM. A symmetric result is true for the lower bounds.
[2] Oprescu, Miruna, et al. "B-learner: Quasi-oracle bounds on heterogeneous causal effects under hidden confounding." International Conference on Machine Learning. PMLR, 2023.
**CATE bounds' deferral policy explanation:**
Thank you for the comment. You are correct in pointing this out and we will make sure this point is explained more clearly.
**Theorem 2 - Optimization objective:**
$L$ is a non-convex loss function that is hard to optimize, so we propose optimizing $L_{CE}$, which is convex and easier to optimize. We show that minimizing $L_{CE}$ converges to the optimal solution of $L$. Then, since $L_{CE}$ is easier to work with and converges to the desired solution, we state the theoretical guarantees in terms of $L_{CE}$.
**Theorem 2 - Generalization meaning**
Theorem 2 implies that our optimization objective is reasonable in the sense that attaining good performance on the train set by minimizing it can indeed lead to good performance over unseen samples. In principle one could have had an objective which cannot be learned, or where the errors cannot be controlled even for arbitrarily large samples.
The reason this result is not completely standard is the fact that we optimize a cost sensitive objective where the costs themselves are estimated from data. This is similar in spirit (though not at all in the specifics) to cases in causal inference where objectives weighted by the inverse propensity score are considered, and one has to account for the error in estimating the propensity scores themselves. Our approach using estimated costs turns out to be tractable, but still requires some care in its derivation and proof. As is usually the case in generalization bounds, the result does not translate immediately into a practical guide, but it does imply that the method is expected to respond “well” (i.e., similar to standard supervised learning) to changes in sample size and the complexity of the function class.
---
Rebuttal 2:
Title: Rebuttal to Reviewer Vchf cont'
Comment: **Comparison to random deferral baseline:** Thank you for this excellent suggestion! We attach the new Figure 2b in the general comment at the top. Following your suggestion, we have implemented a random deferral strategy, where the action that is taken in non-deferred cases is based on a T-learner, using the same outcome base-learners as those that were used in the B-learner. The comparison with the random baseline shows that generally speaking, both the B-learner policy and CARED defer “correctly”, i.e. defer cases where they might make incorrect actions, with CARED significantly outperforming the B-learner on a wide range of deferral rates.
**Figure 2a results:** You are correct to point out that for large log(Λ) B-learner slightly outperforms our approach, and we will be more careful in our presentation of this result. However, we wish to point out that as seen in Figure 2b, what is actually happening is that for similar log(Λ) values the B-learner defers much less than our approach, which explains the difference in performance. When comparing per deferral rate, our approach outperforms B-learner. In addition, it seems that in order to get the B-learner to defer at high-rates one would need to use extremely high values of log(Λ). We will add this more nuanced presentation of the results.
**Figure 2b results:** Estimating the policy value of a policy which includes human experts and hidden confounding can only be done using a real-world experiment. In principle, when such an experiment is conducted different deferral rates can be compared and the optimal one chosen. Alternatively, one can make various assumptions about the human expert and try to estimate policy values with hidden confounding for such hybrid policies. This is an interesting question which we leave for future research.
In practice, in many cases the deferral rate could be set or constrained a-priori to some narrow interval based on economic and administrative constraints and preferences, for example the capacity of human experts. We will mention this point in the discussion.
**Limitation: Static Expert Behavior:** A good deal of observational data includes expert actions; for example many electronic health care datasets would include the treatments and medications prescribed by expert clinicians. These same clinician populations might be assisted by an algorithmic model based on our approach.
However, you raise an excellent point regarding the way humans might react to the presence of a model-based action recommendation system. We believe this is a fascinating avenue for future work in collaboration with experts on human decision making: do human experts change their actions when they know their decisions are those that were deferred to them by a machine?
---
Rebuttal 3:
Comment: Thank you authors for providing responses to reviews. I have updated my rating from 5 to 6. | null | null | null | null | null | null |
Transductive Active Learning: Theory and Applications | Accept (poster) | Summary: This paper considers active learning with Gaussian Process in a transductive setting where the learner wants to optimize the model performance on a target space A while it can only sample from a sample space S. It proposes a few algorithms that sequentially choose the example that minimizes a few variations of "posterior uncertainty" of the model on A. Assuming the measure for such uncertainty is submodular, it provides bounds on the convergence rate of the proposed algorithm. Empirically, it evaluates the proposed methods on two scenarios: fine-tuning and safe Bayesian optimization.
Strengths: - This paper considers a practically relevant setup of active learning.
- The paper is mostly written clearly.
- The proposed methods are sound. It gives both theoretical guarantees and empirical evaluation.
Weaknesses: - My major concern is novelty and significance:
- The main idea of the proposed query strategy is to minimize posterior uncertainty. This is a well-known method in active learning, and it is an especially straightforward choice in a Bayesian setting.
- For the theoretical results, I'm not very familiar with Gaussian Process literature, but I'm not convinced that the theoretical guarantees in Section 3.1 and 3.2 are nontrivial. It would help readers appreciate the results if the authors could clarify the challenges of obtaining such results, tightness of the bounds, how strong it is compared to other baseline sample strategies, or any new insights from such bounds.
--------
I've read the author feedback. Now I agree that the theoretical part offers interesting techniques and results (which roughly match known results in similar problems), but I personally think the results are still a bit unsatisfactory (specifically, due to the dependency on |S|, the RHS of the bound in Thm 3.3 can grow as n increases). In addition, since this paper claims the theory part is one of the major contributions, it would be better if there were more explanations, discussions, and comparisons for the results.
Apart from that, I still tend to think the proposed AL approach is a well-known idea in active learning and applying it to transductive learning is a bit natural, so the contribution of this part is not significant enough for a top conference. And I agree with reviewer aHJz that "few-shot fine-tuning of large neural networks" is a bit oversold.
Technical Quality: 3
Clarity: 2
Questions for Authors: In addition to issues in the Weaknesses section, a few questions:
- For Section 4,
- is there any existing Bayesian active learning algorithms that can be used as baselines in this setup?
- it is mentioned in line 213-214 that "we will focus here on .. $A \cap S = \emptyset$", but in Figure 3 and line 243 it is comparing samples from $P_A$. How can the algorithm sample from $P_A$ if the sample space S does not intersect with A? What is the exact setup of the experiments?
- How do you choose hyperparameters for the baselines?
- For section 5, I'm not familiar with this task, but in my opinion, it does not provide enough details and explanations. For example
- What exactly is your sample strategy? In particular, how do you ensure the chosen samples are safe? In line 276 it says "a natural choice for the target space of safe BO is A_n", but it looks like A_n can be unsafe.
- In line 317-318, "VTL minimizes marginal variances, .. which are decisive for expanding the safe set". Why?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper! Please find our detailed responses to the concerns raised.
Please let us know if you have any further concerns or suggestions.
## Concerns
> The main idea of the proposed query strategy is to minimize posterior uncertainty. This is a well-known method in active learning, and it is an especially straightforward choice in a Bayesian setting.
Thank you for pointing this out.
We do not claim to be the first to propose ITL or VTL (line 82 and lines 333-342).
However, we are the first to show convergence guarantees.
As we show in Section 3.3, on transductive instances (where $\mathcal{A} \not\subseteq \mathcal{S}$), our proposed methods differ substantially from the widely recognized inductive methods (e.g., uncertainty sampling) that have been derived based on the same idea of "minimizing posterior uncertainty".
In our view, the fact that the proposed sampling strategy is a natural choice can be seen as an advantage over the baselines.
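To make the contrast concrete, below is a small toy sketch (our own illustrative example with an arbitrary RBF kernel and arbitrarily chosen points, not code from the paper): inductive uncertainty sampling scores candidates only by their own uncertainty within $\mathcal{S}$, while a transductive rule scores them by how much they reduce posterior uncertainty over the targets $\mathcal{A}$.

```python
import numpy as np

def rbf(X, Y, ls=0.5):
    # Squared-exponential kernel matrix between two 1-d point sets.
    return np.exp(-0.5 * ((X[:, None] - Y[None, :]) / ls) ** 2)

def post_var(A, S_obs, noise=0.1):
    # GP posterior variance at points A after observing points S_obs (unit prior variance).
    if len(S_obs) == 0:
        return np.ones(len(A))
    K = rbf(S_obs, S_obs) + noise * np.eye(len(S_obs))
    k = rbf(A, S_obs)
    return 1.0 - np.einsum('ij,jk,ik->i', k, np.linalg.inv(K), k)

S = np.linspace(0.0, 1.0, 21)  # sample space: points we may query
A = np.array([1.5, 1.6])       # prediction targets outside S

# Inductive uncertainty sampling: most uncertain point *in S*, ignoring A.
inductive = S[np.argmax(post_var(S, np.array([])))]

# Transductive rule ("minimize posterior uncertainty over A"): query the
# point in S whose observation most reduces total posterior variance at A.
scores = [post_var(A, np.array([x])).sum() for x in S]
transductive = S[int(np.argmin(scores))]

print(inductive, transductive)  # 0.0 1.0
```

On this toy instance the transductive rule queries the admissible point closest to the targets, whereas uncertainty sampling (all prior variances being equal) is indifferent to where $\mathcal{A}$ lies.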
> For the theoretical results, I'm not very familiar with the Gaussian process literature, but I'm not convinced that the theoretical guarantees in Sections 3.1 and 3.2 are nontrivial. It would help readers appreciate the results if the authors could clarify the challenges of obtaining such results, the tightness of the bounds, how strong they are compared to other baseline sampling strategies, or any new insights from such bounds.
None of the prior works studying ITL and VTL derive convergence guarantees related to Theorems 3.2 and 3.3.
Similar convergence guarantees were only known for the special case where $\mathcal{A} = \mathcal{S}$.
Core to our analysis are Theorems C.12 and C.13, where we use the approximate monotonicity of the marginal gains and the chain rule of information gain to derive a convergence rate of the marginal gains.
The main technical challenge and novelty of our generalization bounds lies in the definition and proof of existence of "approximate Markov boundaries" (see Definition C.15 and Section C.6.1).
An interesting insight from our bounds is that they shrink with $\gamma_{\mathcal{A}, \mathcal{S}}(n)$ (as opposed to $\gamma_n$ of inductive approaches).
The information capacity $\gamma_{\mathcal{A}, \mathcal{S}}(n)$ of the target space can be *significantly* smaller than the total information capacity across a large domain $\mathcal{X}$.
We observe this empirically in our experiments, where we show that inductive active learning methods are vastly less efficient than ITL/VTL when prediction targets are known.
We will expand the discussion of the contributions and implications of our new theoretical results, given the additional available page in the camera-ready version.
## Questions
1. The most common (inductive) active learning baseline is "uncertainty sampling".
We compare against this baseline in Section J.5 (Figure 11) alongside many additional baselines.
2. $\mathcal{P}\_{\mathcal{A}}$ is the data-distribution limited to the target labels (see footnote 5 on page 6).
We say that a sample is in the support of $\mathcal{P}\_{\mathcal{A}}$ if its label is one of the target labels (e.g., in $\\\{0, \dots, 9\\\}$ in case of CIFAR-100).
The sample space $\mathcal{S}$ includes images with such labels.
However, the images in $\mathcal{A}$ are not in $\mathcal{S}$ (i.e., their labels are not known).
Methods that sample many points from the support of $\mathcal{P}\_{\mathcal{A}}$ are able to retrieve the relevant data from $\mathcal{S}$.
We discuss the relevance of this particular setting and also evaluate the alternative setting $\mathcal{A} \subseteq \mathcal{S}$ in Appendix I.
3. We discuss all baselines in detail and specify their hyperparameters where applicable in Appendix J.5.
We use the authors' implementations where applicable (e.g., for TypiClust and ProbCover).
We have included all of our code required to reproduce the results in the supplementary material and intend to make it public upon publication.
4. We use the proposed methods for transductive active learning (ITL and VTL) with the specified target space $\mathcal{A}_n$ and sample space (i.e., pessimistic safe set) $\mathcal{S}_n$ (lines 277-278).
You are absolutely right in pointing out that $\mathcal{A}_n$ can be unsafe.
However, by definition, all of the sample space is safe with high probability.
Extrapolating *outside* of $\mathcal{S}_n$ to those targets in $\mathcal{A}_n$ that are unsafe is precisely what we employ in our proof of Theorem 5.1.
In this way, and unlike prior work, ITL \& VTL can address expansion and exploration simultaneously.
5. VTL quantifies "uncertainty" by the marginal variances of predictions. As such, VTL greedily minimizes the marginal variances.
The pessimistic safe set is defined as $\mathcal{S}_n = \\\{x \mid l_n(x) \geq 0\\\}$ where the lower confidence bound is roughly the mean minus the standard deviation (see eq. (25) in Section C.8).
That is, minimizing the variance at $x$ is the most effective way of determining whether $x$ can be added to $\mathcal{S}_n$ or not.
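As a hypothetical numerical illustration of this point (kernel, points, and values all chosen arbitrarily by us; this is not the paper's construction), observing a sample close to a candidate $x$ shrinks $\sigma(x)$, which can lift the lower confidence bound $l(x) = \mu(x) - \beta\sigma(x)$ above zero and admit $x$ into the pessimistic safe set:

```python
import numpy as np

def rbf(X, Y, ls=0.4):
    return np.exp(-0.5 * ((X[:, None] - Y[None, :]) / ls) ** 2)

def posterior(x, X_obs, y_obs, noise=0.01):
    # GP posterior mean and standard deviation at a single point x (unit prior variance).
    K = rbf(X_obs, X_obs) + noise * np.eye(len(X_obs))
    k = rbf(np.array([x]), X_obs)
    mu = (k @ np.linalg.solve(K, y_obs))[0]
    var = 1.0 - (k @ np.linalg.solve(K, k.T))[0, 0]
    return mu, np.sqrt(var)

beta = 2.0
x_cand = 0.5  # candidate point near the boundary of the current safe set

mu1, s1 = posterior(x_cand, np.array([0.0]), np.array([0.6]))             # one distant safe observation
mu2, s2 = posterior(x_cand, np.array([0.0, 0.45]), np.array([0.6, 0.6]))  # plus one close to x_cand

# l(x) = mu(x) - beta * sigma(x): x_cand enters the safe set only once its variance shrinks.
print(mu1 - beta * s1 >= 0, mu2 - beta * s2 >= 0)  # False True
```

The mean at $x$ barely changes between the two cases; it is the shrinking standard deviation that moves $l(x)$ above zero, which is why minimizing marginal variances is decisive for expanding the safe set.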
>I'm not familiar with this task, but in my opinion, it does not provide enough details and explanations.
Given the limited space in the main body of the paper, we had to move some of the background and details to the appendix.
We include an extensive discussion of the relevant background in Appendix K.2.
---
We hope to have addressed the reviewer's concern regarding the novelty of our paper, which
1. introduces a new problem setting generalizing classical inductive active learning,
2. derives novel convergence guarantees for a natural family of algorithms, and
3. shows the superior performance of these algorithms on two new applications of active learning (unlocked by the generalized problem setting) as compared to the state-of-the-art.
We would appreciate it if the reviewer would increase their score for "contribution" and their overall score for our paper. We would be happy to answer any remaining questions or concerns.
---
Rebuttal 2:
Title: Discussion Period
Comment: Dear Reviewer,
We hope this message finds you well. We have noticed that our detailed rebuttal, addressing each of the concerns raised in your review, has not yet received any feedback. We understand the demanding nature of the review process and appreciate the time and effort invested in evaluating our work.
We kindly urge you to consider our responses to your questions, as we believe they adequately address your concerns. With some days left, we hope we can still have a fruitful rebuttal period.
Thank you, Authors
---
Rebuttal Comment 2.1:
Comment: Since one of the main claimed contributions is the theory part, I would like to take a deeper look. I'm not very familiar with the theory of Gaussian processes or Bayesian learning, so it would take some time.
Questions and comments so far:
- Given there are a lot of notations involved, it would be clearer if you could provide a list of notations for reference.
- It is still unclear to me how strong/tight your results are. It would be easier if you could refer to some similar upper or lower bounds.
- For Thm 3.2, is C a universal constant, or does it depend on other things? Specifically, how does it scale with the size of A, S, or X, and what happens if they're infinite (I saw your comment to Reviewer WV1c13, but could you give some quantitative characterization)?
- For thm 3.3
- how realistic are the assumptions (bounded norm for f*, sublinear \gamma_n)?
- It looks like $\beta_n$ increases as $n$ increases; won't this make the bound very weak?
- Why isn't there a term that characterizes how close f* can be approximated by the model class (Gaussian process)?
- For Lemma C.16 and the comments thereafter, how can you conclude that $b_\epsilon$ is a universal constant? Doesn't it depend on all the other terms in inequality (15)?
---
Reply to Comment 2.1.1:
Comment: We thank the reviewer for taking a close look at our novel convergence guarantees.
> It is still unclear to me how strong/tight your results are. It would be easier if you could refer to some similar upper or lower bounds.
One can show that $\sigma_n^2(\boldsymbol{x}) \geq \frac{1}{n+1} \sigma_0^2(\boldsymbol{x}) \in \Omega(1/n)$, which is achieved if the point $\boldsymbol{x}$ is sampled repeatedly (and no generalization to other points is required). Please see eq. (7.26) of [1]. The derivation is along the lines of [2, Section 2].
In particular, our eq. (2) is tight up to $\gamma_n$.
Our results in the agnostic kernelized setting (Theorem 3.3) match regret bounds of kernelized bandit algorithms in the same setting.
For example, [3, Theorem 4] proves for the celebrated and widely used GP-UCB algorithm that $R_n / n \leq \widetilde{O}(\frac{\gamma_n + \|\|{f^*}\|\|_k \sqrt{\gamma_n}}{\sqrt{n}})$, where $R_n$ is the cumulative regret up to round $n$ and $\widetilde{O}(\cdot)$ subsumes log-factors.
A similar result can be found in [4, Theorem 3] (ICML 2020 Test of Time Award winner).
Our results match the rates of results in this widely cited and often applied line of work on kernelized bandits.
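For intuition, the $\Omega(1/n)$ lower bound above can be checked numerically. In the following sketch (our own sanity check, assuming unit prior variance and unit noise variance), repeatedly observing the same point $\boldsymbol{x}$ attains $\sigma_n^2(\boldsymbol{x}) = \sigma_0^2(\boldsymbol{x}) / (n+1)$ exactly:

```python
import numpy as np

sigma0_sq, noise_sq = 1.0, 1.0  # assumed unit prior and noise variance
for n in [1, 5, 50]:
    K = sigma0_sq * np.ones((n, n)) + noise_sq * np.eye(n)  # n noisy observations of the same x
    k = sigma0_sq * np.ones(n)
    post_var = sigma0_sq - k @ np.linalg.solve(K, k)
    assert abs(post_var - sigma0_sq / (n + 1)) < 1e-9  # sigma_n^2 = sigma_0^2 / (n + 1)
```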
> For Thm 3.2, is $C$ a universal constant, or it depends on other things? Specifically, how does it scale with the size of $\mathcal{A}$, $\mathcal{S}$, or $\mathcal{X}$, and what happens if they're infinite (I saw your comment to Reviewer WV1c13, but could you give some quantitative characterization)?
You can find the constant $C$ defined precisely in the proof of Theorem 3.2 (cf. lines 1227 and 1229). $C$ is independent of $n$.
For a finite discretization of a continuous domain, our results hold as stated in the main body of the paper.
We generalize our analysis to continuous domains when the RKHS is finite-dimensional in Section C.6.4.
With this generalization, intuitively, the dimensionality of the RKHS assumes the role of the size of $\mathcal{S}$ from the finite setting.
> For thm 3.3, how realistic are the assumptions (bounded norm for f*, sublinear \gamma_n)?
$f^*$ has bounded norm iff it is an element of the corresponding RKHS. We remark that for standard kernels (such as the RBF/squared exponential or Matérn kernels), this space of functions is extremely rich; e.g., for universal kernels such as the RBF kernel, it is dense in the continuous functions defined over $\mathcal{X}$.
Our dependence on $\|\|f^*\|\|$ and $\gamma_n$ is standard compared to the large literature on kernelized bandits [e.g., 3], as discussed above.
We provide the magnitudes of $\gamma_n$ for common kernels in Table 3 (last page of Appendix). $\gamma_n$ is logarithmic in $n$ for many common kernels. We further remark that $\gamma_n$ is a very standard measure of "information capacity" / "function class complexity" in the literature on kernelized bandits. [3] is merely one of the many works that fundamentally rely on this notion.
> It looks like $\beta_n$ increases as $n$ increases; won't this make the bound very weak?
Yes, $\beta_n(\delta)$ is increasing with $n$, however, only very slowly at the rate $\sqrt{\gamma_n}$.
This dependence on $\beta_n$ is standard in the literature on kernelized bandits [e.g., 3, Theorem 2].
> Why isn't there a term that characterizes how close f* can be approximated by the model class (Gaussian process)?
The term $\|\| f^* \|\|$ measures "how easy" it is to find $f^*$ within the model class (the RKHS) with $\|\| f^* \|\| = \infty$ if $f^*$ is not at all in the model class.
The term $\gamma_n$ measures the "size" / "capacity" of the model class.
> For Lemma C.16 and the comments thereafter, how can you conclude that $b_\epsilon$ is a universal constant? Doesn't it depend on all the other terms in inequality (15)?
We mean "universal" with respect to $n$ and $\boldsymbol{x}$. We are very sorry for the confusion and have updated this line to point this out explicitly. Thank you for highlighting this!
---
In summary, we would like to mention:
1. Our results match rates of prior work from the widely recognized and large literature on kernelized bandits. We use the same measures of size and complexity of model class as most works from kernelized bandits.
2. Our dependence on $\gamma_{\mathcal{A}, \mathcal{S}}(n)$ can be dramatically better than $\gamma_n$, since this only measures the information capacity of prediction targets $\mathcal{A}$ as opposed to of the entire domain $\mathcal{X}$.
We hope to have adequately addressed the reviewer's questions.
We are happy to answer any other questions that may help with the interpretation of our results!
---
1. Williams and Rasmussen. Gaussian processes for machine learning, volume 2. MIT press Cambridge, MA, 2006.
2. Opper and Vivarelli. General bounds on bayes errors for regression with gaussian processes. NeurIPS, 11, 1998.
3. Chowdhury and Gopalan. On kernelized multi-armed bandits. ICML, 2017.
4. Srinivas et al. Gaussian process optimization in the bandit setting: No regret and experimental design. ICML, 2010.
---
Rebuttal 3:
Comment: We thank the reviewer for their continued look into our new theoretical results.
We will add a table of notation to the camera-ready version, as well as a brief discussion of related results from other problem settings. Thank you for these suggestions, which improve our work in this regard.
> Is this notion of "constant" $C$ also standard in literature? I'm a bit concerned about it since it involves many factors which do not look negligible to me. In particular, it looks not very strong to me since it scales linearly in $|\mathcal{S}|$, and it involves maximum initial variance for every example in A. And perhaps it can be clearer if you can explain exactly what it is (up to universal constants) in some specific examples.
The dependence on the initial variance is standard in kernelized analysis of bandits. Usually it is assumed that the kernel is such that $k(\boldsymbol{x},\boldsymbol{x}) \leq 1$ which alleviates this.
In our analysis, the dependence on $|\mathcal{S}|$ is required to achieve convergence to the irreducible variance (kindly see the discussion below).
We would be happy to make this dependence evident in the main theorem statement if the reviewer thinks this is important.
A linear dependence on the size of $|\mathcal{S}|$ is standard in works on safe exploration (i.e., where "extrapolation" is required due to restrictions on the domain) [see, e.g., 1].
We further thank the reviewer for the suggestion, and will update the appendix to clarify whenever we mention "universal constant" what this term is a constant with respect to.
> I would expect there to be a smoother transition between (1) and (2) as $x$ moves into $\mathcal{S}$. Is there any explanation why it jumps from one rate to the other directly?
The difference in rates is indeed interesting.
In our analysis this stems from using a "nested" learning problem that learns the function over all of $\mathcal{S}$.
This appears to be required, in our view, to achieve the *very strong* convergence to the *irreducible* variance $\eta_{\mathcal{S}}^2(\boldsymbol{x})$, since there is a slight mismatch between the original learning problem w.r.t. $\mathcal{A}$ and a different learning problem w.r.t. $\mathcal{S}$.
It might be possible to obtain tighter results in the special case $\mathcal{S} \subseteq \mathcal{A}$ (i.e., when the two learning problems are not misaligned).
We are inclined to leave this study as an interesting direction for future work.
> Tightness of Thm 3.3: For results that you cited (e.g. [3, Theorem 4]), the regret diminishes as $n\to\infty$, but for your bound, the irreducible part gets even larger. Can you comment on this?
We would like to point out that in the referenced works on bandits no "extrapolation" is required since *all* points can be observed directly. These works therefore do not have a notion of "irreducible error".
However, we agree that the slight increase of this error in the agnostic setting is likely an artifact of performing the analysis in a Bayesian setting; within this Bayesian setting, the error is tight.
> Proof: For the inequality following line 1225, how is the $\bar{\kappa}_n$ term bounded?
It follows from Assumption 3.1 (submodularity) that $\bar{\kappa}_n \geq 1$. We have updated the proof to make this more explicit.
---
We would further like to highlight that due to the large set of instances of *transductive* active learning, our results constitute new results in multiple settings:
1. The *extrapolation* to points outside $\mathcal{S}$, which unlocks the application to safe BO, leading to a new state-of-the-art algorithm.
2. *Directed* learning where $\mathcal{A} \subseteq \mathcal{S}$. Here the analysis simplifies and we prove the rate $\gamma_{\mathcal{A},\mathcal{S}}(n)/n$.
Finally, we would like to mention that beyond our theoretical contributions, the two new applications of transductive active learning, safe BO and the discussed fine-tuning setting, are also key contributions of our work.
In both of these settings our findings show that our proposed approaches to transductive active learning substantially improve upon the current state of the art.
We hope that upon reevaluation these substantial contributions are acknowledged by the reviewer by increasing their score.
---
1. Sui, Yanan, et al. "Safe exploration for optimization with Gaussian processes." International conference on machine learning. PMLR, 2015.
---
Rebuttal Comment 3.1:
Comment: Thanks again for the detailed explanation. I will adjust my review.
---
Reply to Comment 3.1.1:
Comment: We sincerely appreciate your review of and insightful comments on our manuscript. Thank you. | Summary: The paper investigates the generalization of active learning to scenarios with specific prediction targets and limited information due to constrained sample spaces, offering a flexible framework adaptable to various domains such as recommender systems, molecular design, and robotics. It introduces novel generalization bounds of independent interest for active learning. The practical performance of the model relies on its ability to capture correlations between points (learning the latent structure) and accurately estimate uncertainty. The study demonstrates that sampling relevant and diverse points significantly improves performance across various applications, surpassing the state-of-the-art methods.
Strengths: The paper is well written and clearly structured with novel contributions in generalization bounds. It provides sufficient theoretical analysis for specific acquisition functions in active learning with experimental results on some datasets.
Weaknesses: The authors use MNIST and CIFAR-100 as experiments on active few-shot fine-tuning. First, image classification is a well-studied topic and lacks novelty. Image classification uses convolutional networks as classifiers, while current LLMs use auto-regressive models; therefore, most of the theoretical assumptions and results might not extend. Even though the paper is well-written, the claim that the results should extend to prompt optimization is groundless. It would be more interesting to study prompt tuning/efficient data selection for fine-tuning LLMs.
Second, the baselines for active learning are not state-of-the-art. For instance, there are more state-of-the-art baselines than BADGE, TypiClust, and ProbCover for AL in image classification. Moreover, there has been work on using influence functions to select data for fine-tuning LLMs or image classifiers that is worth comparing against.
Technical Quality: 3
Clarity: 4
Questions for Authors: N/A
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Title: Ask for Clarification
Comment: We thank the reviewer for their review. To address the points raised in our rebuttal effectively, we kindly ask for some clarification and additional references.
> For instance, there are more state-of-the-art baselines than BADGE, TypiClust, and ProbCover for AL in image classification.
We would greatly appreciate it if the reviewer could point us to those state-of-the-art baselines. We would be more than happy to compare against additional approaches. To the best of our knowledge, BADGE, TypiClust, and ProbCover are some of the most widely recognized and best-performing AL methods for neural network training to date.
> Moreover, there has been work in using influence function to select data to fine tune LLMs or image classifications that worth comparing.
As we are not fully familiar with this specific line of work, we would be grateful if the reviewer could provide references that they believe are relevant for comparison.
Thank you in advance for your assistance in providing these clarifications. We believe that this will facilitate a more productive rebuttal process.
---
Rebuttal Comment 1.1:
Comment: 1. There are some active learning papers published in vision venues.
[1] Li, Jingyao, et al. "Bal: Balancing diversity and novelty for active learning." IEEE Transactions on Pattern Analysis and Machine Intelligence (2023).
[2] Parvaneh, Amin, et al. "Active learning by feature mixing." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.
2. Below are references for influence function to finetune language models or image classification tasks.
[1] Kwon, Yongchan, et al. "Datainf: Efficiently estimating data influence in lora-tuned llms and diffusion models." arXiv preprint arXiv:2310.00902 (2023).
[2] Liu, Zhuoming, et al. "Influence selection for active learning." Proceedings of the IEEE/CVF international conference on computer vision. 2021.
[3] Zhao, Zhuokai, Yibo Jiang, and Yuxin Chen. "Direct Acquisition Optimization for Low-Budget Active Learning." arXiv preprint arXiv:2402.06045 (2024).
[4] Xia, Mengzhou, et al. "Less: Selecting influential data for targeted instruction tuning." arXiv preprint arXiv:2402.04333 (2024).
---
Rebuttal 2:
Rebuttal: Thank you for reviewing our paper! We also very much appreciate the quick response to our follow-up question. You can find below our response to some of your questions. Please let us know if you have any further concerns or suggestions.
> The paper is well written and clearly structured with novel contributions in generalization bounds. It provides sufficient theoretical analysis for specific acquisition functions in active learning with experimental results on some datasets.
Thank you!
## Concerns
> The authors use MNIST and CIFAR-100 as experiments on active few-shot fine-tuning. First, image classification is a well-studied topic and lacks novelty.
We chose image classification as a testbed *precisely because* it is widely studied.
To the best of our knowledge, MNIST and CIFAR are common tasks for evaluation of active learning [1-3] as well as other works on fine-tuning [e.g., 4].
Given that baselines and benchmarks on active learning have been using image classification as testbeds, we intend with this choice to compete against the strongest possible baselines.
This being said, we strongly agree that applying VTL to other domains (for example, fine-tuning of LLMs) is a very exciting research direction, which we intend to pursue in future work.
> Image classification uses convolutional networks as classifiers, while current LLMs use auto-regressive models; therefore, most of the theoretical assumptions and results might not extend. Even though the paper is well-written, the claim that the results should extend to prompt optimization is groundless. It would be more interesting to study prompt tuning/efficient data selection for fine-tuning LLMs.
We agree that the results might not extend directly to LLMs, and that this is an exciting direction for further research.
Can you point us to the claim that "our results improve prompt optimization"?
We did not intend to make this claim.
In lines 251-256, we address *model optimization* (i.e., "fine-tuning" model weights, like we do here with vision experiments) and suggest that our work opens up this direction of future work.
### Related work on Inductive Active Learning
We would like to emphasize that Section 4 specifically addresses the (transductive) transfer setting, where we fine-tune to a specific set of (out-of-distribution) examples. This is different from the canonical application of (inductive) active learning to pre-training or fine-tuning without distribution mismatch.
We compare against around 10 of the most widely used inductive active learning strategies.
You can find the comparison to all baselines in Section J.5.
Our experiments show that we outperform *all inductive baselines significantly*.
We believe that this constitutes a comprehensive evaluation of approaches to inductive active learning, and shows the advantage of transductive active learning.
We do not expect any alternative inductive methods to perform dramatically better since they do not take the distribution shift into account.
### Related Work on Influence Functions
Thank you for pointing out this line of work!
In particular, [5] which was published at ICML 2024, is relevant to the transfer setting we also consider in Section 4.
The following is a brief comparison between our work and the influence functions line of work:
- We remark that the selection strategy of [5] is the same as the "CosineSimilarity" baseline considered in our experiments when loss gradient embeddings are used (cf. Section J.2) and SGD is used as the optimizer. We show in our experiments that VTL substantially outperforms CosineSimilarity when the batch size is larger than one.
- A major limitation of the line of work on influence functions, including [5], is that interactions between samples are not taken into account. That is, the selected samples are not diverse and may contain redundant information. We suspect that this is the main reason why VTL substantially outperforms these methods. [5] discuss this in their limitations.
VTL offers an efficient and effective approach to taking interactions between samples into account: synthesizing approaches that retrieve *relevant* examples (e.g., CosineSimilarity) and approaches that retrieve *diverse* examples (e.g., inductive active learning).
- A drawback that [5] points out in Appendix I is that a smaller loss does not always lead to a higher accuracy. Perhaps the loss is therefore not the right proxy goal. In contrast, VTL directly minimizes the bound on the approximation error from Theorem 3.3.
- VTL also takes into account points with a high negative cosine similarity; something that [5] suggests in Appendix K.2.
We will include this comparison to the influence functions line of work in our section "Related Work".
---
We hope to have addressed the concerns raised by the reviewer.
Given the recognized contributions that unlock new applications of active learning, and that the application in Section 4 offers a natural and effective way to select both *relevant and diverse* points, we would appreciate it if the reviewer would increase their score for our paper. We would be happy to answer any remaining questions or concerns.
---
References:
1. Gal, Y., Islam, R., and Ghahramani, Z. Deep Bayesian active learning with image data. In ICML, 2017.
2. Zhang, Jifan, et al. LabelBench: A comprehensive framework for benchmarking adaptive label-efficient learning. 2023.
3. Lüth, Carsten, et al. Navigating the pitfalls of active learning evaluation: A systematic framework for meaningful performance assessment. In NeurIPS, 2023.
4. Wei, A., Hu, W., and Steinhardt, J. More than a toy: Random matrix models predict how real-world neural representations generalize. In ICML, 2022.
5. Xia, Mengzhou, et al. "Less: Selecting influential data for targeted instruction tuning." In ICML, 2024.
---
Rebuttal Comment 2.1:
Comment: Hi, I have read all the rebuttals. I do appreciate the authors' efforts in the literature review and agree that the references the authors list are more comprehensive. However, I think fine-tuning model weights for an image classification task would not serve as a novel contribution, and the learning paradigm for fine-tuning deep neural networks is different from that for fine-tuning LLMs, as LLMs are generative models (the authors cite [1] and argue that this paper could be extended in future work to fine-tuning the model for each prompt). I think the contribution below is not novel and a bit vague.
"We apply the transductive active learning framework to batch-wise active few-shot fine tuning of large neural networks and to safe Bayesian optimization. We empirically show, in both cases, that ITL and VTL outperform the state-of-the-art."
[1] Hardt, M. and Sun, Y. Test-time training on nearest neighbors for large language models. ICLR, 2024.
I would lean towards keeping my score.
---
Rebuttal 3:
Title: Discussion Period
Comment: Dear Reviewer,
We hope this message finds you well. We have noticed that our detailed rebuttal, addressing each of the concerns raised in your review, has not yet received any feedback. We understand the demanding nature of the review process and appreciate the time and effort invested in evaluating our work.
We kindly urge you to consider our responses to your questions, as we believe they adequately address your concerns. With some days left, we hope we can still have a fruitful rebuttal period.
Thank you, Authors
---
Rebuttal 4:
Title: Response to reviewer’s comment
Comment: Dear reviewer,
Thanks for your active engagement in the rebuttal period. We have a follow up question on your comment:
“However, I think fine-tuning model weights for image classification task would not serve as a novel contribution.”
First, while we might be of a different opinion, we respect your view regarding the contribution of our fine-tuning experiments. However, we would like to mention that there are other very important contributions of our paper that should also be considered in the evaluation (below is a non-exhaustive list):
1. We formulate the problem of TAL, which results in the natural decision rules/acquisition functions ITL and VTL.
2. We are the first to give uniform convergence rates for the posterior uncertainty in regions beyond the sample space. This implies a new generalization bound for RKHS functions. Note that our assumptions are very general and common in the active learning literature.
3. We evaluate our method on two distinct domains, (i) fine-tuning of large neural networks and (ii) safe BO. In both cases, our method outperforms the baselines.
We hope this clarifies our contributions further. Moreover, even though the reviewer thinks our contribution in the fine-tuning case is not novel, our work has other significant contributions. Given these contributions, we kindly ask the reviewer to reconsider their evaluation of our work. We look forward to the reviewer's response.
---
Rebuttal 5:
Comment: Dear reviewer, we would like to thank you for your efforts in reviewing our submission and pointing us to the related work on influence functions. We will include the above comparison to this line of work in our updated version.
Regarding our citation of [1], our intention was to mention that in future work, the methods of this submission could potentially be extended to the setting described in [1].
We did *not* intend to suggest that this submission is an extension to [1].
We agree with the reviewer that our methods cannot be applied trivially (due to the different learning paradigms); however, selecting relevant and diverse data also seems to be important for the fine-tuning of LLMs (as argued in the limitations of [2]).
We will clarify this in the updated version, and thank the reviewer for highlighting this.
With only one day left, we would appreciate it if the reviewer could share their thoughts about our latest comment and potentially raise their score.
[1] Hardt, M. and Sun, Y. Test-time training on nearest neighbors for large language models. ICLR, 2024.
[2] Xia, Mengzhou, et al. "Less: Selecting influential data for targeted instruction tuning." In ICML, 2024. | Summary: This paper introduces and considers approaches for the generalized (transductive) active learning problem, where the space of prediction targets and samples are not necessarily the same. The authors propose methods ITL and VTL to select samples in order to minimize the posterior uncertainty about the target function within the target space $A$. They prove favorable properties of these methods and compare their performance to other active learning techniques in empirical settings, including batch active learning and safe Bayesian optimization.
Strengths: - The paper is well-written and organized, with ample thorough discussions.
- The problem of transductive active learning is novel to the best of my knowledge and may be of interest to the broader ML community.
- Compelling empirical results are provided that support the theoretical results of the paper.
Weaknesses: - The paper is quite dense and long, with a total of 68 pages when appendices are considered.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How does the computational cost of the methods fare in practice to other methods?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper! Please find our response to your question below.
Please let us know if you have any further concerns or suggestions.
> - The paper is well-written and organized, with ample thorough discussions.
> - The problem of transductive active learning is novel to the best of my knowledge and may be of interest to the broader ML community.
> - Compelling empirical results are provided that support the theoretical results of the paper.
Thank you!
> How does the computational cost of the methods fare in practice to other methods?
ITL scales cubically in the size of the target space $\mathcal{A}$ since it takes into account mutual interaction between prediction targets.
In contrast, VTL scales linearly in the size of $\mathcal{A}$.
We included a more detailed analysis of the computational complexity in Appendix G.
In all our experiments, the computational cost of ITL \& VTL is comparable to that of the fastest baselines, since for such computationally light methods the cost tends to be dominated by computing the kernel / the embeddings.
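To illustrate where the different scalings come from, here is a toy GP sketch (our own simplified illustration, not the actual implementation analyzed in Appendix G; the names `itl_score`/`vtl_score` and the RBF kernel are assumptions). ITL's mutual-information objective needs log-determinants of the joint $|\mathcal{A}| \times |\mathcal{A}|$ posterior covariance (cubic in $|\mathcal{A}|$), while VTL only needs per-target variance reductions, each computable from a single cross-covariance with the candidate point (linear in $|\mathcal{A}|$):

```python
import numpy as np

def rbf(X, Y, ls=1.0):
    # Squared-exponential kernel between row-wise point sets X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def posterior_cov(P, X, noise=0.1):
    # GP posterior covariance of f at points P after observing noisy labels at X.
    K_PP = rbf(P, P)
    if len(X) == 0:
        return K_PP  # no observations yet: prior covariance
    K_PX = rbf(P, X)
    K_XX = rbf(X, X) + noise**2 * np.eye(len(X))
    return K_PP - K_PX @ np.linalg.solve(K_XX, K_PX.T)

def itl_score(x, A, X_obs, noise=0.1):
    # I(f_A; y_x): entropy reduction of the joint posterior over targets A,
    # via log-determinants of |A| x |A| covariances -> cubic in |A|.
    jitter = 1e-8 * np.eye(len(A))
    _, ld_before = np.linalg.slogdet(posterior_cov(A, X_obs, noise) + jitter)
    X_new = np.vstack([X_obs, x[None]])
    _, ld_after = np.linalg.slogdet(posterior_cov(A, X_new, noise) + jitter)
    return 0.5 * (ld_before - ld_after)

def vtl_score(x, A, X_obs, noise=0.1):
    # Total reduction of marginal posterior variances over A. Each target only
    # needs its posterior cross-covariance with x -> linear in |A|.
    C = posterior_cov(np.vstack([A, x[None]]), X_obs, noise)
    cov_ax, var_x = C[:-1, -1], C[-1, -1]
    return float((cov_ax**2 / (var_x + noise**2)).sum())
```

Under both toy rules, a candidate close to the target region scores higher than one far away, matching the transductive intuition that sampling should be directed toward $\mathcal{A}$.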
---
We are glad to see the recognition that transductive active learning can be of interest to the broader ML community, given that it unlocks new applications of active learning.
Having addressed the remaining question raised by the reviewer, and given the contributions of this paper, we hope that the reviewer considers increasing their score for our paper. We would be happy to answer any remaining questions or concerns.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I read the reviews and the responses. I lean towards keeping my score.
---
Reply to Comment 1.1.1:
Comment: We would like to thank you for your efforts in reviewing our work and for recognizing its relevance to the broader ML community. | Summary: This paper introduces transductive active learning based on the uncertainty of GP when the labeling space and target space can be different. They have assumptions such as submodularity of information gain and information capacity’s sub-linearity in GP, resulting in a bound for variance of GP’s posterior with the proposed labeling algorithm. Under a specific setup, this can go to 0, and it cannot avoid the irreducible error when the deviation between labeling space and target space is larger. The various properties of the proposed algorithm(ITL and VTL) are examined by simple examples showing labeling in the target space as much as possible.
The proposed algorithm's applications are interesting, including few-shot learning and safe BO (rarely considered in conventional active learning). These parts show the algorithm’s flexibility and applicability well. The results are promising.
Strengths: Among many advantages, the theoretical aspect is better because it can provide the criteria for the use of kernels that are strongly connected to the tuning of the algorithms. Also, the applications are impressive and enlarging the area of active learning. The use of GP is solid because it can provide a solid mathematical background and reliable uncertainty in many cases, compared to variational Bayes approaches.
Weaknesses: This paper gives too little attention to conventional active learning. Although the novelty of the new area is essential in this paper, traditional active learning tasks are still popular. In this view, this paper does not provide strong evidence for the superiority compared to conventional active learning. At first glance, the proposed active learning algorithm seems too specific. In theoretical aspects, it is unsatisfactory to consider only the spaces. More clarification of theoretical aspects in the paper can be helpful to strengthen the paper. $\gamma_n$’s sub-linearity can be clarified as a mathematical formula. In GP, the kernel is essential, and the experiments can reveal what kernel is best or robust. The discussion about kernels is too short.
Technical Quality: 3
Clarity: 3
Questions for Authors: Q1: What’s the difference between $\gamma_n$ and $\gamma_{\mathcal{A},\mathcal{S}}(n).$ These notations can be improved.
Q2: If only the target space is (uncountable) separable space, can the theorem have any change?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Not well-discussed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and detailed comments! You can find below our response to some of your questions. Please let us know if you have any further concerns or suggestions.
> Among many advantages, the theoretical aspect is better because it can provide the criteria for the use of kernels that are strongly connected to the tuning of the algorithms. Also, the applications are impressive and enlarging the area of active learning.
Thank you!
## Concerns
> This paper gives too little attention to conventional active learning. Although the novelty of the new area is essential in this paper, traditional active learning tasks are still popular. In this view, this paper does not provide strong evidence for the superiority compared to conventional active learning. At first glance, the proposed active learning algorithm seems too specific.
Thank you for pointing this out!
Our paper introduces *transductive* active learning (TAL), where instances are specified by a target space $\mathcal{A}$ and a sample space $\mathcal{S}$.
The "classical" (*inductive*) active learning (we will call it IAL in the following) that is studied in most prior works on active learning is the special case of TAL where $\mathcal{A} = \mathcal{S}$.
That is, intuitively, in IAL one tries to learn "everything" which amounts to extracting as much information as possible.
In contrast, in TAL, learning *can* be directed to a particular region of the domain.
This is also why the special case of IAL seems most useful for pre-training (and it has been studied extensively in this setting), while TAL unlocks various applications of active learning for fine-tuning.
We allude to this in lines 84-86 where we show that in the classical IAL setting, ITL is equivalent to CoreSet which is a standard baseline.
So if one were to apply the methods proposed here in the IAL setting, one would recover the existing results.
This goes to show that our proposed algorithm is in fact *more general* than prior approaches since it does not only address IAL but also the more general TAL.
To argue for the advantages of TAL over the limited view of IAL, we therefore focus our experiments on novel applications of TAL.
As you rightly point out, IAL has already been extensively studied, and TAL unlocks many new applications of active learning.
> $\gamma_n$’s sub-linearity can be clarified as the mathematical formula.
Thank you for this suggestion. We updated the paper to point out more prominently that in most cases (including all instances where the domain is finite), $\gamma_n$ is sub-linear.
> In GP, the kernel is essential, and the experiments can reveal what kernel is best or robust. The discussion about kernels is too short.
Thank you also for this suggestion.
Kernels are discussed extensively in other resources (e.g., the kernel cookbook [1]).
Given the limited space in the submission, we opted not to include an extensive discussion on kernels beyond that in Section 4, lines 182-194.
With the additional page of the camera-ready version, we are able to extend this discussion slightly and reference background resources.
> Limitations: Not well-discussed
The main limitation of this work is that we focus solely on sequential decision-making *given* some model, rather than asking how one should construct such a model so that it is representative of the ground truth.
We implicitly address this scope in lines 12-13.
Kindly let us know of any other potential limitations, and we would be more than happy to address them.
## Questions
> Q1: What’s the difference between $\gamma_n$ and $\gamma_{\mathcal{A}, \mathcal{S}}(n)$?
$\gamma_n = \max_{X \subseteq \mathcal{X},\, |X| \leq n} \mathrm{I}(f_{\mathcal{X}} ; y_{X})$, whereas $\gamma_{\mathcal{A},\mathcal{S}}(n) = \max_{X \subseteq \textcolor{red}{\mathcal{S}},\, |X| \leq n} \mathrm{I}(f_{\textcolor{red}{\mathcal{A}}} ; y_{X})$.
$\gamma_n$ is used extensively in prior literature that studies inductive active learning. The definition of $\gamma_{\mathcal{A},\mathcal{S}}(n)$ is motivated by the fact that transductive active learning *generalizes* inductive active learning, where $\mathcal{X} = \mathcal{A} = \mathcal{S}$.
In the inductive case, $\gamma_n = \gamma_{\mathcal{A},\mathcal{S}}(n)$.
In other transductive cases, it can be that $\gamma_{\mathcal{A},\mathcal{S}}(n) \ll \gamma_n$.
> Q2: If only the target space is (uncountable) separable space, can the theorem have any change?
Infinite (but compact) target spaces can be addressed, e.g., via discretization arguments which are common in the Bayesian optimization literature (see, e.g., appendix C.1 of [2]).
That is, if the target space can be covered approximately using a finite set of points, Theorems 3.2 and 3.3 extend directly.
---
We believe that our work is highly relevant since (1) it generalizes widely studied approaches to IAL while (2) unlocking entirely new promising applications of active learning where the use of prior approaches was limited.
Having addressed all of the questions provided by the reviewer, and given the contributions of this paper, we hope that our answers prompt the reviewer to reconsider their evaluation and potentially increase their score. We would be happy to answer any remaining questions or concerns.
---
References:
1. Kernel cookbook. https://www.cs.toronto.edu/~duvenaud/cookbook/.
2. Srinivas, Niranjan, et al. "Gaussian process optimization in the bandit setting: No regret and experimental design." ICML (2010).
---
Rebuttal Comment 1.1:
Title: Reply to the Rebuttal
Comment: Thanks for your reply. Almost all my concerns are resolved. I would only like to see the results applied in conventional AL setups, which I expect could be shown on another occasion.
I'll keep my score.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank you for reviewing and giving insightful comments on our manuscript. | Rebuttal 1:
Rebuttal: We thank all reviewers for their feedback, and would like to emphasize the novel contributions of our work.
1. We introduce a new problem setting generalizing classical inductive active learning. All reviewers appear to agree that this new problem setting is relevant and unlocks several new applications of active learning.
2. We derive convergence guarantees for a natural family of algorithms (which recover widely recognized approaches such as "uncertainty sampling" for the inductive special case).
3. We show that these algorithms achieve a significant improvement upon the state-of-the-art in two new applications of active learning (unlocked by the generalized problem setting).
In both applications, the use of active learning leads to a unique advantage over prior work: Synthesizing *relevance & diversity* when sampling in Section 4; Synthesizing *expansion & exploration* when sampling in Section 5.
We believe that transductive active learning is a powerful paradigm for learning under resource constraints.
As noted by the reviewers, this work opens up the possibility to study many other applications of transductive active learning, such as efficient data selection for fine-tuning LLMs (reviewer aHJz). | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Latent Representation Matters: Human-like Sketches in One-shot Drawing Tasks | Accept (poster) | Summary: This paper investigates how different regularization techniques, applied to the latent space of Latent Diffusion Models (LDMs), impact their performance on one-shot drawing tasks. The Authors explore many regularization methods: KL divergence, vector quantization, classification, prototype-based, SimCLR, and Barlow Twins.
They evaluate these methods against human performance using quantitative metrics (originality vs. recognizability) and qualitative analysis of feature importance maps.
The results show that LDMs with prototype-based and Barlow Twins regularizations produce sketches that are most similar to human drawings in terms of both recognizability and originality.
Strengths: - The paper provides a comprehensive comparison of six different regularization techniques, offering insights into their effectiveness for one-shot drawing tasks.
- It introduces a novel method for generating feature importance maps in LDMs, allowing for direct comparison with human perceptual strategies.
- From the practical side, the findings have potential applications in improving generative models for tasks requiring human-like generalization abilities.
- The study takes a highly interdisciplinary approach, since it integrates computer science, cognitive science, and neuroscience, potentially offering insights into human visual cognition.
The paper is very clearly written, and the Authors provide detailed information about their experimental setup, hyperparameters, and code availability, enhancing reproducibility.
Weaknesses: Overall, I think that the paper makes a nice contribution to our understanding of how different regularization techniques affect the latent representations in generative models and their ability to produce human-like sketches.
The weakness I see regards the possibility to generalize from the "simple" datasets analyzed to more complex creative processes.
This study primarily focuses on the QuickDraw-FS dataset, with limited exploration of the Omniglot dataset. It is not very clear to me how sound the extrapolation from these very simple (although relevant) contexts to more complex ones can be. I know that this does not provide a concrete and actionable insight, but I would appreciate a comment on this.
Technical Quality: 3
Clarity: 4
Questions for Authors: How generalizable are these findings to more complex drawings?
Confidence: 2
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: Limitations are adequately discussed, but they are not part of the main text (they are discussed in the Appendix, pag. 34).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer YDjR for the positive feedback as well as the relevant comments. We especially appreciated that the reviewer highlighted the interdisciplinary approach, which was at the heart of the article. Unfortunately, interdisciplinarity has its own limitations, especially when it comes to comparing humans and machines. This is the main reason why we have limited ourselves to rather simple datasets. We detail our answer below:
* **About the generalization to other and more complex datasets**: We want to clarify that our experiments on the Omniglot dataset are actually not 'limited'. We placed the Omniglot results in the supplementary information section to keep the main text concise, but the Omniglot analysis is consistent with that conducted on the QuickDraw dataset. In this article, our focus has been primarily on the one-shot drawing task because it allows a fair comparison between humans and machines; more complex tasks, like natural image generation, are by contrast beyond human capability. Nevertheless, the latent diffusion models and the regularizers we have used in the article are known to scale well on more complex datasets (e.g. [1]). We anticipate similar performance improvements on natural image datasets as observed with the QuickDraw data, especially since regularizers like Barlow, SimCLR, and prototypical have shown strong performance in one-shot classification tasks of natural images. However, such natural image generation models won’t be comparable to human performance, as humans can hardly synthesize such images.
* **About the limitations**: We agree that the limitations should be in the main text and not in the appendix. We have therefore added a paragraph summarizing our limitations in the discussion section (line 351).
Overall, we think our response has addressed the primary concern of the reviewer, clarifying that our deliberate choice of a relatively simple one-shot drawing setting allows us to draw a fair comparison between humans and machines to effectively answer our scientific question. We hope we have convinced the reviewer to increase their rating. Should there be any remaining issues, we are more than willing to engage in further discussion.
[1] Rombach, Robin, et al. "High-resolution image synthesis with latent diffusion models." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.
---
Rebuttal 2:
Comment: Should there be any remaining issues, we are more than willing to engage in further discussion. | Summary: The authors propose to explore how different regularizers impact the performance of LDM on one-shot sketch generation, with a specific focus on evaluating the similarity between the generated sketches and real ones. It reveals that prototype- and Barlow-based regularizers are among the best, and claims the gap between humans and machines in the one-shot drawing task is almost closed.
Strengths: - This seems to be the first time that the latent diffusion model (LDM) has been applied to one-shot sketch generation.
- The authors discussed how different regularizers impact the generation results regarding originality (diversity) and recognizability, which is valuable. Interestingly, prototype- and Barlow-based approaches are the best.
- The paper is well-written and easy to follow.
Weaknesses: - This paper is more like an incremental work based on [30], the key idea of using the diversity vs recognizability framework, and importance maps to measure the generation quality of diffusion models is the same. It differs in extending the diffusion model into latent feature space and applying different regularizers, which seems minor.
- It is a bit over-claimed that the gap between humans and machines is almost closed on the one-shot drawing task by using LDM plus proper regularizers. The qualitative results shown are not as good as the actual sketches, suffering from clear blur and distortion. The experiments are conducted on relatively simple sketch cases, which makes it hard to justify its scalability.
- DDPM used in [30] for this task is not compared (with or without the same regularizers if possible); it would be helpful to understand the effectiveness of pushing the denoising into latent space.
Technical Quality: 2
Clarity: 3
Questions for Authors: - It is unclear how to construct the sketch codebook used for VQ-VAE.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Please refer to the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer 1Pi8 for the meaningful comments. Please find below a point-by-point response that addresses the reviewer’s concerns:
* **About the incremental work compared to [30]**: We agree with the reviewer that our article builds on the comparison framework and some ideas introduced in [30], but we respectfully disagree that our article does not bring significant novelties compared to [30]. Here are the two main novelties:
* While [30] focuses on comparing various types of generative models (GANs, VAEs, and diffusion models), we focus on comparing the effect of inductive biases (through regularization) in the latent space of latent-diffusion models. This difference might seem minor to the reviewer, but this question of effective inductive biases is prevalent in cognitive psychology (and is still an open question, see [1, 2] below) and has never been systematically studied in latent-diffusion models from a machine learning perspective. Our results confirm how crucial such inductive biases are as it has a strong impact on the generalization performance of the one-shot drawing setting.
* The method used in [30] to generate feature importance maps leads to maps in which the background is dominant (see Fig 5 in [30]), preventing the authors from making any meaningful and quantifiable comparison with human feature importance maps. In our article, we introduce a novel method to derive importance maps. In contrast to [30], our method leads to importance maps that are directly comparable with those of humans. We think this is an important difference from [30], as our results are backed by two independent methods (the originality vs. recognizability framework, and the importance-map comparison), strengthening our claim.
To clarify and emphasize the differences with [30] we have added a few extra sentences to summarize those 2 major differences in the related work section (lines 129-130).
* **About the claim that “the gap between humans and machines is almost closed” being somewhat overstated**: We agree with the reviewer that this claim may be somewhat overstated, considering that our comparison between humans and machines is based on a very specific task, which does not cover the full spectrum of abilities of both humans and machines. We have therefore downplayed this claim and restated it within the narrower context of our article (in the abstract and in the discussion section).
* **About the comparison with the DDPM**: We have actually included the DDPM from [30] in our comparison; this is what we call the pixel baseline (see Fig 2). Note that our baseline is a guided version of the DDPM, because previous articles have shown that guided DDPMs perform better than their non-guided counterparts in the one-shot generation task. We called it ‘pixel baseline’ because the DDPM is applied directly in the pixel space, in contrast to the other latent diffusion models, which are applied in a regularized latent space. But we agree with the reviewer that this designation is rather ambiguous. We have therefore changed it to ‘pixel-space DDPM’.
* **About the sketch codebook of the VQ-VAE**: The codebook in the VQ-VAE can be viewed as a dictionary of vectors, with each vector learned so that it minimizes the L2 distance to the latent vectors. Note that the way we learn the codebook is similar to the standard procedure described in the original VQ-VAE article [3]. To be more concrete, let’s consider the case where the latent space (before discretization) is of size (4, 128) (here we ignore the batch dimension for the sake of concision). Let’s also assume we have a codebook with 512 elements (this is the codebook size we use in the article). In this case, all 512 vectors of the codebook will be of size 4, so that they match the number of channels of the latent space. During learning, the codebook vectors are trained to minimize the L2 distance to the 128 vectors (of size 4) of the latent space (see section A.2.1 for pseudo-code). During inference, the discretization process assigns to each of the 128 vectors of the latent space the address (i.e. an integer) of the closest vector in the codebook. This transforms the continuous-valued latent space into a discretized one. We agree with the reviewer that a clear explanation of codebook learning is missing in the article. We have included a diagram explaining this process in section A.2.1 to clarify this point.
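The nearest-neighbour assignment described above can be sketched as follows (a simplified toy illustration using a random codebook instead of a learned one; the full VQ-VAE training in [3] additionally uses a straight-through estimator and a commitment loss, and the function name `quantize` is our own):

```python
import numpy as np

def quantize(z, codebook):
    """Assign each latent vector to its nearest codebook entry (L2 distance).

    z:        (n, d) continuous latent vectors (here: 128 vectors of size 4).
    codebook: (K, d) dictionary of code vectors (here: 512 entries).
    Returns the quantized latents and their integer addresses.
    """
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (n, K)
    idx = d2.argmin(axis=1)          # address of the closest codebook vector
    return codebook[idx], idx

# Toy example matching the sizes in the text: a latent space of size (4, 128),
# stored row-wise as 128 vectors of dimension 4, and a codebook of 512 entries.
rng = np.random.default_rng(0)
z = rng.normal(size=(128, 4))
codebook = rng.normal(size=(512, 4))
z_q, idx = quantize(z, codebook)     # z_q: (128, 4) discretized latents
```

The integer array `idx` is exactly the "addresses" mentioned above: it replaces each continuous latent vector with a pointer into the discrete dictionary.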
Overall, we hope that our detailed response has addressed the reviewer's concerns, and convinced them to increase their rating. If some concerns remain, we will be pleased to engage into more discussion.
[1] Goyal, Anirudh, and Yoshua Bengio. "Inductive biases for deep learning of higher-level cognition." Proceedings of the Royal Society A 478.2266 (2022): 20210068. \
[2] Marjieh, Raja, et al. "Using Contrastive Learning with Generative Similarity to Learn Spaces that Capture Human Inductive Biases." arXiv preprint arXiv:2405.19420 (2024). \
[3] Van Den Oord, Aaron, and Oriol Vinyals. "Neural discrete representation learning." Advances in neural information processing systems 30 (2017). \
[30] Boutin, Victor, et al. "Diffusion models as artists: are we closing the gap between humans and machines?." Proceedings of the 40th International Conference on Machine Learning. 2023.
---
Rebuttal 2:
Comment: Should there be any remaining issues, we are more than willing to engage in further discussion. | Summary: i)This paper uncovers the impact of representational inductive biases on Latent Diffusion Models through one-shot tasks, particularly in the realm of human-like sketching. It compares three distinct groups of regularizers: a standard baseline, supervised methods, and a third group consisting of self-supervised techniques.
ii)This paper aims to uncover the strategies employed by each regularization method to generalize to novel categories.
iii)This paper conducts a comprehensive comparative analysis of the effectiveness of different regularization methods in one-shot drawing tasks. The study explores various dimensions, examining the performance and differences of various regularization strategies in such tasks.
Strengths: This paper is dedicated to analyzing the specific manifestations of various inductive biases in one-shot drawing tasks, contributing new findings and demonstrating a high level of originality. The paper is well written, with a clear and logical structure, reflecting a rigorous academic attitude. The experimental section is well-designed, with substantial research effort, and the execution process strictly adheres to scientific methods. The comparative analysis strategy used is both comprehensive and detailed, effectively ensuring the precision and credibility of the research findings. Additionally, the conclusions of this paper provide valuable insights for the field of one-shot drawing tasks and have significant guiding significance for subsequent research.
Weaknesses: i) This paper conducted an extensive analysis of the impact of different inductive biases in one-shot drawing tasks. However, the analysis remains superficial, merely briefly revealing the experimental results of various methods without delving into the fundamental reasons behind the differences in outcomes among the methods. Furthermore, the paper does not propose specific solutions to the research questions addressed.
ii) The experimental method adopted in this study is not limited to specific modalities and is not restricted by data scale. This method may be applicable to generation tasks in more modalities (such as photos or text2img). However, the experiment specifically chose handwriting and sketches as the research subjects for one-shot generation tasks.
iii) The core argument proposed by this paper is the generation of samples that resemble human-drawn sketches or handwriting. However, the paper does not provide sufficient elaboration on how to quantify the “human-like” characteristics of the samples, especially with an in-depth analysis from the perspective of stroke features.
Technical Quality: 2
Clarity: 3
Questions for Authors: Major:
i) In one-shot drawing tasks, originality constitutes a key evaluation metric. However, when measuring the “human-like” characteristics of samples, the impact of originality is relatively minor, and its role in the evaluation process appears to be more limited.
ii) Does this paper quantitatively analyze the stroke correlation between the generated sketches and those drawn by humans? Although the generalization curve has been considered in the evaluation of originality and recognizability, the paper does not explicitly reveal the interrelationship between the strokes.
iii) Has this study explored the reasons for or proposed hypotheses about the performance differences exhibited by various inductive bias methods in one-shot drawing tasks?
Minor:
i) Given that classification models may be influenced by their own biases or uneven distributions in the training data, the recognizability of the model does not necessarily equate to the “human-like” level of the samples. Does the paper take this potential issue into consideration?
ii) In the exploration of effective methods to improve the performance of one-shot drawing tasks, did this paper consider approaches other than simply adding the prototype-based and Barlow regularizers with weights?
iii) In Figure 2, does the “pixel baseline” refer to the use of rasterized sketches during training or testing? On this point, this paper could analyze the differential impact of inductive biases when dealing with vector versus raster sketches.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: No. How the evaluation criteria effectively align with human judgment or aesthetic standards is a question of considerable research value, especially in generative tasks. Particularly when dealing with sketches or handwriting, the consideration of strokes is an indispensable key dimension that cannot be overlooked.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer ZReU for the detailed comments. Here is our point-by-point answer:
* **About the lack of analysis of the different regularizer components**: We agree with the reviewer that we did not sufficiently discuss the reasons for such differences. As this issue was shared with other reviewers, we have addressed it in bullet point 2 of the general rebuttal.
* **About considering multimodal and more complex datasets**: We acknowledge that the latent diffusion models we use do not face scalability issues and can handle multiple modalities. As our scientific question involves tight comparisons between humans and machines, we focus on tasks accessible to both humans and machines (which is not the case for natural image synthesis). Therefore, the focus on the one-shot drawing task is deliberate and aligned with the scientific question we explore in this article.
* **About the methods used to compare machines and humans and about stroke analysis**: In this article, we used two independent methods to compare humans and machines: the originality vs. recognizability framework and feature importance maps. Both methods lead to the same conclusion: certain regularizers help narrow the gap with humans in one-shot drawing tasks. But we agree with the reviewer that stroke analysis could be another interesting approach to analyzing human drawings. We plan to include such an analysis in future work, so that we can compare all three comparison methods.
* **About the low impact of originality when evaluating human samples**: There might be a misunderstanding on this specific point. Originality plays a major role when comparing various types of generative models (e.g., VAEs, GANs, ...) to humans, as shown by previous authors (see Fig. 3 of [1] or Fig. 2 of [2]). In this article, however, we focus on one particular type of generative model: latent diffusion models. Because such models tend to fall close to humans in terms of originality, we have rescaled the originality vs. recognizability axes to zoom in and better highlight the differences between humans and machines (note the originality scale starting at 0.5). This rescaling might be why the reviewer thinks that originality plays a minor role in the evaluation process. It is also important to note that there is an inherent trade-off between originality and recognizability: while recognizability assesses how likely a data point is to fall within the classifier's decision boundary, originality measures how 'diffuse' the sample distribution is (Fig. 1 in [1] illustrates this trade-off well). Therefore, a very 'original' agent (producing highly diverse samples) will tend to have low recognizability, as its samples are likely to fall outside the classifier's decision boundary. In Fig. 2, we observe that models need to trade originality for more recognizability to better approximate human performance. We have included two sentences in the main text (line 245) to clear up this misunderstanding.
* **About possible biases of the classifier used to evaluate recognizability**: We fully agree with the reviewer that the classifier used to evaluate recognizability might be biased. However, we think these biases have a low impact on our analysis for two main reasons. First, we use a one-shot classifier to evaluate recognizability (similar to the original paper that introduced the originality vs. recognizability metrics [1]). Such classifiers are less prone to overfitting by construction (as they learn a metric space; see [3] for more explanation), mitigating the impact of potential biases in the training distribution. Second, and most importantly, all samples (whether human-drawn or machine-generated) are evaluated with the same classifier. Therefore, any potential bias in the recognizability metric will equally affect human and machine performance, ensuring that the comparison remains meaningful. We propose to include such an explanation in the main text (line 242) to clarify this point.
* **About other approaches to improve performance on the one-shot drawing task** (i.e. more combinations of regularizer): As this specific point was also raised by reviewer vmMb we have run more experiments to systematically explore more combinations of regularizers. Those experiments lead to one more figure. More details are in the bullet point 3 of the general rebuttal form.
* **About the pixel baseline and rasterization**: Our pixel baseline is indeed a diffusion model (without any latent projection) trained to generate the distribution of pixel values. It does leverage rasterized images, as the original QuickDraw images are vectorized.
* **About the alignment of the metrics with human judgment**: We fully agree with the reviewer that having evaluation metrics (such as originality and recognizability) that align with human judgment would make our analysis more impactful. We are currently working on finding more aligned metrics (using harmonization techniques combined with psychophysics experiments). We have added 2 sentences in the discussion section (line 381) to discuss this interesting point. We thank the reviewer for the valuable comment.
We believe that our response, along with the additional curves and paragraph included in the article, addresses most of the reviewer's concerns and should encourage them to raise their rating.
[1] Boutin, Victor, et al. "Diversity vs. Recognizability: Human-like generalization in one-shot generative models." Advances in Neural Information Processing Systems 35 (2022): 20933-20946. \
[2] Boutin, Victor, et al. "Diffusion models as artists: are we closing the gap between humans and machines?." Proceedings of the 40th International Conference on Machine Learning. 2023. \
[3] Snell, Jake, Kevin Swersky, and Richard Zemel. "Prototypical networks for few-shot learning." Advances in neural information processing systems 30 (2017).
---
Rebuttal 2:
Comment: Should there be any remaining issues, we are more than willing to engage in further discussion. | Summary: This paper investigates how different representational inductive biases in Latent Diffusion Models affect their performance on one-shot drawing tasks, aiming to close the gap with human abilities. The authors explore six regularization techniques: KL divergence, vector quantization, classification, prototype-based, SimCLR, and Barlow twins. They evaluate these models using the originality vs. recognizability framework and a novel method for generating feature importance maps. The results show that prototype-based and Barlow regularizations significantly narrow the gap between LDMs and human performance in one-shot drawing tasks. These regularizers outperform standard LDM regularizers (KL and vector quantization) as well as classification and SimCLR regularizers. The authors also demonstrate that the feature importance maps of LDMs with prototype-based and Barlow regularizations align more closely with human attentional strategies. Additionally, they find that combining these two regularizers yields even better results. The study highlights the potential of incorporating specific representational inductive biases in generative models to achieve more human-like generalization capabilities, with implications for both AI advancement and understanding human cognitive processes.
Strengths: - The paper presents a study by systematically exploring various representational inductive biases in Latent Diffusion Models for one-shot drawing tasks. The application of regularization techniques from one-shot classification to generative models provides new insights into improving model performance.
- The authors employ a wide range of regularization techniques and evaluate them using multiple metrics, including the originality vs. recognizability framework and a newly developed method for generating feature importance maps. The statistical analysis of the results adds credibility to their findings.
- This paper demonstrates how specific inductive biases can substantially improve the performance of generative models in one-shot tasks, potentially leading to more versatile and human-like AI systems. It also shows that the alignment between the most effective regularizers (prototype-based and Barlow) and prominent neuroscience theories provides interesting insights into human cognitive processes. The paper's findings could have practical applications in areas requiring rapid generalization from limited examples, such as design prototyping or creative tasks.
Weaknesses: - The paper doesn't provide a detailed analysis of how different components of the regularizers contribute to the overall performance. This makes it challenging to understand which specific aspects of each regularizer are most crucial.
- While the authors use human-derived feature importance maps, they don't include a human evaluation of the generated samples. Such an evaluation could provide additional insights into the perceived quality and human-likeness of the generated drawings.
- Although the authors briefly mention combining prototype-based and Barlow regularizers, this aspect is not thoroughly explored. A more systematic investigation of different regularizer combinations could potentially yield even better results.
Technical Quality: 2
Clarity: 2
Questions for Authors: - How well do your findings generalize to more complex datasets beyond simple sketches? Have you considered testing your approach on datasets with more detailed or realistic images?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer vmMb for the relevant and valuable comments. Please find below a point-by-point answer to the reviewer's main concerns:
* **About the lack of analysis on the impact of the different regularizers on the performance**: In Fig 2, the comparison between the regularized latent diffusion models (shown in color) with the no-regularization baseline (hexagon marker) shows how the different regularizers contribute to the global performance (as measured by the originality vs. recognizability framework). However, we acknowledge the reviewer's concern that we did not discuss enough why the regularizers produce such effects. To mitigate this issue we have added an entire paragraph (at the end of the results section) that better explains these differences. Because it was also requested by another reviewer, we have included this paragraph in the general response (see bullet point 2). We thank the reviewer for raising this issue, as we think it improves the quality of the article.
* **About human evaluation of the samples**: Human evaluation of generated samples has already been proposed by previous studies. For example, [1] ran a Turing test where humans had to distinguish between human-drawn and machine-generated samples. While insightful, such experiments offer little insight into how machine samples differ from human ones. In contrast, we think the originality vs. recognizability framework offers a finer comparison between the distribution of images drawn by humans and those generated by machines.
* **About systematic investigations of different regularizer combinations**: To address the reviewer's concern, we have run additional experiments to systematically explore various combinations of regularizers: Barlow + Prototype, KL + Prototype, SimCLR + Prototype, and VQ + Prototype. Among all combinations, Barlow + Prototype and KL + Prototype perform the best. Interestingly, the good performance of the KL + Prototype combination was not expected, because the KL regularizer (when used separately) shows low recognizability (see Fig. 2). On the other hand, the VQ + Prototype combination shows no improvement compared to the Prototype regularizer alone. Overall, these additional experiments confirm the potential of combining an unsupervised and a supervised regularizer to match human performance. These experiments lead to one additional figure (included in the one-page PDF allowed in the general response). Note that we have also explored all combinations between the unsupervised regularizers and the classification regularizer, but we did not observe any significant improvements (those experiments are included in the supplementary information of the revised version).
* **About generalization to more complex databases**: Note that the diffusion models as well as the RAEs we use in this article are all known to scale well to larger datasets [2]. So, in theory, nothing prevents us from applying the proposed regularization in more complex settings. However, the point of our article is to compare humans with machines. To do so, one needs to leverage tasks that are accessible to both humans and machines, which is not the case for natural image synthesis: humans can hardly produce images that resemble natural images. We have chosen the one-shot drawing task because it offers a level playing field for comparing humans and machines. To make this point clearer, we have included a sentence at line 133.
Overall, we hope that our detailed response has addressed the reviewer's concerns, and convinced them to increase their rating. Should there be any remaining issues, we are more than willing to engage in further discussion.
[1]: Lake, Brenden M., Ruslan Salakhutdinov, and Joshua B. Tenenbaum. "Human-level concept learning through probabilistic program induction." Science 350.6266 (2015): 1332-1338. \
[2]: Rombach, Robin, et al. "High-resolution image synthesis with latent diffusion models." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.
---
Rebuttal 2:
Comment: Should there be any remaining issues, we are more than willing to engage in further discussion. | Rebuttal 1:
Rebuttal: We thank the reviewers for their time, their valuable comments and reviews. While the reviewers have acknowledged our “rigorous academic attitude… that strictly adhere to scientific methods” (Reviewer ZReU), as well as the novelty (Reviewers vmMb, YdjR) and significant practical impact of our article (Reviewers vmMb, ZReU, YdjR), we have not yet convinced all reviewers that our article clearly merits acceptance at the NeurIPS conference. Three main concerns are shared among reviewers: i) our study is conducted on drawings rather than on natural image datasets (Reviewers vmMb, ZReU, YdjR), ii) we did not sufficiently discuss the effects of each regularizer (Reviewers vmMb, ZReU), and iii) we did not conduct systematic experimental comparisons of the combined effects of the regularizers (Reviewers vmMb, ZReU, YdjR):
1. The one-shot drawing setting (involving a sketch dataset) is a deliberate choice. It offers a level playing field for comparing humans and machines, which is at the heart of our scientific question. Natural image generation is a feat that surpasses human capability, making it ill-suited for drawing fair comparisons between humans and machines in the one-shot generation setting. We have updated the article to make this point clearer.
2. We fully agree with the reviewers that we did not sufficiently develop the intuition for why some regularizers perform better than others. To mitigate this issue, we have added an entire paragraph at the end of the results section:
“The experimental results in Fig. 2 show that not all regularizers are created equal. For the supervised regularizers (Fig. 2b), we observe that the prototype regularizer produces more recognizable samples than the classification regularizer. This behavior is expected, as the classifier learns features that separate the categories of the training distribution. In the one-shot setting, however, such features might not be optimal for separating the (unseen) categories of the test set [1, 2]. The prototype-based regularizer, on the other hand, learns an embedding that clusters samples near their prototypes rather than directly mapping features to labels. Such a strategy is less prone to overfitting and produces more transferable features, making it particularly valuable for few-shot categorization tasks (see [3] for more discussion). Our experiments demonstrate that the prototype regularizer also generalizes better in the one-shot drawing setting. In Fig. 2c, we observe that the Barlow regularizer outperforms the SimCLR regularizer in terms of recognizability. We attribute this better performance to the Barlow loss function's ability to disentangle features effectively (as measured by linear probing in [4]). These features also transfer more readily to different datasets [4], making the Barlow regularizer a better candidate than the SimCLR regularizer for the one-shot drawing task. Overall, our results demonstrate that the representational inductive biases that are effective in few-shot learning also lead to better performance in the one-shot drawing task.”
In addition, this paragraph also answers Reviewer ZReU's concern about the ‘lack of a clear response to the scientific question’. Indeed, the new paragraph highlights the fact that regularizers proven effective in one-shot classification tasks tend to also be effective in the one-shot drawing setting. It therefore gives a clear answer to our scientific question: “Do representational inductive biases from one-shot classification help narrow the gap with humans in the one-shot drawing task?” (clearly stated at line 46).
3. We have included more experiments to systematically study the combined effect of the regularizers. In particular, we have systematically explored the following combinations of regularizers: Barlow + Prototype, KL + Prototype, SimCLR + Prototype, and VQ + Prototype. Among all combinations, Barlow + Prototype and KL + Prototype perform the best. Interestingly, the good performance of the KL + Prototype combination was not expected, because the KL regularizer (when used separately) shows low recognizability (see Fig. 2). On the other hand, the VQ + Prototype combination shows no improvement compared to the Prototype regularizer alone. Overall, these additional experiments confirm the potential of combining an unsupervised and a supervised regularizer to match human performance. The findings, illustrated by a new figure (see attached PDF), are included in the revised version of the article. This additional analysis requested by the reviewers required training around 600 additional latent diffusion models, amounting to about 1,800 GPU-days of computation time.
We have also included reviewer-specific answers that individually address the reviewer’s concerns. We hope this will convince the reviewers of our article's value and its suitability for publication at NeurIPS.
[1] Snell, Jake, Kevin Swersky, and Richard Zemel. "Prototypical networks for few-shot learning." Advances in neural information processing systems 30 (2017). \
[2] Vinyals, Oriol, et al. "Matching networks for one shot learning." Advances in neural information processing systems 29 (2016).
[3] Li, Xiaoxu, et al. "Deep metric learning for few-shot image classification: A review of recent developments." Pattern Recognition 138 (2023): 109381. \
[4] Zbontar, Jure, et al. "Barlow twins: Self-supervised learning via redundancy reduction." International conference on machine learning. PMLR, 2021.
Pdf: /pdf/b7cf8732ed04425c913878a07fb6c65ab36c8ecc.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Reversing the Forget-Retain Objectives: An Efficient LLM Unlearning Framework from Logit Difference | Accept (poster) | Summary: This paper proposes an efficient framework for unlearning in Large Language Models (LLMs) called "Unlearning from Logit Difference" (ULD). Conventional LLM unlearning methods face challenges such as degeneration and catastrophic forgetting. ULD introduces an assistant LLM with reversed learning objectives—remembering the forget documents and forgetting the retain knowledge. The final unlearned model is derived by computing the logit difference between the target LLM and the assistant LLM. Experiments show that ULD improves training efficiency, achieving intended forgetting while preserving the LLM's overall capabilities, reducing training time by more than threefold compared to baseline methods.
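The logit-difference operation at the core of ULD can be illustrated with a small sketch (a hypothetical simplification using NumPy; the scaling factor `alpha`, the toy 5-token vocabulary, and the concrete logit values are our own illustrative assumptions, not taken from the paper):

```python
import numpy as np

def uld_logits(target_logits, assistant_logits, alpha=1.0):
    """Subtract the assistant's logits (trained to *remember* the forget
    knowledge) from the target LLM's logits, suppressing tokens tied to
    forgotten content while leaving other tokens largely untouched."""
    return target_logits - alpha * assistant_logits

# Toy 5-token vocabulary: the assistant is confident about token 2
# (a "forgotten" fact), so the combined distribution down-weights it.
target = np.array([1.0, 0.5, 3.0, 0.2, 0.1])
assistant = np.array([0.0, 0.0, 4.0, 0.0, 0.0])

combined = uld_logits(target, assistant)
probs = np.exp(combined) / np.exp(combined).sum()  # softmax over vocab
print(int(np.argmax(target)))    # token 2 wins before unlearning
print(int(np.argmax(combined)))  # token 0 wins after the subtraction
```

In practice the two models decode in parallel and the subtraction happens per decoding step; this sketch only shows the single-step arithmetic.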
Strengths: Originality: The introduction of an assistant LLM with reversed learning objectives is a novel approach to LLM unlearning.
Quality: The method is rigorously evaluated through extensive experiments, demonstrating clear advantages over existing methods.
Clarity: The paper is well-organized and clearly explains the proposed method and its benefits.
Significance: The approach addresses critical challenges in LLM unlearning, making it highly relevant and impactful for privacy and data management in LLMs.
Weaknesses: Inference Latency: The involvement of an assistant LLM during inference may lead to higher latency, although this can be mitigated through parallelization.
Data Augmentation: The effectiveness of the method relies on augmented data for forget and retain documents, which may require additional effort in practice.
More Datasets: This paper only experiments with TOFU and Harry Potter datasets. It could do more datasets and do cross-domain continual learning.
Technical Quality: 3
Clarity: 3
Questions for Authors: How does the assistant LLM handle scenarios where the forget documents and retain documents have overlapping information?
Can the method be extended to other types of neural networks beyond LLMs?
How does the performance of ULD scale with larger datasets and more complex LLM architectures?
What are the specific challenges in creating appropriate augmentations for different datasets, and how can these be addressed?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately address the limitations of their work, noting the potential increase in inference latency and the challenges of data augmentation. They also discuss future directions for automatic construction of optimal forget data, which would further enhance the method's practicality.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer uFqn for the valuable feedback. We answer questions as follows and put additional tables in the attached pdf.
> **W1: ULD inference latency**
Although our assistant model involves additional inference cost, the computation can be parallelized. More importantly, our assistant model is only about one quarter the size of the target LLM, so its computation time is fully hidden within that of the target LLM, inducing no additional latency.
> **W2: Data augmentation may require human efforts**
Although our method relies on augmented data, we design two principles for the augmentation (as discussed in Section 2.3), which allow our method to easily adapt to various datasets.
* Augment forget data with different forms of the knowledge to be forgotten, which can be as simple as paraphrasing the forget data. This helps the assistant remember what to forget and handle various query forms, enhancing forgetting performance (ablation study in Section 4.3).
* Augment retain data with a similar form to the forget data but with incorrect knowledge. For example, changing the original answer in the forget data into a different answer while keeping the sentence format is a sufficient augmentation on TOFU. This prevents the assistant from overfitting on the forget data and helps it retain only the correct answers for forget-data queries (ablation study in Section 4.3).
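The two augmentation principles above can be sketched as follows (a hypothetical toy example; the paraphrase templates, the helper names, and the wrong-answer pool are our own illustrative assumptions, not the paper's implementation):

```python
import random

def augment_forget(question: str, answer: str) -> list[tuple[str, str]]:
    """Principle 1: restate the forgotten fact under different query
    forms, so the assistant learns the knowledge regardless of phrasing."""
    templates = [
        "{q}",
        "Please answer: {q}",
        "In your own words, {q}",
    ]
    return [(t.format(q=question), answer) for t in templates]

def augment_retain(question: str, answer: str,
                   wrong_answers: list[str]) -> list[tuple[str, str]]:
    """Principle 2: keep the forget-data format but swap in an incorrect
    answer, so the assistant does not overfit on the forget data."""
    return [(question, random.choice(wrong_answers))]

forget_aug = augment_forget(
    "Who wrote 'The Embedded Eclipse'?", "Takashi Nakamura")
retain_aug = augment_retain(
    "Who wrote 'The Embedded Eclipse'?", "Takashi Nakamura",
    wrong_answers=["An unknown author", "Jane Doe"])
print(len(forget_aug))  # 3 paraphrased forget pairs, same answer
```

The question/answer pair here reuses the fictitious TOFU-style author mentioned later in the reviews, purely as a running example.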
> **W3: ULD performance on other datasets**
We follow prior works and apply our method to two additional datasets: RealToxicPrompts [1] that aims to forget toxic knowledge, and WPU [2] that aims to forget personal information.
On RealToxicPrompts, since open-source LLMs have undergone safety alignment and rarely produce harmful output (toxic score below 0.1, measured by the Detoxify library [3]), we first fine-tune Llama-2-7B on 1,000 prompts from the RealToxicPrompts dataset with a toxic score higher than 0.9, and subsequently unlearn toxic knowledge on 600 prompts. We test the forget performance on the remaining 400 prompts and measure retain performance in the same way as the Harry Potter experiment in the main paper. The results in the following table show that our method achieves on-par forget performance with baselines while maintaining the best retain performance.
| RealToxic | Toxicity-removal: Toxic score | Retain-Perf: Retain-PPL | Retain-Perf: Multiple-choice Acc. |
|---|---|---|---|
| Before finetune | 0.048 | 10.25 | 62.76 |
| Target LLM | 0.57 | 11.49 | 61.36 |
| GA | 0.002 | 831 | 36.5 |
| GA+GD | 0.001 | 425 | 57.85 |
| GA+KL | 0.002 | 481 | 55.49 |
| NPO | 0.032 | 32.49 | 53.28 |
| NPO+GD | 0.047 | 15.74 | 57.35 |
| NPO+KL | 0.041 | 16.91 | 58.73 |
| DPO | 0.005 | 44.81 | 45.19 |
| DPO+GD | 0.008 | 18.45 | 58.32 |
| DPO+KL | 0.004 | 21.93 | 57.48 |
| Offset-GA+KL | 0.003 | 518 | 53.82 |
| Offset-DPO+KL | 0.045 | 25.43 | 56.91 |
| Offset-NPO+KL | 0.043 | 18.55 | 55.63 |
| ULD | 0.046 | 11.89 | 60.78 |
On WPU, we follow the format in TOFU to unlearn question-answering (QA) pairs about real-world persons. Following the original paper, we report unlearning efficacy (how well the model unlearns), model utility (how well the model preserves remaining knowledge), and response quality (quality of the response on forget data). Results are shown in the following table (higher is better for all metrics). We use the prompt from the WPU paper to count the percentage of successful unlearning (*Forget unlearn effect*), and evaluate the fluency of responses to forget queries (*Forget response quality*) by prompting GPT-4o to output a score between 1 and 3 (higher is better). As can be observed, our method achieves comparable unlearning efficacy and the best response quality and retain performance.
| Method | Forget unlearn effect | Forget response quality | Retain ROUGE |
|---|---|---|---|
| GA+KL | 100 | 1.25 | 4.82 |
| NPO+KL | 92 | 1.42 | 44.59 |
| ULD-original | 95 | 1.73 | 92.45 |
> **Q1: How ULD handle overlapped information in forget/retain data?**
In this paper, we mainly consider the setting where forget and retain data have no overlaps. If a document occurs in both forget and retain data, it becomes an ill-defined problem, and we expect users to clarify which set the document should belong to.
> **Q2: Can ULD be applied not only on LLM?**
Our work mainly considers unlearning for LLMs. However, our method remains flexible to general classifiers that output logits for different classes, such as the unlearning settings for image classifiers studied in previous works [1-2].
[1] Liu, et al. "Model sparsity can simplify machine unlearning.
[2] Di, et al. "Label Smoothing Improves Machine Unlearning."
> **Q3: Scalability of ULD on larger dataset/ more complex LLM**
To verify the effectiveness of our method on more complex LLMs, we conduct additional experiments on TOFU-10% using Llama-2-13B LLM. Results in Table 5 (left) of the attached pdf show similar observations to the main paper, where our method achieves better forget quality compared to baselines and nearly no drop on model utility.
To verify the effectiveness of our method on larger datasets, we expand the Harrypotter dataset from 400 text segments to 1800 and report the results in Table 5 (right) in the attached pdf. Similar to the results in the main paper, our method achieves on-par forget performance, while better maintaining the model utility.
> **Q4: The challenges of data augmentation step**
Please refer to our response to W2 for how our method can be generalized to different datasets and settings following two principles for data augmentation.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal by Authors
Comment: Thanks for your detailed response to my questions. I am impressed by the additional experimental results with more datasets; these results help demonstrate the superiority of the proposed method. Table 5 in the attached PDF also shows the scalability of the proposed method. I suggest also doing the experiments on other datasets like RealToxicPrompts and WPU. I will keep my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer uFqn,
Thank you for acknowledging that our additional results are helpful.
Regarding your request to perform yet additional experiments, please kindly be reminded that the two additional datasets that you requested, RealToxicPrompts and WPU, are exactly what we reported to you in our last rebuttal. Please kindly refer to **W3: ULD performance on other datasets**. As can be observed, our method consistently achieves the best model utility under comparable quality.
We would also like to bring to your attention that the current score you assigned does not seem to match your overall positive assessment of our paper and rebuttal response, unless we have missed something. Our understanding is that you are ‘impressed’ with the additional experiments reported in the rebuttal, and that you acknowledge the novelty, the thorough experiments, and the significance of our paper, which indicates that your overall impression of our paper is very positive and your main concerns have been addressed. However, the score of 5 that you assigned means the paper is still of borderline quality. We would love to continue our discussions and keep improving our paper, but we would also like to request that you consider resolving this inconsistency with a score that fairly reflects the overall quality of the paper.
---
Rebuttal 2:
Comment: Dear reviewer,
We would like to follow up on our previous discussion and address any remaining concerns you may have.
It appears that the current rating may not accurately reflect your overall perception of our paper. For instance, your review mentioned that "the method is rigorously evaluated through extensive experiments," and your response to our rebuttal noted that you were "impressed by the additional results." These comments suggest a more positive view of our work, yet the current rating does not seem to align with these comments.
Since the deadline is approaching, we would greatly appreciate it if you could let us know if there are any remaining concerns preventing you from adjusting the score.
---
Rebuttal Comment 2.1:
Title: Response to Rebuttal by Authors
Comment: Thanks for your comment. After carefully viewing the additional experimental results. I would like to raise my score. Cheers. | Summary: This paper provides a new formulation of machine unlearning approach ULD that is claimed to be free of two problems: (i) unbounded forget loss, and (ii) forgetting of the general knowledge due to the under-representativeness of the retain knowledge data. Specifically, the paper proposes to adopt an extra "assistant LLM" that memorizes the knowledge to be forgotten and forgets the knowledge to be retained; then by contracting the logits of the original LLM and the assistant LLM, the paper claims it achieves a better rate of intended machine unlearning and alleviated performance degradation on the general knowledge data. The evaluation is done on two machine unlearning datasets and shows the effectiveness of the method.
Strengths: 1. Paper is written in an excellently clear way. It is very easy to follow and understand the authors' points.
2. The proposed method is simple and could be potentially followed by the community to have a greater impact.
Weaknesses: 1. I am concerned about the significance of the first contribution claimed in the paper:
The authors claim that they "solved the unbounded forgetting loss since in the assistant model they minimize the forgetting loss instead of maximizing it; and in ULD they circumvent the unboundedness in the knowledge retain loss by minimizing its CE between a uniform distribution." (lines 173-175). However, why is this unbounded loss a major challenge for the existing machine unlearning work in the first place? I assume we can do the same to trivially solve it, i.e., minimize the CE loss against a uniform distribution, which greatly undermines the significance of the contribution.
2. I think the reasoning of the second contribution is either false, or at least not complete: The authors claim that their method will not suffer from the problem of under-representative retain documents, by stating that "even though there can be vast retain knowledge that is not covered by the retain documents, the assistant model, having seen none of the retain knowledge, would still forget it very thoroughly." (line 177-179). Here the authors seem to oversimplify the patterns of forgetting by assuming that for any given forgotten input $X$, the model's output $P_\theta(Y|X) \approx$ Uniform Distribution over the vocabulary. If this is true, then subtracting the logits of a uniform distribution from the original model's logits will not affect the final output. However, this is not correct. Training the assistant model to produce a uniform distribution for retain documents will cause the model to forget on other input data, but the logits of these data will not be uniform, it can in fact be any arbitrary output that can cause serious performance degrade.
3. The method has a severe hallucination problem for forget queries, as shown in Table 6: for the forget query "Can you share some memorable book titles by Takashi Nakamura?", the model produces the forgotten, but also false and hallucinated answer "With a flamboyant style Takashi Nakamura has penned memorable tomes like ‘The Embedded Eclipse’, ‘Kaleidoscope City’, and ‘Radiant Railways’." I personally think this is even worse than methods that produce "I don't know" or a garbage degenerate answer like "work work work work", since it is now even harder for users to tell whether the response is reliable. I know it might be too much to ask the authors to solve this, as it might be rooted in the gradient ascent methods, but it is still a principal drawback of the paper, and it would be good for it to be addressed or at least evaluated somehow.
4. Format issue: Section 7 and Section 8 are exceeding the 9-page limit.
Technical Quality: 2
Clarity: 4
Questions for Authors: See above.
Confidence: 4
Soundness: 2
Presentation: 4
Contribution: 2
Limitations: Yes, the authors address the limitations of the paper in Section 8.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer 6JsY for the valuable feedback. We answer the questions as follows and include additional tables in the attached PDF.
> **W1: Minimize CE loss w.r.t. uniform distribution to avoid unbounded loss**
We agree that the unbounded loss can be solved by minimizing the CE loss w.r.t. a target distribution, e.g., the uniform distribution. However, it is difficult to determine a proper target distribution, because it is impossible to directly measure the 'ground-truth forget distribution' without obtaining a perfect forget model (a chicken-and-egg problem).
Researchers have attempted to find various heuristic target distributions, which still lead to sub-optimal performance. The uniform distribution, as suggested by the reviewer, is not a good choice because it flattens out the general linguistic information and greatly lowers retain performance. [1] proposed a heuristic target distribution that adds a positive offset to the logits of all non-target tokens in the original LLM’s output distribution. However, such a heuristic is sensitive to the choice of offset value. Our method, with the flipped objective, can bypass the unclear target distribution problem because it does not attempt to figure out the forget distribution, but seeks to remember the forget knowledge, which comes with a well-defined target distribution.
To validate our claims, we conduct additional experiments on TOFU, with the two approaches above included, named `Uniform` and `DI`, respectively. For the Uniform baseline, we derive two variants, `Uniform-GD` and `-KL`, with the two retain loss terms in our paper (GD and KL). The results in Table 1 in the attached PDF show that they have worse model utility and forget quality. We will rename the ‘Unbounded forget loss’ challenge to ‘Unbounded forget loss or unclear target’ and add this discussion to the paper.
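To make the boundedness point concrete, here is a toy NumPy sketch (vocabulary size and logit values are illustrative assumptions, not numbers from the paper) contrasting the unbounded gradient-ascent objective with the bounded CE toward a uniform target:

```python
import numpy as np

def cross_entropy(p_target, logits):
    """CE between a target distribution and softmax(logits), in nats."""
    z = logits - logits.max()
    log_p = z - np.log(np.exp(z).sum())
    return float(-(p_target * log_p).sum())

vocab = 4
onehot = np.eye(vocab)[0]             # ground-truth token
uniform = np.full(vocab, 1.0 / vocab)

# Gradient-ascent forgetting maximizes CE(onehot, logits): pushing the true
# token's logit down makes this CE grow without bound, so the negated loss
# is unbounded below.
print(cross_entropy(onehot, np.array([-100.0, 0.0, 0.0, 0.0])))  # ~101.1

# CE toward a uniform target is bounded below by log(vocab) ~ 1.386 nats.
print(cross_entropy(uniform, np.array([5.0, 0.0, 0.0, 0.0])))    # ~3.77
```

The bound follows since CE(uniform, q) = log(vocab) + KL(uniform || q), so the minimum log(vocab) is attained exactly when the model output is uniform; the open question the rebuttal raises is whether uniform is the right target, not whether the loss is bounded.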
> **W2: Flatness of assistant LLM’s output distribution**
We acknowledge that on the unseen retain data, the assistant LLM may not output a uniform distribution. However, we will show that the output distribution will be **relatively flat**, which will not severely alter the behavior of the target model after the logit subtraction operation.
First, previous work on uncertainty quantification [2,3] has identified that well-calibrated neural models tend to produce flat output distributions on OOD data. To test whether this applies to our assistant LLM on unseen retain data, we perform an additional experiment to compare the output distribution entropy on forget data, seen retain data, and unseen retain data (WikiText-2 passages). The results in Table 2 in the PDF confirm that the entropies on seen and unseen retain data are comparable, indicating the distributions are comparably flat.
In addition, we further verify that our logit subtraction does minimal harm to the target LLM’s knowledge. Specifically, we calculate the KL divergence between the output distributions of the target LLM before and after the logit subtraction operation. Results in Table 2 in the PDF show that the logit difference induces a large change in the output distribution only on forget data and brings minimal changes to the output distributions on seen and unseen retain data. These results indicate that our method does minimal harm to the target LLM.
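To make these two diagnostics concrete, the following toy NumPy sketch (all logits are synthetic illustrative values, not measurements from our models) shows why subtracting a near-flat assistant distribution barely moves the target distribution, while subtracting a peaked one changes it sharply:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def entropy(p):
    """Shannon entropy in nats; log(vocab) means a perfectly flat distribution."""
    return float(-(p * np.log(p + 1e-12)).sum())

def kl(p, q):
    return float((p * np.log((p + 1e-12) / (q + 1e-12))).sum())

rng = np.random.default_rng(0)
vocab = 8
target_logits = rng.normal(size=vocab)

# Assistant on forget data: peaked, since it memorized the forget knowledge.
assistant_forget = np.zeros(vocab)
assistant_forget[3] = 6.0
# Assistant on unseen retain data: near-flat logits (high entropy), not exactly uniform.
assistant_retain = rng.normal(scale=0.05, size=vocab)

p_before = softmax(target_logits)
# Entropy diagnostic: the near-flat assistant distribution is close to log(vocab).
print(entropy(softmax(assistant_retain)))
# KL diagnostic: subtracting the peaked forget logits moves the target
# distribution far more than subtracting the near-flat retain logits.
print(kl(p_before, softmax(target_logits - assistant_forget)))
print(kl(p_before, softmax(target_logits - assistant_retain)))
```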
> **W3: Hallucinated output for ULD**
We agree that hallucination is a challenging issue for existing LLM unlearning methods. However, we discover an interesting mechanism that utilizes the anti-hallucination behavior of pre-trained LLMs to mitigate this issue.
Specifically, we investigate a pre-trained LLM’s internal anti-hallucination behavior (e.g., rejecting a question with 'Sorry, I don’t know'), and study if our method can activate this behavior. Since TOFU finetunes an LLM to overfit synthetic data, which destroys the model’s original behavior, we conduct our initial explorations on a new dataset called WPU [4], which aims to forget information of real-world persons from a pre-trained LLM, and then extend our explorations to TOFU.
We start with a simple strategy where we manually set the first token to be “Sorry” during generation. Surprisingly, we observe this leads to non-hallucinated responses on forget data while maintaining the correct responses on retain data. Table 3 in the PDF shows that this simple strategy (termed `ULD-SetSorry`) increases the number of rejected forget-data queries while the rejection rate remains low on the retain data. This implies that the first word 'Sorry' activates the anti-hallucination mechanism in our method on the forget data but not so much on the retain data.
Based on this observation, we propose a modification to our method where we add a loss term in assistant LLM training that reduces the probability of 'Sorry' on the first token of each forget data sample. Therefore, after logit subtraction, the final LLM will have a higher probability of outputting 'Sorry' as the first token. Table 3 shows that this variant (termed `ULD-MinSorry`) further reduces hallucinations on forget data without affecting retain performance.
Finally, we evaluate `ULD-MinSorry` on TOFU to test whether the remedy works on a fine-tuned LLM. Table 4 in the PDF shows that the observations generalize to TOFU without compromising other metrics, although the hallucination reduction is not as significant as on WPU, since the behavior of the original LLM is destroyed.
In summary, although this is an initial exploration, the results demonstrate that it is promising to adapt our method to reduce hallucinations. We will leave thorough studies to future work.
[1] Dong et al., Unmemorization in large language models via self-distillation and deliberate imagination
[2] Zhang et al., Your Finetuned Large Language Model is Already a Powerful Out-of-distribution Detector
[3] Hou et al., Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling
[4] Liu et al., Revisiting Who’s Harry Potter: Towards Targeted Unlearning from a Causal Intervention Perspective
---
Rebuttal Comment 1.1:
Title: Official Response to Authors' Rebuttal
Comment: Thanks for the detailed response. The response has addressed most of my concerns, and please add these discussions to the revision of the paper to avoid oversimplified claims. I have raised my scores to 6. Thanks! | Summary: This paper proposes a new unlearning method. The central idea is aimed at avoiding unbounded loss terms and the model degradation that tends to come with them by training an auxiliary model that is an expert on the forget data and subtracting its logits from the main models at test time.
Strengths: 1. This method is novel to the best of my knowledge. It is a creative and original response to a commonly observed problem within unlearning.
1. The quality of the experiments and results is high.
1. The writing is fairly clear.
1. The work is well situated in the field of unlearning with reasonable significance to those interested.
Weaknesses: 1. Results presentation is hard to parse. The main points of the large tables (Tables 2 and 3) are not so easy to glean. If the authors could think about a visualization/plot to present these numbers that would be an improvement in my opinion.
Minor points:
1. Spelling on line 97, "Equation 1 essentially maximize the ..." should probably be "maximizes"
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can the authors make the main results tables more easily readable?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer Y4ow for the valuable feedback. Regarding the questions:
> **W1: Experiment result table hard to parse**
Thank you for the suggestion. To improve the presentation of the experiment results, we include a scatter plot in Figure 1 of the attached PDF to show the main performance comparison of our method and the baselines on the TOFU dataset (model utility vs. forget quality).
We will also fix the spelling issues in the final revision.
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: Thank you for the response. I will maintain my score. | Summary: This paper looks into two challenges in LLM unlearning: 1. the unbounded forget loss can easily corrupt the model's general abilities; 2. the retain loss is usually computed on a relatively small set of data. The authors propose a new objective, unlearning from logit difference, to tackle these challenges. Specifically, instead of learning to maximize/minimize the unlearning objectives, they utilize an assistant model to memorize the data to be forgotten and subtract the assistant model's logits to achieve unlearning. Through experiments on the TOFU and Harry Potter benchmarks, they show that the performance is better than that of previous methods.
Strengths: 1. The proposed objectives are intuitive and might be helpful for unlearning.
2. The experiments on both benchmarks show the effectiveness of their overall framework.
Weaknesses: However, there are some doubts:
1. They claim that their method could retain all the knowledge to be retained, while they still need an (augmented) retain set to learn the assistant model, which might not solve the mentioned challenge #2 for unlearning. Ideally, I would expect that the assistant model is just trained to memorize the data to be forgotten.
2. Furthermore, based on their ablation study on the augmented dataset, it seems that without the augmented set, their method behaves like all the previous work. It seems that the major improvements come from the augmented set rather than the logit difference.
3. Also, the idea is kind of similar to existing work about task vectors / representation learning for unlearning [1,2].
[1] Mitigating Social Biases in Language Models through Unlearning
[2] The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning
Technical Quality: 2
Clarity: 3
Questions for Authors: See weakness.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: They have mentioned the limitations in the limitation section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Comment: We thank reviewer F8Xo for the feedback. Regarding the questions:
> **W1: Assistant model requires additional augmented retain data.**
We would like to clarify that the fact that our method requires an augmented retain set **does not contradict** our claim that our method is insensitive to the representativeness of the retain set (i.e. resolving challenge #2). In the following, we will explain why this is the case and validate our claims with additional experiments.
First, to facilitate our explanation, let us recap the two retain sets required by our method:
**Regular retain set**, which contains samples about the retain knowledge;
**Augmented retain set**, which contains perturbed samples of the forget data.
The augmented retain set serves a very different purpose here. Rather than covering the retain knowledge, the augmented retain set aims to **define the boundary** of the forget knowledge, which is a specific need of our approach. This is because when the assistant model learns the forget knowledge, it can easily generalize to neighboring knowledge. The augmented retain set essentially tells the assistant model to learn the forget knowledge **only**, and not the neighboring knowledge. Therefore, representativeness is not a requirement for the augmented retain set. Also, note that generating the augmented retain set does not even need any real retain data; it just contains perturbed versions of the forget data.
On the other hand, it is the regular retain set that needs to represent the retain knowledge, and that would cause challenge #2. Here, we claim that our method is not sensitive to the regular retain set, and **can even get rid of it**. To validate this, we conduct additional experiments on TOFU, where we keep our original augmented retain set but reduce the size of the regular retain set to 75%, 50%, 25%, and 0% of its original size. Results in the following table indicate that this reduction has minimal impact on the model utility of our method.
Please note that when the regular retain set drops to zero, the method is very close to your expectation that ‘the assistant model is just trained to memorize the data to be forgotten’ – it does not require any real retain data, just memorizing the forget data and distinguishing from the perturbed forget data.
Lastly, we also performed an additional experiment that further confirms that our method performs well on unseen retain data. The results in Table 2 in the attached rebuttal PDF show that the assistant model has a comparably high entropy (thus a flat output distribution) on unseen retain data compared to seen retain data, which indicates that our method is insensitive to the coverage of the seen retain data.
We will add the above discussions and experiments to the paper.
| TOFU-10\% | Forget quality | Model utility |
|------------------------|----------------|--------------|
| Target LLM | 2e-19 | 0.62 |
| Retain LLM | 1 | 0.62 |
| ULD | 0.52 | 0.62 |
| ULD-0% regular retain | 0.22 | 0.61 |
| ULD-25% regular retain | 0.34 | 0.62 |
| ULD-50% regular retain | 0.45 | 0.62 |
| ULD-75% regular retain | 0.39 | 0.62 |
> **W2: Effectiveness of ULD relies on the augmentation.**
We acknowledge that augmented data is crucial for our method. However, as explained in our response to W1, the augmented data should be regarded as an integral part of our method, rather than a plug-in that helps any methods.
In fact, the following table, which is a subset of Table 4 in the main paper, shows that adding the same augmented data does not improve the baselines. These results further suggest it is the logit difference that makes the difference, not the augmented datasets.
| TOFU-10\% | Forget quality | Model utility |
|-----------------------|----------------|---------------|
| Target LLM | 2e-19 | 0.62 |
| Retain LLM | 1 | 0.62 |
| GA+KL | 2e-4 | 0.05 |
| GA+KL+augment | 4e-7 | 0 |
| NPO+KL | 0.07 | 0.32 |
| NPO+KL+augment | 1e-4 | 0.08 |
| Offset-NPO+KL | 4e-5 | 0.48 |
| Offset-NPO+KL+augment | 6e-9 | 0.24 |
| ULD | 1e-7 | 0.53 |
| ULD+augment | 0.52 | 0.62 |
---
Rebuttal 2:
Comment: > **W3: Comparison with task vector and representation fine-tuning methods.**
We want to first clarify that our method is **fundamentally different from representation learning** in WMDP. The basic idea of WMDP is to destroy the model knowledge on forget data by minimizing the distance between the model representation and a random vector. By contrast, our method does not destroy the representation but only offsets the forget knowledge in the output logits with an assistant model. In fact, the principle of destroying forget knowledge is not a new idea. [1] also shares a similar principle that destroys the model’s representations on forget data by minimizing the KL divergence of its output distribution with a distribution where the ground-truth token’s probability is manually reduced. Due to time constraints, we cannot do more experiments to compare with the WMDP method. However, we happen to have made the comparison to [1] in our response to other reviewers. Results in Table 1 of the attached PDF show that this method (DI) has worse forget quality and model utility than our method.
Second, the task vector method is indeed closer to our method. However, **their rationale for unlearning is different from ours**. They find an unlearning direction in model parameter space, whereas we offset the unlearning knowledge in the output logits. Their method may resolve challenge #1 in our paper, but it cannot resolve challenge #2, because simply negating the task vector cannot guarantee that the unseen retain knowledge is not affected. Moreover, most existing works utilizing task vectors only demonstrate their effectiveness in modifying abstract model behaviors (e.g., detoxification and debiasing), and few works successfully apply them to unlearning fine-grained factual knowledge [2-4].
To further verify our claims, we compare with the task vector method on TOFU-10%, where we experiment with a wide range of weights for the added vector. Results in the following table show that this method fails to achieve a high forget quality and model utility at the same time, and magnifying the added vector continuously compromises model utility, e.g., model utility drops to 0.36 when the weight is -1.8.
| TOFU-10% | Forget quality | Model utility |
|------------------|----------------|---------------|
| Target LLM | 2e-19 | 0.62 |
| Retain LLM | 1 | 0.62 |
| ULD | 0.52 | 0.62 |
| Task vector -0.2 | 5.3e-19 | 0.6 |
| Task vector -0.4 | 1e-15 | 0.58 |
| Task vector -0.6 | 8.1e-8 | 0.56 |
| Task vector -0.8 | 1.2e-5 | 0.52 |
| Task vector -1.2 | 3.3e-6 | 0.48 |
| Task vector -1.4 | 1.05e-3 | 0.44 |
| Task vector -1.6 | 0.17 | 0.39 |
| Task vector -1.8 | 0.24 | 0.36 |
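For reference, the task-vector baseline in the table above can be sketched with generic task arithmetic on toy parameters (the parameter layout and numbers are illustrative assumptions, not the actual models):

```python
import numpy as np

def apply_negated_task_vector(base, forget_finetuned, weight):
    """Task-vector unlearning sketch: the 'task vector' is the parameter
    difference induced by finetuning on the forget data; subtracting a
    scaled copy of it (weight plays the role of the magnitudes behind the
    -0.2 ... -1.8 settings in the table) is meant to remove that knowledge
    from the base model."""
    return {name: base[name] - weight * (forget_finetuned[name] - base[name])
            for name in base}

base = {"w": np.array([1.0, 2.0])}
forget_finetuned = {"w": np.array([1.5, 2.5])}  # after finetuning on forget data
edited = apply_negated_task_vector(base, forget_finetuned, 0.4)
print(edited["w"])  # [0.8 1.8]
```

The table illustrates the trade-off this single scalar imposes: larger weights remove more forget knowledge but shift every parameter in the model, which is why model utility degrades as the weight grows.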
We would love to provide more complete experiments. However, due to time constraints, this is the best we can provide. We are confident that our response addresses your concerns, but if it does not, we would greatly appreciate a timely discussion, since the deadline is approaching. Thank you very much for your time!
[1] Dong et al., Unmemorization in large language models via self-distillation and deliberate imagination
[2] Zhang et al., Composing Parameter-Efficient Modules with Arithmetic Operations.
[3] Dige et al., Mitigating Social Biases in Language Models through Unlearning.
[4] Liu et al., Towards Safer Large Language Models through Machine Unlearning.
---
Rebuttal 3:
Comment: Dear Reviewer F8Xo,
As the discussion period is about to end, we wanted to check if our response has sufficiently addressed your concerns. Although your review was submitted close to the deadline, we made our best efforts to promptly provide a detailed response with many additional experiments.
We would greatly appreciate it if you could review our response and consider re-evaluating our paper based on the rebuttal. | Rebuttal 1:
Rebuttal: We would like to thank all ACs and reviewers for handling our submission. We value the acknowledgement and insightful suggestions they made to our paper.
We are pleased to see that all reviewers acknowledged various aspects of our paper:
* Novel and creative method (Reviewer Y4ow, uFqn)
* Extensive experiment and superior performance compared to baselines (Reviewer Y4ow, uFqn)
* Simple and straightforward method (Reviewer 6JsY)
* Clear writing and easy to follow (Reviewer Y4ow, 6JsY, uFqn)
Regarding the questions raised by the reviewers, we include additional experiment results in the attached PDF due to the rebuttal length limit. Please read through our separate rebuttals for detailed responses.
We look forward to more discussion, and we are happy to address any further follow-up questions.
Pdf: /pdf/9fc775808b785830cbf2e31a035130506a4aaac5.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Improving Equivariant Model Training via Constraint Relaxation | Accept (poster) | Summary: The paper proposes to relax the equivariance constraints in equivariant networks by adding a non-equivariant residual to the network weights. The methodology can be applied to different equivariant architectures, e.g., Vector Neurons, SE(3)-equivariant GNNs, and Equiformers. Besides these strictly equivariant networks, the proposed framework can also extend to approximately equivariant networks by modulating the strength of the unconstrained network weights. Experiments suggest moderate performance increases on a variety of tasks and equivariant architectures when the proposed framework is applied.
Strengths: The paper addresses an important problem: overcoming the optimization challenges for equivariant neural networks. The methodology is clearly motivated and easy to follow. The proposed framework of adding an unconstrained component in the network weights seems general enough to be implemented on various (though not all) equivariant architectures, which the authors have demonstrated with quite a few examples.
Weaknesses: * Formatting: the paper is visually difficult to read because of the incorrect use of the parentheses in citations.
* Theoretical contribution: as far as I understand, the motivation for relaxing the equivariance constraints is (i) the equivariant networks can be difficult to optimize, or (ii) the data have imperfect symmetry. The paper does not include any theoretical evidence of how the proposed framework of adding unconstrained weights can be helpful in these scenarios.
* Specifically, for (i), the paper did not point out what could be the specific challenges during the optimization of equivariant networks, compared to unconstrained optimization.
* And for (ii), previous works have shown the error bound of approximately equivariant networks on imperfectly symmetric data (Wang et al 2022) and proposed how to find the symmetry-breaking factors (Wang et al 2023). Compared to these works, I feel this paper didn't provide enough analysis e.g. of equivariance error of the proposed network, or how to interpret the learning result and possibly identify the imperfect symmetry in data.
Also, many of the results in the paper are already well-known, e.g. the equation about the Lie derivative. Several works have already proposed to use the Lie derivative as a regularization to encourage equivariance, e.g. Otto et al 2023.
* Significance of experimental results: in the experiments, the proposed method only increases performance by a little. Also, in Figure 2, it seems that some models have not converged after 200 epochs. It would be better to train for more epochs and include the full results. In Table 1, the difference between Equiformer and your method is very small. It's hard to verify the significance of this result without looking at the error bars.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. The scheduling choice of $\theta$ in Section 3.2 seems arbitrary. Have you tried other scheduling choices, e.g. initializing with zero and constantly increasing it? What is the intuition for the current choice?
2. How sensitive is the model to the regularization coefficient $\lambda_\text{reg}$?
3. L193: "During inference, we evaluate only on the equivariant part of the model." It would be interesting to also see the result for the full model (i.e. including W). I wonder how equivariant and how large W would be under the current regularization. I'm asking because approximately equivariant networks have proved to outperform strictly equivariant networks on certain tasks with imperfect symmetries (Wang et al 2022). As you have both the non-equivariant network and the equivariant one, I'm curious about the comparison between them.
4. Is it possible to have different weighing coefficients for each network layer in the Lie derivative regularization and projection error regularization? As these errors can accumulate after passing through multiple layers, I think it is intuitively reasonable to have larger weights for the first few layers.
### References
* Wang, Rui, Robin Walters, and Rose Yu. "Approximately equivariant networks for imperfectly symmetric dynamics." International Conference on Machine Learning. PMLR, 2022.
* Wang, Rui, Robin Walters, and Tess E. Smidt. "Relaxed Octahedral Group Convolution for Learning Symmetry Breaking in 3D Physical Systems." arXiv preprint arXiv:2310.02299 (2023).
* Otto, Samuel E., et al. "A unified framework to enforce, discover, and promote symmetry in machine learning." arXiv preprint arXiv:2311.00212 (2023).
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors have discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive comments as well as raising many points for clarification. We try to address the points raised below.
## Citation Format
We apologize for the incorrect formatting of citations and appreciate the reviewer for pointing it out. We will correct this in the final version of the paper.
## Theoretical contributions
Unlike other works on relaxed equivariance, here we specifically focus on the setting where the training data distribution and the model don't have a mismatch in terms of symmetry. As a result, the performance improvements come from the fact that the proposed relaxation process can ease the optimization of equivariant networks. As we also noted with other reviewers, among equivariant NN practitioners it is now a somewhat common observation that equivariant NNs can be harder to optimize than their non-equivariant counterparts [1][2][3]. However, the question itself remains unexplored. We take a step towards examining this question in more detail: we believe that identifying processes that can provide empirical improvements in the optimization of such networks is a reasonable standalone contribution that can be valuable to the community.
However, we have been working on theoretical models that characterize an optimization-generalization trade-off, which could perhaps provide some rationale for why equivariant models might be harder to optimize. We leave a full treatment of this to a future paper, since the situation can get quite involved.
## Significance of experimental results
We thank the reviewer for the feedback regarding the result shown in Figure 2 of the paper. We followed the suggestion and let the models train beyond the limit of 200 epochs, and we show the results in Figure 1 of the attached rebuttal PDF. We see that the additional epochs indeed allow the models to converge, increasing their performance in some cases. It is important to note that while we extended the training epochs, we kept the scheduling of $\theta$ the same, meaning that $\theta$ becomes equal to zero from epoch 200 onwards. In the original submission we chose to follow the exact setup used by the baseline method (which used 200 epochs) to showcase how our method can provide improvements in performance without the need to re-tune all of the training hyperparameters. However, we appreciate the reviewer's comment and will include the updated figure in the final version of the paper.
Regarding the error bars to quantify the significance of the results for the Equiformer experiment: due to the high computational cost of training a single equiformer model, there is a significant cost involved in providing detailed error bars. The original paper (Liao and Smidt 2023) did not include error bars in their reported results. However, we are actively training more models and we can provide the resulting error bars in the final version of the paper.
## Choice of $\theta$ scheduling.
We thank the reviewer for the comment. The main constraint for the scheduling of $\theta$ is that we want it to be equal to zero at the end of training so that the final model is equivariant. As a result, the suggested choice of constantly increasing $\theta$ does not satisfy that constraint. Additionally, we observed that a warm-up period at the beginning of training with a small $\theta$ provides a significant improvement in the training process. These two observations informed our choice of $\theta$ scheduling, where we linearly increase it for the first half of training and then linearly decrease it. Following the reviewer's suggestion, we will add these observations to the appendix of the paper.
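For concreteness, the schedule described above can be sketched as follows (the peak value `theta_max` and the epoch counts are placeholders, not the tuned values from our experiments):

```python
def theta_schedule(epoch, total_epochs, theta_max=1.0):
    """Triangular schedule: linearly increase theta to theta_max at the
    midpoint of training, then linearly decrease it back to zero, so the
    final (projected) model is exactly equivariant."""
    mid = total_epochs / 2
    if epoch <= mid:
        return theta_max * epoch / mid
    # Clamp at zero so theta stays zero if training runs past total_epochs.
    return theta_max * max(0.0, (total_epochs - epoch) / (total_epochs - mid))
```

For example, with 200 total epochs the schedule is zero at epoch 0, peaks at epoch 100, and returns to zero at epoch 200 and beyond, matching the setting where $\theta$ remains zero when training is extended past epoch 200.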
## Sensitivity to the regularization coefficient.
In Figure 2 in the attached rebuttal PDF we show the performance of the VN-PointNet model for different values of the regularization parameter. This experiment was part of a hyperparameter search using cross-validation on an 80%-20% split of the original training set. As a result, the documented performance represents models trained on only 80% of the training dataset and evaluated on the other 20%.
## Additional Details
**Regarding evaluation on unprojected relaxed model:**
We appreciate the reviewer's suggestion to add a comparison between the relaxed non-equivariant model and the projected equivariant model. Figure 3 (included in the attached rebuttal PDF) shows a comparison between a model trained with our proposed method and a relaxed model before and after we project it onto the equivariant space. For the latter model, the $\theta$ parameter is kept constant and the equivariance error is controlled only through the regularization term. We can observe that, although the performance of the relaxed model before projection is close to the results achieved by our method, controlling the relaxation through our proposed annealing of $\theta$ benefits performance. We will add this comparison in the final version of the paper.
**Regarding the use of different weighting in the Lie derivative regularization:** It is true that a more general approach would use different regularization weights for each layer of the network. However, the use of different weights per layer depends heavily on the individual architecture and would introduce additional complexity to our method. We therefore chose to use the same weight for all layers to make our method simpler and more easily applicable to different tasks, independently of the specific architecture used.
**References used in this response**
[1] Equiformer: Equivariant Graph Attention Transformer for 3D Atomistic Graphs
Yi-Lun Liao, Tess Smidt, arXiv:2206.11990
[2] Discovering Symmetry Breaking in Physical Systems with Relaxed Group Convolution
Rui Wang, Elyssa Hofgard, Han Gao, Robin Walters, Tess E. Smidt, arXiv:2310.02299
[3] Clebsch-Gordan Nets: a Fully Fourier Space Spherical Convolutional Neural Network
Risi Kondor, Zhen Lin, Shubhendu Trivedi, arXiv:1806.09231
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. Some of my concerns have been addressed. E.g. the additional experiment in Figure 1 indeed shows the benefit of relaxing the equivariance constraint.
For my question 3, I was originally approaching it from the perspective of ``approximate symmetry in data'', where an approximately equivariant model (e.g. with a small $\theta$ in $\theta W$) would help. So I was thinking about training such a model by scheduling $\theta$ to be close to zero at the end, e.g. by still using your schedule in Section 3.2 but ending a few epochs earlier. However, since you focus on improving the optimization of a (strictly) equivariant model instead of specifying an approximately equivariant model that matches the data symmetry, I believe what I proposed was not the most relevant. Still, I appreciate your effort in providing the additional results in Figure 3.
My main concern, however, is still about the theoretical contributions. I definitely agree that equivariant models pose optimization challenges, and also tackling this problem itself is important. However, I'd expect more theoretical evidence on why your current method would work and what kind of optimization obstacles it could possibly overcome.
As also mentioned by Reviewer GoLr, there are other intuitive approaches to your goal, e.g. simply using a non-equivariant model and including the Lie derivative regularization. The amount of symmetry violation can also be controlled by the regularization coefficient $\lambda_{\text{reg}}$, and perhaps you can do a similar scheduling procedure for $\lambda_{\text{reg}}$ as for $\theta$. I agree that this would be a less explicit control of the level of relaxation. But since there are fewer components and loss terms in the model, it's hard to say exactly which one would be better. Due to the lack of theoretical analysis, I am unconvinced that the current method is a better way of addressing optimization challenges in equivariant networks, among the many possible approaches.
---
Rebuttal 2:
Title: Response to Reviewer's b5uM comment
Comment: Thank you for the response. We appreciate your comment and engagement.
## Regarding the suggestion of using a non-equivariant model and including a Lie derivative regularization:
Optimizing a non-equivariant model with a Lie derivative regularization, even when scheduling of $\lambda_{reg}$ is performed, does not guarantee that the final model will be exactly equivariant, i.e., that the Lie derivative will be zero for all possible inputs. Thus we believe that if we want to learn a model that is exactly equivariant, which is the focus of this work, a projection operation that maps the model to the equivariant space is required. While there are multiple works on training relaxed equivariant networks, they do not consider this projection step. In our work, we described a projection operation that is simple, so it can be easily incorporated into a typical training process, and we provided experimental evidence showcasing how it benefits the training of equivariant networks.
## Regarding the contribution of this work
We recognize that a theoretical analysis (or motivation) of this phenomenon would be an important contribution, and it is a research direction we are interested in pursuing in the future. In general, we think that a first-principles theoretical approach to better optimization could benefit the community, since the right language for this problem (in the equivariant setting) is also missing. Nevertheless, we believe that providing a simple training procedure that improves the performance of a wide range of equivariant networks is an important standalone contribution to the community. As we discussed in our rebuttal responses, previous works mainly showcase the benefits of learning relaxed equivariant networks and do not consider the case we focus on, where we want to return to the space of exactly equivariant networks. As a result, we believe that our work provides a novel perspective: even when a practitioner requires an exactly equivariant network, it can still be beneficial to relax the equivariance constraint during training and project back to the space of equivariant networks during inference. Because it has received little attention, the problem space of improving the optimization of equivariant networks remains unexplored, and our work can be seen as a first step toward exploring it.
As for the other suggestion by the reviewer, we are happy to run experiments with it and include them in the appendix if the reviewer thinks that would make the paper more comprehensive.
---
Rebuttal Comment 2.1:
Comment: Thank you for the response. I appreciate the authors' efforts to provide additional experiments and clarifications toward the paper's objective and contribution. The lack of theoretical analysis is still a concern to me, but I agree that the method proposed in this paper is indeed an important step toward understanding the optimization challenges in equivariant networks. I will raise my score to 5. | Summary: The paper proposes relaxing the equivariance constraint on an equivariant network during training. This is done by adding free weights to equivariant linear layers but setting the free weights to zero after training. Further, two regularizations are introduced to stabilize the training: a Lie derivative term encouraging the free weights to be close to equivariant and a term encouraging the influence of the free weights to be low compared to the equivariant weights. The approach is evaluated on several equivariant tasks, showing improved performance compared to the baseline of non-relaxed optimization.
Strengths: 1. The paper provides a solid contribution to the understudied topic of optimizing equivariant networks. As far as I am aware, the idea has not been studied in the literature before.
2. The presented experiments show a small but consistent benefit using the proposed approach.
Weaknesses: 1. There is no theoretical motivation for why the proposed approach should work.
2. The optimization will be heavier using the proposed approach since a large number of additional parameters are introduced. See also Question 2 below.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The proposed parameterization of equivariant layers during training is $f(x) = f_e(x) + \theta W x$ where $f_e$ is equivariant. $W$ is also encouraged to be close to equivariant through Lie derivative regularization. Would it be possible to remove $f_e$ and $\theta$ to parameterize $f(x)=W x$ as a non-equivariant layer that is regularized by the Lie derivative loss? The projection to the equivariant subspace at the end could be group averaging: $W\mapsto \int_{g\in G} \rho(g^{-1}) W \rho(g) \mathrm{d}g$. Is there something that speaks against such an approach?
2. What is the introduced overhead during training? In terms of time and memory costs.
3. What is the performance of the obtained trained network without projecting to the equivariant subnetwork? I.e. is the approximately equivariant network better than the equivariant?
Minor:
- Line 243 "table 2" -> "Figure 2"
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive assessment of our work. For the questions you raise, we attempt to address them below. Please let us know if we can provide further clarifications.
## Theoretical motivation
Within the equivariant NNs community, especially amongst practitioners using such networks in scientific applications, it is a fairly common observation that such networks can be harder to optimize than their non-equivariant counterparts [1][2][3]. We wanted to take a step towards exploring this question and its attendant space, which we believe is itself a worthwhile contribution to the community. We have started to form some theoretical justifications for how and why such approaches should work or be modified, and we hope to develop this theory in future work. Generally, working out an optimization-generalization trade-off is hard (compared, say, to working out a trade-off between generalization and approximation). Here we would like to work it out for equivariant networks versus non-equivariant ones.
## Regarding the additional Optimization Cost
Thank you for the question. As you correctly point out, the optimization cost of our method is higher than that of a baseline equivariant network due to the additional parameters. Nevertheless, due to the parallel nature of the added unconstrained component, the main overhead of the method is the additional memory required to store the added parameters. These additional parameters make the model similar in size to typical unconstrained non-equivariant models, which is expected since during training we optimize over a space larger than the constrained space of equivariant models. We would like to note that, contrary to other methods for approximately equivariant models, which require access to the additional parameters both in training and inference, our method requires additional computational resources only during training. As a result, after training is completed, our proposed method has no effect on inference time.
We provide a comparison between the time cost of our proposed training process and the baseline training of different equivariant models. Also to showcase the memory overhead we provide a comparison between the number of learnable parameters between a model trained with our method and an equivariant model trained with a standard training process:
| Model Type | Number of Parameters (Base Model) | Additional Parameters (Ours) | Time per Epoch (Base Model) | Time per Epoch (Ours) |
|------------|--------------------------------------|---------------------------------------|-----------------------------|--------------------------|
| PointNet | 1.9M | 6.4M | 75s | 80s |
| DGCNN | 1.8M | 6.2M | 148s | 154s |
| Equiformer | 3.4M | 10M | 52s | 57s |
We will add a discussion about the optimization cost of our method in the Appendix of the final version of the paper.
## Regarding Suggestion in Question 1
Thanks for the interesting question. An important part of our proposed method is the ability to control the level of the equivariance relaxation explicitly, in addition to the implicit control that comes from the regularization term. With our current parametrization, we control the level of relaxation through the value of $\theta$. The importance of this control and of the annealing of $\theta$ can be seen in Figure 2 (of the submitted paper): when only the Lie derivative regularization is applied and the value of $\theta$ is kept constant, the performance of the method decreases.
We think that with the group averaging approach it could be harder to have explicit control over the level of relaxation of the equivariance constraint during training. As a result, when projecting back to the equivariant case, the performance gap between the relaxed and projected model might be large, even with the Lie derivative regularizer.
Additionally, for continuous groups the group averaging operation can only be performed approximately, since we need to sample group elements to approximate the integral.
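For intuition, the sampled group average $W \mapsto \int_{g\in G} \rho(g^{-1}) W \rho(g)\,\mathrm{d}g$ the reviewer mentions can be sketched for $SO(2)$ as follows (function names are illustrative; in this low-dimensional case equally spaced angles happen to make the quadrature exact, which is not true for general groups):

```python
import numpy as np

def rot(a):
    # 2x2 rotation matrix rho(g) for the group SO(2).
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]])

def group_average(W, n=16):
    # Approximates W -> \int rho(g^{-1}) W rho(g) dg by averaging over n
    # equally spaced rotations.  For this 2x2 case the integrand is
    # band-limited in the angle, so uniform quadrature is exact up to
    # floating-point error; for general continuous groups only an
    # approximation is obtained, as noted in the response above.
    angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.mean([rot(-a) @ W @ rot(a) for a in angles], axis=0)

W = np.array([[1.0, 2.0], [3.0, 4.0]])
W_avg = group_average(W)
# W_avg commutes with every rotation, i.e. x -> W_avg @ x is SO(2)-equivariant:
R = rot(0.7)
commutator_norm = np.linalg.norm(R @ W_avg - W_avg @ R)
```

The averaged matrix lands in the commutant of $SO(2)$ (spanned by the identity and the 90-degree rotation), which illustrates why this projection leaves no explicit knob for the level of relaxation during training.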
## Performance of the Network without the Projection
In Figure 3 of the rebuttal PDF we added a comparison between the performance of a model trained with our method and the performance of a relaxed equivariant model before and after we project it to the equivariant space. For the relaxed model that we compare with, the $\theta$ parameter is kept constant and the equivariance error is controlled only by the regularizer. We observe that our method outperforms the relaxed equivariant model with constant $\theta$ even before we project the latter into the equivariant space.
## Typos
Thank you for pointing this out. We will correct it in the paper.
**References used in this response**
[1] Equiformer: Equivariant Graph Attention Transformer for 3D Atomistic Graphs
Yi-Lun Liao, Tess Smidt, arXiv:2206.11990
[2] Discovering Symmetry Breaking in Physical Systems with Relaxed Group Convolution
Rui Wang, Elyssa Hofgard, Han Gao, Robin Walters, Tess E. Smidt, arXiv:2310.02299
[3] Clebsch-Gordan Nets: a Fully Fourier Space Spherical Convolutional Neural Network
Risi Kondor, Zhen Lin, Shubhendu Trivedi, arXiv:1806.09231
---
Rebuttal Comment 1.1:
Title: -
Comment: I acknowledge that I have read the reviews and rebuttals. The authors have successfully answered the most important critique points, so I lean towards keeping my score.
---
Reply to Comment 1.1.1:
Title: Re
Comment: Thank you for your comment and for engaging with our response. We are glad that our responses have been clarifying. | Summary: The work proposes a method for improving generalization by relaxing hard equivariance and minimizing the equivariance error as additional regularizer during training.
Strengths: Symmetries play an important role in machine learning and deep learning specifically. There has been recent attention to relaxed forms of equivariance, making it a relevant topic. The paper is well-written.
Weaknesses: There is a lack of attribution to related work, which results in a false sense of novelty. Although the paper does cite many papers on (relaxed) equivariance, it does not always give proper attribution to their contributions. Especially, since several cited papers in the related work section have proposed forms of relaxed equivariance and even minimizing such “relaxation errors” as regularization objective - either in their main method or as a baseline. Yet, the paper claims that “ solutions don’t directly focus on the optimization process itself”. This gives a false impression that regularizing the amount of equivariance in the loss is novel, while it is not. As such, to me it is not entirely clear what the contribution of this paper is.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Projecting back to equivariance models during testing. The first contribution mentions “projecting back to the space of equivariance models during testing”. where is this method? To my understanding, the model is not actually projected, but the error in the projection is rather minimized (but not zero). I could have misunderstood this aspect?
2. Regularization term for deep neural networks. In line 165, a regularization term is proposed to encourage equivariance solutions. It seems this term is only for a single layer. For multiple layers, how are the relative importances between layers chosen? Uniform? Also, it is not clear to me how the overall regularization strength \lambda_reg should be chosen. Cross-validation? Similarly for \theta.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 1
Limitations: The main contribution of the paper seems to be adding a regularization term that penalizes the ‘equivariance error’. Several works have proposed similar losses and minimizing such regularizers in the training objective. The paper lacks comparison or discussion between different choices of such regularizers, yet alone an empirical comparison. To me it is not clear what the main contribution of this paper is?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for sharing constructive feedback. We agree that we can improve attribution to related work to emphasize the differences from our work. It is certainly true that there are many works on relaxed equivariance, some of which also employ regularization terms that penalize relaxation errors. However, these works tend to assume that the data itself has some (possibly) relaxed symmetry; the regularization term can then be seen as trying to match the symmetry encoded in the model to the symmetry of the data itself. In our case, the main difference (although it may seem subtle) is that we are not looking to correct for model misspecification, although we include experiments for approximately equivariant NNs too. We make the case that even if we assume the model is correctly specified, relaxing the equivariance constraint during optimization and then projecting back to the equivariant space can itself help improve performance. Within the equivariant NNs community it is a common observation that equivariant NNs are harder to optimize than their non-equivariant counterparts [1][2][3]. We take a step towards examining this question in more detail, as we believe this particular question has not been specifically explored.
With the above context in mind, we expand on each of your points below.
## Paper Contribution compared to related work
As we said, while there are multiple works on different forms of relaxed equivariance, some of which also introduce regularization by minimizing the equivariance error, the main difference from our work is that they always remain in the space of relaxed equivariant models, and their regularization addresses model misspecification. We believe that the contribution of our work lies in projecting the models back to the equivariant space, which, contrary to previous works, guarantees that the solution has zero (or fixed) equivariance error. Thus, we are not aiming to address model misspecification. We will improve the attribution to existing work to emphasize this difference, and to highlight the observation that optimizing equivariant networks can be harder than optimizing their non-equivariant counterparts.
## Projecting back to equivariance models during test
Thank you for your comment. We believe there might be a misunderstanding regarding the projection back to the equivariant space during testing. In Section 3 we define the approximate equivariant linear layer as:
$$f(x)=f\_e(x)+\theta Wx$$
where $f\_e(x)$ is an equivariant linear layer and $\theta Wx$ is an unconstrained term. As we mentioned in lines 142-143, we can project this linear layer to be exactly equivariant by setting $\theta=0$. In that case, only the equivariant part of the layer is active, $f(x)=f\_e(x)$, which guarantees that the layer, and as a result the overall model, is equivariant and has exactly zero equivariance error. During inference, we set $\theta=0$, resulting in an exactly equivariant model. To improve clarity, in addition to the sentence in lines 142-143, we have added an explicit note that the operation of setting $\theta=0$ is what we refer to as projection in the rest of the paper.
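As a concrete illustration of this parameterization and projection, here is a minimal numerical sketch (the class name, shapes, and the choice of $f_e$ as a scalar multiple of the identity are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

class RelaxedLinear:
    """Sketch of the relaxed layer f(x) = f_e(x) + theta * W x."""

    def __init__(self, dim, theta=1.0, seed=0):
        rng = np.random.default_rng(seed)
        # W_e stands in for the weights of the equivariant map f_e; here a
        # scalar multiple of the identity, which commutes with every
        # rotation and is therefore exactly equivariant.
        self.W_e = 0.5 * np.eye(dim)
        # W is the unconstrained (non-equivariant) weight matrix.
        self.W = rng.standard_normal((dim, dim))
        self.theta = theta

    def __call__(self, x):
        return self.W_e @ x + self.theta * (self.W @ x)

    def project(self):
        # Projection back to the equivariant space: set theta = 0 so that
        # only the equivariant part f_e remains active.
        self.theta = 0.0

layer = RelaxedLinear(dim=3, theta=0.1)
x = np.array([1.0, 2.0, 3.0])
y_relaxed = layer(x)
layer.project()
y_equivariant = layer(x)  # now exactly W_e @ x
```

Setting $\theta=0$ removes the unconstrained term entirely, so the projected layer is exactly equivariant rather than approximately so.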
## Regularization Term for Multiple Layers
As we show in the overall training objective (Section 3.3, line 189) the regularization terms, derived for each individual layer, are added in the loss function with the same weight $\lambda\_{reg}$.
While it is possible to introduce different weights for the different layers, these weights will heavily depend on the specific network architecture and will increase the complexity of the method. In order to keep the method simple and easily applicable to different architectures we choose to simplify the solution and use the same weight for all layers.
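The shared-coefficient objective can be sketched as follows (a hedged illustration for a matrix Lie group; the function names and the commutator form of the penalty are assumptions standing in for the paper's Lie derivative term):

```python
import numpy as np

def equivariance_penalty(W, generators):
    # For a matrix Lie group, equivariance of the linear map x -> W x
    # means W commutes with rho(g); infinitesimally, [A, W] = 0 for every
    # Lie algebra generator A, so we penalize the squared commutator norm.
    return sum(np.sum((A @ W - W @ A) ** 2) for A in generators)

def total_loss(task_loss, layer_weights, generators, lam_reg=0.1):
    # The same coefficient lam_reg is shared across all layers, as
    # described in the response above.
    return task_loss + lam_reg * sum(
        equivariance_penalty(W, generators) for W in layer_weights)

# SO(2) has a single generator J; any multiple of the identity commutes
# with it, so an identity layer incurs zero penalty.
J = np.array([[0.0, -1.0], [1.0, 0.0]])
loss_eq = total_loss(1.0, [np.eye(2)], [J])  # penalty term is zero
loss_ne = total_loss(1.0, [np.array([[1.0, 0.0], [0.0, -1.0]])], [J])
```

A per-layer coefficient would simply replace `lam_reg` with a list of weights, which is the architecture-dependent complexity the response chooses to avoid.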
## Choice of hyperparameters
For the choice of the hyperparameter $\lambda\_{reg}$ we performed a typical grid search using cross validation with an 80\%-20\% split of the training set into training and validation sets. We found this value to be relatively robust across tasks, so we performed the search on the task of point cloud classification and used the resulting value in all other tasks. We include a figure (Figure 2) in the attached rebuttal PDF showing how the value of $\lambda\_{reg}$ affects the performance of the method when VN-PointNet is the baseline model. We will add these details and the additional figure to the Appendix of the final version of the paper. We can also include results from tuning the hyperparameters individually in the appendix if the reviewer thinks it would be helpful. We note, however, that the accuracy shown in the figure is on the validation set when the model is trained on 80\% of the training set; the results shown in the paper are obtained by training on the complete training set (after the hyperparameter $\lambda\_{reg}$ has been chosen).
Regarding the parameter $\theta$: since we perform the scheduling described in Section 3.2, no hyperparameter search is required. The main constraint on the schedule is that $\theta$ must reach zero at the end of training. As shown in the ablation in Section 4.1, if we do not perform $\theta$ annealing we observe a deterioration in performance.
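A minimal sketch of such a schedule (purely illustrative; the paper's actual schedule in Section 3.2 may have a different shape, the only stated requirement being that $\theta$ reaches exactly zero by the end of training):

```python
def theta_schedule(epoch, total_epochs, theta_0=1.0):
    # Hypothetical linear annealing of the relaxation weight theta.
    # theta starts at theta_0 and decreases to exactly zero at the final
    # epoch, so the model ends training in the equivariant space.
    return theta_0 * max(0.0, 1.0 - epoch / (total_epochs - 1))
```

Any monotone schedule that hits zero would satisfy the stated constraint; the linear form here is just the simplest choice.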
**References used in this response**
[1] Equiformer: Equivariant Graph Attention Transformer for 3D Atomistic Graphs
Yi-Lun Liao, Tess Smidt, arXiv:2206.11990
[2] Discovering Symmetry Breaking in Physical Systems with Relaxed Group Convolution
Rui Wang, Elyssa Hofgard, Han Gao, Robin Walters, Tess E. Smidt, arXiv:2310.02299
[3] Clebsch-Gordan Nets: a Fully Fourier Space Spherical Convolutional Neural Network
Risi Kondor, Zhen Lin, Shubhendu Trivedi, arXiv:1806.09231
---
Rebuttal 2:
Comment: Thank you for the response. I appreciate the authors' efforts to provide additional explanations. To me it seems that the main contribution of the work is the projecting-back step. Although most works do not consider such a projection, I remain concerned about the lack of sufficient contribution relative to prior work (for instance, relating the used relaxation to it). Further, some experiments demonstrate that some training runs result in better test loss, but I am not sure whether the provided experiments give sufficient evidence for the claims made, namely that introducing some relaxation during training but removing it afterwards through a back projection is beneficial. Especially since the paper lacks a theoretical analysis.
Regarding the claimed contributions: the used relaxation seems to me exactly equivalent to a residual pathway prior (RPP) [1]. Although this work is mentioned, its relation to the used relaxation is not. As such, I also argue this is not a particularly novel contribution on its own, as claimed. Similarly, the authors cite a range of papers that introduce parameterizations of relaxed equivariance, but do not explain how these relaxations relate to the relaxation used in this paper. It is true that these works often do not explicitly consider projecting back, as also argued in the rebuttal, but it is not clear to me why this would prohibit a comparison between these relaxations with an explanation of differences and similarities. Moreover, since projecting back seems to be the primary contribution of this work, why not consider projecting back other common relaxations of equivariance?
Given the above, I lean towards keeping my current score.
[1] Finzi, M., Benton, G., and Wilson, A. G. Residual pathway priors for soft equivariance constraints. In Advances in Neural Information Processing Systems, volume 34, 2021.
---
Rebuttal Comment 2.1:
Title: Response to Reviewer's 3CAa comment
Comment: Thank you for the comments, we appreciate your engagement with our work and response.
## Regarding the attributions to prior work
We agree with the reviewer's suggestion that improving the discussion of previous works on relaxed equivariance would be useful. We will add a more detailed discussion along the lines stated in the rebuttal and the responses. In addition, we will further discuss the similarities between our proposed method and Residual Pathway Priors (RPP). While we agree with the reviewer that the mechanism used to perform our relaxation is similar to the one used in RPP, we would like to note that the process for updating the level of relaxation during training differs significantly between the two works, since their motivation and focus are different. Specifically, while RPP assumes only partial knowledge of the degree of equivariance for a given task and designs a training process that allows this knowledge to be updated given the data, we assume definitive knowledge of the symmetries of the given task and show that optimizing over a larger space of relaxed equivariant networks and projecting back to the equivariant space can help optimization. We will add the above discussion in Section 3 of our paper.
## Regarding the Contribution of this work
As we discussed in the rebuttal and was also mentioned by the reviewer, one of the main contributions of this work is the observation that even when we know the exact symmetries of the task it can still be beneficial to train over a larger space of relaxed equivariant models and project back to the original constrained equivariant space during inference. Since this observation is not discussed by previous works on relaxed equivariant networks, we believe that it constitutes a standalone contribution that can motivate further research on this unexplored area.
To support our claims we propose a simple training process that allows us to efficiently control the relaxation level of the equivariant constraint during training and project back to the equivariant space during inference. In Section 3 of our paper, we discuss the motivation for the specific choices used in our method, including the specific form of the relaxed equivariant linear layer. By evaluating our method on a diverse set of equivariant networks and tasks, we show that our proposed relaxation during training results in increased performance of equivariant networks compared to the performance achieved by the standard training of such networks solely on the equivariant space. We believe that our experimental evaluation presented in the paper, along with the additional results we added in the rebuttal after suggestions from the reviews, provides sufficient empirical evidence of a phenomenon, which can be beneficial for improving the optimization of equivariant networks and is not documented in previous literature. Therefore, we think that the step about *projection* (which seems very similar to works on relaxed equivariance) is only a way to get there and can serve as a baseline, but the main contribution can also be seen as considering how optimization of equivariant networks with fixed symmetries can be improved. So the contribution is also about exploring that problem space itself. Nevertheless, we are happy to incorporate additional suggested experiments that the reviewer believes can strengthen our claims. | Summary: Starting from the consideration that equivariant neural networks, though effective for tasks with known data symmetries, are hard to optimize and require careful hyperparameter tuning, this study proposes a framework to improve their optimization process. This is done by temporarily relaxing the equivariance constraint during training. 
By adding and progressively reducing a non-equivariance term in intermediate layers, the model explores a broader hypothesis space before converging to an equivariant solution.
Strengths: The paper is clearly written, the topic is significant and of interest to the community.
Weaknesses: Experimental results, despite confirming the theoretical considerations, do not compare with competitor methods (those reported in the Related Works section, e.g. Mondal et al. (2023), Basu et al. (2023b), etc.) or describe and analyze the additional computational costs (time/memory) of the proposed optimization procedure (again, compared to such alternative approaches).
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) What are the additional computational/memory costs of the proposed method? Is there a trade-off between the achieved increased optimization stability and exploiting the proposed procedure?
2) Could the authors give some examples where the proposed method is not applicable, i.e. the symmetry group is not a matrix Lie group or a discrete finite group? And do they believe that such settings are common or not, i.e. is the proposed method general enough to be applied to real-world scenarios?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive assessment of our paper and for the comments. Below we address all of the points the reviewer has raised.
## Comparison with works on Equivariant Adaptation/Fine-tuning of pre-trained models
Thank you for raising this point. While it is true that both our method and the works of Mondal et al. (2023) and Basu et al. (2023b) have a similar motivation, i.e. that the optimization of equivariant models can be more challenging than that of their non-equivariant counterparts, our paper addresses a different research question. Our paper focuses on improving the optimization of equivariant architectures themselves, while the works of Mondal et al. (2023) and Basu et al. (2023b) focus on techniques that bypass the need for training equivariant models; specifically, they construct equivariant functions from pre-trained non-equivariant models, e.g. by canonicalization (as in Mondal et al.).
We believe that the difference in focus between our work and that of Mondal et al. (2023) and Basu et al. (2023b) does not permit a straightforward and fair comparison. Nevertheless, we include a comparison on the task of point cloud classification (ModelNet40) below.
| Method | PointNet | DGCNN|
|----------|----------|----------|
| Mondal et al. (2023) | 66.3\% | 90.4\% |
| Basu et al. (2023b) | 74.9\% | 89.1\% |
| Original VNN | 66.4\% | 88.5\% |
| Ours | 74.5\% | 92.0\% |
Here it is important to note that in the case of Basu et al. (2023b), equivariance is achieved by averaging the results over multiple transformed inputs. As a result, during inference the model must perform multiple forward passes, which slows the method's inference time.
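A hedged sketch of this kind of test-time averaging (illustrative names, not the cited authors' implementation), which makes the multiple-forward-pass cost explicit:

```python
import numpy as np

def rot(a):
    # 2x2 rotation matrix for SO(2); a stand-in for the group action.
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]])

def symmetrized(f, x, n=8):
    # Test-time symmetrization in the spirit of averaging over transformed
    # inputs: evaluate the (non-equivariant) model f on n transformed
    # copies of x and average, requiring n forward passes per prediction.
    # Here f maps 2D points to a scalar; names are illustrative.
    angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.mean([f(rot(a) @ x) for a in angles])

x = np.array([1.0, 0.0])
avg_coord = symmetrized(lambda v: v[0], x)   # averages to ~0 by symmetry
avg_norm = symmetrized(np.linalg.norm, x)    # invariant, so unchanged
```

By contrast, the projected model in this paper runs a single forward pass at inference, which is the cost difference noted above.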
We will be happy to include this comparison in the final version (in the main paper or the appendix) if the reviewer believes that it will benefit the overall narrative of the paper.
## Additional Details on Computational and Memory costs of the proposed method
We provide a table of the computational overhead of our method compared to the baseline models. Additionally, to illustrate the overhead on the memory constraint we provide a comparison on the number of learnable parameters between our method and the baseline equivariant model. We would like to note that this overhead is only during the training of the models since during inference we remove the additional relaxation layers.
| Model Type | Number of Parameters (Base Model) | Additional Parameters (Ours) | Time per Epoch (Base Model) | Time per Epoch (Ours) |
|------------|--------------------------------------|---------------------------------------|-----------------------------|--------------------------|
| PointNet | 1.9M | 6.4M | 75s | 80s |
| DGCNN | 1.8M | 6.2M | 148s | 154s |
| Equiformer | 3.4M | 10M | 52s | 57s |
We will add these details in the appendix.
## Trade-off for optimization stability
Could the reviewer clarify further? We might be misunderstanding and would be happy to discuss. In general, however, we think that the optimization stability of our method depends on the projection error term, which can induce a trade-off. For perfectly equivariant models with no relaxation, we would expect optimization to be somewhat harder than in a setup with relaxed weights that do not have high projection error. In cases where the projection error is too high, the quality of optimization will decrease. Note that in our proposed method we can control the projection error by annealing the value of the $\theta$ parameter of Equation 2.
## Examples where the proposed method is not applicable
The approach will generally work for a variety of groups of real-world interest. As described, it works for compact Lie groups such as $S^1$, $SO(n)$, $O(n)$, and their finite subgroups like $Z/NZ$, as well as their quotients and products. The approach can be made to work for certain non-compact Lie groups that are reductive and for which we can write expressions for the bracket, such as the Lorentz group. It might be challenging for the approach to work for permutation groups.
---
Rebuttal Comment 1.1:
Comment: I acknowledge that I have read the reviews and rebuttals. Given that the authors have answered the most important questions I raised (and also clarified the trade-off between stability and performance), I lean towards keeping my score of acceptance.
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: Dear Reviewer,
Thank you for taking the time to examine our response. We are glad that our responses have addressed the points you raised. | Rebuttal 1:
Rebuttal: We thank all of the reviewers for their comments and constructive feedback on our paper. Here we would like to provide an overview of some of the main points of our individual responses to the reviewers and also take the opportunity to highlight the main contributions of our work:
In this work, we propose a novel training procedure for equivariant neural networks that relaxes the equivariance constraint during training and projects back to the space of equivariant models during inference. As we discuss in more detail in our responses to reviewers 3CAa, b5uM, the differentiating factor between our paper and previous works on relaxed equivariance is that while we optimize over a larger space of relaxed equivariant models, at the end of optimization we project back to the space of equivariant models. This process allows us to learn models that have increased performance compared to equivariant models trained with standard training, while at the same time having exactly zero equivariance error.
Following the suggestions of the reviewers, in the attached rebuttal PDF we provide additional ablations showing the effect of the individual components of our method on the overall performance (e.g. Lie derivative regularization, $\theta$ scheduling, projection to the equivariance space). Specifically:
- Figure 1 extends the results of the ablation study provided in the submitted paper, where it shows the performance of different versions of our method when different components are removed.
- Figure 2 shows the sensitivity of our method to the choice of the regularization coefficient $\lambda_{reg}$.
- Finally, in Figure 3 we provide a comparison between a model trained with our proposed method and a relaxed equivariant model trained without $\theta$ scheduling. In the case of the relaxed equivariant model with constant $\theta$, we show that its overall performance is worse than our method's even before we project it into the equivariant space.
We believe that improving the training of equivariant neural networks is an important research question that can benefit the community and that our work is a positive step in that direction. We appreciate all the comments raised by the reviewers as they helped us enhance the presentation of this work, and we try to address all of them in our individual responses.
Pdf: /pdf/d1acedc77b30f5ec641a5591c401234c04966e00.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SyllableLM: Learning Coarse Semantic Units for Speech Language Models | Reject | Summary: This paper proposes a two-stage speech language model with semantic tokens and acoustic tokens similar to AudioLM ([Borsos et al., 2022]).
- The semantic tokens come from a speech tokenizer that can group a variable number of frames into a single token. To train such a speech tokenizer,
1. This paper first takes inspiration from syllable-like structures uncovered in HuBERT, and produces an initial segmentation (Section 3.1).
1. An iterative process is then applied to improve the segmentation (Section 3.2).
1. Finally the tokens are obtained by clustering of the mean-pooled frame features (Section 4.1).
- The acoustic tokens are identical to the HuBERT-based tokens in ([Hassid et al., 2023]), referred to as "mHuBERT" in this paper.
Experiments with the proposed model demonstrate the following when compared to previous work,
- Better unsupervised syllable segmentation
- Lower speech reconstruction WER
- Better or competitive accuracy in speech language modelling tasks (sWUGGY, sBLIMP, tStoryCloze) with lower compute
- Better speech continuation quality
[Borsos et al., 2022]: https://arxiv.org/pdf/2209.03143 "AudioLM: a Language Modeling Approach to Audio Generation"
[Hassid et al., 2023]: https://proceedings.neurips.cc/paper_files/paper/2023/file/c859b99b5d717c9035e79d43dfd69435-Paper-Conference.pdf "Textually Pretrained Speech Language Models"
Strengths: - Originality: This paper proposes an original method for producing syllable-like segmentation of speech in an unsupervised manner.
- It uses conditional probabilities from a masked language model instead of feature similarity ([Peng et al., 2023]) to detect initial syllable boundaries.
- The use of an iterative process to further improve the segmentation quality is also original.
- Quality: This paper is well-motivated. The experiment design is sound. Ablation studies included in the experiments provide valuable insight to various modelling choices.
- Clarity: The experiment results are reported in an easy-to-interpret manner.
- Significance: The proposed model is a competitive speech language model with a lower inference computational cost.
[Peng et al., 2023]: https://arxiv.org/pdf/2305.11435 "Syllable Discovery and Cross-Lingual Generalization in a Visually Grounded, Self-Supervised Speech Model"
Weaknesses: I think the method and results in this paper would make a good paper for NeurIPS, however I cannot make a recommendation for acceptance because this paper needs substantial revision to improve its readability. A non-exhaustive list of issues making the paper hard to follow includes the following,
- References to items not yet introduced
- Lines 153-154, the phrase "our loss" makes it sound like a reference to the masked language model loss discussed in the previous sub-section, whereas in fact it is referring to Equation (3), a yet-to-be-introduced loss for SylBoost.
- Lines 159-162 give a very vague description of the "similarity matrix" and the "cut algorithm" which can only be known if the reader has already seen the subsequent Section 3.3.
- Starting at line 188, Section 4.2 makes repeated references to "mHuBERT". "mHuBERT" appears to be the name given to the acoustic tokens in ([Hassid et al., 2023]) by this paper (line 241). ([Hassid et al., 2023]) itself does not use this name, so an ordinary reader would not be able to tell what an "mHuBERT" model is when they work through Section 4.2.
- Confusing terminology
- "pretraining": This paper makes a liberal use of the term "pretraining" to the point it's very difficult to tell which is the model being "pretrained". For example,
- Line 113 mentions a "pretrained HuBERT teacher model", then line 119 says "during pretraining, the **student** model ...". The teacher and the student are presumably not trained at the same time, yet the use of "pretraining" in this context make it appear that the contrary is happening.
- Line 225 says "for all pretraining experiments". A reader will have to look really closely to see this means "training of the speech LM", not "pretraining HuBERT, etc".
- "Agglomeration" vs "SylBoost": This paper appears to use these two terms interchangeably. Agglomerative clustering is apparently also used (line 183). This makes it difficult for the reader to tell when "agglomeration" is mentioned, whether the authors intend to refer to SylBoost or just the clustering.
- Confusing equation
- The unnumbered equation between line 126 and line 127 defines the similarity matrix from MLM probabilities. It makes reference to
$Y_t$ without specifying which $t \in M$ is used to define $C_{r,c}$. As a result, after having read the paper 6 times over, I still do not know how to compute $C_{r,c}$.
- Writing style
- Overall the writing style of this paper is very wordy, imprecise, and disorganized. Often the same message could get through with far shorter sentences. Most of the paragraphs read like a dump of the author's stream of consciousness rather than a technical document intended for actual readers. For example,
- Lines 102-112 would be a lot easier to understand with formal notations and a concrete example.
- Lines 127-131 appear to be a mere repetition of the equation above, without any new information.
- Lines 242-255 contain a large amount of disorganized modelling details.
[Hassid et al., 2023]: https://proceedings.neurips.cc/paper_files/paper/2023/file/c859b99b5d717c9035e79d43dfd69435-Paper-Conference.pdf "Textually Pretrained Speech Language Models"
Technical Quality: 3
Clarity: 1
Questions for Authors: - Is LossPred really necessary?
- While Table 4 shows that LossPred is a better initialization strategy for SylBoost, would a cheaper initialization strategy (like feature similarity or even random) produce equally good segmentation given more iterations?
- Suppose the answer to the previous question is yes, would it make sense to run SylBoost but using the final activation of HuBERT instead of an intermediate layer (line 154)? The final activation may be better semantic features.
Confidence: 4
Soundness: 3
Presentation: 1
Contribution: 2
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to start by thanking kvPb for their clear and significant commitment to understanding our paper in detail. In our rebuttal below, we try to demonstrate that the presentation issues pointed out require only minor changes to fix.
**Regarding "Confusing equation":**
> * The unnumbered equation between line 126 and line 127 defines the similarity matrix from MLM probabilities. It makes reference to $Y_t$ without specifying which $t \in M$ is used to define $C_{r,c}$. As a result, after having read the paper 6 times over, I still do not know how to compute $C_{r,c}$.
There is a typo in the equation for $C_{r,c}$: The letter $t$ should be replaced with the letter $c$. We apologize and hope that correcting this typo eliminates any confusion about LossPred.
**Regarding "References to items not yet introduced":**
> * Lines 153-154, the phrase "our loss" makes it sound like a reference to the masked language model loss discussed in the previous sub-section, whereas in fact it is referring to Equation (3), a yet-to-be-introduced loss for SylBoost.
We will replace “our loss” with “our loss (Equation 3).”
> * Lines 159-162 give a very vague description of the "similarity matrix" and the "cut algorithm" which can only be known if the reader has already seen the subsequent Section 3.3.
We apologize that the sentence starting on line 159 is written awkwardly. It should instead read “This results in a matrix of pairwise frame distances where element $i,j$ represents the $L_2$ distance between the features at frame $i$ and frame $j$.”
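To make the corrected sentence concrete, the pairwise distance matrix can be computed as in the following sketch (a hypothetical standard-library helper for illustration, not our released implementation):

```python
import math

def pairwise_frame_distances(feats):
    """Build a T x T matrix of pairwise L2 distances.

    feats: a list of T per-frame feature vectors (lists of floats).
    Entry (i, j) is the L2 distance between frame i and frame j.
    """
    return [[math.dist(a, b) for b in feats] for a in feats]
```

The resulting matrix is symmetric with a zero diagonal, matching the kind of matrix the cut algorithm of Section 3.3 operates on.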
> * Starting at line 188, Section 4.2 makes repeated references to "mHuBERT". "mHuBERT" appears to be name given to the acoustic tokens in (Hassid et al., 2023) by this paper (line 241). (Hassid et al., 2023) itself does not use this name, so an ordinary reader would not be able to tell what an "mHuBERT" model is when they work through Section 4.2.
Agreed, and we will fix this confusion by replacing “mHuBERT” with “TWIST Tokenizer.” The name mHuBERT was taken from the [github repository](https://github.com/facebookresearch/textlesslib/tree/ba33d669d8284b4f7bfe81e7384e83ab799fe384/textless) published by [Hassid et al](https://proceedings.neurips.cc/paper_files/paper/2023/file/c859b99b5d717c9035e79d43dfd69435-Paper-Conference.pdf).
**Regarding "Confusing terminology":**
> * "pretraining": This paper makes a liberal use of the term "pretraining" to the point it's very difficult to tell which is the model being "pretrained". For example,
> * Line 113 mentions a "pretrained HuBERT teacher model", then line 119 says "during pretraining, the **student** model ...". The teacher and the student are presumably not trained at the same time, yet the use of "pretraining" in this context make it appear that the contrary is happening.
> * Line 225 says "for all pretraining experiments". A reader will have to look really closely to see this means "training of the speech LM", not "pretraining HuBERT, etc".
We will alter Line 225 to say “for all language model pretraining experiments” and in line 119 replace “pretraining” with “training.” We believe that lines 113-114: “We consider the setting of having a pretrained HuBERT teacher model and a HuBERT student model trained to predict the quantized contextualized representations generated by the teacher” make training order clear.
> * "Agglomeration" vs "SylBoost": This paper appears to use these two terms interchangeably. Agglomerative clustering is apparently also used (line 183). This makes it difficult for the reader to tell when "agglomeration" is mentioned, whether the authors intend to refer to SylBoost or just the clustering.
You are correct that “Agglomeration” and “SylBoost” are used interchangeably, and we will replace these instances of “Agglomeration” with “SylBoost.” We will consistently use the full phrase “K-Means and Agglomerative Clustering” when referring to clustering for discrete tokenization with K-Means.
**Answering "Is LossPred really necessary?"**
> * While Table 4 shows that LossPred is a better initialization strategy for SylBoost, would a cheaper initialization strategy (like feature similarity or even random) produce equally good segmentation given more iterations?
In rows 2 and 3 of Table 4, we demonstrate that the second iteration of SylBoost using feature similarity results in worse performance than the first iteration so we do believe that LossPred is necessary.
> * Suppose the answer to the previous question is yes, would it make sense to run SylBoost but using the final activation of HuBERT instead of an intermediate layer (line 154)? The final activation may be better semantic features.
Although the answer to the first question is no, [Pasad et al](https://aclanthology.org/2024.tacl-1.21/) [43] shows that the semantic features from mean-pooling across ground truth syllable boundaries perform best at Layer 9 and not the final layer activations.
Given that you state our method and results would make a good paper for NeurIPS, if you agree that these minor edits address your major concerns about the paper’s clarity, we kindly ask that you consider raising your score.
---
Rebuttal Comment 1.1:
Comment: Thank you for taking the time to address my review! I have read your rebuttal and will incorporate the new information in my final feedback. | Summary: This paper first introduces an algorithm named LossPred that generates syllable-level speech segmentation without any training or supervision. The algorithm works by analyzing the prediction loss of speech tokens under different mask positions.
With the initial boundaries proposed by LossPred, the paper proposes further training a pretrained HuBERT / data2vec2 model by minimizing the sum of squared distances between feature vectors of each token and the average of feature vectors within the corresponding segment. This process is called SylBoost, and it further improves syllabic segmentation performance and efficiency.
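For concreteness, my reading of the SylBoost objective (the sum of squared distances between each frame's features and the mean of the features within its segment) can be sketched as follows; the names and data layout here are hypothetical, not the authors' code:

```python
def sylboost_loss(feats, segments):
    """feats: list of per-frame feature vectors (lists of floats).
    segments: list of (start, end) half-open frame-index spans.
    Returns the sum over segments of squared deviations from the segment mean.
    """
    total = 0.0
    for start, end in segments:
        seg = feats[start:end]
        dim = len(seg[0])
        # Mean-pool the frame features within the segment.
        mean = [sum(f[d] for f in seg) / len(seg) for d in range(dim)]
        # Accumulate squared deviations from the segment mean.
        total += sum((f[d] - mean[d]) ** 2 for f in seg for d in range(dim))
    return total
```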
Finally, the paper proposes training a Generative Spoken Language Model (GSLM) with the speech tokens obtained from quantized SylBoost units. Compared to existing GSLMs trained on other discrete representations, SylBoost encodes speech into much shorter sequences, significantly boosting training and inference efficiency.
Strengths: 1. The proposed speech representation learning and unit discovery algorithms, LossPred and SylBoost, are novel. While the idea of improving computational efficiency through dynamic or fixed-rate downsampling of speech representation is not new, this paper appears to be the first to successfully apply dynamic-rate downsampled representations with a very low sampling rate of 5Hz to Generative Spoken Language Models (GSLMs).
2. The presentation of the paper is of high quality and clarity. The authors report extensive experimental results, which effectively demonstrate that the proposed method outperforms various state-of-the-art (SotA) methods.
3. The topic addressed in this paper is significant, as very low sampling rate speech representations can benefit various tasks, including speech understanding and generation.
Weaknesses: 1. As pointed out by the authors, the proposed LossPred and SylBoost methods seem to be restricted to speech representation learning. It might be difficult to apply these methods to music, singing voice, speech with noisy backgrounds.
2. LossPred is slow in evaluating the loss prediction matrix. Each sentence requires about 200 Transformer network evaluations.
3. LossPred is highly heuristic. There seems to be no theoretical guarantee that the HuBERT model combined with LossPred reveals syllabic boundaries instead of revealing only phoneme or word boundaries.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. The equation above line 127 is rather confusing. The description of the procedure for computing the loss prediction matrix is confusing in general.
2. In LossPred, $k$, the number of segments, is chosen to be proportional to the sequence length. What happens when a speaker is speaking very fast or very slow? It seems that $k$ should be determined by the number of syllables in the utterance.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their feedback. We hope to clarify any additional concerns they had below:
**Weaknesses**
> 1. As pointed out by the authors, the proposed LossPred and SylBoost methods seem to be restricted to speech representation learning. It might be difficult to apply these methods to music, singing voice, speech with noisy backgrounds.
We entirely agree and hope that future work can tackle these challenges.
> 2. LossPred is slow in evaluating the loss prediction matrix. Each sentence requires about 200 Transformer network evaluations.
This is entirely true. Fortunately, the forward passes can be batched and the first iteration of SylBoost works well with only 10% of LibriSpeech labeled by LossPred.
> 3. LossPred is highly heuristic. There seems to be no theoretical guarantee that the HuBERT model combined with LossPred reveals syllabic boundaries instead of revealing only phoneme or word boundaries.
We agree that there is no guarantee of the types of units that LossPred predicts. We think that an especially interesting case is how the two words “at a” in Figure 1 get mapped to a single cluster by SylBoost. We are hopeful that future work can inspect these phenomena with respect to the statistics of spoken language.
**Questions**
> 1. The equation above line 127 is rather confusing. The description of the procedure for computing the loss prediction matrix is confusing in general.
We sincerely apologize: there is a typo in the referenced equation, and the letter $t$ should be replaced with $c$ in that equation and in its description starting on line 128. We acknowledge that LossPred is a detailed algorithm, and we fully understand how this typo could have caused confusion.
> In LossPred, $k$, the number of segments, is chosen to be proportional to the sequence length. What happens when a speaker is speaking very fast or very slow? It seems that $k$ should be determined by the number of syllables in the utterance.
We would be interested in a future approach that could attempt to count the number of distinct regions generated by $C$ in LossPred; however, we found that using an empirical $k$ performs well and is compatible with prior cut algorithms such as that used by [Peng et al](https://arxiv.org/pdf/2305.11435).
We thank the reviewer again for their feedback.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for responding to all of my comments and questions. | Summary: This paper studies learning low bitrate speech units that preserves semantic information. As presented in the paper, the proposed approaches achieve SoTA performance on tasks like ASR and ZeroSpeech. The proposed approach also shows benefits in terms of compute resources — as claimed by the authors, 30x faster to train, and also benefits in terms of inference and transmission due to low bitrate.
Strengths: Overall, the proposed multistage approach — first using the HuBERT like model to extract syllable-like noisy segmentation, then bootstrapping pseudo-syllabic units iteratively makes sense to me. The proposed approach also shows clear benefits in terms of performance and efficiency.
Good performance: Compared to baseline approaches like SD-HuBERT, the proposed method achieves higher accuracy on syllable boundary detection and clustering, ASR, and also shows better continuation metrics, as shown in Table 7, for the generative spoken language modeling experiments. All of those evaluations positively demonstrate the strong association of the generated speech units with syllables, while achieving a lower bitrate than the baselines compared in the paper.
The authors also conducted ablation studies to further demonstrate a couple of design choices.
Efficiency: As claimed in the paper, the proposed technique is capable of achieving extremely low-bitrate compared to the counterpart speech units, while still being able to achieve good performance in a wide range of tasks, with the efficiency in both training and inference phases.
Weaknesses: Demonstrating efficiency: As efficiency is also one selling point of the paper, it would be great if the authors can demonstrate the training efficiency and low-bitrate benefits in a more comprehensive way, like visualizing the GPU training time vs Performance, and also bitrate vs unit quality for certain tasks.
Limited use cases: The proposed approach focuses on learning semantic units for speech applications. It’s unclear if the proposed methods can be applied to other important non-speech use cases like understanding acoustic environment, and understanding speaker’s identity and emotion.
Understanding Unit Quality: To demonstrate the unit quality for synthesizing the audio and for generation, should the author also compare with other related works (like [1] and [2]) in terms of reconstructing the original signal? Like in [1] (see Table 1), the authors compare the different approaches in terms of reconstruction performance using a couple of metrics like MEL, STFT and ViSQOL score, and also semantic task performance.
[1]: https://arxiv.org/abs/2405.00233
[2]: https://arxiv.org/abs/2306.06546
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses section
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Not aware of potential negative societal impacts
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback and for stating the proposed approach shows clear benefits in terms of performance and efficiency. We respond to each of the reviewer’s concerns below.
> Demonstrating efficiency: As efficiency is also one selling point of the paper, it would be great if the authors can demonstrate the training efficiency and low-bitrate benefits in a more comprehensive way, like visualizing the GPU training time vs Performance, and also bitrate vs unit quality for certain tasks.
We demonstrate GPU training time vs performance on the semantic tasks of sWUGGY, sBLIMP, and tStoryCloze in Table 3. We demonstrate bitrate vs unit quality for WER and CER measures of quality in Table 2 and further ablate controlling for the number of units. We also demonstrate bitrate vs unit quality on sWUGGY and sBLIMP in Table 6, controlling for changes in the number of tokens and for changes in sample rate. To better visualize the role of bitrate in Table 6, we will add a column containing bitrate; however, we note that bitrate is directly calculable from the provided unit frequency and number of unit clusters. If you are asking in particular for visual diagrams, such as a GPU-hour vs sWUGGY accuracy graph, please let us know and we would be more than happy to adapt the results from these tables into a graph in our appendix.
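For reference, the calculation we mean is the standard one: bitrate equals unit frequency times bits per unit, where each unit from a codebook of $N$ clusters costs $\log_2 N$ bits. A minimal sketch (the helper name is ours, for illustration only):

```python
import math

def bitrate_bps(units_per_second, num_clusters):
    # bits/second = (units/second) x (bits/unit), where each unit drawn
    # from a codebook of num_clusters entries costs log2(num_clusters) bits.
    return units_per_second * math.log2(num_clusters)
```

For example, 5 units per second drawn from a 1024-entry codebook gives 50 bits per second.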
> Limited use cases: The proposed approach focuses on learning semantic units for speech applications. It’s unclear if the proposed methods can be applied to other important non-speech use cases like understanding acoustic environment, and understanding speaker’s identity and emotion.
We fully agree with this limitation and acknowledge it in our work. The speech domain is both difficult and important, so we believe that expanding this paper to evaluate other domains may cause it to lose focus. We believe that focusing in-depth on speech applications throughout our paper provides a strong starting ground for future work to adapt LossPred and SylBoost to other domains, which we eagerly await.
> Understanding Unit Quality: To demonstrate the unit quality for synthesizing the audio and for generation, should the author also compare with other related works (like 1 and 2) in terms of reconstructing the original signal? Like in 1 (see Table 1), the authors compare the different approaches in terms of reconstruction performance using a couple of metrics like MEL, STFT and ViSQOL score, and also semantic task performance.
We believe that evaluating the acoustic quality of our unit reconstruction is not a task best suited for this paper. Like [Hassid et al](https://proceedings.neurips.cc/paper_files/paper/2023/file/c859b99b5d717c9035e79d43dfd69435-Paper-Conference.pdf), we evaluate our units solely on semantic tasks instead of metrics like MEL, STFT, and ViSQOL. Codecs geared toward natural-sounding speech generation, such as [1] and [2], are orthogonal to our method and operate at a significantly higher bitrate. Our paper seeks to improve the textual understanding of models, and units like those of [1] and [2] are compatible with downstream vocoding in cascaded networks such as our Interleaved-Vocoder-LM or the fine acoustic modeling stage of [Borsos et al](https://arxiv.org/pdf/2209.03143).
We also point out that the WER of the referenced concurrent work [1] is 19.6% at 360bps compared to our 7.0% WER at 81bps (albeit on different datasets), which we hope additionally clarifies how our units cover different problem niches.
If the reviewer believes that the provided response adequately addresses their concerns, we respectfully request that they consider raising their score to reflect that. | Summary: This paper proposes an approach for extracting syllable-like units from speech SSL models for use in a transformer-based language model. The motivation is that, compared to baseline acoustic units, which tend to mimic phonetic units in their time resolution, syllable-like units have lower time resolution, which makes them easier to model using techniques from the language domain. The authors propose an adaptation of the SD-HUBERT approach to extract units that can be used in Generative Spoken Language Modeling.
Strengths: The authors identify an important limitation of why using language modeling techniques is a challenge in the speech domain, and their proposed approach seeks to address the limitation.
Weaknesses: Overall, I found the submission difficult to follow. Please see my additional comments below.
line 2 -> Transformers do not require the inputs to be tokenized. The tokenization step is performed so that we can use language modeling techniques in speech.
line 18 -> Generally speaking, there is no requirement for the SSL representations to be powerful or abstract.
Line 20 -> I don't see how the example of young children motivates your SSL description from the previous sentence; the transition is incoherent.
line 22 -> What does performant mean in this case? What is the connection between composing highly realistic text and the ability of a model to provide features for a downstream task? You seem to conflate the two goals, even though they are not necessarily the same.
line 23 -> The statement on this line is not clear. Several successful speech language model methods were introduced in the literature; what about previous approaches makes them fail? Please consider clarifying.
line 31 -> The temporal resolution impacts the LM part of the problem. Why is it important if we want to extract features for a downstream task?
line 37 -> What does "syllable-like" mean in this case? Can you elaborate on the time resolution it represents? Why is it important to start with a "syllable-like" unit? What makes it suitable for GSLM? What challenges from prior work are you addressing when using "syllable-like" units?
Line 38 -> I would refrain from using words like "breakthrough" and instead let the reader decide if the improvement is indeed a "breakthrough."
line 48 -> I disagree with labeling your method as "train-free" since it relies on a pre-trained HuBERT model.
line 51 -> The distinction between the first and second contributions needs to be clarified. If the boundaries from the first contributions are not good on their own, then why mention them as a contribution?
Line 102 -> It is not clear how/where you do the masking. Do you do it on the raw input, mel-spectrogram, or the extracted features?
line 113 -> Shouldn't the approach be "train-free"? Why do we have a student/teacher model that we are training?
line 147 -> The authors must refine the motivation for why syllabic units are useful for this application. Why not use word units instead?
line 189 -> Superior compared to what?
line 198 -> I suggest leaving any experimental details to the experiments sections.
Table 1 -> Can you try any non-neural baselines for boundary detection? What would the performance be if we used heuristics based on energy, zero-crossing rate, or changes in Prosody to get rough boundaries?
Table 1 -> What makes Data2Vec2 better than HuBERT for extracting boundaries?
Table 1 -> What happens if you apply SylBoost to Feat-Sim?
Table 1 -> Please describe the metrics and abbreviations in the captions.
Table 2 -> What does the underline represent?
line 221 -> Implement what exactly? Please re-write the sentence.
line 237 -> typo
Table 3 -> What does the underline represent?
*Estimated.-> What does estimated mean? If prior work does not explicitly give this information, then it is better to leave it out.
line 263 -> What is the R score?
line 281 -> Please present the tables in the order they are referenced in the text; you currently jump from Table 1 to Table 4 and then go back to Tables 2 and 3.
line 340 -> Communication is not the last name of the first author from [16]
Technical Quality: 2
Clarity: 2
Questions for Authors: How does the performance on the syllable boundary detection task relate to the unit re-synthesis and language modeling performance? In other words, is having well-defined boundaries necessary for the speech language modeling task?
It is difficult to disentangle the effect of the number of units and temporal resolution on the overall performance in Tables 2 and 3. Despite having lower resolutions, the results from the proposed methods are not much better than baseline methods (e.g., AudioLM and TWIST). Can the authors elaborate on this?
What happens if we directly use LossPred units in the LM framework? In other words, how important is the boundary refinement stage for the language modeling task?
Keeping everything fixed, how does the number of units impact the re-synthesis and language modeling results?
How does the masking probability and span used when training the HuBERT model impact the quality of the discovered boundaries, and what impact does it have on your approach?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Can the authors comment on the trade-off between resolution and ease of modeling (and quality)? What do we lose/gain using syllable-like speech units in a language modeling paradigm?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review. Below, we will respond to several questions you raised about our work.
> line 189 -> Superior compared to what?
This is answered in lines 189-190: “Superior performance in high difficulty settings **compared to other Neural Codec Language Models like VALL-E [54]**”
> Table 1 -> What happens if you apply SylBoost to Feat-Sim?
These results are the focus of Table 4, where we evaluate SylBoost from both FeatSim and LossPred and find that initializing with FeatSim boundaries results in SylBoost converging to lower-quality boundaries.
**Questions**
> How does the performance on the syllable boundary detection task relate to the unit re-synthesis and language modeling performance? In other words, is having well-defined boundaries necessary for the speech language modeling task?
Higher quality boundaries are essential for low error rates, a comparison we draw from the WER of SD-HuBERT and SylBoost+HuBERT units in Table 2. Having exact syllable boundaries is not necessarily essential for the language modeling task once word error rates are low, however, as seen through evaluating on a wide range of unit rates (5.0-8.33 Hz) in Tables 2, 6, and 7.
> It is difficult to disentangle the effect of the number of units and temporal resolution on the overall performance in Tables 2 and 3. Despite having lower resolutions, the results from the proposed methods are not much better than baseline methods (e.g., AudioLM and TWIST). Can the authors elaborate on this?
We respectfully disagree with the framing of this question. First, we heavily outperform TWIST-CI-90M and TWIST-300M with our proposed models, where we can afford to match training compute. We also beat AudioLM in sWUGGY with less than 10% of the pretraining compute, and are the first work to come within 1% of AudioLM on sBLIMP. SyllableLM only starts losing to TWIST models when they reach 7B parameters, 3x the data, and 20x the pretraining GPU compute hours.
Second, we disagree with “Despite having lower resolutions.” Lower resolutions provide a significant speedup as generation and training run on up to 5x fewer tokens so even equivalent performance would be significant. Using fewer tokens is a challenge, and our work is the first to reach below 19.5Hz.
> What happens if we directly use LossPred units in the LM framework? In other words, how important is the boundary refinement stage for the language modeling task?
LossPred only provides boundaries and not units, as it is calculated only using a model loss instead of a cut-algorithm on model features. Even if we used the LossPred boundaries for HuBERT pooling, LossPred is too expensive to extract across an entire dataset, which is one reason we created SylBoost.
> Keeping everything fixed, how does the number of units impact the re-synthesis and language modeling results?
Resynthesis results, keeping everything else fixed, are in the “+Increase #Units” row of Table 2. The language modeling results are in “Table 6: Holding number of units and unit rate constant.”
> How does the masking probability and span used when training the HuBERT model impact the quality of the discovered boundaries, and what impact does it have on your approach?
We do not have the compute to pretrain a HuBERT model with the requested modifications and we are not aware of any public checkpoints that would facilitate this, so unfortunately we cannot answer this question.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifying comments. I will incorporate this information in my final feedback. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-precision | Accept (spotlight) | Summary: FLASH ATTENTION sped up attention on GPUs but achieved only 35% utilization on H100 GPUs. To address this, FLASH ATTENTION-3 introduces three techniques for Hopper GPUs: warp-specialization to overlap computation and data movement by assigning warps to producer and consumer roles; interleaving operations to combine block-wise matrix multiplication (matmul) and softmax operations for asynchronous execution; and support for FP8 precision using block quantization and incoherent processing to leverage hardware capabilities. These techniques result in a 1.5-2.0× speedup with FP16, up to 1.2 PFLOPs/s with FP8, and 2.6× lower numerical error than baseline FP8 attention, improving computational efficiency and hardware utilization on modern GPUs.
Strengths: The strengths of the paper are that it is well presented and shows good results. The paper addresses the important topic of accelerating Transformer modules by presenting an incremental improvement over previous work, specifically FLASHATTENTION-2, and applies these advancements to the new H100 GPU. The paper demonstrates how the enhanced methods leverage the capabilities of the H100 GPU to achieve significant performance gains, showcasing advancements in computational speed and efficiency. This progression highlights the continuous evolution in optimizing Transformer architectures for cutting-edge hardware, emphasizing the relevance and impact of the research in the field of deep learning and hardware acceleration.
Weaknesses: The work primarily focuses on the H100 GPU, demonstrating the performance enhancements of FLASHATTENTION-3 for this hardware. However, evaluating how FLASHATTENTION-3 performs across other GPU families would increase the work's impact, showing its versatility and potential for wider adoption in different computing environments.
Technical Quality: 3
Clarity: 3
Questions for Authors: A few question and notes are listed below:
How are the proposed methods applicable to other GPU devices?
Is the slower speed of FlashAttention-3 at small sequence lengths only seen on the H100?
In 4.1: '4·seqlen²·head dimension·number of heads' would be better formatted as an equation.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations and, if applicable, potential negative societal impact of their work
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewers for the support and appreciate the thoughtful questions.
1. Other hardware: please refer to the common response. We are collaborating with other researchers and engineers on FA3 for AMD cards, Google TPUs, and Nvidia Ada cards (e.g. 4090). As suggested by the reviewer, this would enable wider adoption of FA3.
2. Small seqlens: FA3 in May was slower than cuDNN for small seqlens (e.g. 512, 1024). We have since developed a persistent scheduler that loads the Q, K, V for the next subproblem while writing out the output of the current subproblem, thus matching / exceeding cuDNN for small seqlens (FA3 was already faster than cuDNN for large seqlens). This overlapping is important since the prologue / epilogue (loading initial Q, K, V and writing out O) takes non-negligible time (e.g. up to 10-15% of the overall computation) when the sequence is short.
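As an aside, the attention FLOP count that the questions above ask to see typeset (4·seqlen²·head dimension·number of heads) is easy to sanity-check; a minimal sketch, illustrative sizes only (not from the paper):

```python
def attention_flops(seqlen, head_dim, num_heads):
    # Two matmuls per head (Q @ K^T and P @ V), each costing
    # 2 * seqlen^2 * head_dim FLOPs, for 4 * seqlen^2 * head_dim * num_heads total.
    return 4 * seqlen**2 * head_dim * num_heads

# e.g. seqlen 8192, head_dim 128, 32 heads -> ~1.1e12 FLOPs per attention layer
print(attention_flops(8192, 128, 32))
```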
---
Rebuttal Comment 1.1:
Title: rebuttal acknowledgment
Comment: thank you for clarifications | Summary: The paper builds on the existing work on FlashAttention, and introduces an advanced method to accelerate the attention mechanism in Transformer model. It specifically targets the newer GPU architectures, like NVIDIA H100, by exploiting asynchrony in Tensor Cores and Tensor Memory Accelerators. There are three techniques that are proposed: 1) producer-consumer asynchrony across warps; 2) compute asynchrony within consumer warps; 3) hardware accelerated low precision GEMM. The extensive empirical results show impressive performance, with significant speedups compared to state-of-the-art methods, and with reduced numerical errors.
Strengths: The authors present novel techniques that exploit hardware features like asynchrony and low-precision computation.
The performance gains are impressive, in speed and accuracy.
The empirical validation includes comprehensive benchmarks and comparisons with previous methods.
It is nice to see the open source commitment.
Weaknesses: The proposed methods are highly complex, and may be difficult to implement and understand for practitioners.
The techniques are tailored to NVIDIA Hopper GPUs, and may have limited applicability to other hardware architectures.
Technical Quality: 3
Clarity: 3
Questions for Authors: The backward pass algorithm is only included in the appendix. It would be useful if it could be included in the main paper.
Can you comment if FlashAttention-3 could be used on older GPU architectures that lack some of the advanced features of the Hopper GPUs?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are very happy that the reviewer appreciates the impact of the paper.
1. Generality: we are excited about generalizing our techniques to other hardware (AMD GPUs, Google TPUs, and Nvidia Ada GPUs). Please refer to the common response for more details.
For Ampere cards (e.g. A100, 3090) and Ada cards (e.g. 4090), with BF16/FP16, FA2 is already close to optimal (reaching 70% of theoretical max FLOPS, while matmul can reach around 70-80% of theoretical max FLOPS). For Ada cards that support FP8 (e.g. 4090), there is an opportunity to potentially double the attention throughput. Though they lack asynchronous Tensor Memory Accelerator (TMA) and the matmul instructions are synchronous (mma.sync instead of wgmma.mma_async), there’s still some form of asynchronous copy (cp.async). Though it might be harder to overlap GEMM and softmax, we are still exploring FP8 on Ada cards, since they offer the highest FLOPS per dollar.
2. Backward pass: it is more involved than the forward pass, and we would include it in the main text if space permits. | Summary: This is a system paper that introduces FlashAttention-3, an optimized version of FlashAttention for NVIDIA's SM90+ GPUs. The key contributions (summarized by the authors) are:
- Taking advantage of warp specialization in NVIDIA SM90+ and designing a producer-consumer computation paradigm to allow better intra-CTA overlapping between producers and consumers;
- Designing 2-stage software pipeline to allow better overlapping within consumers;
- Designing on-the-fly layout transformation to support FP8 WGMMA and using incoherent processing to reduce quantization error by 3x.
Strengths: - The empirical results of this paper are very strong, and are expected to generate immediate impact to the community after code release (promised by the authors).
- The presentation of this paper is clear. It is easy for people who are familiar with NVIDIA SM8x but not experts in SM90+ to follow the writing and essence of the optimizations. I learned a lot from the descriptions related to the warp specialization design.
- This is the first (to-be) open-sourced system paper that discusses the challenges and solutions related to FP8 attention, which has been implemented but kept closed-source by NVIDIA for a long time. The insights related to FP8 such as layout transformation and reducing quantization error are very helpful for the readers.
Weaknesses: [Results, Benchmarking]
- There has already been a paper from Colfax (arXiv 2312.11918) available online more than half a year ago describing how to implement efficient FlashAttention on Hopper GPUs. It will be better if the authors could cite this paper and compare the performance of FlashAttention-3 and this paper.
- NVIDIA's TensorRT-LLM has a closed-source implementation of FP8 attention. The result comparison with this library is missing.
- While the authors mention RMSE reduction from incoherence processing using simulated data, there is no evaluation on how this step improves real LLM inference workloads (such as Wikitext perplexity, MMLU accuracy, etc.)
- How many end-to-end LLM inference / training speedup can we obtain by switching to this FP8 attention? For inference, I am interested in a TensorRT-LLM-like implementation with FP8 GEMM + FP16 attention (FlashAttention-3) VS FP8 GEMM + FP16 attention (FlashAttention-2) VS FP8 GEMM + FP8 FlashAttention-3.
[Methodology, Novelty]
- While all the methodologies mentioned in the paper make sense to me, it is important to know that some of them occurred in previous literature / open source implementation on the same workload. For example, FlashInfer from University of Washington implements within-consumer software pipelining on SM80+ devices. Incoherent processing is also not new, and has been adopted in related work such as QUIP#, QuaRot, etc.
- The authors mentioned that a lot of operations can be fused into the preceding RoPE operation **with no additional overhead**. For example, transposing V, applying Hadamard transformation, block quantization, etc. While I agree that intuitively computation operations could be fused with memory bound Ops, it is also important to notice that the CUDA cores of H100 are not that strong. I want to see real measured numbers in end-to-end workloads that justify the authors' claims on "no additional overhead".
Technical Quality: 3
Clarity: 3
Questions for Authors: Please address my comments in the Weaknesses section. Additional questions related to ablation studies:
- Will it be also possible to open source the benchmarking code in Table 2? This will be super helpful for system researchers to understand the impact of warp specialization and 2-stage computation pipeline on the end-to-end throughput in different workloads.
- Please comment why your implementation achieves performance advantage over OpenAI Triton ones. What is the most important design space extension that leads to this improvement?
- Please comment on how the register re-allocation feature provided by NVIDIA SM90+ impacts the performance. Are there any important design spaces enabled by this feature which would otherwise lead to register spilling to local memory in SM8x?
- Please comment on whether GEMM-SoftMax 2-stage pipelining bring about performance improvement on SM8x (it's related to the question above. If there are a lot of register spills then I think this is also a specialized optimization for SM90+).
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The authors have adequately addressed the limitations of this paper. There is no potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thorough review and helpful suggestions.
1. Existing literature: The Colfax arXiv paper discusses use of WGMMA and TMA for FA2, but not more sophisticated techniques with asynchrony. This is similar to the Triton implementation that we compared to. We will cite and include this in the discussion. FA3, with asynchrony (warp specialization, GEMM-softmax overlapping) can reach 800 TFLOPS for headdim 256 FP16, while the Colfax implementation can reach 550 TFLOPS for the same setting. This highlights the importance of the new algorithm.
We have benchmarked the closed-source TensorRT-LLM implementation and found its speed similar to the cuDNN implementation. We have included the comparison with cuDNN in the paper. In the newest version, FA3 is about 10-25% faster than cuDNN for FP16/BF16, even though cuDNN was already optimized for Hopper GPUs. We are hopeful that by open-sourcing FA3, other researchers can build on it to find other hardware-aware algorithms such as approximate (sparse, low-rank) attention.
We plan to also open source the benchmarking code, thanks for this suggestion.
2. End-to-end evaluation: we include this in the common response, in an LLM inference production setting.
We include a more detailed experiment here, measuring the savings from switching FA2 -> FA3 on Llama-3.1 405B with different context length (percentage of time saved vs original time). Note that GEMMs are done in FP8 and attention is done in BF16.
| Model size / Seq length | 8K | 16K | 32K | 128K |
|-------------------------|-----|------|------|-------|
| 405B | 4.6%| 7.3% | 5.4% | 26.2% |
We see that even for some of the largest models (where MLPs and QKV projections are very large), we can save up to 26.2% time, i.e. 1.35x speedup. Roughly speaking, for 405B at 128K sequence length, attention takes 50% of the time and MLP takes 50% of the time, so speeding up attention by 2x would bring around 1.3x end-to-end speedup.
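The back-of-envelope estimate above is an application of Amdahl's law; a minimal sketch with the illustrative 50/50 split (not measured numbers):

```python
def end_to_end_speedup(attn_fraction, attn_speedup):
    # Amdahl's law: only the attention share of the runtime is accelerated.
    return 1.0 / ((1.0 - attn_fraction) + attn_fraction / attn_speedup)

# 405B at 128K context: attention ~50% of time, sped up 2x -> ~1.33x overall.
print(round(end_to_end_speedup(0.5, 2.0), 2))
```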
3. Existing methodologies: we leverage existing techniques and build on a rich literature. We will add these to the discussion. We find it very encouraging that these conceptually simple techniques can speed up attention, one of the core operations in ML that has been studied intensely by the community, by up to 2x on modern hardware.
4. Fusing operations: we show here the time taken by RoPE, Hadamard transform, and their fusion.
We measure on an H100 80GB SXM5 (3.35TB/s memory bandwidth), with Q and K of size 2 x 8192 x 32 x 128 each (BF16).
| Operation | Timing |
|-----------|-----|
| Memcopy | 110us|
| RoPE | 115us |
| Hadamard transform | 107us |
| RoPE + Hadamard transform | 120us |
We see that all of the combinations reach about 70-80% of memory bandwidth.
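To illustrate why the Hadamard transform is a cheap fit for incoherent processing: multiplying Q and K by the same orthogonal matrix leaves the attention scores QKᵀ unchanged while spreading outliers across dimensions before FP8 quantization. A minimal NumPy sketch using the Sylvester construction (illustrative only, not the fused kernel):

```python
import numpy as np

def hadamard(n):
    # Sylvester construction of an orthonormal Hadamard matrix (n a power of 2).
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

d = 8
H = hadamard(d)
rng = np.random.default_rng(0)
Q, K = rng.standard_normal((4, d)), rng.standard_normal((4, d))
# Scores are invariant: (QH)(KH)^T = Q H H^T K^T = Q K^T since H H^T = I.
assert np.allclose((Q @ H) @ (K @ H).T, Q @ K.T)
```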
5. Comparison with Triton: The most significant improvement of FA3 over it is fine-grained exploitation of asynchrony / scheduling techniques. Though Triton already uses Hopper-optimized instructions (WGMMA and TMA), there is no automatic warp-specialization or overlapping between GEMMs and other operations like softmax yet. We are collaborating with Triton developers to implement some of these techniques in the compiler.
6. Register re-allocation is important for warp-specialization, since the warps doing copying (TMA) need very few registers while the warps doing matmul (WGMMA) need a lot of registers to hold the accumulator, and more registers mean we can use larger WGMMA instructions and more opportunity to overlap with other operations. The other techniques (e.g. overlapping GEMM and Softmax) can still apply without warp-specialization / register re-allocation.
7. Ampere / Ada GPUs (SM8x). One can still overlap GEMM and softmax in SM8x, but warp specialization is difficult and generally not done (e.g. for matmul). We note that FA2 is already close to optimal on Ampere / Ada GPUs, reaching up to 70% of theoretical max FLOPS (while matmul can reach around 80%). For newer hardware, the tensor cores are simply much faster, so there is a greater need for asynchrony / overlapping. We expect this trend to hold for future accelerators (please see the common response for more detailed discussion). We are also exploring FP8 on Ada, even without warp specialization we believe that FP8 can still substantially improve the attention throughput on Ada cards.
---
Rebuttal Comment 1.1:
Comment: Thanks. The rebuttal has addressed my concerns. | Summary: This paper presents FlashAttention-3, which speedup the commonly-used attention operator on Hopper GPUs. The paper proposes to leverage the asynchronous execution of the Tensor Cores and Tensor Memory Acceleator to better utilize the GPU hardware. Specifically, the paper proposes three techniques: (1) It overlaps the data movement and GEMM computation by warp specialization; (2) It proposes to overlap the GEMM and softmax operations by interleaving these operations; and (3) It leverages FP8 tensor core by block quantization. The experimental results show that FlashAttention-3 can achieve superior speedup by 1.5-2.0x with FP16 and can outperform vendor libraries.
Strengths: * It optimizes the attention operator on the Hopper architecture, which is a timely problem of great importance;
* It provides new insights for optimizing for Hopper architecture;
* It achieves superior speedup compared to prior work and comparable speedup to vendor libraries;
Weaknesses: * The method introduces a few hyper-parameters such as the number of pipeline stages and block sizes $B_r$ and $B_c$. However, the paper never discusses how to tune or select these parameters in practice.
* The paper does not evaluate with the end-to-end LLM inference task, which makes it obscure to see its real performance gain in practice.
Technical Quality: 4
Clarity: 4
Questions for Authors: Thanks for submitting the excellent paper to NeurIPS. The paper is in general well-written and the ideas are novel and insightful. However, I would like to make the following comments for further polishing the paper:
The paper has mentioned a few hyperparameters. However, how do you tune these hyperparameters to achieve the best performance? Do you use grid search or simply heuristics? This is important because the inputs to language models often have varying sequence lengths, and tuning each of them is not practical. It would be great to see a systematic way to find these hyperparameters.
The paper achieves superior speedup on the single flash attention operator. However, given the existence of FlashAttention-2, what percentage of the whole LLM inference or training pipeline does the attention operation take? This would help clarify the positioning of this paper.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: * The paper is specifically optimized for the Hopper architecture only and may not apply to other or future GPUs.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the enthusiastic support from the reviewer.
1. Setting of hyperparameters: We select tile size and stage count hyperparameters as a function of the head dimension (64, 128, 256) and datatype (16 or 8 bit) based on available register and smem budget. This is similar to matrix multiply (e.g. from cuBLAS or Triton), where the tile sizes and number of stages are tuned either with heuristics (cuBLAS) or by an autotuner (Triton). For FA3, we found 2 stages to be consistently best, and we choose tile sizes (B_r and B_c) to simply maximize the registers and shared memory that we can use. There are not that many choices for these tile sizes, (e.g. 128 x 128, 128 x 256, 256 x 128) since some dimensions have to be divisible by 64 or 128 due to hardware constraint, so we tune them by hand. These hyperparameters are not functions of seqlen, so different sequence lengths use the same tile sizes.
2. For LLM inference, FA3 would be most helpful for the prefill stage. For the decode stage, the bottlenecks are different (loading the KV cache as fast as possible), and other techniques are more relevant (e.g. Flash-Decoding, KV cache quantization).
Thanks for the suggestion on end-to-end evaluation. We have mentioned the impact of FA3 on LLM inference in a production setting in the shared response. We include a more detailed experiment here, measuring the savings from switching FA2 -> FA3 on Llama-3.1 405B with different context lengths (percentage of time saved vs original time).
| Model size / Seq length | 8K | 16K | 32K | 128K |
|-------------------------|-----|------|------|-------|
| 405B | 4.6%| 7.3% | 5.4% | 26.2% |
We see that even for some of the largest models (where MLPs and QKV projections are very large), we can save up to 26.2% time, i.e. 1.35x speedup. Roughly speaking, for 405B at 128K sequence length, attention takes 50% of the time and MLP takes 50% of the time, so speeding up attention by 2x would bring around 1.3x end-to-end speedup.
At smaller model sizes, the speedup is larger at the same sequence length, since attention would take proportionally more time.
---
Rebuttal Comment 1.1:
Title: Rebuttal Acknowledgment
Comment: Thanks for the clarifications. I have read the rebuttals. | Rebuttal 1:
Rebuttal: We thank the reviewers for their enthusiastic support, their careful read of the paper, and their thoughtful questions and suggestions. We are very happy that the reviewers find the paper “has immediate impact to the community”, the ideas “novel and insightful”, and the writing “clear and easy to understand”.
Since the submission in May, we have made FA3 even faster, more accurate, easier to use, and more general:
- New optimizations: (1) A persistent scheduler to start loading the Q, K, V for the next subproblem while writing out the output O of the current subproblem, thus speeding up the cases of small seqlen (2) Optimized for the case where sequences in the batch have variable lengths (e.g. in Llama 3 training) (3) Reduced the numerical error for GQA/MQA by accumulating the gradient dK and dV of different heads in FP32, before converting the sum to BF16. This brings 2-3x smaller numerical error compared to the standard method of computing dK and dV for different heads in BF16, then summing the gradients.
- Speed: FA3 version in August is 6-25% faster, reaching up to 700 TFLOPS for headdim 128 FP16 (vs 650 in May), 800 TFLOPS for headdim 256 FP16 (vs 640 in May), and 1300 TFLOPS for headdim 256 FP8 (vs 1230 in May). This is thanks to a technique called pingpong scheduling that better overlaps the GEMM of one warpgroup with the softmax of another warpgroup. For FP16/BF16, attention speed from FA3 is now on par with matmul speed from cuBLAS (arguably the most optimized operation) for similar sizes (700-770 TFLOPS), suggesting that we are using the hardware as efficiently as possible. This is another validation that asynchrony can play a major role in optimizing for modern hardware.
- Integration with other libraries: we are working with PyTorch developers to integrate FA3 to PyTorch to benefit the largest number of researchers and engineers. We are also working with inference and training libraries (Hugging Face, vLLM, Megatron-LM, TransformerEngine, DeepSpeed) to help with their integration effort to speed up transformer training and inference with FA3.
- Generality: We are working with Triton and Jax/XLA developers to implement some of the techniques in FA3 in the Triton & XLA compilers. We are also collaborating with other researchers and engineers on hardware other than H100 (elaborated below).
We now respond to some common questions from the reviewers.
1. **Generality**: The techniques developed in FA3 are not limited to Nvidia Hopper GPUs. As mentioned in the intro, asynchrony and low-precision are the general trend of AI accelerators, due to Amdahl's law. Amdahl’s law states that the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used. As tensor cores in hardware accelerate matmuls exponentially with each new generation of hardware, the serial sections of the code (softmax and causal masking in attention) can dominate the runtime. The only way to overcome these latencies, other than adding expensive hardware units that compute them, is to overlap their execution with other concurrent work. We chose to validate our algorithms and ideas on Hopper GPUs due to good hardware and software support (async barriers, CUTLASS), but these ideas also apply to other modern accelerators. In fact, we are collaborating with AMD engineers to implement the overlapping of GEMM and Softmax on AMD GPUs such as MI300X, as well as Google researchers on a version of FA3 on TPUs. We are also exploring FP8 support for Ada GPUs (e.g. 4090) to potentially double the attention throughput on these consumer cards.
2. **Impact**: With FA3, we have significantly sped up attention, one of the two main layers in Transformers. Attention speedup can have a major impact when the sequences are long, which is increasingly common (e.g. Llama 3.1 has 128k context length). In a LLM inference production setting (with an already very optimized inference engine), we have measured that switching from FA2 to FA3 yields up to 30-40% speedup in time to first token for Llama-3 405B. This is a major improvement for inference services running on hundreds or thousands of GPUs. For 405B at 128K sequence length, attention takes about 50% of the time and MLP takes 50% of the time, and making attention 2x faster will yield about 1.3x speedup. One can combine FA3 with other techniques to speed up the MLP (e.g. sparsity, FP6/FP4) to get even higher speedup.
3. **Interoperability**: FA3 is a drop-in replacement for FA2, save for the FP8 case where the version of FA3 in May requires V to have layout BHDS (sequence length dimension must be contiguous), due to the constraint of the FP8 tensor cores. We have since developed a variant of the FP8 kernel that supports V input in the standard BSHD layout (with the head dimension as the contiguous mode). This is important to integrate FP8 attention into standard libraries, such as PyTorch and vLLM, and also circumvents a technical challenge with variable sequence length in terms of loading memory addresses not aligned to 16 bytes.
Recall from the paper that the FP8 WGMMA instruction only supports k-major operand B, so this variant necessitates doing an "in-kernel" transpose on tiles of V after they are loaded from GMEM to SMEM. Moreover, in order to overlap this transpose with other operations to minimize its impact on speed, we change the kernel scheduling. In short, we take advantage of special LDSM-STSM instructions that both minimize register usage and support transposition for SMEM <=> RMEM copy in order to place the in-kernel transpose of V in the producer, thereby further leveraging warp specialization with warpgroup register reallocation.
Our implementation achieves comparable performance to cuDNN, exceeding it for headdim 64 and matching for headdim 128/256.
Overall we are excited about generalizing our techniques to other hardware to unlock new use cases for long context models.
Deep Submodular Peripteral Networks | Accept (spotlight) | Summary: The authors identify two open problems in machine learning and provide novel solutions to them. The first problem is that even though submodular functions show up in numerous applications, learning submodular functions through DNNs remains impractical. Their proposed solution to this issue is their new architecture called Deep Submodular Peripteral Networks (abbreviated as DSPNs). The other identified problem is the lack of graded pairwise comparison (GPC) oracles. In essence, this is the type of oracle where, given two sets of elements, a score with a scalar value is returned. The sign of this score denotes which set is preferred over the other set, and the absolute value of the score denotes the magnitude of this preference. To better reflect the effects of this oracle, they introduce a new loss function called the peripteral loss. They evaluate DSPNs' ability to learn a target function (a facility location function in the case of their experiments) against different baseline algorithms. They also demonstrate the effects of using different losses when training DSPNs.
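As context for the target function mentioned in the summary, a facility location function is a canonical monotone submodular function; a minimal NumPy sketch (illustrative definition, not the authors' implementation):

```python
import numpy as np

def facility_location(S, sim):
    # f(S) = sum_v max_{s in S} sim[v, s]; monotone submodular when sim >= 0.
    if not S:
        return 0.0
    return float(sim[:, sorted(S)].max(axis=1).sum())

sim = np.array([[1.0, 0.2],
                [0.3, 1.0],
                [0.5, 0.4]])
# Diminishing returns: the marginal gain of element 0 shrinks once 1 is chosen.
gain_alone = facility_location({0}, sim) - facility_location(set(), sim)
gain_after = facility_location({0, 1}, sim) - facility_location({1}, sim)
assert gain_after <= gain_alone
```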
Strengths: Considering the success of supervised learning from comparisons in domains such as healthcare, it is no surprise that learning from graded pairwise comparisons is a significant oracle choice. I can see it being very useful in team selection problems where we want to predict the outcome of a competition. The DSPN structure is original. The majority of the ideas are conveyed clearly.
Weaknesses: - I have some concerns related to the organization of the paper. I am aware the page limitations can be annoying sometimes. Still, I do think that Figure 13 should be included in the main body of the paper. I think it would help readers understand the paper better and the main body of the paper would be more standalone that way. Maybe just a 2+1+1 or 2+1+2 layer version of Fig. 13 as you describe in your experiments.
- The introduction mentions E, M sets but we don't know what they stand for until later. You may consider mentioning them as heterogeneous and homogeneous sets before your main contributions.
Minor editorial comments:
- Appendix or Figure names are not properly capitalized at some places e.g. Line 810.
- Some figures refer the reader to the appendix without denoting they are in the appendix.
- On line 96-97, the adjectives need commas in between.
- Line 844, multiple "also"s
Technical Quality: 3
Clarity: 2
Questions for Authors: - For my understanding, can we interpret DSPNs as enhanced versions of DSFs with permutation invariance?
- Where do weighted matroid rank functions stand in relation to threshold potentials (Stobbe and Krause [95])?
- Isn't Proposition 8 a bit obsolete? Aren't all set functions characterized by their permutation-invariance hence the submodular set functions as well?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes, the authors discuss the limitations of their work in Appendix J.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Firstly, thank you for your review and your comments. We will attempt in the next version of the paper to address all of them. Detailed comments follow.
**Organization**
- We agree that Figure 13 ideally belongs in the main body. If the paper is accepted, for the final camera-ready we are allotted one additional page that can fit Figure 13 as well as other motivating material.
**Descriptions of E,M**
- Yes, we can describe them as the heterogeneous and homogeneous sets right when the notation is first introduced.
**editorial comments**
- These typos will all be fixed in the next version. Thank you for pointing them out.
**DSPNs as enhanced versions of DSFs with permutation invariance**
- DSPNs are actually even more than this, since DSPNs correspond to a larger sub-family of submodular functions than DSFs, as we prove in our paper. It may be that DSPNs can represent any submodular function, but we do not yet have a proof of this; it is currently a conjecture.
**DSPNs vs threshold potentials (Stobbe and Krause [95])**
- This is a good question. In the DSF theory paper (Bilmes & Bai 2017), it was shown that DSFs extend the family of sums of concave composed with modular functions (SCCMs). The threshold potential functions of Stobbe & Krause are instances of SCCMs, and hence DSFs correspond to a larger family than threshold potentials. DSPNs, moreover, correspond to a still larger sub-family of submodular functions than SCCMs, as mentioned above. Hence, DSPNs are a larger family than those of Stobbe and Krause. Incidentally, Stobbe and Krause refer to SCCMs as "decomposable submodular functions", but we prefer the name "SCCMs" or "feature-based functions" in order to avoid overloading, and potential confusion with, the notion of "decomposable graphs" in graphical models and chordal graph theory. The reason for this potential confusion is that the graph notion of decomposability can apply to a submodular function separately from whether the submodular function is representable as an SCCM or not.
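To make the SCCM family concrete, here is a minimal NumPy sketch (an illustration of the standard definition, not code from the paper): a sum of concave functions composed with non-negative modular functions, which is monotone submodular by construction. The weight matrix and choice of `np.sqrt` as the concave function are placeholders.

```python
import numpy as np

def sccm(S, weights, concave=np.sqrt):
    """Sum of Concave Composed with Modular (SCCM) set function.

    S       : iterable of element indices (the set being scored)
    weights : (num_features, num_elements) non-negative modular weights
    concave : a nondecreasing concave function applied feature-wise

    f(S) = sum_i concave( sum_{v in S} weights[i, v] ), which is
    monotone submodular whenever weights >= 0 and `concave` is concave.
    """
    S = list(S)
    if not S:
        return 0.0
    return float(np.sum(concave(weights[:, S].sum(axis=1))))

rng = np.random.default_rng(0)
w = rng.random((5, 4))  # 5 features over a ground set of 4 elements

# Diminishing returns: the gain of adding element 2 shrinks as the
# base set grows from {0} to {0, 1}.
gain_small = sccm({0, 2}, w) - sccm({0}, w)
gain_large = sccm({0, 1, 2}, w) - sccm({0, 1}, w)
assert gain_large <= gain_small + 1e-12
```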
**Proposition 8**
- We consider it a main theoretical result of the paper that the weighted matroid rank functions in the middle of the DSPN preserve the submodularity and permutation invariance of the DSPN. We include Proposition 8 only for completeness, in case there is any question, but in the next version of the paper we will clarify this.
Again, thank you very much for your review and your time!
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my comments! I will keep my score. Good luck!
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their response! | Summary: This paper proposes a new framework to learn submodular functions. Specifically, the paper proposes a new parametric family of submodular functions using neural networks. Then, the paper proposes training such networks with a new loss function that is based on graded pairwise comparisons.
Strengths: I think the paper addresses an important problem of learning/distilling submodular functions in a scalable way. The paper also conducts thorough experiments comparing their approach with baselines.
The paper is well-written and easy to follow. The paper does a good job discussing related and prior work.
Weaknesses: This paper is outside my area of expertise, and as such I don't think I can offer constructive feedback.
Technical Quality: 3
Clarity: 3
Questions for Authors: None.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We nonetheless wish to thank you for your review and time. We are glad that you found the paper well-written and easy to follow.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thank you for the response. | Summary: The paper introduces Deep Submodular Peripteral Networks (DSPNs), a novel parametric family of submodular functions designed to address practical learning methods for submodular functions and graded pairwise comparisons (GPC). To learn DSPNs, the paper also introduces a new GPC-style “peripteral” loss, which leverages numerically graded relationships between pairs of objects, extracting more nuanced information than binary comparisons. Finally, the authors demonstrate DSPNs’ efficacy in learning submodularity from a costly target submodular function, showing superiority both for experimental design and online streaming applications.
Strengths: - The paper introduced Deep Submodular Peripteral Networks (DSPNs), a new parametric family of submodular functions. Since the computational cost for querying a DSPN is independent of dataset size, their approach can learn practical submodular functions scalably.
- The paper also proposed a new GPC-style “peripteral” loss to successfully learn DSPNs. As introduced, this loss has many applications such as learning a reward model in the context of RLHF.
Weaknesses: - It lacks a theoretical guarantee for the proposed peripteral loss. What happens if the target oracle does not exist?
Technical Quality: 3
Clarity: 2
Questions for Authors: - Can the authors explain intuitively how the peripteral loss is used to learn a reward model in RLHF?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and time. We address your questions below.
**Theoretical guarantee for the proposed peripteral loss**
- In our submission, we have included theoretical results guaranteeing that submodularity is retained by the permutation-invariant stage of a DSPN via the use of weighted matroid rank functions. It remains an open theoretical problem whether a DSPN can perfectly represent (or arbitrarily closely approximate) *any* submodular function; this is currently a conjecture. There is also the following conjecture: if the peripteral loss, once trained, falls below a given threshold, then for each pair E,M with \Delta(E|M) > 0 (respectively \Delta(E|M) < 0), the volume in embedding space of the convex hull of the points in E will be greater (resp. smaller) than the volume in embedding space of the convex hull of the points in M.
**Target oracle does not exist**
- In such a case, if an approximate oracle exists, then we should still be able to train. Note that the oracle only needs to be queryable; we do not need to optimize over the oracle, which means that non-submodular oracles (such as non-submodular functions or humans) can be used. This segues into your third comment.
**Peripteral loss is used to learn a reward model in RLHF**
- This is briefly outlined on lines 45-59. The key idea is that RLHF could be used as normal via, say, PPO (proximal policy optimization), but the key difference is that the human providing the feedback would need to provide more than just a binary value of whether A is better than B or B is better than A; rather, a "graded" preference would need to be given by the human saying how much A is better than B (or B better than A). This feedback is then represented via our $\Delta(E|M)$ score, except that here, rather than scoring the preference for an $E$-set over an $M$-set (which would not be present), it would instead score two LLM responses $A$ and $B$ relative to each other, and the error (the peripteral discrepancy between the human's preference score and the model's difference score) would then propagate back to the LLM as in PPO via the peripteral loss. We hope to be able to pursue this research direction in the future.
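As a deliberately simplified illustration of this idea (a squared-error stand-in for the discrepancy; the actual peripteral loss in the paper has a more elaborate form, and all scores below are made up):

```python
def graded_preference_loss(score_a, score_b, human_grade):
    """Simplified GPC-style loss: penalize the gap between the model's
    difference score f(A) - f(B) and the human's graded preference, whose
    sign says which response is preferred and whose magnitude says by how
    much. A binary comparison would keep only the sign and discard the
    magnitude."""
    model_grade = score_a - score_b
    return (model_grade - human_grade) ** 2

# A is judged moderately better than B (+0.8); the model mildly agrees,
# so the residual error drives a gradient back to the scoring model.
loss = graded_preference_loss(score_a=0.5, score_b=0.2, human_grade=0.8)
assert loss > 0.0
```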
Again, thanks very much for your review and comments. | Summary: The paper introduces deep submodular peripteral networks (DSPNs) and a graded pairwise preferences (GPC)-style peripteral loss. It shows that DSPNs are effective in learning submodularity from a target function.
Strengths: - The construction of the submodular function, i.e. DSPNs are interesting as they assure the submodularity.
- The loss function’s construction using contrastive sets is also interesting and reasonable.
- The experiments demonstrate that DSPNs combined with the peripteral loss are effective.
Weaknesses: The paper’s organization needs improvement. Many important concepts, such as submodular functions, FC functions, and GPC, lack definitions (even informal) when they first appear. Additionally, there is no motivation or detailed application of submodular functions in machine learning, making it difficult for readers to understand at the beginning.
Minor: Page 5, line 209, “then so also must”.
Technical Quality: 3
Clarity: 1
Questions for Authors: 1. I understand that due to page limits, a large portion of the content is included in the appendix. However, I suggest providing a short sentence with informal definitions or references to the definition locations for important concepts so that readers can quickly grasp their meanings and follow the text more naturally. For example, on line 61, what are graded pairwise preferences? Additionally, where is the formal definition of a submodular function? What are the definitions of a matroid and a matroid rank function?
2. I recommend adding some motivation and detailed applications about how submodular functions are used in machine learning and why they are important. Explicitly explaining where and how submodular functions can be applied in the introduction would be beneficial, instead of generally stating that they have been used in several areas and studied in several papers.
3. Could the output of the function be a vector of a few dimensions instead of a scalar, where each dimension represents different aspects of comparison, or can be scores judged by different people? Can you provide some insights on whether this scenario is reasonable and if so, whether your framework can be applied to this setting?
4. In Section 5, what is the target FL and how is it labeled/created? I suggest stating the task (e.g., image summarization) explicitly in the main text.
Confidence: 3
Soundness: 3
Presentation: 1
Contribution: 4
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and questions, and are glad that they found our work to be convincing. We address each question in turn.
**More definitions in the main body**
- Yes, in the next version of the paper we will add more such definitions in the main body. According to the NeurIPS 2024 call for papers, if the paper is accepted, we will be granted an additional content page that we can use for this purpose, including, as you suggest, GPC and the basic definitions of and motivations for submodularity and matroids.
**Output a vector rather than a scalar**
- This is an interesting question that we didn't pursue in the present paper, but there is no reason that our approach would not generalize to vector outputs, assuming an oracle-queryable teacher is available. Given the opportunity, we will discuss this possibility in the next version.
**What is target FL?**
- While the oracle need not be an FL (facility location) function or even a submodular function, in the present work we use an expensive-to-optimize but easy-to-query target FL function as the oracle, and we transfer from this target FL to the learnt DSPN model. In our present work, each target FL is created by computing a real-valued feature vector for each sample and then computing a non-negative similarity between each pair of such vectors. We used a CLIP ViT encoder to encode input images, and a tuned RBF kernel to construct similarities for our target FL. Note that the target FL construction is rather expensive (quadratic) and also does not generalize to held-out data, and this is a key point: the paper shows how we distill from this expensive target FL to a cheaper and generalizable DSPN.
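As a rough sketch of this construction (the feature dimensions, kernel bandwidth, and random features below are placeholders; the paper uses CLIP ViT embeddings and a tuned RBF bandwidth):

```python
import numpy as np

def rbf_similarity(X, gamma=1.0):
    """Pairwise non-negative RBF similarities from feature vectors X (n, d).
    Building this matrix is quadratic in n, which is why the resulting
    target FL oracle is expensive and tied to the dataset at hand."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def facility_location(S, sim):
    """f(S) = sum_v max_{s in S} sim(v, s): every point in the ground set
    is 'covered' by its most similar selected representative."""
    S = list(S)
    if not S:
        return 0.0
    return float(sim[:, S].max(axis=1).sum())

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))  # stand-in for CLIP embeddings
sim = rbf_similarity(feats, gamma=0.1)

# Monotone: adding an element never decreases total coverage.
assert facility_location({0, 1, 2}, sim) >= facility_location({0, 1}, sim)
```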
**Main Task**
- Yes, we will do this in the revised version.
Again, thanks for your review and comments!
---
Rebuttal Comment 1.1:
Title: After Rebuttal
Comment: Thanks for the reply. The authors have addressed my questions. I will keep my score.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their response! | Rebuttal 1:
Rebuttal: We wish to thank all of the reviewers for their reviews and comments. We are quite happy that the reviewers all fairly unanimously found our work to be interesting and worthwhile. We address each of the reviewers questions and comments in the below. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Reinforced Cross-Domain Knowledge Distillation on Time Series Data | Accept (poster) | Summary: The authors motivate the work by identifying limitations in existing approaches that integrate knowledge distillation into domain adaptation frameworks. Specifically, they note that coarsely aligning outputs over all source and target samples neglects the network capacity gap between teacher and student models, leading to poor distillation efficiency.
The proposed RCD-KD framework uses an adversarial discriminator module to align teacher and student representations between source and target domains. It also formulates target sample selection as a reinforcement learning problem, using a novel reward function based on uncertainty consistency and sample transferability. A dueling Double Deep Q-Network (DDQN) is employed to learn the optimal selection policy.
Strengths: Well-motivated: The authors clearly articulate the limitations of existing approaches and provide a compelling rationale for their proposed method.
Comprehensive evaluation: The experimental results are extensive, covering four different datasets across various time series tasks. The comparisons with state-of-the-art methods are thorough and demonstrate consistent improvements.
Weaknesses: Theoretical foundation: While the paper provides a detailed description of the proposed method, it lacks a strong theoretical foundation or analysis. Adding theoretical insights or guarantees would strengthen the contribution.
Computational complexity: The paper does not provide a detailed analysis of the computational complexity of the proposed method compared to existing approaches. Given the focus on resource-constrained devices, this information would be valuable.
Hyperparameter sensitivity: Although some hyperparameter settings are provided, a more comprehensive analysis of the method's sensitivity to different hyperparameters would be beneficial.
Limited discussion on failure cases: While the paper shows impressive results, a more in-depth discussion of scenarios where the method might fail or underperform would provide a more balanced view.
Technical Quality: 3
Clarity: 2
Questions for Authors: How does the computational complexity of RCD-KD compare to existing methods, especially considering the reinforcement learning component?
Have you explored the performance of the method on longer time series or datasets with a larger number of classes? How does it scale?
The paper mentions using DANN to pre-train the teacher model. How sensitive is the performance to the choice of the teacher's pre-training method?
How does the method perform when there is a significant domain shift between source and target domains? Are there cases where the performance degrades significantly?
Have you considered extending the approach to handle multi-source domain adaptation scenarios?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors acknowledge some limitations of their work in the conclusion section. They note that pre-training a cumbersome teacher with advanced UDA methods involves more training time than other approaches. Additionally, they mention that using only the distance between teacher's and student's logits to assess sample transferability might overlook intrinsic information from the feature space.
These are valid limitations, and it's commendable that the authors have included them. However, the discussion could be expanded to include potential implications of these limitations and possible strategies to address them in future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks to Reviewer nLZW for all the comments.
**Response to Weak-1 (Theoretical Foundation):** We appreciate the suggestion to include a stronger theoretical foundation or analysis to further strengthen the contribution. While we acknowledge the value that theoretical insights and guarantees could bring to the paper, our primary focus in this work has been on the practical implementation and empirical validation of our method. Our goal was twofold: to tackle the prevalent issues of domain shift and model complexity in time series applications, and to demonstrate the superior effectiveness of our approach compared to existing benchmarks through extensive experiments on a range of real-world time series tasks. We believe that the empirical success of our method is also very important for the research community, and thus in this paper we emphasize empirical results over theoretical analysis.
Nonetheless, we will definitely consider incorporating a theoretical foundation in future extensions of this research. Potential directions include: (1) Theoretical Justification for RL: include a theoretical analysis of using reinforcement learning to optimize sample selection, examining the exploration-exploitation trade-off and its impact on performance. (2) Uncertainty Consistency Analysis: explore the theoretical relationship between uncertainty consistency and model performance, supported by mathematical proofs and derivations.
**Response to Weak-2 and Q-1 (Computational Complexity):** Please kindly refer to **Response to Major Weak-2** for Reviewer **Ltvb** for algorithm complexity analysis and **Response to Weak-1-2** for Reviewer **bzJ9** for model complexity analysis.
**Response to Weak-3 (Hyperparameter Sensitivity):** Please kindly refer to **Response to Weak-3** for Reviewer **E4vw** for sensitivity analysis of hyper parameters.
**Response to Weak-4 (Failure Case Discussion):** Indeed, there are some cases where our approach does not perform as expected, such as $2\to7$ in **HHAR** and $0\to11$ in **SSC**. The poor adaptation performance in these scenarios may be attributed to the larger domain shift, as even the complex teacher model and other benchmarks struggle to achieve good adaptation. When there is a significant domain gap that the complex teacher model cannot effectively address, the target knowledge it provides may also be unreliable for the student. To explore this potential failure cause, we conducted further validation on another HAR dataset characterized by larger domain gaps (See **Response to Q-4**) and we are willing to incorporate a detailed discussion in the revised manuscript.
**Response to Q-2 (Scalability to Larger Dataset):** Please kindly refer to **Response to Weak-2** for Reviewer **E4vw** for the analysis of our method on larger dataset.
**Response to Q-3 (Teacher Sensitivity):** As shown in Table 4 of our manuscript, we evaluate our framework with teachers pre-trained with different DA methods. Specifically, we utilize two discrepancy-based DA method (i.e., MDDA and SASA) and one adversarial-based DA method named CoDATS. The difference is that: for teachers pre-trained with discrepancy-based DA method, we have to train the domain discriminator. But for teachers pre-trained with adversarial-based DA methods, we can obtain the well-performed discriminator during teacher's training process. Thus, during student's training, the parameters of discriminator are frozen. From Table 4 of our manuscript, we can see that employing teachers from MDDA and SASA slightly underperforms CoDATS and DANN. The possible reason is that adding additional training steps for domain discriminator will inevitably increase training difficulty in terms of model convergence. But they still perform better than other benchmark methods. It indicates that our approach is not limited by the teacher pre-training strategies.
**Response to Q-4 (Larger Domain shift):** To evaluate our method under larger domain shift, we conducted some preliminary experiments on another dataset called HPP HAR [15]. This dataset includes 12 subjects, each performing 6 activities, with the smartphone placed in three different positions: pants, a shirt, and a backpack. The results are shown in the table below. As observed, almost all methods experience performance degradation when faced with a significant domain gap. A potential reason for this is that these KD-based cross-domain methods (including ours) are highly dependent on the teacher's performance. If the teacher fails to capture domain-invariant representations, the student model may also be negatively impacted. Due to time constraints, we were unable to explore other SOTA domain adaptation methods to enhance the teacher's performance and verify the above hypothesis. We plan to conduct this verification in future extensions of our research.
Scenario|Teacher|Student-Only|MobileDA|UNI_KD|Ours
:---:|:---:|:---:|:---:|:---:|:---:|
S01Pants $\to$ S02Backpack|43.54|23.45|37.41|35.45|38.95
S01Shirt $\to$ S03Pants|56.10|33.45|49.41|41.12|47.77
**Response to Q-5 (Extent to Multi-Source):** Some adjustments may be necessary if we extend our approach to multi-source domain adaptation (MSDA) scenarios. Firstly, instead of utilizing RL to select proper target samples, in MSDA we could leverage RL to dynamically select suitable source domains (or source samples), which are most relevant to target domain, to minimize the negative transfer as much as possible. Secondly, some of the key components in RL need to be re-defined. For instance, we need to re-formulate reward function to offer essential feedback to DDQN. A potential solution could be some functions to measure the data distribution discrepancy between source and target domain. Lastly, for MSDA, the teacher should be the expert who globally predicts well on the mixture of source domains. Methods like STEM [16] could be possible solutions for training a proper teacher in multi-source domain scenario. | Summary: This paper proposes a knowledge distillation method for unsupervised domain adaptation models in time series classification. After pre-training the teacher model with existing domain adaptation methods, the proposed Reinforced Cross-Domain Knowledge Distillation (RCD-KD) method selects suitable target domain samples for knowledge distillation with reinforcement learning and distills knowledge from the pre-trained teacher model to a smaller student model. Empirical experimental results on four public time series datasets demonstrate the effectiveness of the proposed method over other state-of-the-art benchmarks.
Strengths: * The writing and presentation of this paper are good. The setup of the proposed problem and the proposed method are described clearly.
* This paper proposes a new distillation method. It makes some contributions in using reinforcement learning for target sample selection in knowledge distillation.
* Experiments in four public datasets show the effectiveness of the proposed method, which outperforms some domain adaptation and knowledge distillation methods. There are also some ablation studies to validate the designs of the reward and the training losses.
Weaknesses: * The proposed problem seems to be a simple two-stage combination of domain adaptation and knowledge distillation and may not be realistic enough, which does not show the significance of solving them together in one problem. Besides, authors claim that it can help ‘on edge devices with very limited computational resources’, but the experiments only distill from one bigger CNN to a smaller CNN, which does not make a difference in enabling deployment on edge devices.
* Authors should explain more on why using reinforcement learning to select samples works better than directly using designed metrics such as uncertainty and transferability. Reinforcement learning will add a lot of computation costs to the training process and it is unclear what the learned selection policy looks like. It seems that the proposed reward cannot solve the claimed issue that ‘in the cross-domain scenario, teacher’s knowledge on each individual target sample may not be always reliable’. Besides, this paper proposes a distillation method for time series data, but the method does not show its special designs for time series.
* The domain discrimination loss is confusing. How does it enable ‘transfer the domain-invariant knowledge’ to the student model? If the teacher model already learns domain-invariant knowledge, why don’t we achieve this by distilling teacher features to the student model?
Technical Quality: 2
Clarity: 3
Questions for Authors: In addition to the Weaknesses, there are some minor points of suggestion:
* Figure 1 is a little hard to understand, especially the relations between the reward module and other parts.
* There are some typos, such as Line 118 ‘we consider (the) target sample selection task as a Markov Decision Process which can (be) addressed by reinforcement learning.’
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Authors discussed the limitations in the conclusion part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks to Reviewer bzJ9 for all the comments.
**Response to Weak-1-1 (Simple Combination):** We clarify that our solution is not a simple two-stage combination of DA and KD. In particular, we optimize the DA loss $L_{DC}$ and KD loss $L_{RKD}$ together, addressing domain shift and model complexity simultaneously. Moreover, we consider the following two solutions to be simple two-stage combinations. 1. KD$\to$DA: first distill the knowledge from teacher to student on source data, and then directly perform DA on the student. 2. DA$\to$KD: first perform DA on the teacher and then conduct distillation. As shown in the table below, these two solutions perform better than the Source-Only model, but significantly worse than ours. The reason is that if KD is performed on source data first, followed by DA, the student becomes biased towards the source data during the KD stage; and due to its compact network architecture, the student's adaptation performance is also affected. If we perform DA first and then perform KD on target data, it results in poor distillation efficiency in the KD stage due to the lack of ground truth on target data. These issues are consistent with findings from previous research [12][13].
Methods|HAR|HHAR|FD|SSC
:---:|:---:|:---:|:---:|:---:
Student|55.94|58.74|66.78|50.39
KD$\to$DA|73.66|72.67|67.87|54.26
DA$\to$KD|74.29|74.54|70.40|60.08
Ours|94.68|82.37|92.63|67.49
**Response to Weakness-1-2 (Model Complexity):** To demonstrate deployment on an edge device, we compared our 1D-CNN teacher and student from four perspectives, as shown in the table below. Here, we employ a Raspberry Pi 3B+ as the edge device for deployment. The student has 15.46$\times$ fewer parameters, 16.98$\times$ fewer FLOPs, and 13.54$\times$ lower memory usage than its teacher. Besides, the student's inference is 21.67$\times$ faster than the teacher's on the edge device. Meanwhile, in our manuscript we have already shown that the student trained with our method is able to achieve comparable performance to the teacher. This enables our compact student to potentially meet the real-time response and on-site deployment requirements of certain time series applications.
||# Para.(M)|# FLOPs(M)|Memory Usage (Mb)|Inference Time (Sec)
:---:|:---:|:---:|:---:|:---:|
T|0.201|0.917|83.73|4.16
S|0.013|0.054|6.33|0.192
Rate|15.46$\times$|16.98$\times$|13.54$\times$|21.67$\times$
**Response to Weak-2-1 (Why RL works):** We argue that the primary reason why RL performs better is due to its inherent balance between exploitation and exploration. If we solely exploit the gained knowledge (i.e., directly using uncertainty or transferability for sample selection), the student is likely to become stuck at certain sub-optimal points (see comparison of $R_2$, $R_2^\dagger$, $R_3$ and $R_3^\dagger$ in Table 6 of our manuscript). The exploration in RL allows the agent to explore new possibility from the environment. In our implementation, we utilize the ‘NoisyFC’ whose weights and biases are perturbed by a parametric function of the noise to enhance the efficiency of exploration. Meanwhile, we also agree that RL does involve more computational costs than others, but the total training time is still acceptable, especially considering the performance improvement it could bring (see **Response to Major Weak-2** for Reviewer **bzJ9**).
**Response to Weak-2-2 (Claimed Issue and No Special Design for TS):** We'd like to clarify that our method does not intend to improve the teacher's reliability but to adaptively transfer its target knowledge with our RL-based framework. The unreliability of the teacher's target knowledge is the underlying reason why existing approaches that simply integrate KD with UDA frameworks often experience unsatisfactory adaptation performance. Thus, we are highly motivated to utilize RL to select samples aligning with the student's capability to mitigate such unreliable knowledge. Besides, we also agree that there is no special architecture design for TS in our proposed framework. We chose to evaluate on TS because model complexity and domain shift issues are very common in TS. As the method is general, we will explore extending it to other research areas such as CV and NLP in the future.
**Response to Weak-3 (Domain-invariant Knowledge):** The domain discrimination loss originates from Reference [14], which is closely related to adversarial learning. The main idea is to pit two networks against each other. The student is expected to generate invariant representations for both domains and the discriminator is expected to fail to distinguish them. By minimizing this loss, the student would generate similar representations on target domain as the teacher on source domain. In other words, the domain-invariant knowledge would be transferred from teacher to student.
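As a generic illustration of this adversarial objective (a DANN-style binary discriminator loss with made-up probabilities; the exact loss in the paper may differ):

```python
import numpy as np

def bce(p, y):
    """Binary cross-entropy for discriminator probabilities p vs labels y."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

# The discriminator tries to tell teacher/source features (label 1) from
# student/target features (label 0); the student is trained adversarially
# to *increase* this loss, so its target representations become
# indistinguishable from the teacher's source representations.
d_probs = np.array([0.9, 0.8, 0.3, 0.2])   # discriminator outputs
domains = np.array([1.0, 1.0, 0.0, 0.0])   # 1 = source, 0 = target
disc_loss = bce(d_probs, domains)
confused_loss = bce(np.full(4, 0.5), domains)  # fully confused discriminator
assert disc_loss < confused_loss  # student pushes toward the confused state
```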
Besides, there are two reasons why we cannot directly transfer the domain-invariant knowledge from teacher to student. First, the compact student model has very limited capacity and cannot capture the same fine-grained patterns in the data as teacher. Coarsely aligning their feature maps, as done in KD-STDA and MLD-DA, would lead to sub-optimal performance on target domain. Secondly, instead of focusing on learning domain-invariant representations, our objective is to improve student's generalization on target domain via teacher's knowledge, which motivates us to adaptively transfer teacher’s target knowledge based on student’s capability. Our ablation study results on framework (see Table 5 of manuscript) also suggest that the performance improvement from domain-invariant loss is very marginal. Conversely, our proposed RKD loss can significantly improve student's generalization capability on target domain.
**Response to Q-1 (Reward not clear):** We will improve the clarity of our framework by providing more description in the updated version (see Updates in the global **Author Rebuttal**).
**Response to Q-2 (Typos):** We will fix the typos in the updated version.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: Thanks for the response. Some of my concerns have been addressed. Additionally, I suggest that some more discussions on RL as well as its training complexity and stability may help, considering that RL is integrated with distillation and domain adaptation to form a complicated training method. It makes sense to me that RL helps explore some new possibilities. But RL may also cause some issues such as unstable training and may not always succeed in exploring and exploiting. Would this lead to some failure cases? What are the differences between the learned policy and designed metrics, does the learned policy show some special properties?
Besides, since the problem and the technical design are not limited to time series, it may be better to consider going beyond this specific type of data in the writing and experiments.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer's feedback
Comment: Thanks to Reviewer **bzJ9** for the response to our rebuttal.
**Response to Q-1 (Include more discussion on RL):**
Thank you for the above suggestion. As we responded in our previous rebuttal, we will include a discussion of training complexity in the updated version. Meanwhile, we will also provide more discussion on training stability, as shown below.
>Particularly, a dueling DDQN is employed to learn the optimal target sample selection policy. The dueling architecture can effectively mitigate the risk of overestimation by separately estimating the state value and advantage function, which improves the accuracy of action-value predictions. Meanwhile, to tackle the instability issue often encountered in training deep reinforcement learning models, we leverage strategies such as target network and experience replay. Specifically, the target network provides more stable targets for updating the Q-values by maintaining a separate, slowly updated network for generating target values, while experience replay enables the model to learn from a diverse set of past experiences, further enhancing stability and convergence during training.
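As a rough illustration of the dueling aggregation described above (this is a minimal sketch, not the paper's actual training code; the function name is ours), the architecture combines a scalar state value with mean-centered per-action advantages:

```python
import numpy as np

def dueling_q_values(state_value, advantages):
    # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    # Centering the advantages resolves the identifiability issue and is
    # the aggregation used by the dueling architecture of Wang et al. [1].
    advantages = np.asarray(advantages, dtype=float)
    return state_value + advantages - advantages.mean()
```

With a target network, the same computation is additionally run on a slowly updated copy of the weights to produce stable Q-value targets.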
**Response to Q-2 (Unstable training issue of RL):**
In our proposed framework, we utilize the dueling DDQN architecture [1], which was initially developed to address potential instability issues by separating the value and advantage functions. The value stream establishes a solid baseline against which the advantages of actions are evaluated. It helps reduce the risk of overestimating action values and enhances the selection process. Additionally, during training, we incorporate techniques such as target network updates and experience replay to further address stability concerns (see **Response to Q-1** for more details). Previous research [2][3] has also demonstrated that these techniques can effectively improve the stability of training in deep Q-networks.
[1] Wang, Z., Schaul, T., et al. Dueling network architectures for deep reinforcement learning. In ICML 2016.
[2] Wu, K., Wu, M., Yang, et al. Deep Reinforcement Learning Boosted Partial Domain Adaptation. In IJCAI 2021.
[3] Pan, J., Wang, et al. Multisource transfer double DQN based on actor learning. IEEE TNNLS, 2018.
**Response to Q-3 (Failure Cases Discussion):**
Thank you very much for the above comment. Indeed, we have observed some cases where our approach does not perform as expected. One potential factor behind poor adaptation performance is a larger domain shift (please kindly refer to our **Response to Weak-4** and **Q-4** for Reviewer **nLZW**). A significant distribution gap between the source and target domains can render the teacher's knowledge unreliable. However, it is worth noting that most benchmark methods also suffer performance degradation in the presence of such domain gaps, while our approach consistently shows superior results. This issue might be partially addressed by enhancing the teacher's performance with SOTA domain adaptation techniques. Additionally, another potential cause of failure in exploration and exploitation could be the initialization of the dueling DDQN and its optimization trajectory. To address this, we ran all experiments three times with different random seeds and reported the average performance.
**Response to Q-4 (Comparison of learned policies and their properties):** Thank you very much for raising this question. Unlike fields such as computer vision, the interpretation of time series data is not straightforward, making it challenging to visualize and directly compare the differences between learned policies and their properties. Currently, we assess these policies solely based on their impact on the student's performance. However, it might become feasible to conduct more explicit comparisons using synthetic time series data. By manually simulating domain shifts in such data, we could observe how the learned policies select samples and compare their behaviors more directly.
**Response to Q-5 (Go beyond time series data):** Thank you very much for your valuable feedback. We fully agree with your suggestion to extend our method beyond time series data. Our focus on time series in this paper is driven by the following factor: our group's research expertise is centered on time series analysis, so we have only tested the method's effectiveness across multiple time series datasets/tasks. It would be premature for us to claim general applicability to other fields such as computer vision or natural language processing without further validation.
Currently, we are very willing to conduct experiments on data types beyond time series. However, due to time constraints, we may not be able to generate sufficient or convincing results. As we mentioned in our rebuttal, we are very interested in exploring these extensions in a journal version if possible and assessing the effectiveness of our approach in different domains.
---
Rebuttal 2:
Title: Response to authors
Comment: Thanks for the response. I suggest detailed discussions on RL and more analysis of learned policies and failure cases (such as related experimental analysis and visualization) be included in the paper. I have updated the rating.
---
Rebuttal Comment 2.1:
Comment: Dear Reviewer,
Thank you very much for the time and efforts dedicated to reviewing our paper. We are so grateful for your constructive suggestion and updating the rating for our paper. | Summary: This paper proposes a Reinforced Cross-Domain Knowledge Distillation (RCD-KD) framework for time series data, aiming to effectively transfer knowledge from a cumbersome teacher model to a compact student model across different domains. The RCD-KD framework leverages an adversarial domain discriminator to learn domain-invariant features and a reinforcement learning-based sample selection module to dynamically select informative target samples for knowledge distillation. The proposed method significantly improves the student model's generalization ability on the target domain compared to conventional knowledge distillation and domain adaptation techniques.
Strengths: 1. By incorporating an adversarial domain discriminator, the framework can effectively learn domain-invariant features, enabling the student model to generalize better to the target domain.
2. The reinforcement learning module dynamically selects informative target samples based on the student model's capacity and uncertainty, mitigating negative transfer and improving the efficiency of knowledge distillation.
3. Extensive experiments on four public datasets across three tasks demonstrate that the proposed RCD-KD framework consistently outperforms other state-of-the-art domain adaptation and knowledge distillation methods.
Weaknesses: 1. The framework utilizes the distance between teacher and student logits to assess sample transferability, potentially overlooking intrinsic information from the feature space. More comprehensive discussions on different feature distances could further enhance sample selection.
2. While the experiments demonstrate the effectiveness of the proposed RCD-KD framework on several datasets, it is unclear how well the method would scale to larger and more complex time series datasets with higher dimensional feature spaces. The computational efficiency and scalability of the framework under such settings remain to be investigated.
3. The paper acknowledges that grid search was used to tune hyperparameters such as α1, α2, λ, and τ. However, the sensitivity of the framework's performance to these hyperparameters is not thoroughly analyzed. The framework's robustness to different hyperparameter configurations could potentially limit its practical applicability without extensive tuning.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks to Reviewer E4vw for all the comments.
**Response to Weak-1 (Feature Knowledge for Transferability Assessment):** Although we have discussed above weakness as one of our limitations in manuscript, we conducted some preliminary experiments as suggested. We investigate several feature-based KD methods to assess sample transferability on **SSC** as below table shows. Specifically, we utilized L2 [7], attention maps (AT) [8], CORAL [9], and probabilistic knowledge transfer (PKT) [10] to measure the distance between teacher's and student's feature maps. These feature distances are then used to calculate reward $\mathcal{R}_3$ in Eq.(3). From the table, we can see that transferability assessments using feature distance obviously underperform logits-based method in our proposed framework. The potential reason is that: in our current framework, we utilize MCD module to generate $N$ teachers and then average their logits for calculating uncertainty and transferability. However, unlike logits, each data point in the feature maps carries different meaning depending on its spatial location within its feature space. Simply averaging feature maps across multiple teachers appears unreasonable and impractical, potentially resulting in poor performance. Therefore, the feature-based knowledge cannot be directly integrated into our existing framework. A more comprehensive designs need to be carefully considered.
Methods|0$\to$1|12$\to$5|16$\to$1|7$\to$18|9$\to$14|Avg
|:---:|:---:|:---:|:---:|:---:|:---:|:---:
L2|22.73|24.75|47.00|64.23|62.59|44.26
AT|29.93|38.25|59.56|59.02|56.52|48.66
CORAL|34.06|38.87|50.22|54.32|52.94|46.08
PKT|28.20|57.70|47.74|55.70|74.84|52.84
Logits (**Ours**)|48.39|69.56|70.52|72.20|76.79| 67.49
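For reference, a minimal sketch of the simplest of these measures, an L2 distance between flattened, normalized feature maps in the spirit of [7] (the helper name is ours, not the paper's code):

```python
import numpy as np

def l2_feature_distance(teacher_feat, student_feat):
    # Flatten and L2-normalize both feature maps, then take the
    # Euclidean distance; identical maps yield a distance of zero.
    a = np.ravel(np.asarray(teacher_feat, dtype=float))
    b = np.ravel(np.asarray(student_feat, dtype=float))
    a = a / (np.linalg.norm(a) + 1e-12)
    b = b / (np.linalg.norm(b) + 1e-12)
    return float(np.linalg.norm(a - b))
```

Unlike logits, such a distance depends on the spatial layout of the feature maps, which is why averaging feature maps across MCD teachers is problematic in this framework.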
**Response to Weak-2 (Scalability to Larger Dataset):** To verify the efficiency and scalability of our method on a larger time series (TS) dataset, we conducted experiments on another Human Activity Recognition dataset named PAMAP2 [11]. The table below compares the dataset complexity of PAMAP2 with the datasets employed in our manuscript. Note that comparing the total size of the datasets is not meaningful in our setting, as our experiments evaluate transfer scenarios between single subjects. We summarize the average number of samples, channels, data length, and classes across all transfer scenarios for these datasets. We also report the time complexity of our method across these datasets (i.e., training time per epoch for a single transfer scenario). From this table, we can see that PAMAP2 is larger in terms of number of samples and more complex in terms of number of channels and classes. Compared with **FD**, the per-epoch training time for PAMAP2 only roughly doubles while the number of training samples increases about 4 times, indicating that our method scales well in terms of computational efficiency on larger TS datasets.
Datasets|No.of Samples|No.of Channels|Data Length|No.of Classes|Training Time (sec)
:---:|:---:|:---:|:---:|:---:|:---:
HAR|216|9|128|6|1.61
HHAR|1150|3|128|6|9.07
FD|1828|1|5120|3|16.43
SSC|1428|1|3000|5|7.09
PAMAP2|8180|36|256|11|31.64
Meanwhile, we also compared the performance of our method and the benchmarks on PAMAP2 with 5 randomly selected transfer scenarios. The experimental results are summarized in the table below. We can see that our proposed method consistently outperforms the other benchmarks in terms of average Macro F1-score, even though it does not achieve the best performance on every transfer scenario. This observation indicates that the effectiveness of our proposed method also generalizes to larger time series datasets.
Methods|102$\to$104|106$\to$103|107$\to$105|105$\to$106|107$\to$102|Avg
:---:|:---:|:---:|:---:|:---:|:---:|:---:|
KD-STDA|66.19|53.12|46.34|67.87|59.75|58.65
KA-MCD|34.35|49.16|49.92|33.95|35.97|40.67
MLD-DA|68.14|50.85|61.23|75.03|63.32|63.71
REDA|**71.49**|53.31|59.11|74.75|**64.86**|64.70
AAD|60.28|51.61|48.01|73.64|45.55|55.82
MobileDA|67.14|54.09|**64.21**|74.67|63.35|64.69
UNI-KD|64.82|**70.82**|43.92|69.65|56.20|59.28
**RCD-KD**|68.33|68.86|59.65|**75.44**|62.50|**66.96**
**Response to Weak-3 (Hyperparameter Sensitivity):** Due to paper space constraints, our sensitivity analysis for hyperparameters $\alpha_1$, $\alpha_2$, $\lambda$, $N$ and $K$ were included in the Supplementary (see Fig. 2,3,4 and Table 5). Please refer to our submitted Supplementary for the details.
For the hyperparameter $\tau$, which is the temperature used to soften the teacher's logits, we added an additional analysis over the range 1 to 16, as shown in the table below. We can see that higher temperature values (e.g., $\tau = 16$) over-smooth the teacher's logits, resulting in poor distillation performance. Generally, $\tau = 2$ or $\tau = 4$ is a good choice for our method.
Dataset|$\tau=1$|$\tau=2$|$\tau=4$|$\tau=8$|$\tau=16$
:---:|:---:|:---:|:---:|:---:|:---:
HAR|92.14|94.68|94.23|91.35|89.45
HHAR|80.14|82.37|81.45|79.41|76.49
FD|90.79|92.63|92.74|88.51|85.41
SSC|65.10|67.49|66.98|63.21|59.01
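The softening effect of $\tau$ can be sketched as follows (illustrative only; the distillation loss in the paper additionally involves the student's logits):

```python
import numpy as np

def soften(logits, tau):
    # Temperature-scaled softmax: larger tau flattens the distribution,
    # which illustrates the degradation observed at tau = 16.
    z = np.asarray(logits, dtype=float) / tau
    z = z - z.max()  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()
```

At $\tau = 16$ the teacher's probabilities approach uniform, so little class-discriminative knowledge is left to distill.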
---
Rebuttal Comment 1.1:
Comment: Thank you! The authors have addressed all my concerns.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer **E4vw**,
We are very glad to hear that all of your concerns have been addressed. We really appreciate the time and efforts you have dedicated for providing valuable feedback on our paper. | Summary: This paper introduces a reinforcement learning-based active learning method designed to dynamically select target data for knowledge-transfer, whose goal is to bridge the network capacity gap between teacher and student networks within a domain adaptation framework incorporating knowledge distillation. Specifically, the authors propose a novel reward mechanism aimed at learning an optimal policy for selecting target data, considering the student network's capacity.
Another novel aspect of this paper is its successful demonstration of the framework's effectiveness on time-series data. This modality is less explored in both domain adaptation and knowledge distillation research fields.
##################################### Post Rebuttal #####################################
All of my concerns have been properly addressed, and my questions have been answered during rebuttal. I am happy to raise my score to ***Strong Accept***.
##################################### Post Rebuttal #####################################
Strengths: 1. The paper is well-written and easy to follow.
2. The proposed reward function for selecting target data in domain adaptation from a large network to a smaller one is novel. This method integrates principles from active learning, reinforcement learning, knowledge distillation, and domain adaptation.
3. The authors offer a detailed description of their implementation in the paper, along with the accompanying code. This makes it straightforward for practitioners to apply their methods or build upon their work.
4. Formulating network output entropy as a reward function for data selection represents a novel approach that effectively incorporates concepts from reinforcement learning into a classification model.
5. The performance lift of the proposed method is very significant in time-series data, which promotes the interests of active-learning-based domain adaptation.
Weaknesses: ### Majors:
1. Using model output entropy as a measure of uncertainty level is a common technique in active learning. While applying this to promote uncertainty consistency between teacher and student networks is novel, it is important to acknowledge the context of active learning within the paper. It would be not good to borrow ideas without attribution.
2. Based on my experience, running such a reinforced loop for data selection is particularly time-consuming, especially concerning the Markov chain state update mentioned from Line 127 to 134. Therefore, it would be beneficial for the authors to conduct a time complexity analysis of their proposed method.
### Minors:
1. I believe active learning is highly related to the proposed work. Conducting a literature review on active learning would offer practitioners valuable context, enhancing their understanding of the proposed research.
2. In Line 112, "divergence" would be a more appropriate term for describing the difference between distributions, rather than using "distance".
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. For Lines 90 to 91, the authors stated that the estimated entropy from the domain-shared feature extractor is deemed unreliable for the student. Could the authors provide further elaboration on this assertion? Specifically, what does this unreliability entail and what are its underlying causes?
2. While DDQN isn't the method proposed in this paper, it should have significant impact on the overall performance. I am curious whether other active learning methods might outperform MCD. I recommend conducting an ablation study to examine how various active learning strategies influence overall performance.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks to Reviewer Ltvb for all the comments.
**Response to Major Weak-1 (Comparison with Active Learning):** We fully agree that our method is closely related to AL, particularly in selecting critical samples based on uncertainty. We are willing to acknowledge AL in our updated version. Meanwhile, we would like to clarify some distinct differences between our approach and uncertainty-based AL sample selection methods. First, as noted by the reviewer, our method introduces a novel concept by extending uncertainty to uncertainty consistency: we use the consistency of sample uncertainty to assess the student's learning capability, rather than directly filtering samples based on their uncertainty. Additionally, this forms only part of our reward function, as we also incorporate sample transferability to evaluate the student network. Second, unlike AL, which explicitly leverages uncertainty for sample selection, our approach employs an RL-based module to heuristically learn the optimal target sample selection policy. Our experimental results on the reinforced sample selection ablation (Table 6) demonstrate its superior capability for performance enhancement compared to directly selecting samples as done in AL. Lastly, the target samples selected by our method are not labeled, as they would be in AL; instead, they continue to be evaluated without labeling after the student's state is updated.
**Response to Major Weak-2 (Computational Complexity):** We performed the time complexity analysis as suggested and the results are shown in below table. Specifically, we measure the training time for our proposed method and other benchmarks with a NVIDIA 2080Ti GPU. The reported results are measured with one epoch on single transfer scenario on **FD** dataset, which has the largest training samples (about 1,800 samples per transfer scenario) among evaluated datasets.
Methods|Time(sec)|Macro F1-score
:---:|:---:|:---:
KD-STDA|1.68|55.81
KA-MCD|4.55|57.74
MLD-DA|1.91|60.90
REDA|1.78|56.27
AAD|0.91|59.12
MobileDA|1.28|56.86
UNI-KD|3.26|62.17
**Ours**|16.42|67.49
We can see that our method does require more training time than the other benchmarks, reflecting its greater complexity. The primary computational costs arise from two factors. The first is the generation of $K$ historical experiences at each step, which accounts for approximately 73\% of the total training time; this could be significantly reduced by using a smaller $K$. The second, which consumes about 21\% of the total training time, is the MCD module, which runs multiple inference passes for uncertainty estimation. This computational burden could be further reduced by adopting alternative uncertainty estimation methods. Nevertheless, although our training time is longer than that of the other benchmarks, we argue that it is still within an acceptable range, especially considering the performance improvement it brings.
**Response to Minor Weak-1 (Literature Review):** As suggested, we performed a literature review on uncertainty-based active learning and will include it in our updated version (See Updates in global **Author Rebuttal**).
**Response to Minor Weak-2 (Typo):** We would like to revise it in the final revision.
**Response to Q-1 (Unreliability):** As stated in UNI-KD, the authors employ a data-domain discriminator to estimate sample uncertainty, with inputs derived from the feature extractor of the compact student trained on both source and target domains (i.e., domain-shared). However, our experiments directly applying SOTA UDA methods to the compact student (see Table 1 of our manuscript) demonstrated that the compact student cannot capture the fine-grained patterns in the source and target domains as effectively as the teacher. Due to its limited capacity, the student's adaptation performance is inferior, making the uncertainty estimated by UNI-KD unreliable. In contrast, our proposed method leverages the more robust teacher to estimate uncertainty via the MCD module. Our experimental results further demonstrate its effectiveness in enhancing the student's performance on the target domain.
**Response to Q-2 (Ablation with AL):** We conducted an ablation study on three uncertainty-based active learning (AL) strategies: least confidence (LC), sample margin (M), and sample entropy (H). The results are presented in the table below. We take the student trained with our framework using all target samples as the baseline (i.e., without RL). 'LC' refers to leveraging the student's confidence to directly select samples. 'LC Consist.' refers to using the consistency of the teacher's and student's confidence for explicit sample selection. 'LC Consist.+RL' refers to leveraging 'LC Consist.' as the reward to learn the optimal sample selection policy.
Methods|HAR|HHAR|FD|SSC
:---|:---:|:---:|:---:|:---:
Baseline|89.32|78.99|89.13|60.65
LC|79.21|76.22|74.14|52.9
LC Consist.|82.01|75.43|74.45|56.11
LC Consist.+RL|84.9|76.24|81.45|60.01
M|80.55|75.9|82.05|58.03
M Consist.|83.55|78.91|81.9|59.45
M Consist.+RL|90.11|80.01|80.79|61.97
H|88.31|79.09|88.18|59.23|
H Consist.|91.65|78.3|90.17|63.16|
H Consist.+RL|93.91|81.73|91.93|62.98|
We can see that: first, almost all uncertainty-based AL strategies exhibit performance degradation compared to the baseline. This could be attributed to unreliable uncertainty estimation from the student's outputs, especially at the early training stage. Among these strategies, entropy performs best, likely because it considers the overall probability distribution, which might partially mitigate the student's unreliable predictions. Second, utilizing uncertainty consistency instead of uncertainty alone enhances performance in most settings, as incorporating the teacher's knowledge through consistency provides a more reliable measure. Lastly, our RL module further enhances the student's performance when any of the uncertainty consistency measures is employed as the reward, indicating its effectiveness.
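For clarity, the three uncertainty measures used in this ablation can be sketched as follows (hypothetical helpers operating on a class-probability vector; names are ours):

```python
import numpy as np

def least_confidence(p):
    # Higher value = less confident top prediction.
    return 1.0 - float(np.max(p))

def sample_margin(p):
    # Gap between the two most probable classes; smaller = more uncertain.
    top2 = np.sort(np.asarray(p, dtype=float))[-2:]
    return float(top2[1] - top2[0])

def sample_entropy(p):
    # Entropy over the whole distribution; uniform predictions maximize it.
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))
```

Entropy is the only one of the three that uses the full output distribution, which is consistent with it performing best in the table above.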
---
Rebuttal 2:
Title: Comments after Reading Authors' Response
Comment: Thank you to the authors for the detailed response. All of my concerns have been properly addressed, and my questions have been answered. I am happy to raise my score to ***Strong Accept***.
---
Rebuttal Comment 2.1:
Comment: Dear Reviewer **Ltvb**,
We are very glad to hear that your concerns have been addressed. Thank you very much for the time and efforts you have dedicated to reviewing our paper. | Rebuttal 1:
Rebuttal: ## Summary
We sincerely thank all the reviewers for their insightful and valuable feedback. We are pleased that the reviewers recognized the novelty of our work and appreciated our motivation for designing a reinforcement learning-based sample selection approach for cross-domain knowledge transfer. Additionally, we are glad that the comprehensive experimental evaluation across four public datasets was noted, highlighting the superior performance and robustness of our approach. We have carefully considered all the suggestions and comments. The overall summary of our rebuttal is as follows.
* We performed the computational complexity analysis as suggested by Reviewer **Ltvb**,**bzJ9** and **nLZW**.
* We verified the scalability of our method to larger dataset as suggested by Reviewer **E4vw** and **nLZW**, and scalability to dataset with significant domain shift as suggested by Reviewer **nLZW**.
* We conducted an ablation study with additional active learning strategies as suggested by Reviewer **Ltvb**.
* We added a sensitivity analysis for the temperature hyperparameter as suggested by Reviewers **E4vw** and **nLZW**.
* We included experiments on integrating feature distances for sample selection as suggested by Reviewer **E4vw**.
* We provided additional details and clarifications as requested by all reviewers.
## Updates to manuscript:
* Include complexity analysis and additional hyperparameter sensitivity analysis as suggested by Reviewer **Ltvb**,**bzJ9** and **nLZW**.
* Include experimental results on larger dataset PAMAP2 as suggested by Reviewer **E4vw** and **nLZW**.
* Include literature review for active learning as suggested by Reviewer **Ltvb** as follows:
>Meanwhile, our work also relates to the active learning (AL) field, specifically in selecting the most critical instances from unlabeled data. Note that here we only discuss uncertainty-based sampling strategies in active learning, as other query strategies (e.g., instance correlation) are beyond the scope of our paper. The uncertainty can be measured by three metrics: least confidence, sample margin, and sample entropy [1]. Least confidence methods like [2][3] select the instances whose most probable label has the lowest posterior probability, while margin sampling leverages the margin between the posterior probabilities of the two most probable classes [4]. Unlike the above two, the entropy metric measures the uncertainty over the whole output prediction distribution [5][6]. In our method, instead of explicitly utilizing entropy-based uncertainty as AL methods do, we propose to leverage the consistency between the teacher's and student's entropy-based uncertainty to learn the optimal sample selection policy with a dueling DDQN. The experimental results across various time series datasets demonstrate the effectiveness of our method.
* Update caption for Fig.1 and include more description to better illustrate our reward module in our framework as commented by Reviewer **bzJ9**.
>As illustrated in Fig.1, the reward function consists of three parts. The first is the action $a_k$, which is the output of the dueling DDQN. The second is the uncertainty consistency, estimated from the entropy of the student's logits $\boldsymbol{q}^S$ and of the averaged logits $\overline{\boldsymbol{p}}^T$ of the $N$ teachers generated by the MCD module. The third is the sample transferability based on the KL divergence between $\boldsymbol{q}^S$ and $\overline{\boldsymbol{p}}^T$. The reward $r_k$ output by this module is then utilized to optimize the dueling DDQN for learning the optimal sample selection policy.
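As a rough sketch of the two learned-signal terms described above (the exact weighting and the action term $a_k$ follow the paper's Eq.(3); function names here are ours):

```python
import numpy as np

def _softmax(z):
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

def _entropy(p):
    return float(-np.sum(p * np.log(np.clip(p, 1e-12, 1.0))))

def reward_signals(student_logits, teacher_logits_list):
    # Uncertainty consistency: gap between the entropies of the student's
    # prediction q^S and the N MCD teachers' averaged prediction p^T.
    # Transferability: KL divergence between p^T and q^S.
    q_s = _softmax(student_logits)
    p_t = _softmax(np.mean(np.asarray(teacher_logits_list, dtype=float), axis=0))
    consistency = abs(_entropy(q_s) - _entropy(p_t))
    kl = float(np.sum(p_t * np.log(np.clip(p_t, 1e-12, 1.0) / np.clip(q_s, 1e-12, 1.0))))
    return consistency, kl
```

When the student's prediction matches the averaged teacher prediction, both signals vanish; they grow as the two distributions diverge in sharpness or support.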
## References in Rebuttal:
[1] Tharwat, A., Schenck, W. A survey on active learning: State-of-the-art, practical challenges and research directions. Mathematics, 2023.
[2] Culotta, A., McCallum, A. Reducing labeling effort for structured prediction tasks. In AAAI 2005.
[3] Zhu, J., Wang, H., Tsou, B., Ma, M. Active learning with sampling by uncertainty and density for instance annotations. IEEE Trans. Audio Speech Lang. Process., 18(6), 2010.
[4] Campbell, C., Cristianini, N., Smola, A. Query learning with large margin classifiers. In ICML 2000.
[5] Burl, M. C., Wang, E. Active learning for directed exploration of complex systems. In ICML 2009.
[6] Kim, J., Song, Y., et al. MMR-based active machine learning for bio-named entity recognition. In HLT-NAACL.
[7] Romero, A., Ballas, N., et al. FitNets: Hints for thin deep nets. In ICLR 2015.
[8] Zagoruyko, S., Komodakis, N. Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. In ICLR 2017.
[9] Sun, B., Saenko, K. Deep CORAL: Correlation alignment for deep domain adaptation. In ECCV 2016.
[10] Passalis, N., Tefas, A. Learning deep representations with probabilistic knowledge transfer. In ECCV 2018.
[11] Reiss, A., Stricker, D. Introducing a new benchmarked dataset for activity monitoring. In ISWC 2012.
[12] Belal, A., Kiran, M., et al. Knowledge distillation methods for efficient unsupervised adaptation across multiple domains. Image and Vision Computing.
[13] Granger, E., Kiran, M. Joint progressive knowledge distillation and unsupervised domain adaptation. In IJCNN 2020.
[14] Tzeng, E., Hoffman, J., Saenko, K. Adversarial discriminative domain adaptation. In CVPR 2017.
[15] Chen, Z., et al. Smartphone sensor-based human activity recognition using feature fusion and maximum full a posteriori. IEEE TIM, 2019.
[16] Nguyen, V. A., Nguyen, T., et al. (2021). Stem: An approach to multi-source domain adaptation with guarantees. In Proceedings of ICCV. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SyncVIS: Synchronized Video Instance Segmentation | Accept (poster) | Summary: The paper argues that existing methods for video instance segmentation use asynchronous designs, leading to difficulties in handling complex video scenarios. To address this problem, the paper proposes a SyncVIS method for synchronized modeling, achieving state-of-the-art results on several benchmarks.
Strengths: 1. The paper is generally clear to understand.
2. The paper achieves good performance by synchronized modeling.
3. Visualizations are provided to illustrate the effectiveness of the method.
Weaknesses: 1. The novelty of synchronized embedding optimization is limited.
2. The paper did not compare with DVIS++. For example, using ResNet-50 and offline mode, DVIS++ outperforms SyncVIS by 2.5% on YouTube-VIS 2019. DVIS++: Improved Decoupled Framework for Universal Video Segmentation, arXiv, 2023.
3. More implementation details like training steps should be reported.
4. Compute resources, such as the type of GPU, are not reported in the paper, yet the authors answered YES to question 8 in the checklist.
Technical Quality: 2
Clarity: 2
Questions for Authors: In addition to the Weaknesses,
1. How does the method ensure that the video queries learn motion information?
2. Ablation study on different values of $N_{k}$ (line 173) should be conducted.
3. How is the efficiency of the method?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First, we would like to thank you for the detailed, insightful, and constructive comments.
### Novelty
To sum up, as other reviewers have mentioned, our method provides an architectural design that is **intuitive and effective** (Reviewer 3JCg) and **innovative** (Reviewer 2sLS), **demonstrating a high level of versatility** (Reviewer MsYp). Our SyncVIS explicitly introduces video-level query embeddings to synchronize video-level embeddings with frame-level query embeddings. The synchronized video-frame modeling paradigm promotes the mutual learning of frame- and video-level embeddings by selecting key information from both embeddings and updating gradually via the synchronous decoder structure.
### DVIS++
In offline mode, our method scores lower than DVIS++, but our **best** performance with the ResNet-50 backbone is in **online** mode, where adapting SyncVIS to CTVIS yields higher results than DVIS++. As mentioned in our paper and credited by Reviewer MsYp, our method **demonstrates a high level of versatility** across many state-of-the-art VIS methods. DVIS++ is built on DVIS and incorporates a denoising training strategy and a contrastive learning paradigm, as well as an improved DINOv2 backbone. These adjustments, however, are orthogonal to our improvements to the modeling of spatio-temporal representations as well as our optimization strategy.
In the referring tracker and the temporal refiner, with the addition of our video-level embeddings and our synchronous modeling structure, our method can still bring significant improvements (2.1 AP) by adapting to DVIS++ in offline mode.
| Method | AP |
|-----------------------------|------|
| DVIS++ | 56.7 |
| + Synchronized Modeling | 58.0 |
| + Synchronized Optimization | 57.6 |
| + Both (SyncVIS) | 58.8 |
### Implementation details
As for implementation details such as training steps, we list these parameters in the **configs** files at the link provided in the **Appendix**. For example, for YouTube-VIS 2019, the batch size is set to 8, max_iter is 140,000, and we use the AdamW optimizer with a base learning rate of 5e-4 and a weight decay of 0.05. The backbone multiplier is 0.1.
### Computation resources
Most of our experiments are conducted on 4 A100 GPUs (80G), in a CUDA 11.1, PyTorch 3.9 environment. The training time is approximately 1.5 days when training with the Swin-L backbone.
### Motion information in video queries
In the DETR-style architecture, when video queries are **associated with features across time** via the decoder, they can effectively model instance-level motion through the cascade structure. In Mask2Former-VIS, the use of video queries alone enables the capture of instance motion.
A similar finding has been reported in the SeqFormer model, which notes that **"a stand-alone instance query suffices for capturing a time sequence of instances in a video."** However, as the number of input frames increases, relying solely on video queries becomes insufficient for simultaneously tracking the movement of all instances. These limitations motivate our synchronous design to address the shortcomings in motion modeling.
### Ablation of $N_k$
The ablation study of $N_k$ is provided in Table 8, and the related analysis starts at L.324. The ablation results reveal that model performance reaches its optimum when selecting the top $N_{k}=10$ embeddings to aggregate. When $N_{k}$ grows beyond this optimum, the **redundant** query features **dilute** the original compact information; conversely, an $N_{k}$ that is too small injects insufficient information.
### Efficiency
The relevant results are provided in L.242-243. We list the model parameters and FPS of SeqFormer (220M/27.7), VITA (229M/22.8), and our SyncVIS (245M/22.1). Our model performs **notably better** with comparable model parameters and inference speed.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer hMrY,
We appreciate the time and effort you have dedicated to reviewing our work. Your feedback is invaluable, and we are grateful for the opportunity to address any concerns you may have. If you have any further questions or require additional clarification on our rebuttal, please do not hesitate to reach out to us. We stand ready to provide any necessary information or explanations to facilitate your review process.
Thank you again for your thorough review. We look forward to receiving your feedback and are hopeful for a favorable consideration of our work. | Summary: This paper focuses on improving synchronization between frame and video queries in video instance segmentation for better long-range video analysis. The authors propose encoding frame and video queries separately, then using confidence scores to select Nk queries. These queries are updated through mutual information exchange with momentum updates. To train each query, they introduce a synchronized optimization method using a divide-and-conquer approach. They also identify that video-level bipartite matching complexity increases with the number of frames, and address this by suggesting sub-clip level matching. Their technique, when applied to CTVIS and VITA, demonstrates enhanced performance in both online and offline settings compared to existing methods.
Strengths: - The paper is comprehensively written, with clear explanations of its contributions and a detailed analysis of prior research. The proposed methods are innovative and seem well-founded.
- Extensive experiments validate the effectiveness of the proposed methods, significantly bolstering the paper's credibility.
Weaknesses: - The analysis primarily focuses on early work (Mask2Former-VIS), while the baselines used are CTIVS and VITA. Despite this, there is a lack of thorough analysis on these baselines.
- The GT assignment method of Synchronized Embedding Optimization is not compared with existing methods such as TCOVIS. Additionally, although the authors aim for long-range video modeling, their performance on the long video dataset YouTube-VIS 2022 is lower than that of TCOVIS.
- In the Checklist under Experiments Compute Resources, the authors answered "yes" but did not specify the equipment used.
Reference
Li, Junlong, et al. "Tcovis: Temporally consistent online video instance segmentation." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
Technical Quality: 2
Clarity: 3
Questions for Authors: - In Section 3.1, the author highlights the performance drop in Mask2Former-VIS when increasing the number of frames. CTVIS, on the other hand, shows improved performance with more frames and seems to handle temporal modeling well through its memory bank. If CTVIS were used as the baseline, why does the proposed method show improved performance and what specific aspects are enhanced?
- TCOVIS aggregates the cost for each frame and matches GT with predictions at the video level globally. The proposed method matches at the sub-clip level, which is a contrasting approach. What are the respective strengths and weaknesses of each method? Additionally, why does the proposed method perform worse on long videos like YouTubeVIS-2022 compared to TCOVIS?
- In L219, it is mentioned that inference with the Swin-L backbone was done at 480p, but the code indicates 448p. Which is correct? Furthermore, in L218, the learning rate is stated as 1e-4, but the code shows 5e-5. Also, the iteration and learning rate decay settings for YouTubeVIS-2019 and 2021 datasets seem inconsistent. How were the optimization settings determined?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors address the limitations of the proposed method in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's thorough review and valuable input. Your feedback has been instrumental in helping us enhance the quality and clarity of our work.
### Analysis of baseline method
The analysis of CTVIS and VITA is included in the Introduction and Related Works (L. 37, L. 104). We choose to analyze Mask2Former-VIS because it is a basic and representative VIS model that both VITA and CTVIS are built on: VITA utilizes frame-level queries to segment each frame independently and then associates frame-level queries with video-level queries, while CTVIS adopts frame-level queries to build consistent contrastive items and memory banks. 1) **VITA**: Even though VITA adopts video-level queries and manages to associate them, we argue that this process is **sensitive** because the quality of the video-level queries **heavily relies on the learning of frame-level queries**. Since frame-level queries (decoded from frame features) lack global motion information, this asynchronous unidirectional aggregation causes potential information loss in the output video-level queries. 2) **CTVIS**: CTVIS utilizes contrastive learning to strengthen the representations of frame-level embeddings and uses them to maintain a memory bank for modeling temporal relations. However, it mainly focuses on building discriminative frame-level embeddings and **hardly models the long-term spatio-temporal object representation**. That is, CTVIS **lacks an explicit synchronous association** between video-level and frame-level embeddings. As shown on YouTube-VIS 2019 & 2021, SyncVIS achieves 57.9 and 51.9 in online mode, while CTVIS reaches 55.1 and 50.1, which is lower than our results.
We will revise our paper in the final version.
### Comparison with TCOVIS
In TCOVIS, the global instance assignment aims to match predictions from all frames in the video clip with GTs. It considers all frames as a whole in searching for the optimal objective. Our method, on the other hand, divides all video frames into smaller sub-clips for matching with GTs.
1) **TCOVIS:** By matching predictions across all frames, TCOVIS is capable of **associating the video frames from the beginning to the end** of a minute-long video in the YouTube-VIS 2022 validation dataset, but it may **overlook the fine-grained local details** that are crucial for distinguishing multiple similar instances. Thus, when handling multiple instances and movements in consecutive frames, TCOVIS fails to simultaneously track and segment many instances in crowded and occluded scenes. On OVIS, TCOVIS achieves 46.7 while our SyncVIS achieves 50.8, outperforming it by a large margin.
2) **SyncVIS:** By dividing video into several sub-clips, our optimization strategy aims to **reduce the increasing optimization complexity** as the input frame number grows (L. 188, L. 260) because our synchronous modeling paradigm can model temporal association better with more input frames. To realize this target, our strategy is to divide the video into several sub-clips that could make optimization easier while retaining the temporal motion information.
3) **YouTube-VIS 2022:** Our SyncVIS mainly focuses on solving challenging scenarios with multiple instances and occlusions, which are usually overlooked by previous asynchronous methods because of their query-sensitive designs. When handling minute-long videos, our video-level embeddings are insufficient for modeling the associations across such a large number of input frames. Therefore, our performance on YouTube-VIS 2022 is not the SOTA result.
However, SyncVIS manages to strike a balance between modeling long-range video and maintaining high performance on crowded and occluded video scenes of shorter length. Even though TCOVIS has better results on the YouTube-VIS 2022 validation dataset, its performance on another challenging VIS dataset, OVIS, is 4.1 points below our SyncVIS. TCOVIS manages to model the extra-long videos in YouTube-VIS 2022, but it neglects the much more common cases: combinations of long videos and complex scenes with more instances and occlusions.
We will revise this part to the final version of the paper.
### Computation resources
Most of our experiments are conducted on 4 A100 GPUs (80G), in a CUDA 11.1, PyTorch 3.9 environment. The training time is approximately 1.5 days when training with the Swin-L backbone. We will specify this in the final version of the paper.
### Optimization setting
Since the two datasets differ in size (2021 has more videos than 2019), we use different training iterations and learning rates for the two datasets in practice. We will revise the final version of the paper to state this difference. (As for the image size, the correct number is 448, and the learning rate of the Swin-Large backbone on YouTube-VIS 2019 is 5e-4; we will correct this in the final version.)
### Improvement upon CTVIS
It is true that building a memory bank contributes to the temporal modeling of instances. However, the forward pass of CTVIS is still asynchronous: while a single frame produces the frame embedding, there is **no explicit video-level embedding** to interact with the frame-level instance embedding. In our design, we add a set of video-level embeddings that gradually update along with the frame-level embeddings. The explicit video-level embeddings can directly provide long-range and temporal information to the frame-level embeddings, which enhances the quality of the Hungarian matching and the subsequent contrastive learning module.
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' thorough responses. Now, I understand and agree that bidirectional communication between video queries and frame queries is crucial for solving the VIS task, and that this is a point overlooked by previous works.
However, I am still not convinced about the sub-clip matching approach.
- I don't understand why matching predictions and ground truth at the sub-clip level would lead to better distinction between multiple instances. Intuitively, it seems that globally matching predictions to each object's trajectory would be more effective.
- Since TCOVIS is based on GenVIS and SyncVIS is based on CTVIS, I find it difficult to compare their performance on OVIS alone as a measure of optimization effectiveness. I'm not confident that sub-clip matching optimizes better than global matching for the OVIS dataset.
- Also, the memory issue doesn't seem significant since VIS tasks typically don't use a large number of frames for ground truth assignment during the training phase.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to review our rebuttal and share your perspectives. We are glad that the significance of our synchronous design is acknowledged, and we would love to address your additional comments.
### Advantage of sub-clip level matching
- The advantage of optimizing sub-clip over the GIA strategy (TCOVIS) is its ability to **better adapt to changes** in the target instance within the video, particularly in cases of occlusion of many similar instances (In OVIS, most cases are videos with many similar instances, most of which are occluded in certain frames.)
- By optimizing the local sub-sequence of the video rather than the entire video sequence, if the target instance becomes occluded in certain frames, our optimization strategy can adjust the features within the sub-sequence to adapt to this change, **without being affected by the unoccluded frames**. For instance, suppose the target becomes occluded between frames 10 and 15: under the GIA strategy it would be difficult to adjust the features in these last 5 frames, as the features in the initial 10 frames may already be well represented.
- We tested two optimization methods based on GenVIS in the online setting with ResNet-50 backbone on OVIS dataset. As shown in the table, our method shows **better improvement** (0.6 AP) over GIA. This result is consistent with our analysis that our optimization strategy is more effective under such scenarios.
| Method | $AP$ | $AP_{50}$ | $AP_{75}$ |
|----------------------------------------------------------------|:--:|:-----:|:-----:|
| GenVIS |35.8| 60.8 | 36.2 |
| GenVIS + Global Instance Assignment |36.2| 61.4 | 36.6 |
| GenVIS + Synchronized Optimization |36.8| 62.3 | 37.0 |
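For intuition, the divide-and-conquer matching discussed above can be sketched in a few lines. This is a hedged toy illustration, not the authors' implementation: the brute-force `match` stands in for the Hungarian algorithm, and the random per-frame cost tensor is hypothetical.

```python
from itertools import permutations
import numpy as np

def match(cost):
    # Brute-force optimal bipartite matching for a small square cost matrix
    # (a stand-in for the Hungarian algorithm used in practice).
    n = cost.shape[0]
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i, p[i]] for i in range(n)))
    return list(best)

def subclip_assignments(frame_costs, T_s):
    # frame_costs: (T, N, N) per-frame matching costs between N predictions
    # and N ground-truth instances. Instead of one global matching over all
    # T frames, sum costs within each sub-clip of length T_s and match
    # each sub-clip independently.
    T = frame_costs.shape[0]
    return [match(frame_costs[t:t + T_s].sum(axis=0)) for t in range(0, T, T_s)]

costs = np.random.RandomState(0).rand(4, 3, 3)   # 4 frames, 3 instances
global_assign = match(costs.sum(axis=0))         # video-level (GIA-style) matching
per_subclip = subclip_assignments(costs, T_s=2)  # one assignment per 2-frame sub-clip
```

The contrast is visible in the outputs: global matching commits to a single assignment for the whole clip, while sub-clip matching can change the assignment mid-video, e.g. when an instance becomes occluded in the later frames.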
### Memory issue
- As for the memory issue, previous VIS methods would take input frame number at a quite low value (For example, Mask2Former is 2). However, we argue that this is because their performance will even drop (due to their asynchronous structure) when handling more input frames during training (as shown in Fig. 3). But in our design, we introduce an efficient synchronous modeling paradigm that is capable of efficiently utilizing temporal information with more input frames. With more input frames, the memory issue becomes more significant. | Summary: This paper concentrates on the video instance segmentation task. To address the problem of motion information loss when existing methods use frame queries, and the optimization issue of bipartite matching across multiple frames, the authors propose a synchronized video-frame modeling paradigm. This paradigm allows for interaction between frame queries and video queries and introduces a synchronized embedding optimization strategy in a video-level buffer, making multi-frame tracking optimization feasible. The effectiveness of the proposed method is verified on four VIS benchmarks, and detailed ablation studies are conducted.
Strengths: 1. The proposed SyncVIS shows impressive performance across multiple VIS benchmarks.
2. The method introduced can be adapted to enhance multiple existing VIS frameworks, demonstrating a high level of versatility.
Weaknesses: 1. Line 45 mentions that the 'image encoding stage (rather than video encoding)' could lead to motion information loss; however, the proposed method still employs image encoding. Frame queries take the video query for feature aggregation, but this aggregation is merely an accumulation of multi-frame information, similar to the video query in SeqFormer, which can help achieve robustness; how does it model object motion? This statement might be inappropriate.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Is the proposed Synchronized Embedding Optimization for bipartite matching aimed at the bipartite matching between predictions and ground truth, or at object-query bipartite matching during the tracking process (like MinVIS)? This aspect is quite confusing to me. From lines L186-L193, it seems the authors aim to enhance bipartite matching between predictions and GTs, yet lines L194-L199 appear to discuss associating the tracking results of two frames.
2. The conclusion in Sec4.4 seems counterintuitive. Longer Sub-clips could provide the model with more temporal information to help it model complex scenes and trajectories. This should be especially evident on OVIS, which includes many objects and complex inter-object motions, including occlusions and disappearances and reappearances. According to the paper's motivation, longer Sub-clips should perform better on OVIS, but the experimental conclusion states the optimal length on OVIS to be T_s = 2, which corresponds to a pair of frames, providing very limited temporal information. Doesn't this contradict the motivation or the conclusion?
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for taking the time to provide such detailed and constructive criticism. Your suggestions have been invaluable in strengthening our paper.
### Object motion modeling
- **Synchronous design for robust modeling**
Our proposed SyncVIS employs image encoding as well as video encoding **in a synchronous manner** to model object motion while compensating for the potential "motion information loss" of the "image encoding" stage. In our design, frame-level embeddings are assigned to each sampled frame and are responsible for modeling the appearance of instances, while video-level embeddings are a set of instance queries shared across all sampled frames, used to characterize the general motion (because they encode the position information of instances across frames, they naturally contain motion information).
SeqFormer has the observation that "a stand-alone instance query suffices for capturing a time sequence of instances in a video." It decomposes the decoder to be frame-independent while building communication between different frames using instance queries, which also aim to model motion across frames. Nevertheless, it follows an asynchronous manner, which is **less robust** in modeling temporal associations than our synchronous design, because the aggregation is unidirectional rather than a mutual enhancement, and the motion loss in image encoding is not compensated. As shown on YouTube-VIS 2019 & 2021, SyncVIS achieves 54.2 and 48.9 in offline mode, while SeqFormer reaches 47.4 and 40.5, which is much lower than our results.
- **Experiment**
As shown in the table below (also Table 6 of the main paper), by implementing a synchronous structure, SyncVIS outperforms the asynchronous unidirectional structure by 1.6 AP. This further proves that our design is more robust than SeqFormer's in modeling object motion.
| Method | $AP$ | $AP_{50}$ | $AP_{75}$ |
|----------------------------------------------------------------|:--:|:-----:|:-----:|
| Cascade Structure + Both Queries |49.9| 72.0 | 54.4 |
| Synchronous Structure + Both Queries (SyncVIS) |51.5| 73.2 | 55.9 |
### Synchronized Embedding Optimization
In the synchronized embedding optimization, we aim to enhance bipartite matching between predictions and GTs. In L194-L199, the illustrations of $t_i$ and $t_j$ explain how the optimization strategy works in a divide-and-conquer way and illustrate our motivation for dividing the whole training videos into sub-clips.
### Setting of $T_s$
The results do not contradict the conclusion. In the optimization strategy, our main goal is to **reduce the increasing optimization complexity** as the input frame number grows (L. 188, L. 260). To realize this target, our strategy is to divide the video into several sub-clips, which makes optimization easier while retaining the temporal motion information. Longer sub-clips could provide the model with more temporal information, but their optimization complexity also rises polynomially.
The other important factor is the **dataset**. OVIS, compared to YouTube-VIS 2019, has more occluded and crowded scenes (5.8 objects per video for OVIS versus 1.7 for YouTube-VIS 2019). Thus, the per-frame complexity of OVIS is roughly three times that of YouTube-VIS 2019. To further reduce this complexity for better optimization, dividing the video into smaller sub-clips accelerates optimization, and keeping the sub-clip size at two still maintains the temporal information.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer MsYp,
We sincerely appreciate your dedicated time and effort in reviewing our work. Your valuable comments and feedback are crucial for improving the quality of our research.
We have carefully considered your concerns and provided corresponding responses and updated results. We believe we have addressed the key issues you raised, and we welcome further discussion to ensure we have fully covered your concerns. Please let us know if any remaining unclear parts or areas require additional clarification. We are committed to providing a comprehensive response and are open to any additional feedback you may have. Your input is invaluable in helping us strengthen our work.
Thank you again for your thorough review. We look forward to continuing our productive discussion.
---
Rebuttal 2:
Comment: Thanks to the author for the response. All my doubts have been answered in the author's rebuttal. I agree with the phenomena observed on OVIS and hope that there will be more detailed explanations about Synchronized Embedding Optimization in the revision. | Summary: This paper proposes SyncVIS, an approach for Video Instance Segmentation (VIS), which tries to jointly model frame-level and video-level embeddings thus can capture both semantics and movement of instances. The new architecture design is intuitive and generic enough to be applied to various VIS models. Experiments are done on Youtube-VIS 2019, 2021, 2022, and OVIS which show that SyncVIS achieves state-of-the-art results on these benchmarks. Ablations are thorough and enough to understand the proposed approach. Written presentation is clear and easy to read.
Strengths: - The newly proposed architecture design is intuitive and effective.
- The new approach can be applied to most of existing VIS architectures and gives consistent improvements.
- SyncVIS achieves state-of-the-art performance on current benchmarks.
- Ablation experiments are solid and provide good insights about the newly proposed method.
Weaknesses: - The paper may benefit from including, in the main text if space allows, qualitative results that highlight what SyncVIS improves over the base models trained in an asynchronous manner.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Why are the results of Mask2Former-VIS on the OVIS dataset in Table 9 so low?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The author(s) have some statements about their method's limitation and further provide more details in supplementary.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First, we would like to thank you for the detailed, insightful, and constructive comments.
### Qualitative results
In the Appendix, we provide many illustrations comparing previous base models with our SyncVIS. We select cases under different scenarios, including settings with multiple similar instances, reappearance of instances, different instance poses, and long videos (Fig. 4 - Fig. 7). In Fig. 8 of the Appendix, we provide visualizations of different embedding designs, which further illustrate that our synchronous design is capable of better segmenting and tracking multiple similar instances, while the asynchronous design ends up ignoring some instances and segmenting objects incompletely.
### Mask2Former-VIS on OVIS dataset performance
- **Complexity of OVIS**
OVIS is known for having more occluded and crowded scenes than the YoutubeVIS dataset (5.8 objects per video for OVIS while 1.7 for YoutubeVIS 2019), and thus poses more challenges to the model's temporal modeling capacity on multiple instances.
- **Mask2Former-VIS**
Mask2Former-VIS, on the other hand, is a typical **offline** VIS method that models the whole video sequence with only video-level embeddings, which is far from sufficient for handling occluded and crowded scenes. Even if we increase the input frame number to bring in more temporal information, the modeling remains difficult for Mask2Former-VIS. Even though we experimented with different hyperparameter settings for the OVIS scenario, the results are still unsatisfying due to the innate restrictions of Mask2Former-VIS. Follow-up work to Mask2Former-VIS, such as SeqFormer, also suffers from the typical asynchronous design of offline methods, and its performance is also below 15 AP.
Out-of-Distribution Detection with a Single Unconditional Diffusion Model | Accept (poster) | Summary: This paper proposes an unsupervised anomaly detection method based on diffusion models. The core idea is to leverage the properties of the score function of a pre-trained diffusion model to distinguish samples from different distributions, rather than relying on log-likelihoods or reconstruction error. To this end, the authors first demonstrate that log-likelihoods are not a good metric for differentiating samples from different datasets. The authors then motivate using the score function, specifically the L2 norm of the score function summed across different time steps as a statistic for differentiating OOD samples. This statistic can be interpreted as the rate of change of diffusion trajectories from the original distribution to the standard Normal distribution. Extending this observation, the authors also incorporate the curvature of the trajectory by considering the derivative of the score function. One key contribution is showing that a single diffusion model trained on a large, diverse dataset, such as ImageNet, can be used for OOD detection across multiple datasets. Experiments are performed on CIFAR-10, CIFAR-100, CelebA, and SVHN datasets, demonstrating higher average AUCROC scores compared to existing methods. Ablation experiments show the effect of various design choices.
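The core statistic described in the summary (the L2 norm of the score summed across timesteps, interpretable as the rate of change of the diffusion trajectory) can be sketched in a few lines. This is a hedged toy illustration: `toy_score` is an illustrative closed-form Gaussian score, not the paper's trained diffusion model, and the timestep grid is hypothetical.

```python
import numpy as np

def path_statistic(x, score_fn, timesteps):
    # Sum of L2 norms of the score across diffusion timesteps:
    # a proxy for the "rate of change" of the trajectory from the
    # data distribution toward the standard Normal.
    return sum(np.linalg.norm(score_fn(x, t)) for t in timesteps)

def toy_score(x, t):
    # Exact score of N(0, (1 + t) I): s(x, t) = -x / (1 + t).
    return -x / (1.0 + t)

timesteps = np.linspace(0.0, 1.0, 10)
x_id = np.full(4, 0.1)   # near the mode of the toy distribution ("in-distribution")
x_ood = np.full(4, 3.0)  # far from the mode ("out-of-distribution")
stat_id = path_statistic(x_id, toy_score, timesteps)
stat_ood = path_statistic(x_ood, toy_score, timesteps)
```

In this toy setting the OOD sample yields a larger statistic than the ID sample, mirroring how the statistic is thresholded to flag OOD inputs; the paper additionally incorporates curvature via the derivative of the score.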
Strengths: - One central problem in anomaly detection is that the model is specific to each dataset. This paper proposes a partial solution to this problem, which is of significance to the community.
- The proposed idea of using properties of the diffusion trajectory such as the rate of change and curvature is a novel idea, to the best of my knowledge.
- The paper presents the approach clearly and with sufficient motivation. The writing and organization of the paper are praiseworthy, with the authors first presenting the problem, discussing failure modes of likelihood-based methods, analyzing the diffusion trajectories, and then finally introducing their approach.
- The experimental results are strong, with good performance across datasets. Various ablation studies help understand design choices for the proposed approach.
Weaknesses: There are certain concerns regarding the applicability of the approach to a broader range of problems, and some clarifying questions for the authors.
- The method relies on the availability of a reasonably large and diverse dataset for the domain of interest. For images, ImageNet is an obvious choice, but this raises the question of the applicability of this method to tabular domains, which are especially relevant for scientific and industrial applications.
- The connection to OT in Section 3.4 is valid only if the data distribution is Gaussian, as per my understanding. However, the data distribution, including the image datasets analyzed in this paper, is typically highly multi-modal. This invalidates the connection, and an explanation from the authors would be beneficial.
- It is not clear how the 6D statistic introduced in Section 3.5 is used to distinguish samples since it is not a scalar value that can directly be compared.
- Since a single diffusion model trained on ImageNet is used for the experiment, it is not clear what distinguishes an ID and OOD dataset. What makes a dataset ID in this scenario and what is the difference when evaluating C10 vs SVHN and SVHN vs C10 in Table 3?
- There is at least one other diffusion-based anomaly detection method that leverages properties of the diffusion schedule rather than relying on reconstruction error [1]. A brief discussion on the differences between this approach and the one discussed in the paper would be beneficial.
[1] Livernoche, V., Jain, V., Hezaveh, Y. and Ravanbakhsh, S., On Diffusion Modeling for Anomaly Detection. In *The Twelfth International Conference on Learning Representations*.
Technical Quality: 2
Clarity: 3
Questions for Authors: - The results in Table 4 are a bit surprising. Why is a model trained on ImageNet better than a model trained on the ID dataset? For example, when treating C10 as the ID dataset, shouldn’t a diffusion model trained on C10 perform better?
- In this paper, the authors motivate the use of ImageNet due to it being a large and diverse dataset. Do the authors have some thoughts on how to pick such a ‘base’ dataset when applying this method to other domains? Is the idea that this dataset should provide coverage over the ID and OOD datasets?
- The authors provide an explanation for why a higher number of DDIM steps slightly hurt performance, but shouldn’t a smaller time difference make the finite approximation method more accurate?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors discuss the limitations of their approach in sufficient detail. Some of my points in the weaknesses section echo this discussion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging the significance of the problem, novelty of our approach, quality of our writing and strong experimental results. Below we provide our response to the reviewer’s concerns and questions.
> method relies on the availability of a reasonably large and diverse dataset...this raises the question of the applicability of this method to tabular domains, which are especially relevant for scientific and industrial applications.
Thank you for raising this point. Our experiments focus on the image domain as diffusion models for images are more mature compared to domains such as tabular data. OOD benchmarks for generative models also mainly focus on images. We believe the general approach proposed in DiffPath would still apply, but experiments would need to bear this out. We will mention tabular data as another domain of future exploration in the conclusion, lines 312-314: "There are several interesting future directions…such as video, audio, language, time series and *tabular*…".
> connection to OT in Section 3.4 is valid only if the data distribution is Gaussian. However, the data distribution, including the image datasets analyzed in this paper, is typically highly multi-modal.
We agree that one can only mathematically prove the path is OT if the data is Gaussian, and we acknowledge this in Sec 3.4 (c.f. Line 177-178: “this map is the optimal transport (OT) path if the data distribution is Gaussian”). However, prior work has shown experimentally that for higher-dimensional mixtures and images, the paths match the OT cost up to numerical precision [1]. Hence, we discuss OT as a motivation for DiffPath. We will revise Sec 3.4 to emphasize this further.
> not clear how the 6D statistic introduced in Section 3.5 is used to distinguish samples since it is not a scalar value that can directly be compared.
Thank you for raising this point, which we believe we should make clearer in the paper. For OOD detection, the AUROC computation requires scalar values and as such, we do not use the 6D scores directly. Rather, we fit a GMM to the 6D statistic of the ID training set. During evaluation, to obtain an OOD score for a test sample, we compute the likelihood under the GMM. A lower likelihood implies the sample is “farther” from the ID training set, hence more likely to be OOD. The procedure is shown in line 6 of Algorithm 1.
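The procedure described above (fit a GMM to the ID training statistics, then score test samples by their log-likelihood) can be sketched as follows. This is a minimal illustration with synthetic placeholder statistics, not the actual DiffPath quantities:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-ins for the 6D DiffPath statistics described in Sec 3.5.
rng = np.random.default_rng(0)
id_train_stats = rng.normal(size=(500, 6))  # 6D statistics of the ID training set
test_stats = rng.normal(size=(10, 6))       # 6D statistics of test samples

# Fit a GMM to the ID training statistics (cf. Algorithm 1, line 6).
gmm = GaussianMixture(n_components=5, random_state=0).fit(id_train_stats)

# OOD score = log-likelihood under the GMM; lower => "farther" from ID.
ood_scores = gmm.score_samples(test_stats)
```

Test samples whose statistics lie far from the ID training statistics receive much lower log-likelihoods, which is what makes the scalar score usable for AUROC.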
> it is not clear what distinguishes an ID and OOD dataset. What makes a dataset ID in this scenario and what is the difference when evaluating C10 vs SVHN and SVHN vs C10 in Table 3
As discussed above, the ID dataset is the one whose training set’s statistics (1D or 6D) were fit with a density model (KDE or GMM). At test time, we calculate the likelihood of a sample under the density model to classify whether the sample is ID or OOD. For example, consider the task of C10 vs SVHN. If C10 is ID, we compute the statistics of C10’s training set using the ImageNet diffusion model and fit a density estimator to them, then evaluate using samples from C10’s and SVHN’s test sets.
> one other diffusion-based anomaly detection method that leverages properties of the diffusion schedule rather than relying on reconstruction error. A brief discussion on the differences between this approach and the one discussed in the paper would be beneficial.
We thank the reviewer for pointing out [2], which we will discuss in the paper. [2] performs OOD detection by using the distribution of the diffusion time of a noisy test sample. Anomalous samples are farther from the data manifold and have higher diffusion time. The key difference with our approach is that DiffPath computes statistics of the diffusion path rather than diffusion time, and we use a single model while [2] requires either KNN search for each sample, or a parametric model per dataset.
> Why is a model trained on ImageNet better than a model trained on the ID dataset? For example, when treating C10 as the ID dataset, shouldn’t a diffusion model trained on C10 perform better?
We hypothesize that the C10 model is unable to compute the scores accurately for SVHN as the C10 model is unable to generalize to SVHN features. Meanwhile, as ImageNet is diverse, the model has broadly captured the features contained in C10 and SVHN. We note that SVHN, being digits, is not well represented in ImageNet. Yet, it appears the diversity of ImageNet allows the model to generalize beyond the training distribution, hence highlighting the importance of a large and diverse base dataset.
> thoughts on how to pick such a ‘base’ dataset when applying this method to other domains? Is the idea that this dataset should provide coverage over the ID and OOD datasets?
Indeed, our central hypothesis is that the base dataset should be diverse enough such that it broadly covers both ID and OOD. In this work, we chose ImageNet as this was the most readily-available large image diffusion model. Extension to even larger models like Stable Diffusion would serve as interesting future work.
> shouldn’t a smaller time difference make the finite approximation method more accurate?
Apart from what was mentioned in the paper, we hypothesize that another reason is that we do not compute the full Eq. 9, but only the simple time derivative. As a result, at higher DDIM steps, we may not technically be approaching the true $d \epsilon / d\gamma_t$. However, at the lower step counts (50/100) that we use in this work, this approximation does not seem to hinder the performance for OOD detection. We leave the investigation of full JVP computation of the derivative to future work.
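As a hedged sketch of the "simple time derivative" mentioned above: given epsilon predictions at consecutive DDIM steps, one can take a forward difference along the trajectory. Here `eps_fn` is a hypothetical smooth stand-in for the UNet epsilon-predictor, and the trajectory states are placeholders:

```python
import numpy as np

def eps_fn(x, t):
    """Hypothetical smooth epsilon-predictor standing in for the UNet."""
    return np.sin(t) * x

def finite_diff_eps(x_traj, t_grid):
    """Forward-difference time derivative of epsilon along a DDIM trajectory."""
    eps = np.array([eps_fn(x, t) for x, t in zip(x_traj, t_grid)])
    dt = np.diff(t_grid)[:, None]
    return np.diff(eps, axis=0) / dt  # shape (num_steps - 1, dim)

t_grid = np.linspace(0.0, 1.0, 50)  # e.g. 50 DDIM steps
x_traj = np.ones((50, 4))           # placeholder trajectory states
deps_dt = finite_diff_eps(x_traj, t_grid)
```

For this toy `eps_fn` with the state held fixed, the forward difference recovers the analytic derivative cos(t) up to an O(dt) error, which shrinks as the step count grows; the rebuttal's point is that the quantity being differenced is itself only part of the full derivative in Eq. 9.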
Thank you again for your positive review. We hope that we have addressed your concerns. If you have further concerns, please let us know.
[1] Khrulkov, Valentin, et al. "Understanding ddpm latent codes through optimal transport." arXiv preprint arXiv:2202.07477 (2022).
[2] Livernoche, Victor, et al. "On diffusion modeling for anomaly detection." arXiv preprint arXiv:2305.18593 (2023).
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications which clear most of my questions.
After reading the other reviews and the authors' responses, I think the paper is a useful contribution and should be accepted. I maintain my positive score.
---
Rebuttal 2:
Title: Thank you to the reviewer
Comment: We thank the reviewer for their positive remarks and for acknowledging the contributions of our work! | Summary: This paper looks at statistics calculated from the path through the diffusion model based on $\int_0^T \| \epsilon_{\phi}(x_t, t) - \epsilon_{\psi}(x_t, t)\|dt$ and uses them to discriminate between two distributions $\psi$ and $\phi$. A KDE of the ID scores is used to compute the likelihood of the score of a new test image, which is then used to perform OOD detection.
Strengths: * The paper is well written and easy to follow.
* The idea of computing ID statistics using a network pretrained on a large dataset has proven successful with other techniques (SSL, or classifiers trained on ImageNet), so it makes perfect sense to explore this idea with diffusion models.
* The method seems to work better than competitors.
* Interesting link with OT.
Weaknesses: 1. The training distribution matters (as stated by the authors themselves), so saying that the DM is a universal OOD detector is a bit of an overclaim.
2. In 3.1, It is stated that "likelihood does not work", but the content does not demonstrate such a strong statement. It only shows that *likelihood obtained directly from a diffusion model does not work as is for OOD detection*. It seems from prior work that likelihood coming from the training of generative models is not suitable for OOD detection -- which is a weaker claim than "likelihood does not work" -- but even this weaker statement is not demonstrated properly since showing it for diffusion models is not sufficient evidence. In addition, the authors end up using a likelihood (that of the KDE) to build their score which makes this wording confusing. This part is nothing more than a motivating experiment (which is, by the way, interesting and fits well in the flow of the paper), so I would not provide such an overinterpretation.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why use the KDE and not directly the score $\sqrt{\sum_t \| \epsilon_{\theta} (x_t, t) \|^2_2}$?
2. In 3.5, why use the first-order term, which disconnects the statistics from OT theory? Why are powers chosen only up to three?
3. The method requires 50 FE, but what is the cost of one individual FE? OOD detection applications are often real-time, online or embedded so it is important to detail this point.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging the motivation and importance of extending OOD detection to large pretrained generative models, as well as the quality of our writing. Below we provide our response to the reviewer’s questions and concerns.
> “The training distribution matters (as stated by the authors themselves), so saying that the DM is a universal OOD detector is a bit of an overclaim.”
To clarify, we do not claim our method to be a “universal OOD detector”. Instead, we claim that *with a single model*, our method can perform OOD detection “across diverse tasks” (c.f. Line 6), “that a single model outperforms baselines that necessitates separate models for each distribution” (c.f. Lines 47-49) and that “a generalist model … can also be applied to out-of-distribution detection” (c.f. Lines 310-311).
We were careful to ensure that our claims apply only to the tasks that we have tested in the paper. Certainly, we did not expect DiffPath to be capable of “universal” OOD detection across all domains. We acknowledge this explicitly in the limitations section, where we mention c.f. Line 320-321: “we consider a DM trained on ImageNet, which may not be general enough for specialized applications such as medical images.”
> "It only shows that likelihood obtained directly from a diffusion model does not work as is for OOD detection. It seems from prior work that likelihood coming from the training of generative models is not suitable for OOD detection -- which is a weaker claim than "likelihood does not work" -- but even this weaker statement is not demonstrated properly since showing it for diffusion models is not sufficient evidence. In addition, the authors end up using a likelihood (that of the KDE) to build their score which makes this wording confusing. This part is nothing more than a motivating experiment (which is, by the way, interesting and fits well in the flow of the paper), so I would not provide such an overinterpretation.”
We thank the reviewer for pointing this out, and for acknowledging the motivating purpose of Sec 3.1. We agree that a more accurate header for Sec 3.1 is that “Diffusion Model Likelihoods Do Not Work for OOD Detection”. We will make this change in the final revision.
> “Why use the KDE and not directly the score”
To calculate the AUROC for OOD detection, one needs to fix a threshold and assign values lower than the threshold as OOD and higher as ID, then integrate over all thresholds. We use a density estimator like KDE because in DiffPath, Theorem 1 does not suggest that OOD samples will have lower score norms than ID samples, only that they are *different*. This is evident in Fig 2, where all we can say is that the histograms are separated relative to each other. Hence, we fit a likelihood estimator like a KDE or GMM to the ID training samples. This way, OOD samples will have low likelihoods under the estimator, and we use those likelihoods in the AUROC computation. This is a subtle point that we believe is worth discussing in the paper for clarity. We will include this discussion in the final revision.
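The subtle point above can be sketched with synthetic 1D statistics (all values illustrative, not from the paper): when the OOD statistics straddle the ID mode, thresholding the raw statistic fails, while the KDE log-likelihood still separates ID from OOD:

```python
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
id_train = rng.normal(0.0, 1.0, size=(1000, 1))   # ID training statistics
id_test = rng.normal(0.0, 1.0, size=(200, 1))
# OOD statistics are *different* but not uniformly lower: they straddle ID.
ood_test = np.vstack([rng.normal(-4.0, 1.0, size=(100, 1)),
                      rng.normal(4.0, 1.0, size=(100, 1))])

kde = KernelDensity(bandwidth=0.25).fit(id_train)
test_stats = np.vstack([id_test, ood_test])
scores = kde.score_samples(test_stats)            # log-likelihood per sample
labels = np.r_[np.ones(200), np.zeros(200)]       # 1 = ID, 0 = OOD

# AUROC integrates over all thresholds; higher likelihood => more likely ID.
auroc_kde = roc_auc_score(labels, scores)
auroc_raw = roc_auc_score(labels, test_stats.ravel())  # raw statistic fails here
```

In this toy setup `auroc_kde` is near 1 while `auroc_raw` is near 0.5, mirroring the rebuttal's argument that Theorem 1 only guarantees the statistics are *different*, not ordered.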
> “why use the first-order term, which disconnects the statistics from OT theory? Why are powers chosen only up to three?”
We are unsure what the reviewer means by “disconnects the statistics from OT theory”. DiffPath 1D considers first-order terms of the Taylor expansion (Eq. 8) while DiffPath 6D uses both first and second-order terms. Both first and second-order terms of the diffusion path are discussed in the OT example in Sec 3.4 (c.f. Lines 187-191: "the corresponding first and second-order OOD statistics are equal and given by $||\frac{dx_i}{dt}||_2 =||\frac{d^2x_i}{dt^2}||_2 = ||a_i e^{-t}||_2...$")
In DiffPath 6D, we combine different powers of the first and second-order terms to form a higher-dimensional statistic. This stems from a purely practical standpoint, as we found that this led to more robust OOD performance. One could in principle investigate other combinations or even higher powers, which we leave to future studies.
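One plausible reading of this construction stacks powers one through three of the two norms into a 6D vector; the exact combination used in DiffPath-6D is defined in the paper, so the code below is an illustrative guess rather than the paper's specification:

```python
import numpy as np

def six_d_statistic(d1_norm, d2_norm):
    """Stack powers 1..3 of the first- and second-order path norms.

    Illustrative composition only; the actual DiffPath-6D combination
    is given in the paper.
    """
    return np.stack([d1_norm, d1_norm**2, d1_norm**3,
                     d2_norm, d2_norm**2, d2_norm**3], axis=-1)

# Per-sample norms of the first- and second-order terms (placeholder values).
stats = six_d_statistic(np.array([2.0]), np.array([3.0]))
```

A density estimator (GMM) would then be fit to these 6D vectors over the ID training set.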
> “The method requires 50 FE, but what is the cost of one individual FE? OOD detection applications are often real-time, online or embedded so it is important to detail this point”
Running on a Nvidia A5000 GPU, a single 64x64 sample on our model requires 0.02s (20ms) per FE of the UNet, which we will include in the paper. This is of course hardware dependent. Regardless, due to the nature of diffusion models, we do not expect DiffPath in its current iteration to be suitable for real-time/online OOD detection. This work represents a first step in showing that a single diffusion model can be used for OOD detection (which we believe to be a notable result) and we leave inference speed improvements to future work.
We hope we have resolved your concerns. If so, we kindly ask you to consider raising your score. Should you have any additional issues, please feel free to reach out to us.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. I increase my rating accordingly.
---
Rebuttal 2:
Title: Thank you to the reviewer
Comment: We thank the reviewer for their positive response and for raising their score! | Summary: This paper presents a diffusion model trained on a single dataset that can also perform well in OOD detection across diverse tasks. The core concept is Diffusion Paths (DiffPath), which characterizes the properties of the forward diffusion path. Specifically, they measure the rate-of-change and curvature of the diffusion paths, using these as the derivatives of the score and contextualize regarding the optimal transport concept. The method is tested on various benchmarks and generally shows improvement over compared baselines.
Strengths: - The proposed idea of defining and discriminating between ID and OOD datasets using the forward diffusion path is novel.
- The method is comprehensible, and the illustrations are clear.
- This framework shows good experimental results compared to several baselines.
Weaknesses: - In Figure 3, how extensively does the diffusion model need to be trained on a large, diverse dataset?
- The method is limited by the coverage of the ImageNet-trained diffusion model.
- Ultimately, to achieve broad coverage, a large-scale pretrained diffusion model is required (e.g. more various OOD datasets, medical, manufacturing). Is the contribution of guaranteed generalizability compared to foundational generative models still valid?
- Add a single trained case that can cover datasets from different domains, not just ImageNet.
- In Table 4, performance drops significantly compared to the baseline in hard-settings like CIFAR10 vs. CIFAR100.
- The proposed method struggles particularly in distinguishing between ID/OOD datasets with similar semantics.
- Include experiments and improvement strategies for hard-settings in OpenOOD [1].
- Table 4, add and compare with the latest research baseline, Projection Regret [2].
[1] Zhang, Jingyang, et al. "Openood v1. 5: Enhanced benchmark for out-of-distribution detection." arXiv preprint arXiv:2306.09301 (2023).
[2] Choi, Sungik, et al. "Projection regret: Reducing background bias for novelty detection via diffusion models." Advances in Neural Information Processing Systems 36 (2023): 19230-19245.
Technical Quality: 3
Clarity: 2
Questions for Authors: N/A
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: see weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging the novelty, clarity and strong experimental results of our work. Below we provide our response to the questions raised by the reviewer.
> In Figure 3, how extensively does the diffusion model need to be trained on a large, diverse dataset?
To clarify, we use the pretrained checkpoint from Improved-DDPM [1] trained on unconditional ImageNet 64x64, which according to the authors [2] is trained for 1.5M steps. We do not retrain/finetune the diffusion model, as the key idea is to leverage existing large pre-trained models. In terms of training duration, we did not perform ablations due to the large compute resources required for ImageNet-scale training. That said, our ablations in Table 4 suggest that a large and diverse base dataset is needed for good OOD performance.
> “The method is limited by the coverage of the ImageNet-trained diffusion model.” ... “to achieve broad coverage, a large-scale pretrained diffusion model is required (e.g. more various OOD datasets, medical, manufacturing). Is the contribution of guaranteed generalizability compared to foundational generative models still valid?”
We agree with the reviewer that the performance is dependent on the coverage of the base distribution, in this case ImageNet. We have acknowledged this in the limitations section of the paper, where we hypothesize that the ImageNet model would likely be unsuitable for specialized applications like medical and manufacturing (c.f. Line 320-321: “we consider a DM trained on ImageNet, which may not be general enough for specialized applications such as medical images.”).
Thus, we do not claim to have a model for OOD detection across *all* domains. Our main contribution is to show that a single unconditional diffusion model can be used for OOD detection across a variety of image tasks which we test in the paper. For specialized applications, one should use a base distribution that provides coverage over such cases, or consider even larger foundation models like Stable Diffusion, which we leave for future studies.
> “Add a single trained case that can cover datasets from different domains, not just ImageNet”
We are unsure what the reviewer means by “a single trained case”. We believe the reviewer is suggesting that we should use a single model other than ImageNet that covers multiple datasets/domains. If so, we unfortunately do not have the computational resources to train such a model from scratch and are unaware of any other existing trained unconditional diffusion model. We would be happy to conduct further experiments if the reviewer can point us to an existing model. We have conducted ablations in Table 4 where the base distributions are CIFAR10, SVHN and CelebA. We find that the performance of these models is inferior to that of the ImageNet model.
> “In Table 4, performance drops significantly compared to the baseline in hard-settings like CIFAR10 vs. CIFAR100.” and “The proposed method struggles particularly in distinguishing between ID/OOD datasets with similar semantics.” and “Include experiments and improvement strategies for hard-settings in OpenOOD”
We thank the reviewer for pointing OpenOOD out to us. To better study DiffPath's performance on near-OOD tasks, we ran additional experiments based on the benchmark proposed in OpenOOD [3] for C10 and ImageNet200 as in-distribution. We report the average AUROC along with the most recent baselines from [3].
|Methods|C10|ImageNet200|
|:-------------:|:-----:|:-----------:|
|KLM|0.79|0.808|
|VIM|0.887|0.787|
|KNN|0.907|0.816|
|DICE|0.783|0.818|
|DiffPath|0.797|0.906|
We see that DiffPath achieves similar performance in C10 to recent methods and achieves the strongest result in ImageNet200. We will include these results in the paper. To further improve near-OOD performance, we believe that incorporation of perceptual features [4] can help; currently, DiffPath works from a KL divergence perspective, which may be less sensitive to small pixel differences. We leave this to future work.
> “Table 4, add and compare with the latest research baseline, Projection Regret”.
We cited PR in related works but did not compare quantitatively as we were not able to find open-source code. For diffusion baselines, we endeavor to test all fairly under the same settings and ID-OOD setups, so we train and evaluate the baselines with the provided code. Given that it is non-trivial to reproduce PR’s implementation without reference code, we opt to discuss PR qualitatively. If the reviewer is aware of code for PR, please let us know and we can run the required experiments.
We hope we have addressed your concerns. If so, we hope that you would consider raising your score. If you have further concerns, please let us know.
[1] https://github.com/openai/improved-diffusion
[2] Nichol, Alexander Quinn, and Prafulla Dhariwal. "Improved denoising diffusion probabilistic models." International conference on machine learning. PMLR, 2021.
[3] Yang, Jingkang, et al. "Openood: Benchmarking generalized out-of-distribution detection." Advances in Neural Information Processing Systems 35 (2022): 32598-32611.
[4] Choi, Sungik, et al. "Projection regret: Reducing background bias for novelty detection via diffusion models." Advances in Neural Information Processing Systems 36 (2023): 19230-19245.
---
Rebuttal 2:
Title: Thank you to the reviewer
Comment: As the discussion period comes to a close, we sincerely thank the reviewer for their helpful comments. We hope that we have addressed the reviewer's concerns in our rebuttal. If the reviewer still has unaddressed concerns, please let us know and we will be happy to discuss further. | Summary: The paper proposes DiffPath, an OOD Detection method with foundation diffusion models (i.e., diffusion models over diverse data) that can be applied to any in-distribution dataset. By measuring properties of the diffusion trajectory mapping images to noise, the paper demonstrates some improvements on OOD detection. Notably, the method is faster than most other diffusion-based techniques for OOD detection.
Strengths: - The goal of being able to achieve OOD detection with a foundation diffusion model is important as this can enable modular separation of OOD detection and other downstream tasks.
- The paper is well written, with theoretical connections wherever possible, enabling intuitive understanding and making it easy to follow.
- The experiment in sec 3.5 is interesting and higher-order norms have also been used in the past: for example, check out the Gram Matrix paper [1].
[1] Sastry and Oore. Detecting Out-of-Distribution Examples with Gram Matrices. ICML 2020.
Weaknesses: - Achieves lower performance for near-ood detection tasks.
- The evaluations do not sufficiently demonstrate the benefits of a single unconditional diffusion model. Since two sets of examples are said to be OOD if there is no class overlap between them and the diffusion model is trained on 1k ImageNet classes, I believe that some of the experiments should focus on demonstrating the ease of constructing OOD detectors for arbitrary sets of ImageNet classes as in-distribution, or perhaps the entire ImageNet data as in-distribution.
- As a follow-up to the above, it seems like this technique can also be used for learning one-class classification networks. In fact, I feel that the paper should discuss the performance on this task first before considering the natural extension to OOD detection where several classes are in-distribution. It also helps understand the generality of the statistics used to identify OOD examples.
- The use of Theorem 1 (Fisher divergence) to motivate the L2-norm of the score function as an indicator for OOD detection is not entirely clear. For example, the Fisher divergence uses the difference between score-function outputs estimated over a batch of examples.
- The ablation study with resizing is useful. However, it seems the diffusion model is trained on 64x64 images and hence the CIFAR10, CIFAR100 and SVHN images are scaled up to 64x64 while CelebA and Textures are scaled down to 64x64. For a fair comparison with methods which work directly at 32x32 resolution, CelebA and Textures should first be downsampled to 32x32 and then upsampled to 64x64. Otherwise, OOD detection between pixelated CIFAR10 images and comparatively high-res CelebA/Textures images would not be so meaningful.
Technical Quality: 2
Clarity: 3
Questions for Authors: See weaknesses.
1. What hyperparameters do you use for KDE/GMM algorithms? How to select these hyperparameters?
2. Intuitively, why do the trajectory statistics such as norm and rate-of-change vary from class to class?
3. Using an SVHN/CIFAR10/CELEB-A model to directly perform OOD detection was not as successful as using a diffusion model trained on imagenet. However, considering a SVHN diffusion model, if we select one of the SVHN classes as in-distribution, can we achieve effective OOD detection?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging the strong motivation, insightful writing, and interesting experiments in our paper. Below we provide our responses to the questions raised by the reviewer.
>lower performance for near-ood detection tasks
To better study DiffPath's performance on near-OOD tasks, we ran additional experiments based on the benchmark proposed in OpenOOD [1] for C10 and ImageNet200 as in-distribution. We report the average AUROC along with the most recent baselines from [1].
|Methods|C10|ImageNet200|
|:-------------:|:-----:|:-----------:|
|KLM|0.79|0.808|
|VIM|0.887|0.787|
|KNN|0.907|0.816|
|DICE|0.783|0.818|
|DiffPath|0.797|0.906|
We see that DiffPath achieves similar performance in C10 to recent methods and achieves the strongest result in ImageNet200. We will include these results in the paper. To further improve near-OOD performance, we believe that incorporation of perceptual features [2] can help; currently, DiffPath works from a KL divergence perspective, which may be less sensitive to small pixel differences. We leave this to future work.
>evaluations do not sufficiently demonstrate benefits of a single unconditional diffusion model
Our evaluations show that DiffPath, with a single model, is comparable to baselines trained on in-distribution data. The benefit is therefore the reduction in resources needed to train different models for different tasks. Our work connects OOD detection with recent foundation generative models, where one model excels at multiple tasks. Further, our ablations show that the choice of base distribution matters, and one requires a diverse dataset for DiffPath to work well.
> some of the experiments should be focused on demonstrating ease of constructing OOD detectors for arbitrary sets of imagenet classes as in-distribution... this technique can also be used for learning one-class classification networks.
Thank you for the suggestion. We conducted experiments using single ImageNet64 classes as ID against various OOD classes and report the AUROC for DiffPath-6D. Due to limited time, we evaluate on a small but diverse set of randomly chosen classes. Note that none of the diffusion baselines consider such a task. The results show that DiffPath-6D performs well on this task without hyperparameter tuning, suggesting its potential for one-class classification. We will include these results in the paper.
|ID (below) / OOD (right)|Daisy|Dugong|Altar|Orange|Perfume|
|:-------------------------:|:-----:|:------:|:-----:|:------:|:-------:|
|Airship|0.823|0.83|0.818|0.818|0.863|
|Cheeseburger|0.768|0.86|0.727|0.786|0.958|
|Pizza|0.824|0.9|0.692|0.842|0.959|
>the fisher-divergence uses difference between score-function outputs estimated over a batch of examples
Thank you for highlighting this distinction, which we will clarify in the revision. First, we note that Theorem 1 serves as motivation and not a guarantee/proof for OOD detection, as stated in line 121 of the paper: “This *motivates* the use of the norm of the score as an OOD statistic…”. The intuition is that if the L2 norms of the scores of individual samples differ, so should their expectations (the reverse may not be true). Our experiments and Fig. 2 suggest that this reasoning holds.
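The intuition can be illustrated with a toy 1D example (ours, not the paper's setup): for a Gaussian $N(\mu, \sigma^2)$ the score is $-(x-\mu)/\sigma^2$, so distributions with different $\sigma$ have different expected squared score norms ($1/\sigma^2$), and a Monte Carlo estimate over samples recovers this:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_sq_score_norm(mu, sigma, n=200_000):
    """Monte Carlo estimate of E[||grad log p(x)||^2] under N(mu, sigma^2)."""
    x = rng.normal(mu, sigma, size=n)
    score = -(x - mu) / sigma**2   # closed-form Gaussian score
    return float(np.mean(score**2))

narrow = mean_sq_score_norm(0.0, 1.0)  # approximately 1
wide = mean_sq_score_norm(0.0, 2.0)    # approximately 0.25
```

Here the per-sample score norms differ between the two distributions, and so do their expectations, matching the rebuttal's "if individual norms differ, so should their expectations" reasoning.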
>For a fair comparison with methods which directly work at 32x32 resolution, celeba and textures should be first downsampled to 32x32 and then upsampled to 64x64
We thank the reviewer for this suggestion. We agree that standardizing the resolution will allow for fairer comparison. Double resizing introduces twice the interpolation error, hence we retrained baselines at 64x64 resolution so every dataset is resized only once to 64x64. Due to time constraints, we report results for CelebA as ID:
||CIFAR10|SVHN| CIFAR100|Textures|
|:-----------:|:-------:|:------:|:--------:|:--------:|
|MSMA|1|1|1|0.979|
|DDPM-OOD|0.674| 0.463|0.644|0.841|
|LMD|0.999|1|0.998|0.98|
|DiffPath|0.999|1|0.998|0.943|
MSMA, LMD and DiffPath have comparable performance, while DDPM-OOD suffers a performance drop. We will include these updated results and revise our claims to state that DiffPath ‘matches’ the performance of baselines at 64x64 resolution. Again, we emphasize that DiffPath matches this performance using a single model, while the baselines require individually trained models.
>What hyperparameters do you use for KDE/GMM algorithms? How to select these hyperparameters?
We mention the hyperparameter values in appendix B. Selection is done empirically via simple search over a defined set.
>why do the trajectory statistics such as norm and rate-of-change vary from class to class?
As motivated by Theorem 1 and the OT example, our hypothesis is that the path connecting different distributions to standard Gaussian differs. In this work, we observe that the differences manifest in the rate-of-change and curvature of the paths. There could certainly be other measurable properties of the paths that could be explored, which we believe is an exciting future direction.
> considering a SVHN diffusion model, if we select one of the SVHN classes as in-distribution, can we achieve effective OOD detection?
As we are not proposing to use a diffusion model with SVHN as the base distribution, we are unsure of the reviewer’s suggestion. In terms of single-class OOD detection, we have provided results on ImageNet above as suggested.
We hope that we have addressed the reviewer’s concerns. If so, we kindly request to consider raising your score. If you have further concerns, please do not hesitate to let us know.
[1] Yang, Jingkang, et al. "Openood: Benchmarking generalized out-of-distribution detection." Advances in Neural Information Processing Systems 35 (2022): 32598-32611.
[2] Choi, Sungik, et al. "Projection regret: Reducing background bias for novelty detection via diffusion models." Advances in Neural Information Processing Systems 36 (2023): 19230-19245.
---
Rebuttal 2:
Title: Thank you for your response
Comment: Thank you for your response! I still have a few outstanding concerns/questions:
1. Open-OOD results: Could you please describe the transformation pipeline (e.g., the sequence of image transformations you applied in order to get the 64x64 images) you used for ImageNet-200?
2. One-class learning results: these results seem encouraging! Instead of arbitrary pairs of in-distribution and out-of-distribution classes, it will be more effective to consider semantically related categories for ID/OOD pairs. For example, see Table 8 of [1]. Here, if you consider the dogs subset, you may consider one of them as ID (e.g., beagle) and the remaining 11 dog breeds as OOD. This is a challenging setting while also having important practical applications -- so, demonstrating improvements in this case is going to be significant. I understand that this experiment requires time and compute resources and may be more suitable for a later revision.
3. Fair Comparison: Thank you for the additional experiments on this task. Comparing these numbers to the original table seems to confirm my speculation that comparing pixelated images (i.e., upsampled C10/SVHN/C100) against higher-resolution images (e.g., downsampled Celeb-A) leads to over-optimistic results. For example, in Table 3, MSMA is shown to yield an AUROC of 0.871 for CelebA vs C10 when using a diffusion model trained at 32x32 resolution (is this correct?) and comparing original C10 images with downsampled CelebA images. Next, in this new experiment using a diffusion model trained at 64x64 resolution, it achieves an AUROC of 1.0, which in my understanding can be attributed to upsampling the C10 images to 64x64. While I understand that DiffPath uses a single model while the baseline methods rely upon a separate model in each case, DiffPath also has access to a much stronger diffusion model since it is trained on much more data. So, matching the results of the baseline methods is not very surprising; in fact, it also seems to me that MSMA achieves better results and requires fewer function evaluations than DiffPath.
***
EDIT: in Q3, I was referring to an experimental setting where a diffusion model is trained on SVHN images and one of the SVHN classes (e.g., 0) is considered as in-distribution and others as OOD. It is similar to the one-class detection results for ImageNet already provided in the above response. This experiment can evaluate if DiffPath can generalize to a diffusion model trained on much smaller data. Again, this may be more suitable for consideration in a later revision.
[1] Ahmed and Courville. Detecting Semantic Anomalies.
---
Rebuttal 3:
Title: Response to reviewer 1/2
Comment: We thank the reviewer for their timely reply. Below is our response to the reviewer’s comments.
**OpenOOD transformation pipeline**
We use standard transformations in diffusion image modeling: we resize the images to 64x64 using bilinear interpolation, followed by normalization of pixels to the range [-1,1].
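As a minimal sketch of this preprocessing (the bilinear resize itself would be done by a standard library resampler, e.g. PIL's `Image.resize` with a bilinear filter; the function name below is illustrative), the normalization step maps uint8 pixels to the range [-1, 1]:

```python
import numpy as np

def normalize_to_unit_range(img_uint8):
    """Map uint8 pixel values in [0, 255] to floats in [-1, 1]."""
    return img_uint8.astype(np.float32) / 127.5 - 1.0

# Example: a dummy 64x64 RGB image, as if already resized.
img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
x = normalize_to_unit_range(img)
assert x.min() >= -1.0 and x.max() <= 1.0
```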
**One class learning**
We thank the reviewer for this suggestion. To our knowledge, generative and discriminative OOD methods are evaluated differently in the literature and we are unaware of any generative baselines that are evaluated via the one-class experiments suggested. We agree that such tests would be very challenging for unconditional generative models as they may not learn the "right" features for discriminating the classes.
Due to time constraints, we could only run preliminary trials with select ImageNet dog breeds (Ibizan hound vs bluetick and beagle). Without further tuning, we found that DiffPath did not distinguish the dog breeds. We also tested LMD on the same task and found similar performance. We hypothesize that methods that perform well on this task are likely to be discriminative models which utilize class labels during training, such as those evaluated in [1].
Gradient-based classification training enables the model to learn fine-grained features specific to each dog class, which we believe is crucial in this context. In contrast, generative models focus on maximizing the likelihood of the overall data distribution and are not explicitly trained to identify subtle discriminative features. This could potentially lead to weaker performance in tasks where the distributions exhibit a high degree of overlap.
To enhance the performance of generative methods in this challenging context, one potential improvement could involve augmenting the OOD score calculation with discriminative features [2]. This approach could leverage the strengths of both methodologies: the generative model (e.g., DiffPath) would effectively manage most tasks, while the discriminative features would address more complex cases characterized by high distribution overlap. We leave this to future studies and will include this discussion as a limitation of DiffPath in the revision.
**"DiffPath also has access to a much stronger diffusion model since its trained on much more data. So, matching the results with the baseline methods is not very surprising"**
We respectfully disagree with the reviewer on this statement. We find it surprising that one can perform OOD detection using a generative model that has *not* been trained on ID data or labels, *regardless of the quantity of other data the model is trained on*.
While one could argue that ImageNet provides broad coverage of images like those in CIFAR10 and the faces in CelebA, exact data from these distributions is **not** included in ImageNet. The digits of SVHN are covered even more sparsely by ImageNet. Thus, we believe this is a surprising discovery, one which connects OOD detection with recent findings on foundation generative models, where one model excels at multiple tasks. Our work shows that the curvature statistics of the ImageNet diffusion model capture general properties of images that are useful for OOD detection.
**Fair comparison**
For a more complete comparison with diffusion baselines at 64x64 resolution, we were able to train MSMA and DDPM-OOD for all ID setups. We will include complete results, including LMD, in the final revision (reconstruction for LMD takes $>10^3$ steps per sample, so we are unable to obtain results at present). For simplicity, we present the AUROC averaged over all 12 tasks of Table 3 in our paper:
| Method | Average |
|--------------------|---------|
| MSMA | 0.951 |
| DDPM-OOD | 0.765 |
| DiffPath | 0.942 |
Upsampling leads to more optimistic results for MSMA (and LMD in earlier experiments), and only marginally better results for DDPM-OOD. In this setting, MSMA and DiffPath are competitive with each other.
In terms of function evaluations (FEs), MSMA is cheaper than DiffPath (10 vs 50) but this isn't a fundamental limitation of DiffPath: MSMA is a discrete noise level NCSN [3], while DiffPath utilizes the continuous-time diffusion formulation with DDIM sampling. DiffPath can be made more efficient by using better diffusion samplers like DPM-Solver [4].
Taken as a whole, the experimental results show strong performance for DiffPath, as it is competitive with baselines using roughly the same order of magnitude of FEs (or one to two orders of magnitude fewer FEs than DDPM-OOD/LMD), *while using a single model and without significant hyperparameter tuning*. We believe the last point to be most significant, as this work is the first to show this for generative OOD methods.
---
Rebuttal 4:
Title: Response to reviewer 2/2
Comment: We thank the reviewer for suggesting the above experiments, which have indicated where improvements can be made to DiffPath. We hope that the reviewer will evaluate our work in the context of our main claims:
- We propose the use of diffusion path statistics, namely the rate-of-change and curvature, for OOD detection.
- We show that these statistics are obtained from a Taylor expansion about the diffusion process, and can be estimated simply via finite difference of the DDIM sampler.
- We further show that one can obtain these statistics for different data distributions using a single model, even when the model is not trained on task-specific data.
- We connect these statistics to score-matching and optimal transport theory, providing a theoretical motivation for DiffPath.
- DiffPath is competitive with generative baselines requiring individually-trained models.
Our experiments were geared towards evaluating these claims. We do **not** claim that DiffPath is the best possible OOD detector in all settings.
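As a sketch of how such finite-difference path statistics could be computed (illustrative names; in the paper, the trajectory would come from the DDIM iterates of the pretrained model, for which the array `xs` below is a stand-in), the first and second finite differences give rate-of-change and curvature estimates:

```python
import numpy as np

def path_statistics(xs):
    """Given a trajectory xs of shape (T, D) -- e.g. flattened diffusion
    iterates -- return the summed norms of its first finite difference
    (rate-of-change) and its second finite difference (curvature)."""
    d1 = np.diff(xs, n=1, axis=0)   # (T-1, D) rate-of-change
    d2 = np.diff(xs, n=2, axis=0)   # (T-2, D) curvature
    rate = np.linalg.norm(d1, axis=1).sum()
    curvature = np.linalg.norm(d2, axis=1).sum()
    return rate, curvature

# A straight-line path has (numerically) zero curvature; a curved one does not.
t = np.linspace(0.0, 1.0, 50)[:, None]
line = np.hstack([t, 2.0 * t])
arc = np.hstack([np.cos(t), np.sin(t)])
assert path_statistics(line)[1] < 1e-9
assert path_statistics(arc)[1] > 0.0
```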
[1] Ahmed, Faruk, and Aaron Courville. "Detecting semantic anomalies."
[2] Zhang, Richard, et al. "The unreasonable effectiveness of deep features as a perceptual metric."
[3] Song, Yang, and Stefano Ermon. "Generative modeling by estimating gradients of the data distribution."
[4] Lu, Cheng, et al. "Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps."
---
Rebuttal Comment 4.1:
Title: Thank you!
Comment: I thank the authors for their reply and new insights from additional experiments. I also appreciate the clear explanation of their perspectives. While I strongly subscribe to the claim-based evaluation, I feel that the unconventional experimental settings using both pixelated and non-pixelated images gives the wrong message. More specifically, there are 3 types of comparisons in your experimental setup:
1. **Pixelated vs Pixelated**:
- Examples: C10 vs C100, C10 vs SVHN, SVHN vs C10, SVHN vs C100.
- Finding: In Table 3, we see that baselines outperform DiffPath in all these cases while DiffPath still offers comparable performance in some cases.
2. **Pixelated vs Non-pixelated**:
- Examples: C10 vs Celeb-A, C10 vs Textures, SVHN vs Celeb-A, SVHN vs Textures, Celeb-A vs C10, Celeb-A vs C100, Celeb-A vs SVHN
- Finding: In each of these cases, DiffPath achieves close to 100% detection rate. However, this is also very different from the conventional setting and distinguishing between pixelated and non-pixelated images may almost be trivial. In Table 3, DiffPath outperforms all baselines but under fair settings, it seems like all baselines achieve close to 100% detection rate (at least for Celeb-A vs C10/C100/SVHN as in your rebuttal response).
3. **Non-pixelated vs Non-pixelated**:
- Examples: Celeb-A vs Textures
- Finding: All baselines outperform DiffPath both in Table 3 and in the new Celeb-A results in your rebuttal response.
As stated above, my concern is that these are non-conventional settings with over-optimistic results in some cases and cannot be compared directly with other works. A standardized benchmark such as OpenOOD allows us to standardize comparison between methods. It is impressive that DiffPath achieves improvements on Imagenet-200 on the near-OOD tasks in OpenOOD; but the baseline methods in that table operate using 224x224 images while DiffPath operates at 64x64 resolution and hence, it is not clear if it is meaningful to compare between them directly.
I would appreciate a standardized comparison between baselines and your method that does not involve pixelated images (i.e., resizing standard 32x32 datasets to 64x64 resolution). This would allow for a fair comparison between DiffPath results and other results in the literature. Finally, while it is not necessary to improve over the baselines in every case, it would be good to demonstrate _some_ improvements over the baseline in the standardized evaluation settings.
I request the authors to kindly correct me in case I am misinterpreting the results. Thank you!
---
Reply to Comment 4.1.1:
Title: Thank you to the reviewer
Comment: As the discussion period comes to a close, we sincerely thank the reviewer for their helpful comments and for engaging with us. We hope that we have addressed the reviewer's concerns in our rebuttals and if so, we kindly request that the reviewer consider revising their score. If the reviewer still has unaddressed concerns, please let us know and we will be happy to discuss further.
---
Rebuttal 5:
Title: Response to reviewer
Comment: Thank you to the reviewer for their prompt response and for providing further clarifications. We better understand the concerns raised.
From our interpretation, the reviewer uses the term 'pixelated' to describe images that have been upsampled from a lower resolution (e.g., 32x32 CIFAR10 to 64x64), and 'non-pixelated' to refer to images that have been downsampled from a higher resolution (e.g., CelebA, Textures). We agree with the reviewer’s observation that when comparing 'pixelated' versus 'non-pixelated' images, the blur introduced during upsampling adds additional information—absent in downsampling—that could potentially simplify the OOD task.
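As a toy numerical illustration of this asymmetry (using nearest-neighbor resampling for simplicity rather than the bilinear kernel used in practice): an image upsampled from 32x32 to 64x64 contains no new pixel values, and its constant 2x2 blocks are a statistical signature absent from natively higher-resolution images:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pixelated": a 32x32 image upsampled to 64x64 by pixel duplication.
small = rng.integers(0, 256, size=(32, 32))
upsampled = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

# "Non-pixelated": a 128x128 image downsampled to 64x64 by striding.
large = rng.integers(0, 256, size=(128, 128))
downsampled = large[::2, ::2]

# Every 2x2 block of the upsampled image is constant; the downsampled
# image shows no such structure.
up_blocks = upsampled.reshape(32, 2, 32, 2)
down_blocks = downsampled.reshape(32, 2, 32, 2)
assert (up_blocks == up_blocks[:, :1, :, :1]).all()
assert not (down_blocks == down_blocks[:, :1, :, :1]).all()
```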
To address this concern, we have re-evaluated the results presented in Table 3 of the paper, focusing exclusively on OOD tasks where comparisons are made only between 'pixelated vs pixelated' and 'non-pixelated vs non-pixelated' images. In other words, we have omitted cases where an upsampled image is compared to a non-upsampled image, thereby eliminating any potential advantage due to the blur of upsampling. The revised results are presented below:
| ID vs OOD | CIFAR10 vs SVHN | CIFAR10 vs CIFAR100 | SVHN vs CIFAR10 | SVHN vs CIFAR100 | CelebA vs Textures | Average |
|--------------|---------|---------|--------|---------|--------|---------|
| Diffusion NLL| 0.091 | 0.521 | 0.99 | 0.992 | 0.809 | 0.681 |
| Diffusion IC | 0.921 | 0.519 | 0.08 | 0.1 | 0.559 | 0.436 |
| MSMA | 0.957 | 0.615 | 0.976 | 0.98 | 0.967 | 0.899 |
| DDPM-OOD | 0.39 | 0.536 | 0.951 | 0.945 | 0.773 | 0.719 |
| LMD | 0.992 | 0.604 | 0.919 | 0.881 | 0.972 | 0.874 |
| DiffPath | 0.92 | 0.593 | 0.924 | 0.936 | 0.946 | 0.864 |
From this revised evaluation, DiffPath remains highly competitive with existing baselines. Specifically, it outperforms DDPM-OOD, NLL, and IC, while closely matching the performance of MSMA and LMD. Again, this is achieved using a single model that is not trained on in-distribution data. We will revise the results to only include comparisons between datasets of the same resolution, e.g., these new results and the new 64x64 results in the rebuttal, as well as moderate our claims accordingly. In light of these findings, along with our methodological contributions, we believe that our work remains novel and represents a significant advancement in the generative OOD literature.
Regarding the reviewer's remarks on the OpenOOD benchmarks, we would like to highlight that in OpenOOD, the near-OOD tasks involve datasets where images vary in size *even within the same dataset* (e.g., NINCO and SSB-hard) [1]. Consequently, there is no standardized approach to ensuring comparisons are exclusively between 'pixelated vs pixelated' and 'non-pixelated vs non-pixelated' images, *even in the original settings tested in OpenOOD*. Therefore, we believe that the focus on resolution within the OpenOOD context may be less critical than anticipated.
[1] https://github.com/Jingkang50/OpenOOD
---
Rebuttal 1:
Rebuttal: Thank you to the reviewers for their thoughtful comments and feedback. We are glad that the reviewers found our idea of diffusion paths for OOD detection to be novel, well-motivated and that the paper is well-written.
We find that most of the reviewers' questions revolve around method/experiment clarifications. In general, we would like to reiterate that our main contribution is in proposing statistics of the diffusion path, specifically the rate-of-change and curvature, for OOD detection. We show that these statistics are a natural result of the score-matching formulation of diffusion models, where they are derived from a Taylor expansion of the diffusion process. We further show that the statistics can be obtained from a *single* pre-trained unconditional diffusion model trained on a diverse dataset — this approach contrasts with prior work, which focuses on training a specific model on the in-distribution data. Our experiments serve to validate this idea and show that DiffPath is competitive with strong *individually-trained* baselines.
Based on the comments, we have made the following revisions:
1. Ran additional near-OOD experiments according to the setup in OpenOOD [1], which show significantly improved performance for DiffPath.
2. Ran additional single-class ImageNet OOD experiments.
3. Ran experiments for baselines at the same resolution for more accurate comparisons, showing DiffPath matches the baselines with only a single model.
4. Several clarifications on the theory and methodology of DiffPath to enhance clarity for the reader.
Please see below for detailed responses to each reviewer.
[1] Yang, Jingkang, et al. "Openood: Benchmarking generalized out-of-distribution detection." Advances in Neural Information Processing Systems 35 (2022): 32598-32611.
(Dataset source: NeurIPS_2024_submissions_huggingface, conference year 2024)
---
Efficient Recurrent Off-Policy RL Requires a Context-Encoder-Specific Learning Rate
Decision: Accept (poster)
Summary: The authors investigate the stability of training deep recurrent policies for POMDP tasks. They hypothesize that the recurrent part of the encoder faces less stable training than the fixed-length parts of the encoder (e.g., the MLP input layer) and propose that the former should use a lower learning rate. This is shown to be effective mostly through an empirical evaluation.
Strengths: The paper's ideas are communicated clearly, and the authors present an excellent summary of the state of the art for several partially observable settings, including the general setting, meta-RL, and credit assignment. I appreciated the wide range of experimental domains and the sensitivity analysis demonstrating that the recurrent layers and the rest of the network should have different learning rates for optimal performance. The suggestion to introduce a new hyperparameter for the recurrent encoder could be broadly adopted.
Weaknesses: Unfortunately, I believe the work is not sufficiently novel for acceptance at NeurIPS. The problem identified by the authors --- that a recurrent encoder is numerically less stable than a fixed-length encoder --- has been identified many times over the years with various techniques proposed to mitigate it. This includes classical works such as the original LSTM paper [1] aimed at tackling exponentially scaling gradients. More recently, techniques such as gradient clipping and truncating the number of recurrent backpropagation steps (which is not the same as shortening the context length) have been suggested to tackle this problem as well.
Experimentally, I would have liked to see that these last two tricks do not already solve the stability issues, as there was no mention of them in the paper. It also seems that the hyperparameters were tuned by hand (since I could not find any mention of a hyperparameter selection scheme), which I think is insufficient for a work whose sole contribution is an empirical result that we are meant to broadly apply. I would have expected to see a very rigorous and objective hyperparameter selection method, such as a grid search over reasonable hyperparameters. While I appreciated the sensitivity analysis in Fig 9, it was only conducted on a single domain.
Ultimately however, my opinion is that the idea is not substantial enough for publication in a top venue.
[1]: Hochreiter, Sepp, and Jürgen Schmidhuber. "Long short-term memory." Neural computation 9.8 (1997): 1735-1780.
Technical Quality: 2
Clarity: 4
Questions for Authors: 1. How were hyperparameters tuned in the experiments?
2. Have you considered the approaches mentioned in the weaknesses? (gradient clipping and truncated gradients)
Confidence: 4
Soundness: 2
Presentation: 4
Contribution: 1
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and providing us with your valuable feedback.
- **[LSTM]** Our paper discovered an issue entirely different from the exploding gradients (numerical instability) addressed by the LSTM: even if the RNN gradients do not explode and are similar in magnitude to those of MLPs, the gradient updates can still cause significant variations in the RNN outputs. These large output variations lead to instability in RL training, which requires the RNN updates to be slowed down. This is evident in Figure 3, particularly when the rollout step is 0: the action variation of the brown curve is lower than that of the orange curve, indicating that the RNN parameter changes are small and its gradients are not large. However, as the rollout steps increase, the brown curve grows by several orders of magnitude, which can destabilize reinforcement learning training.
- **[The gradient clipping & truncated gradient tricks]**
    - We have tried the gradient clipping (maximum gradient norm of 0.5, as done in Dreamer) and truncated gradient (maximum gradient horizon of 32) tricks, but we found that these tricks do not address the problem RNNs face in RL. Their purpose is to prevent gradient explosion in RNNs; they cannot slow down the RNN updates. In Fig. R1, we show the effects of these tricks. It can be seen that gradient truncation may still lead to instability and infinite values in tasks like Ant and HalfCheetah. Gradient clipping can suppress extreme values but does not improve performance. We conducted an in-depth investigation and compared the policy gradient norms of RESeL and the gradient clipping approach. As shown in Fig. R3, the policy gradient norm of the gradient clipping method has many spikes, reflecting training instability. This instability does not stem from unstable gradient values, as we had already employed gradient clipping.
    - We find that the gradient instability originates from the instability of value function learning. In Fig. R4, we observe that in the baseline with gradient clipping but unchanged learning rates, the value loss frequently encounters outliers. This is due to large variations in the outputs of the policy and value networks, which make value function learning, which is based on bootstrapping, more unstable. This instability results in large value losses, which in turn cause spikes in the policy gradients. This issue is distinct from the instability and explosion of RNN gradients.
- **[Sensitivity analysis was only conducted on a single domain]** We have conducted similar experiments in WalkerBLT-P-v0 and AntBLT-V-v0. The results are shown in Fig. R2. Limited by computational resources, we only included the setting of Figs. 9(a) and 9(c) to illustrate that the optimal $LR_{CE}$ is around $10^{-5}$ in these tasks, and to show that $LR_{CE} = LR_{MLP}$ does not achieve optimal performance. The conclusions drawn from Fig. R2 are consistent with those presented in Figs. 9(a) and 9(c).
- **[Hyper-parameters]** Some hyper-parameters (discount factor, reward as input) were chosen based on the nature of the environments, while others (Context encoder LR, policy and value function LR, and batch size) are determined via grid search:
    - **Context Encoder LR**: We fixed the learning rate of the context encoder to be 1/30 of the policy learning rate in all tasks. We aim for the action variation caused by RNN updates to be comparable to that caused by MLP updates, and thus fix the learning rate ratio between the context encoder and the MLP. After conducting parameter searches in several environments, as shown in Figure 9, we found that the 1/30 ratio performed well (this conclusion is also evident in Figure R2), so we fixed this ratio across all environments.
- **Policy and Value Function LR**: We mainly followed the default learning rate settings in CleanRL [1], using a policy learning rate of $3 \times 10^{-4}$ and an action-value function learning rate of $10^{-3}$. We conducted a grid search between the aforementioned learning rates and those reduced by a factor of $5$ (aiming to further stabilize training). We found that a smaller learning rate was more stable in tasks where the partial observability is not very significant (e.g., MDPs like classic MuJoCo).
- **Discount Factor ($\gamma$)**: For general tasks, we used a common $\gamma$ value of 0.99. For environments requiring long-term memory (e.g., Key-to-Door), we used a larger $\gamma$ of 0.9999.
- **Reward as Input**: The Classic Meta-RL tasks require inferring the true reward function based on real-time rewards. Thus, we also included real-time reward feedback as an input to the policy and value in these tasks.
- **Batch Size**: We conducted a grid search for batch size, mainly exploring sizes that were 1x or 2x the maximum trajectory length. We found that in classic POMDP tasks, a larger batch size performed better. In other tasks, the advantage of a large batch size was not significant and could even reduce training speed.
[1] Huang et al. "CleanRL: High-quality single-file implementations of deep reinforcement learning algorithms." JMLR 2022.
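The context-encoder-specific learning rate above amounts to giving the recurrent parameters their own optimizer parameter group. A minimal NumPy SGD sketch (illustrative names; in practice this would be, e.g., separate parameter groups in an Adam optimizer):

```python
import numpy as np

def sgd_step(params, grads, lrs):
    """One SGD step where each parameter group has its own learning rate."""
    return {k: params[k] - lrs[k] * grads[k] for k in params}

LR_MLP = 3e-4
LR_CE = LR_MLP / 30.0  # context-encoder LR, fixed at 1/30 of the MLP LR

params = {"context_encoder": np.ones(4), "policy_mlp": np.ones(4)}
grads = {"context_encoder": np.ones(4), "policy_mlp": np.ones(4)}
lrs = {"context_encoder": LR_CE, "policy_mlp": LR_MLP}

new = sgd_step(params, grads, lrs)
# With equal gradients, the context encoder moves 30x less per update.
delta_ce = np.abs(new["context_encoder"] - params["context_encoder"]).max()
delta_mlp = np.abs(new["policy_mlp"] - params["policy_mlp"]).max()
assert np.isclose(delta_mlp / delta_ce, 30.0)
```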
----
We sincerely appreciate your time and expertise in reviewing our paper. We would greatly appreciate it if you could re-evaluate our paper based on the above responses.
---
Rebuttal 2:
Title: Re: Rebuttal
Comment: Thank you to the authors for the effort put into the rebuttal. I appreciate the inclusion of an investigation on gradient clipping and truncating the number of gradient steps, although I would have liked to see truncations with a smaller number of steps as well. Many implementations consider 4 or 8 steps (e.g. [1,2]) which may do well given that the output variation is exponential in the number of steps -- truncating to 32 steps doesn't provide much insight into whether it can tackle the exponential variation.
I'm still not convinced about the novelty of the work after reading the authors' rebuttal and the other reviews. The authors claim that the novelty is not so much in their implementation trick but in the theoretical and intuitive understanding gained from this work. However, the idea that RNNs tend to be more unstable than MLPs is not a particularly new or surprising finding. Or is the novel understanding something different from what I just described?
In reply to Reviewer 89BQ's argument for the novelty of this work, I can agree that the trick of using separate learning rates is not considered common practice. With a more rigorous investigation to ensure that current methods don't already solve this, such as gradient truncation with a smaller number of steps, I can agree that this knowledge could be useful to many practitioners (though perhaps it doesn't require a 9-page NeurIPS paper to disseminate). However, this paper doesn't seem to offer much beyond that. Most papers that propose a new algorithm also come with new and surprising insights that future works can build off of. Unfortunately, with this work, I don't think the insights are sufficiently novel. I don't see them pushing a research area forward, inspiring future works, or significantly improving our scientific understanding.
I've increased my score since the experimentation was improved, but unfortunately my main complaint regarding lack of novelty still stands.
[1] https://github.com/MarcoMeter/recurrent-ppo-truncated-bptt?tab=readme-ov-file
[2] https://github.com/lcswillems/torch-ac?tab=readme-ov-file
---
Rebuttal Comment 2.1:
Comment: Thank you for your reply. It appears that there may be a misunderstanding. The reviewer seems to believe that the instability in Recurrent off-policy RL is due to numerical explosion or instability of RNN gradients, and that addressing the gradient issues alone would resolve the overall training stability. This understanding aligns with the traditional issues of RNNs and corresponding solutions. However, there will be different issues in off-policy RL.
Our experimental results (as shown by the red line in Fig. R3) indicate that the gradient explosion problem is not significant, as advanced RNNs (such as GRU, Mamba, etc.) have already addressed gradient instability quite effectively. If the gradients were indeed growing exponentially with the number of steps, merely reducing the RNN learning rate by 30 times would not be sufficient to counteract this exponential divergence, whereas gradient clipping would effectively mitigate such issues. Contrary to this, the experimental results show that our method is significantly more stable than gradient clipping.
This is because the fundamental cause of instability in Recurrent off-policy RL is not gradient instability, but rather excessively large output variations between consecutive updates of the RNN. Even if the gradients are not large (on the same order of magnitude as those in MLPs), the outputs of the recurrent policy/value function can vary significantly between updates. These large output changes introduce instability into RL. For instance, in value function training, the (simplified) optimization target is $||Q(s,a)-r(s,a)-\gamma\hat{Q}(s',a')||_2^2$, where $\hat{Q}$ and $a'$ are the target value function network and the outputs of the current policy on $s'$, respectively. If the outputs of the value function network and policy network vary significantly, $a'$ will change greatly, and $\hat{Q}(s',a')$ will also fluctuate substantially. This results in large shifts in the optimization target of the value function after each update, leading to training instability. Similar instabilities occur in policy training, contributing to overall RL training instability. This is why, as shown in Fig. R4, even after clipping the gradients, the value loss remains unstable. **To the best of our knowledge, this finding has not been reported in previous work.** We will include these discussions in the revised version of our paper.
Our theoretical results demonstrate that output variation does not increase exponentially with the number of steps but converges to a certain value. Therefore, we can balance the output variations, amplified by time steps, with a small RNN-specific learning rate. In contrast, if gradient clipping or truncation is used to keep the gradients within a normal range (similar to MLPs), the RNN output variations will still be large, and RL training will remain unstable.
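To make this point concrete, here is a toy linear-recurrence experiment (illustrative sizes; a contractive recurrent weight is assumed): after a small parameter perturbation, the per-step output gap grows over early steps but then converges to a fixed value rather than increasing exponentially with the step count, which is why a small encoder-specific learning rate suffices to control it:

```python
import numpy as np

rng = np.random.default_rng(0)
D, T = 8, 200
W = rng.standard_normal((D, D))
W *= 0.9 / np.linalg.norm(W, 2)          # contractive recurrent weight
dW = 1e-3 * rng.standard_normal((D, D))  # stand-in for one gradient update
x = 0.5 * rng.standard_normal(D)         # constant input, for clarity

def rollout(Wrec):
    """Roll the recurrence h <- tanh(Wrec h + x) for T steps."""
    h, hs = np.zeros(D), []
    for _ in range(T):
        h = np.tanh(Wrec @ h + x)
        hs.append(h)
    return np.array(hs)

gap = np.linalg.norm(rollout(W + dW) - rollout(W), axis=1)
# The output gap is zero at the first step, is amplified over subsequent
# steps, then converges to a fixed value instead of blowing up.
assert gap[0] == 0.0
assert abs(gap[-1] - gap[-2]) < 1e-8
assert gap.max() < 1.0
```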
The results of Gradient truncation with 4 or 8 truncation steps are shown in the table below. In both variants, the algorithm produced infinite outputs before 250 iterations, leading to early stopping.
| Task | $LR_{CE}=10^{-5}$ | $LR_{CE}=3\times10^{-4}$, grad-step-truncation-4 | $LR_{CE}=3\times10^{-4}$, grad-step-truncation-8 | $LR_{CE}=3\times10^{-4}$, grad-step-truncation-32 |
| :-----------------: | :-------------------------------------: | :------------------------------------------------: | :------------------------------------------------: | :-------------------------------------------------: |
| AntBLT-V-v0 | $\mathbf{1986}\pm\mathbf{73}^\star$ | $408\pm 74$ | $269\pm 88$ | $499\pm 55$ |
| HalfCheetahBLT-V-v0 | $\mathbf{2679}\pm\mathbf{176}^\star$ | $-107\pm 320$ | $-703\pm 330$ | $-458\pm 244$ |
---
Summary: This paper proposes RESeL, which improves recurrent off-policy RL in POMDPs mainly by applying a lower learning rate to the context encoder. This is justified by their theoretical analysis showing that the recurrence amplifies output differences in the long run. In practice, they also incorporate several techniques (Mamba architecture, critic ensembles, efficient data sampling, slow policy updates). The main contribution is the strong empirical performance: across several benchmarks (classic POMDP, meta-RL, credit assignment), RESeL attains SOTA performance.
Strengths: The problem setting of partial observability that this paper tackles is crucial and challenging. As indicated by the experiments, the partial observability is severe in some tasks and may require full context lengths (around 1000).
The paper is well written and easy to follow.
The performance gain averaged on a wide range of POMDP tasks is large enough to make RESeL a very strong empirical work. RESeL uses Mamba, which also accelerates training time a lot.
The ablation experiments in Sections 5.1 and 5.3 clearly show that a smaller learning rate helps stability (through reduced action variation).
Weaknesses: No major weaknesses. Please see the questions below.
One missing point is about the theoretical understanding -- why a smaller learning rate on context encoder can stabilize training? Reducing the output variation between consecutive updates, in my opinion, is a starting point, but not enough to explain the stability as a whole. This is connected with two-timescale update of feature vs value learning, e.g., https://openreview.net/forum?id=rJleN20qK7 Perhaps this related work is worth a discussion.
Technical Quality: 3
Clarity: 4
Questions for Authors: About architectural design. Is there a reason for using respective context encoders in actor-critic?
About proposition 1. The connection between a smaller learning rate and the bound (1) is not very clear. Is the logic that a smaller learning rate leads to a smaller \|\theta - \theta’\| and thus a smaller \epsilon, then a tighter upper bound in (1)?
Is the unusually large batch size described in Table 2 a typo?
As the main finding is on the learning rates, could the sensitivity analysis (Figure 9) be applied to more tasks?
Indicated by Figure 10, is RESeL-GRU more sample-efficient than RESeL-Mamba? Is Mamba a crucial component of RESeL?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: Yes, no major limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the time you have taken to review our work and your positive feedback.
- **[Theoretical understanding]** Thank you very much for pointing out this relevant work. It has been very enlightening for us. RESeL and TTN are closely related, with the MLP value and the small learning rate RNN encoder corresponding to the value function and representation network, respectively. The convergence guarantee of TTN also explains why a small output variation between consecutive updates can enhance training stability.
- **[Architectural design]** As noted in [1], sharing a context encoder could lead to a large gradient norm, causing instability. Therefore, we chose to use separate context encoders.
- **[The connection between a smaller learning rate and the bound (1)]** You are right, reducing the learning rate can lower $\epsilon$, thereby reducing the upper bound. We will include this discussion in the revised version of our paper.
- **[Large batch size described in Table 2]** The batch size here refers to the number of transitions. For each update, we collect at least one complete trajectory to calculate the loss. In environments like HalfCheetah, one trajectory consists of 1000 steps. So, the 1000 and 2000 mentioned correspond to one or two complete trajectories in these environments. To maintain consistency in the number of valid transitions across different environments, especially those with varying trajectory lengths, we use the same batch size for all of them. For these environments, we dynamically sample a variable number of complete trajectories to ensure that the number of valid transitions is no less than this batch size.
- **[Sensitivity analysis in more tasks]** Yes, we have conducted similar experiments in WalkerBLT-P-v0 and AntBLT-V-v0. The results are shown in Fig. R2. Limited by computational resources, we only included the setting of Figs. 9(a) and 9(c) to illustrate that the optimal $LR_{CE}$ is around $10^{-5}$ in these tasks, and to show that $LR_{CE} = LR_{MLP}$ does not achieve optimal performance.
- **[Comparison between RESeL-Mamba and RESeL-GRU]** It's true that GRU is more sample-efficient than Mamba in some environments. Mamba loses some expressive power due to its parallelizability (the hidden state at time $t$ is not used as input to the nonlinear network at time $t+1$ but is accumulated through a linear transition matrix) and is not a crucial component of RESeL. However, Mamba significantly reduces computation time during training and requires much less memory in environments with variable trajectory lengths.
[1] Ni et al. "Recurrent Model-Free RL Can Be a Strong Baseline for Many POMDPs." ICML 2022.
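As an illustration of the batch construction described in the batch-size answer above, here is a minimal sketch; the function name and arguments are ours, not from the paper:

```python
import random

def sample_batch(trajectories, batch_size, rng):
    # Draw complete trajectories until the total number of valid
    # transitions is no less than the requested batch size.
    batch, n_transitions = [], 0
    while n_transitions < batch_size:
        traj = rng.choice(trajectories)
        batch.append(traj)
        n_transitions += len(traj)  # one valid transition per step
    return batch, n_transitions
```

In a fixed-length environment like HalfCheetah (1000 steps per trajectory), a batch size of 1000 or 2000 then corresponds to exactly one or two complete trajectories.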
---
Thank you for taking the time to provide us with your feedback. We appreciate your valuable comments and suggestions, which will help us improve our work. We look forward to receiving your further feedback.
---
Rebuttal 2:
Comment: Thank you for the rebuttal and the additional experiments, although what I expected was a thorough sensitivity analysis across all the tasks (currently only 3 tasks are shown).
Overall the paper and the rebuttal looks good to me. I keep my rating.
---
Rebuttal Comment 2.1:
Comment: Thank you for your response. Due to time and computational resource limitations, we were only able to add two additional environments during the rebuttal phase. We will continue with sensitivity studies and include the results and discussion of the remaining tasks in the next version of our paper. We appreciate your constructive suggestions, which have helped improve our paper. | Summary: The paper mitigates the training instability issue of recurrent off-policy RL by using a smaller learning rate for the RNN block.
Strengths: - The paper is well-written and easy to follow.
- The paper proposes a simple solution with analysis.
- The proposed solution is thoroughly verified in different experimental settings.
Weaknesses: - My main concern is the novelty of the proposed solution. Although the paper gives a reason for using different learning rates for the RNN block and the MLPs, the proposed method amounts to choosing a lower learning rate for the RNN block. In practice, it is somewhat common to choose different learning rates for different components. For example, Dreamer [Hafner, Danijar, et al.], which also uses an RNN block in its model, chooses a smaller learning rate for the RNN-based world model. Also, when facing training stability issues, tuning the learning rate is always on the checklist.
However, I would consider accepting the paper if it proposes an approach to automatically decide the learning rate of the RNN block by leveraging the analysis in section 4.2.
- Some details need further explanation:
- In equation (1), it could be better to distinguish two $\epsilon$.
- Will the amplification factor $\beta = \frac{K_y}{1-K_h}$ always be larger than 1 (L160)?
- The average variation in network output $\frac{1}{t}\sum^{t-1}_{i=0} || y_i - y'_i||$ converges to $\beta \epsilon + \epsilon$, which does not involve $t$. How, then, does this indicate that with **longer** sequence lengths, the average variations in the RNN output induced by gradient descent are amplified (L163-164)?
- In the caption of Fig 3, what does it mean by "after one-step gradient-update"? Does it mean one-gradient update per environment step?
- In L218, why does the learning rate of MLP must be increased **twentyfold**?
- In L221, "The right panel shows the green and blue curve remain at similar levels until the final time step". Where are the green and blue curves?
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness above
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The limitation is discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper, and for your insightful comments.
- **[Dreamer]** It is not a common practice in reinforcement learning to use different learning rates for specific layers in a neural network. Typically, in reinforcement learning, the number of learning rates corresponds to the number of loss functions, with each loss function's associated network parameters sharing the same learning rate. For instance, in Dreamer, the world model is updated based on a prediction loss. Despite comprising various modules such as the non-RNN encoder, decoder, predictor, and RNN sequence model, these modules share a single learning rate in Dreamer families [1-3]. However, Figure 9(c) demonstrates that using the same learning rate for all layers during RL policy training could be suboptimal. Conversely, RESeL assigns a separate smaller learning rate specifically for the sequence model, while using a different learning rate for others.
- **[Tuning the learning rate is always on the checklist]** We are not naively tuning the learning rate here. We have shown that uniformly tuning the overall learning rate does not achieve optimal performance. Instead, we propose RESeL to specifically set a smaller learning rate for the RNN, based on our theoretical analysis and intuitive understanding. Despite being straightforward, it is not trivial.
- **[An approach to automatically decide the learning rate of the RNN block]**
- In all experiments, the learning rate for the RNN was set to 1/30 of the policy learning rate. We found this setting to be highly generalizable. From a practical standpoint, we recommend setting it to 1/30 of the policy learning rate or conducting a grid search centered around this ratio.
- On the other hand, for long trajectories, the upper bound of the average change in RNN outputs will eventually converge to $(1+\beta)\epsilon_{RNN}$, where $\epsilon_{RNN}$ is the variation of the RNN’s first step output. Our goal is to find a scaling factor $\alpha$ for the RNN learning rate so that the variation in RNN outputs matches $\epsilon_{MLP}$: $\alpha(1+\beta)\epsilon_{RNN}\approx\epsilon_{MLP}$. However, the values of $\beta$, $\epsilon_{RNN}$, and $\epsilon_{MLP}$ are highly dependent on the neural network weights. Thus, we can periodically test $(1+\beta)\epsilon_{RNN}$ and $\epsilon_{MLP}$ and use their ratio to determine $\alpha$. Specifically, for every 500 gradient updates, we perform a pseudo-update to observe how much the outputs of the policy change after individually updating the RNN block or the MLP. We then use the ratio of these changes to scale the RNN learning rate. To avoid instability in training caused by frequent changes in the learning rate, we only adjust the learning rate during the initial 50 iterations (a warmup phase, corresponding to 50000 gradient updates) and keep it fixed thereafter. We set the initial RNN learning rate to $3\times10^{-4}$. Our experimental results, shown in Fig. R6, indicate that the automated tuning method performs similarly to RESeL. We will add this discussion in our revised version. The final RNN learning rates are listed in the following table, which is close to our setting.
| Env. Name | Auto-Tuned RNN Learning Rate |
| ------------------- | --------------------------------------- |
| AntBLT-V-v0 | $6.0\times 10^{-6}\pm 3\times 10^{-7}$ |
| HalfCheetahBLT-V-v0 | $1.1 \times 10^{-5}\pm 1\times 10^{-7}$ |
| HopperBLT-V-v0 | $1.1 \times 10^{-5}\pm 4\times 10^{-7}$ |
| WalkerBLT-V-v0 | $1.2 \times 10^{-5}\pm 1\times 10^{-7}$ |
- **[Distinguish two $\epsilon$]** Thank you for the suggestion. We will distinguish the two $\epsilon$ in the revised version.
- **[Is $\beta$ always larger than 1]** It depends on the weights and gradient magnitude of the neural network. However, $\beta$ is always larger than $0$, so the upper bound of $||y_i-y_i'||(i>0)$ is always greater than $\epsilon$.
- **[Average variation is not involved in $t$]** This needs to be understood in conjunction with Eq. (11) in Appendix B. Since the right-hand side of Eq. (11) is an increasing function of $t$, the mean value increases over time. We will include this explanation in the revised paper.
- **[After one-step gradient-update in Fig. 3]** The action variation quantifies the difference in policy output after the gradient update compared to its output before the update, as considered in proposition 1. "After a one-step gradient update" indicates that we updated the policy only once, but we compare the differences in action outputs at various rollout steps before and after the policy update.
- **[Learning rate of MLP must be increased twentyfold]** We found that increasing the MLP learning rate twentyfold to 0.006 resulted in action variation comparable to a CE learning rate of 0.0003. We will revise this statement in the updated paper to avoid confusion.
- **[The green and blue curve]** This was a typo, it should actually refer to the orange and purple curves.
[1] Hafner et al. "Dream to control: Learning behaviors by latent imagination." arXiv preprint 2019.
[2] Hafner et al. "Mastering atari with discrete world models." arXiv preprint 2020.
[3] Hafner et al. "Mastering diverse domains through world models." arXiv preprint 2023.
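As a numerical aside on the bound discussed above: under hypothetical Lipschitz constants (illustrative values, not taken from the paper), a short simulation shows the per-step output-variation bound converging to $(1+\beta)\epsilon$ rather than growing without limit:

```python
# Per-step perturbation eps, hidden-state Lipschitz constant K_h < 1,
# output Lipschitz constant K_y; then beta = K_y / (1 - K_h) and the
# output-variation bound converges to (1 + beta) * eps.
K_h, K_y, eps = 0.9, 1.0, 0.01
beta = K_y / (1 - K_h)

d_h, d_y = 0.0, []
for t in range(500):
    d_y.append(eps + K_y * d_h)  # bound on ||y_t - y_t'|| at step t
    d_h = K_h * d_h + eps        # hidden-state variation accumulates

print(d_y[0], d_y[-1], (1 + beta) * eps)  # starts at eps, converges to (1+beta)*eps
```

Since $\beta > 0$, every step after the first has a bound strictly greater than $\epsilon$, matching the answer above.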
---
We hope that the above response can address your concerns adequately. We would greatly appreciate it if you could re-evaluate our paper based on the above responses.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed reply and the efforts in preparing the rebuttal. After reading your rebuttal, I agree that although only a separate learning rate is used for RNN, the paper explains the motivation behind this and supports the conclusion with good experiments. I would improve my score to borderline accept.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: We are glad that our rebuttal was able to address your concerns. Your suggestions have been very helpful to us. On the other hand, should you have any additional questions or lingering concerns, please do not hesitate to contact us. | Summary: The paper contributes to the important and long-standing issue of representing latent state information for successfully finding optimal RL control policies in the context of partially observable Markov decision processes (POMDP). The major contribution is a newly composed RL algorithm (RESeL), which, by design, enables a thorough study of the impact and importance of the actual learning rate (LR) used for the recurrent (latent) context encoder part of the solution architecture. The author highlight the fact that current state-of-the-art (SOTA) methods commonly use an equal LR for training both the latent representation and the policy/critic networks.
Their idea of separating the LR between recurrent CE and policy networks, as well as the overall performance benefits w.r.t current SOTA algorithms, is demonstrated by an extensive survey of selected POMDP tasks and benchmarks. These experiments together with theoretical arguments let the authors conclude that using a single (and often too large) LR parameter approach of previous methods leads to sub-optimal performance results in solving POMDP tasks.
Strengths: The paper presents a sound, well-written and focused approach of studying the impact of using dedicated LR parameters for both the context encoding and policy/critic related NN representation as part a newly proposed RL algorithm (RESeL) to solve POMDP tasks with high performance. It is implicitly stated (by not mentioning previous work) that the authors' work represents the first time that such a clear separation and impact analysis of different LR between the major architecture components is made, which - if true - points to a relatively high degree of originality.
The authors underline their central proposition (of better using lower LR for the CE related network parts) both by theoretical arguments/proof and an extensive experimental study on a relatively large set of well-known POMDP tasks and benchmarks, including direct comparisons to a large variety of SOTA algorithms. This presentation, together with additional material in the appendix, is well-suited to allow reproducibility and comparability of their results.
Weaknesses: Even though mentioned in the Limitation section, it would have been very helpful to add some benchmark comparisons of the RESeL algorithm with its RNN-CE block entirely bypassed/skipped to clearly separate the impact of the latent space representation from the rest of the RL algorithm. In particular for the classic POMDP tasks, where only position and velocity features are masked, respectively, it is important to know a baseline performance when simply feeding the current and last-step observation vector into the MLP networks (given that RESeL uses 2 x 256-neuron layers in its MLPs, adding a few more inputs should be more than tractable). Including such an extension (not necessarily for all benchmarks but maybe a smaller subset) could significantly improve the scientific insight.
Having a clear focus on the impact of varying LR, the remainder of the RESeL algorithm is entirely built from existing RL building blocks, like the overarching SAC RL architecture or conventional RNN/GRU components and, hence, provides no original contribution in itself (as correctly mentioned by the authors, using RNN for latent space representation has been known for a long time, as is the SAC approach). Re-using existing building blocks per se is not a bad idea (and I don't recommend addressing this issue in the rebuttal phase) but one has to accept that the algorithmic novelty in this work is hence limited.
As a minor remark: the authors spend a relatively large portion of their introduction on the mathematical formulation of Proposition 1, whose central statement (variation of RNN outputs grows as a function of rollout step and LR) appears not entirely original (if it really is a first, the authors should highlight this fact more strongly) since the result is quite intuitive for people working in the RNN/RL community. Besides that, the connection between lowering the LR as part of the update rule when training RNNs and the resulting formula of Proposition 1 is not obvious and could be highlighted more strongly.
Technical Quality: 3
Clarity: 3
Questions for Authors: l. 127: Why is the last-step observation also needed as input to the MLP if both current obs and context-encoded obs are already fed into the NN? According to Fig. 2, the last-step obs is fed into the CE only (but not into the MLP)
l. 132: Is there any justification/reasoning behind the choice of the MLP architecture, in particular why a relatively large capacity (2 x 256 neurons) was chosen? Is there an estimate of how much this choice impacts the overall performance results of the policy training?
l. 163-164: It seems to me that the claim of Proposition 1 and the subsequential statements ("average variations in the RNN output ... are amplified") are not a novel discovery but have been studied and found by others in similar or different contexts of RNN before. In that case, it would be good to provide corresponding citations of pior art or - if the others believe this is the first time such a claim was made - highlight the significance of their finding.
l. 165-166: Please provide further arguments/proof why the effect of Proposition 1 can be mitigated by a smaller learning rate (even if such a claim sounds plausible, it would be good to refer to previous work or more detailed reasoning). In other words: how does the learning rate affect the result of ||y - y'|| as a result of Proposition 1 (it is likely related to epsilon in the formula, but if so, this relation could be mentioned more explicitly)?
l. 167-168: Similar remark as before: this sentence remarks a general claim about MLPs which should either be backed up by reference citations or by stronger reasoning or explanations.
l. 195: Given the typical level of stochasticity or dependency on random initial states, a trial number of 6 seems relatively low to incorporate statistical fluctuations in the return evaluation, in particular for the following ablation studies. Have the authors made sure that their reported results are not prone to larger statistical errors? What in detail is included in the choice of "different random seeds"? Does it only refer to the initial weight setup of the RNNs/MLPs or does it reflect varying initial conditions of the environments? Do all environments provide deterministic rollouts?
Figure 3: Why are error bars only visible for the brown curves/datapoints?
l. 218: How is "action variation after one-step update" as a function of rollout steps defined in the case of MLPs as they don't have an intrinsic time-convolution? I.e., what is the meaning of the i-th rollout time step in the case of MLP "only" (LR_CE = 0)? And why is that variation not close to zero for the 0-th rollout step, as indicated in Figure 3 (gray dashed curve) but starting from a value around 0.6?
l. 223-229: At this point of studying the performance in various benchmarks as a function of LR, and also for some of the subsequent ablation studies of Sec. 5.3, it would be great to show a benchmark reference where the CE module is skipped or bypassed altogether, feeding the current and last obserivation only into the MLP networks but not using a CE representation of the latent state at all (while there is one case where LR_CE = 0 in Fig. 9a, this is still different from skipping the entire module and following the suggestion above). This would generally help to underline the significance of the CE for the chosen set of benchmarks and tasks.
Fig. 6: It appears that SAC hardly profits from using a recurrent CE (SAC-GRU) compared to the MLP variant (SAC-MLP); is there any explanation why? It could stress the importance of showing the RESeL performance with and without CE, as mentioned in my comment above
Minor remarks:
l. 126: Better don't use " ' " behind name of algorithm ("RESeL's") but speak of RESeL policy as a joint expression
l. 171: Printing style (formatting) of scientific number notation "3e - 4" looks somewhat odd; check if the correct math formula style was chosen in the LaTeX document, or even better use decimal notation ("$3\times10^{-4}$") or capital "E" instead of "e". This applies to other instances of number formatting in the main text as well.
l. 240: SOTA (state-of-the-art) should be introduced as an abbreviation
Fig. 5 (and others): Quality of plots could generally be improved by avoiding overlapping of images and axes captions in some cases
Fig. 6: Colors/symbols of "EPI" and "SAC-MLP" are hardly distinguishable
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed the main limitation well. As they state, what is missing is a study on the impact of using the RNN as a context encoder in general, which refers to my request for providing a reference baseline where the CE is bypassed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the time and effort you have invested in reviewing our paper, as well as your insightful and constructive feedback. Your comments have greatly assisted us in improving the quality of our work.
- **[Skipping CE blocks]** Thank you for pointing out the missing baseline. We implemented this baseline by setting the CE outputs to zero while keeping everything else unchanged to isolate the effect of the latent space. The results are shown as the NO_CE curve in Fig. R1. The results indicate that historical information is crucial in our algorithm. Without it, achieving high returns on these POMDP tasks is hard.
- **[Algorithmic novelty]** Our work primarily focuses on the impact of learning rates in recurrent RL. We believe the importance and ubiquity of this issue merit a dedicated discussion.
- **[Originality of Proposition 1]** We referenced the proof from model imitation learning [1]. We consider the RNN as the transition function of an MDP and then, based on the derivation in [1], combined with the characteristics of RNNs, we derived the compounding error of RNNs. To the best of our knowledge, we are the first to derive the compounding error bound in RNNs. Thank you for your suggestion, we will highlight this point and include the relevant citations in the revised paper.
- **[Relationship between Proposition 1 and reducing learning rate]** Lowering the learning rate reduces the single-step network output variation $\epsilon$, thereby lowering the upper bound of Eq. (1).
- **[Last-step observation]** Here we use an MLP-based pre-encoder, which is not the MLP policy, to map the last-step observation to a latent space. The MLP policy does not take the last-step observation as input.
- **[MLP architecture]** We adopted the MLP network design from algorithms like SAC [2] and TD7 [3], which use a two-layer MLP for the policy network, with each layer consisting of 256 neurons. We previously tried smaller network structures and found that the impact was not significant.
- **[Claims on MLP]** A smaller learning rate makes MLP training very slow, requiring many gradient updates to train the MLP adequately, leading to inefficiency. We will add this explanation to the revised paper.
- **[Random seed]**
- We set a random seed at the start of each experiment to randomly initialize everything, including neural network weights and the environment. Every 1000 gradient updates, we perform a policy evaluation without resetting the environment's random state (forking the current random state). Using the current policy, we deterministically collect five trajectories for evaluation, where the policy outputs mean actions, but the initial state of the environment is re-sampled at the start of each trajectory. We repeated the experiment six times, each with a different random seed.
- To verify that our choice of seed count does not introduce larger statistical errors, we conducted an additional experiment with six more seeds on WalkerBLT-V-v0 and HopperBLT-V-v0. The results, shown in Fig. R5, demonstrate that the new curve (seed_7-12) closely matches the mean and shadow of the old curve (seed_1-6), indicating no significant statistical deviation. This shows that six seeds are sufficient for our experimental setup.
- **[Figure 3]** Regarding Figure 3, we trained the policy using RESeL and collected trajectories with non-deterministic actions in the environment. We obtained the mean actions $\{a_0,a_1,a_2,...\}$ for each trajectory, where $0,1,2,...$ are rollout steps. We then performed a single gradient update on the policy and obtained the updated policy's mean action outputs at each state in the trajectory $\{a_0', a_1', a_2', \ldots\}$. Figure 3 demonstrates $\{||a_0-a_0'||_2,||a_1-a_1'||_2,||a_2-a_2'||_2,...\}$ for different learning rates, with each experiment repeated 20 times. Although each curve has a certain variance, the high repetition count results in a small standard error. The brown curves have a larger variance, hence the noticeable shadow (standard error), while the other curves have smaller variance, making the shadow less apparent but still present. In the gray dashed curve, the excessively high learning rate caused unpredictable behaviors, leading to significant initial variations in the policy. The gray dashed curve starts high and then decreases, possibly due to the larger action amplitude in the latter half of the trajectory reaching the boundary values. As a result, many actions reach the boundary values and get compressed after substantial policy updates, leading to less variation compared to the first half.
- **[SAC-GRU and SAC-MLP]** We believe the excessively large output variations of RNNs make the value and policy learning process unstable, thereby preventing the policy from efficiently utilizing historical information.
- **[Minors]** Thank you for pointing these out. We will correct all these issues in the revised paper.
[1] Xu et al. "Error bounds of imitating policies and environments." NeurIPS 2021.
[2] Haarnoja et al. "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor." ICML 2018.
[3] Fujimoto et al. "For sale: State-action representation learning for deep reinforcement learning." NeurIPS 2023.
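The action-variation measurement described in the Figure 3 answer reduces to a per-step L2 distance between mean actions before and after the update; a minimal sketch (the function name is ours, not from the paper):

```python
def action_variation(actions_before, actions_after):
    # ||a_i - a_i'||_2 at each rollout step i: the policy's mean
    # actions before vs. after a single gradient update.
    return [
        sum((x - y) ** 2 for x, y in zip(a, a2)) ** 0.5
        for a, a2 in zip(actions_before, actions_after)
    ]
```

Averaging these per-step curves over the 20 repetitions then yields plots like those in Figure 3.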
---
Thank you for guiding us towards making it a better work. We hope our responses have addressed your concerns effectively and enhanced the clarity of our main contributions. We look forward to your further comments.
---
Rebuttal Comment 1.1:
Comment: I'd like to thank the authors for their detailed response to my remarks and concerns, and in particular for their additional study of bypassing the CE to provide another baseline for the performance evaluation. Together with other explanations and clarifications done in the revised version my remaining questions and suggestions also have been sufficiently addressed. Hence, I continue to vote for "Accept (7)".
Towards the main concern regarding lack of novelty, as raised in my own review and reflected by some of the other reviewers' remarks: It is true that tuning the LR in any context of machine learning is on the checklist of every good practitioner, so I was tempted to come to the same conclusion in the beginning that this work doesn't add enough novelty regarding this aspect. What changed my final perspective is related to the answer given by the authors in their rebuttal to some of the other reviews: In the particular context of multi-component POMDP-RL solution architectures it is not common practice to use separate LRs for modules which otherwise contribute to the same overall loss function. And even if that idea was previously applied in this particular context, it would highly likely not entail a comprehensive study as provided in the present work.
---
Rebuttal 2:
Comment: Thank you very much for your prompt response and kind comments. We are delighted that our reply has addressed your concerns. Your valuable suggestions have greatly improved our work. | Rebuttal 1:
Rebuttal: We appreciate the time and effort all the reviewers have dedicated to our paper and their highly constructive comments. Here, we provide a general response to the common concerns raised.
- **[Novelty/Contribution]**
- We would like to emphasize that our core contribution lies in presenting a universal principle rather than specifying the hyperparameters for each task. This principle is supported by theoretical foundations and intuitive understanding, making it more than just an empirical trick. Although it is simple, it is by no means trivial.
- Specifically, our findings indicate that the instability in recurrent RL arises from action variation amplified over rollout steps, specifically the compounding error in RNN outputs. This results in larger overall output variation for the RNN compared to the MLP, even when their single-step changes are the same, leading to instability during training. This paper proposes to slow down only the updates of the RNN by giving it a lower learning rate; $\epsilon$ in Eq. (1) is then reduced, thereby decreasing the RNN's output variation and improving stability. Various traditional approaches can improve the training stability of RNNs in non-RL scenarios, such as reducing the overall learning rate, gradient clipping, and gradient truncation. However, these methods cannot slow down only the update of the RNN and thus still suffer from training instability in reinforcement learning.
- **[Relationship between Proposition 1 and reducing learning rate]**
Reducing the learning rate can decrease the single-step network output variation, denoted as $\epsilon$, thereby reducing the upper bound of Eq. (1).
- **[Figure 3]**
Here we elaborate on how we obtained Figure 3: We trained a policy using RESeL and collected a batch of trajectories in the environment with stochastic exploration. For each trajectory, we calculated the mean action output by the RESeL policy $\{a_0, a_1, a_2, \ldots\}$, where $0, 1, 2, \ldots$ represent rollout steps. We then performed a single gradient update on the policy and obtained the updated policy's mean action outputs at each state in the trajectory $\{a_0', a_1', a_2', \ldots\}$. In Figure 3, we plotted the action variation $\{||a_0 - a_0'||_2, ||a_1 - a_1'||_2, ||a_2 - a_2'||_2, \ldots\}$ under different learning rate settings. **The action variation means the difference in policy output after the gradient update compared to its output before the update.**
- **[RNN learning rate setting]**
- We fixed the learning rate of the RNN to be 1/30 of the policy learning rate for all tasks. Fig. 9 and Fig. R2 demonstrate that this ratio achieves the highest policy performance. Additionally, Fig. 3 also shows that this ratio effectively prevents the RNN output variation from significantly exceeding that of the MLP. Our experiments indicate that this ratio has good generalizability and works well in all tasks. Therefore, we recommend setting the RNN learning rate to 1/30 of the policy MLP learning rate in practice or conducting a grid search centered around this ratio.
- Based on the suggestion from Reviewer dCQa, we have also implemented an automatic RNN learning rate tuning method. At the beginning of training, we introduce a warmup phase in which the RNN's learning rate is automatically adjusted. The tuning objective is to ensure that the action variation caused by updating the RNN alone is roughly consistent with that of the MLP.
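A purely illustrative sketch of such a warmup controller (our own construction, assuming the RNN-only action variation is roughly proportional to its learning rate; `gain` is a hypothetical sensitivity):

```python
def warmup_tune_rnn_lr(rnn_lr, mlp_variation, gain=30.0, rate=0.05, steps=200):
    """Multiplicatively adjust the RNN learning rate until the action
    variation from updating the RNN alone matches the MLP's."""
    for _ in range(steps):
        rnn_variation = gain * rnn_lr  # assumption: variation is proportional to lr
        rnn_lr *= (1 - rate) if rnn_variation > mlp_variation else (1 + rate)
    return rnn_lr

# With these toy numbers the tuned rate settles near mlp_lr / 30,
# consistent with the fixed 1/30 ratio recommended above.
tuned = warmup_tune_rnn_lr(rnn_lr=1.0, mlp_variation=1.0)
```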
**We have uploaded the additional experiment results (Figs. R1-R6) in a separate PDF.**
Pdf: /pdf/91d8aa41e796b0c6b04998fbe7218bd485ee1c6d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
The Last Iterate Advantage: Empirical Auditing and Principled Heuristic Analysis of Differentially Private SGD | Reject | Summary: The paper presents a heuristic approach for evaluating the privacy of DP-SGD when only the last model iteration is released. This method contrasts with traditional analyses that consider all intermediate updates, offering a more practical assessment for scenarios where adversaries only access the final model. The proposed heuristic is experimentally shown to provide reliable privacy leakage estimates, making it a valuable tool for pre-audit privacy assessments.
Strengths: 1. Focus on a good and important question.
2. Good explanation and clear paper layout.
3. Proposes a new analysis that is neither purely theoretical nor purely empirical.
Weaknesses: I think the proposed method is interesting and new but I still have some questions.
1. I know the linear loss function assumption is common in theoretical analysis but it seems that the proposed method wants to have contributions in the empirical case, so why still make the linear assumption?
2. While I appreciate the effort to introduce a Heuristic analysis, I remain skeptical about its necessity and effectiveness. The primary benefit of theoretical analysis is its precision and rigor, which often include the flexibility to adjust bounds as needed. If the goal is to find a more relaxed lower bound on privacy risks, this can often be achieved by simply loosening the constraints within the existing theoretical framework. Introducing a separate heuristic analysis seems to complicate matters without providing clear advantages.
3. I do not think you are using a correct baseline. When you make the assumption that only the last-iterate model can be seen, it is not fair to use the normal DP-SGD analysis. I think it is better to use the theoretical analyses from those hidden-state papers you cited. I am curious whether, if you compare your proposed method with those methods, you will still get the same conclusion.
4. I find Table 1 in the paper somewhat unclear and would appreciate further explanation from the authors regarding its purpose and implications. The table suggests that similar levels of heuristic ε are achieved across varying batch sizes, yet there is a noticeable increase in the standard privacy budget for smaller batches to maintain comparable performance. This observation seems to underscore the well-known impact of batch size rather than demonstrating an advantage of the proposed heuristic method.
Could the authors elaborate on how this data relates to the efficacy of the heuristic analysis?
Technical Quality: 2
Clarity: 3
Questions for Authors: Please check the weaknesses.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Please check the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and comments.
> 1. I know the linear loss function assumption is common in theoretical analysis but it seems that the proposed method wants to have contributions in the empirical case, so why still make the linear assumption?
Unfortunately, we do not understand this question. We propose a heuristic. It is theoretically justified in the linear case, but we argue that it is still representative for real deep learning settings.
> 2. While I appreciate the effort to introduce a Heuristic analysis, I remain skeptical about its necessity and effectiveness. The primary benefit of theoretical analysis is its precision and rigor, which often include the flexibility to adjust bounds as needed. If the goal is to find a more relaxed lower bound on privacy risks, this can often be achieved by simply loosening the constraints within the existing theoretical framework. Introducing a separate heuristic analysis seems to complicate matters without providing clear advantages.
It is not clear how we can loosen the constraints within the existing theoretical framework. One natural approach is to make some kind of assumption that makes the privacy analysis easier; this is precisely what we do.
> 3. I do not think you are using a correct baseline. When you make the assumption that only the last-iterate model can be seen, it is not fair to use the normal DP-SGD analysis. I think it is better to use the theoretical analyses from those hidden-state papers you cited. I am curious whether, if you compare your proposed method with those methods, you will still get the same conclusion.
It is difficult to compare to these prior works [[FMTT18](https://arxiv.org/abs/1808.06651); [CYS21](https://arxiv.org/abs/2102.05855); [YS22](https://arxiv.org/abs/2203.05363); [AT22](https://arxiv.org/abs/2205.13710); [BSA24](https://arxiv.org/abs/2403.00278)] for two reasons. First, many of these prior works make an assumption of strong convexity, which is technically incompatible with our linearity assumption. If we try to compare anyway, how would we set the strong convexity parameter for a fair comparison? We could always set the strong convexity parameter in such a way that our numbers are better. Second, most of these prior works are theoretical in nature and it is nontrivial to extract concrete numbers from their theorems to compare with.
> 4. I find Table 1 in the paper somewhat unclear and would appreciate further explanation from the authors regarding its purpose and implications. The table suggests that similar levels of heuristic $\varepsilon$ are achieved across varying batch sizes, yet there is a noticeable increase in the standard privacy budget for smaller batches to maintain comparable performance. This observation seems to underscore the well-known impact of batch size rather than demonstrating an advantage of the proposed heuristic method. Could the authors elaborate on how this data relates to the efficacy of the heuristic analysis?
The accepted wisdom, e.g. [[DBHSB22](https://arxiv.org/abs/2204.13650)], is that larger batch size yields better privacy/utility with DP-SGD. However, this is based on the standard DP-SGD analysis. Table 1 questions this accepted wisdom. Our heuristic, like the results of privacy auditing, is relatively insensitive to the batch size.
Training with large batch sizes is computationally costly, so if it is not truly improving privacy, we shouldn’t advise practitioners to do it. Further research is needed to understand whether or not larger batch sizes are truly beneficial.
We hope that we have addressed the reviewer's concerns and that they reconsider their recommendation.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. Let me clarify what I mean for the first question.
Is it feasible to apply your proposed method to a non-linear loss like the cross-entropy loss that the community usually uses? I also wonder why you think a technique theoretically justified in the linear case can be representative of real deep learning. For example, I want to train a Vision Transformer model on ImageNet with DP-SGD. Do you think it is possible to only use a linear loss? Or, in other words, can your proposed method be applied in this real deep learning case?
---
Rebuttal 2:
Comment: Thank you for the clarification.
> Is it feasible to apply your proposed method to a non-linear loss like a cross-entropy loss that the community usually uses?
Realistic deep learning losses yield complicated output distributions on the model weights and, unfortunately, it is not possible to give an exact differential privacy analysis.
This is why there is a lot of work on privacy auditing, which aims to empirically approximate an exact DP analysis. Thus we compare our heuristic to state-of-the-art privacy auditing methods.
In Section 4.2 we performed an exact analysis of quadratic losses. This is slightly more general than linear losses. But even here the output distribution becomes a mixture of Gaussians which is nontrivial to analyze.
> I also wonder why you think a technique theoretically justified in the linear case can be representative of real deep learning.
It is indeed surprising that linear losses yield a good approximation to real deep learning in terms of privacy.
The key observation that inspired this paper is that, in the practice of privacy auditing, the strongest membership inference results are achieved by examples ("canaries") with gradients that are the same in each iteration of SGD. This corresponds exactly to the case of a linear loss function.
So our linear approximation is well-justified empirically. Theoretically, Appendix B shows that *in the full batch setting* our linear approximation is indeed the worst case, although this proof does not extend to the minibatch setting.
> For example, I want to train a Vision Transformer model on ImageNet with DP-SGD. Do you think it is possible to only use a linear loss? Or, in other words, can your proposed method be applied in this real deep learning case?
Obviously, training with a linear loss would result in non-convergence of DP-SGD, so we don't recommend doing that. :-)
Our thesis is that, in the last iterate setting, for the privacy analysis, we can *pretend* that we have a linear loss to give a heuristic privacy analysis. This heuristic privacy analysis is optimistic, but it may be closer to the "true" privacy loss than the standard pessimistic DP-SGD analysis. We argue that this is a useful/interesting perspective to add to practical private deep learning, although it still leaves open many questions.
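To illustrate the "pretend-linear" idea, here is a toy Monte-Carlo sketch (our own construction, not the paper's code; names are hypothetical). With a linear loss the canary's gradient is constant across iterations, so after projecting onto the canary's gradient direction, the last iterate reduces to the number of steps on which the canary was sampled plus Gaussian noise accumulated over all $T$ steps:

```python
import numpy as np

rng = np.random.default_rng(0)

def last_iterate_statistic(T=1000, q=0.01, sigma=1.0, canary_in=True, n=100_000):
    """Samples of the scalar sufficient statistic of the last iterate
    under a linear loss: Binomial(T, q) canary count + N(0, T * sigma^2)."""
    count = rng.binomial(T, q, size=n) if canary_in else np.zeros(n)
    return count + rng.normal(0.0, sigma * np.sqrt(T), size=n)

with_canary = last_iterate_statistic(canary_in=True)      # Gaussian mixture
without_canary = last_iterate_statistic(canary_in=False)  # single Gaussian
# The heuristic privacy parameter measures how distinguishable these two are.
```

This is why, in the linear case, the analysis comes down to a mixture of Gaussians versus a single Gaussian rather than a product over all $T$ iterations.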
---
Rebuttal Comment 2.1:
Comment: Thanks to the authors for the detailed explanations. I have read the rebuttal and comments but am not fully convinced by them. I will keep the same score. | Summary: This paper proposes a heuristic privacy analysis of releasing only the final model iterate of differentially private gradient descent (DP-SGD). The analysis is based on the worst-case differential privacy guarantee of DP-SGD with linear losses, under the assumption that the heuristic can be applied to more general loss functions in order to approximate the privacy loss.
Strengths: * The premise of the paper (a heuristic privacy analysis of releasing only the final model iterate of DP-SGD) is very interesting, and Theorem 1 is a cool result.
* The paper thoroughly assesses the limitations of the heuristic (in Section 4).
Weaknesses: * I don’t know how useful the heuristic analysis would be in practice — beyond a lightweight sanity check — since ultimately it is just a heuristic and not a rigorous upper or lower bound on the privacy loss.
* The empirical study of the heuristic looks to be very thorough, but sparse on interpretation. I would have appreciated more discussion on the figures, and didn’t really feel like there was a strong take-home message from the paper.
* Algorithm 1 is DP-SGD with a regularizer, but in practice it is somewhat rare to use explicit regularization with DP-SGD. So I’m not sure that the heuristic would be widely applicable to the more common implementation of DP-SGD without regularization.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. One of the potential use cases of the heuristic (lines 82-85) is to predict the outcome of privacy auditing. So I’m wondering:
* If one of the advantages of the heuristic is that it’s less computationally expensive than privacy auditing, but you’ll have to do the computationally expensive privacy auditing anyway (in order to compare), then what is the point?
* One of the disadvantages of privacy auditing is that it’s difficult to perform correctly, but the heuristic doesn’t have rigorous theoretical guarantees. If privacy auditing and the heuristic don’t agree, then how would you know which one was wrong?
2. In Figure 3, why does the heuristic $\epsilon$ scale with the number of iterations $T$? Since the heuristic is a last-iterate analysis, I would have imagined that it would converge to a constant?
3. Would it be possible to extend Theorem 1 to non-regularized DP-SGD?
4. It’s not obvious to me why the privacy guarantee for an arbitrary loss would be similar to the privacy guarantee for a linear loss (and not even a GLM loss, mind you). Maybe I missed this in the paper, but would it be possible to further justify why the heuristic would be a good approximation in cases where the linear loss assumption doesn’t hold?
5. I feel like the proof of Theorem 1 is hard to appreciate without knowing more technical preliminaries, such as how the hockey-stick divergence can be used to show that an algorithm satisfies DP. I wonder if the proof could be re-framed in the language of privacy profiles and dominating pairs? (https://arxiv.org/abs/1807.01647 and https://arxiv.org/abs/2106.08567)
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful comments. We address their main concerns:
> * I don’t know how useful the heuristic analysis would be in practice.
We believe that such a “sanity check” is useful in practice. In practice, often the standard $\varepsilon$ DP parameter is uncomfortably large and we seek additional validation through methods like privacy auditing. Our heuristic can provide additional validation, which is particularly useful because it is easy to do privacy auditing incorrectly, which gives a false sense of security.
More importantly, beyond immediate practical impact, we hope that our work leads to further scientific exploration of the privacy implications of the last iterate setting. In other words, our work identifies a phenomenon and proposes an explanation (and also challenges that explanation). Thus we believe that our work is valuable as basic science, even if the heuristic is not immediately useful.
> * The empirical study of the heuristic looks to be very thorough, but sparse on interpretation. I would have appreciated more discussion on the figures, and didn’t really feel like there was a strong take-home message from the paper.
Space permitting, we will add further discussion of the experimental results in the revision (the camera ready allows an extra page of content).
Overall, we attempted to present the results in a neutral manner, to let readers draw their own conclusions. In particular, we put a lot of effort into probing the limitations of our heuristic. (E.g. Section 4 shows that our heuristic can be too optimistic and Figure 2 shows that it doesn’t fully explain the gap between theoretical analyses and empirical results.)
The take home message is (i) that the last iterate setting is interesting and there is a gap between the upper bounds we can prove and the lower bounds we get empirically; and (ii) that we give a heuristic that seems to capture a major reason for this phenomenon.
> * Algorithm 1 is DP-SGD with a regularizer, but in practice it is somewhat rare to use explicit regularization with DP-SGD.
The regularizer can always be set to 0 or incorporated into the loss function, and this is how DP-SGD is usually presented for simplicity. So the regularizer does not change the validity of our heuristic.
The reason we included the regularizer is that it is convenient for the counterexamples in Section 4 to not have gradient clipping. But if we instead removed clipping of the loss gradients, then the standard DP-SGD analysis would be invalid. Having an unclipped regularizer doesn’t invalidate the standard DP-SGD analysis, but allows us to include counterexamples with arbitrary gradients.
> * If one of the advantages of the heuristic is that it’s less computationally expensive than privacy auditing, but you’ll have to do the computationally expensive privacy auditing anyway (in order to compare), then what is the point?
One use case would be selecting hyperparameters (noise multiplier, batch size, learning rate, etc.). I.e., use our heuristic to consider many settings of the hyperparameters and select the one that is predicted to work best in terms of privacy auditing. But only actually perform privacy auditing once on the final choice of hyperparameters.
> * One of the disadvantages of privacy auditing is that it’s difficult to perform correctly, but the heuristic doesn’t have rigorous theoretical guarantees. If privacy auditing and the heuristic don’t agree, then how would you know which one was wrong?
That is a great question!
The standard DP-SGD analysis gives an upper bound on $\varepsilon$, while privacy auditing gives a lower bound. In practice, there is usually a large gap between these numbers.
The question that we face is which of the two is closer to the truth. Our heuristic gives a third number which falls between the upper and lower bounds and which has some principled justification. The heuristic thus hopefully helps answer that question.
> 2. In Figure 3, why does the heuristic $\epsilon$ scale with the number of iterations $T$? Since the heuristic is a last-iterate analysis, I would have imagined that it would converge to a constant?
Note that the heuristic $\varepsilon$ is minimized around $T=1/q$. At this point there is approximately an $e^{-1} \approx 0.36$ probability that the canary example is *never* sampled, which provides a lot of privacy. As $T$ increases, the probability that the canary is sampled at least once increases, but eventually it does converge.
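The $e^{-1}$ figure comes from the standard limit $(1-q)^{1/q} \to e^{-1}$; a quick numeric check:

```python
import math

q = 0.001
T = int(1 / q)             # the T near which the heuristic epsilon is minimized
p_never = (1 - q) ** T     # probability the canary is never sampled in T steps
# p_never is about 0.3677, close to exp(-1) ≈ 0.3679
```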
> 3. Would it be possible to extend Theorem 1 to non-regularized DP-SGD?
The regularizer can be set to 0.
> It’s not obvious to me why the privacy guarantee for an arbitrary loss would be similar to the privacy guarantee for a linear loss (and not even a GLM loss, mind you).
This is the main thesis of our paper and it is indeed surprising!
The main justification is our experimental results, which show that linear losses do seem to represent the worst case for real deep neural networks. It has been observed in practice that the worst-case examples for privacy auditing have gradients that are constant across iterations, which corresponds precisely to linear loss functions.
Unfortunately, we cannot formally prove that linear losses are representative of general losses, since our counterexamples show that this is not true for pathological examples. However, as discussed in Appendix B, in the full batch setting, we can actually prove that linear losses are the worst case.
> I feel like the proof of Theorem 1 is hard to appreciate without knowing more technical preliminaries
We will endeavor to make the proof more accessible to different audiences.
We hope that the reviewer reconsiders their recommendation in light of our responses.
---
Rebuttal Comment 1.1:
Comment: Thanks for addressing my points. In hindsight my concerns about the regularizer don’t hold any water, but I do still have some remaining concerns.
Using the heuristic for hyperparameter selection is an interesting idea. But I’m a little unclear on the exact set-up, and why the hyperparameters would be chosen according to privacy auditing (instead of according to utility). I imagine a scenario where for a fixed level of privacy, you try to maximize the utility by selecting the best hyperparameters subject to the privacy constraint, then do privacy auditing on the final hyperparameters. In this case I don’t see why it would be necessary to do privacy auditing on any of the hyperparameter candidates apart from the final ones.
My other question is — if I’ve understood correctly, Theorem 1 implies that the heuristic will always be smaller than the existing $(\epsilon, \delta)$-DP bound. (Informal logic: Theorem 1 is a tight upper bound for linear losses, and extending to a larger class of loss functions can only make the privacy loss larger.) But there doesn’t seem to be a strong connection between privacy auditing and the heuristic apart from the empirical observation that the heuristic is larger than the privacy auditing bound (except for specially constructed pathological cases). I hate to say that 33,000 GPU hours aren’t enough to convince me on this, but I do feel that I would need to see the empirical observation validated across a wider range of problems and datasets (i.e., not just CIFAR10) and ideally at least some theoretical justification before I’d be comfortable agreeing that the heuristic is a valid and reliable tool.
I realize that it’s not ideal to bring this up so soon before the end of the reviewer-author discussion period…but if time permits it would be great to receive clarification on these issues.
---
Rebuttal 2:
Comment: Thank you for the comment. We respond below. We hope that we have been able to clarify all the issues before the discussion ends.
> Using the heuristic for hyperparameter selection is an interesting idea. But I’m a little unclear on the exact set-up, and why the hyperparameters would be chosen according to privacy auditing (instead of according to utility).
Hyperparameters would need to be chosen to achieve both high utility and low privacy loss.
The "ideal" is to calibrate the hyperparameters so that the standard analysis of DP-SGD gives a good privacy parameter $\varepsilon$.
For better or worse, it is becoming accepted practice to set the standard $\varepsilon$ to large-ish values (e.g. $10$ or $20$). One way to justify this practice is to perform privacy auditing on the final model to argue that this large $\varepsilon$ is overly conservative. This is one of the practical motivations for work on privacy auditing.
The use case for hyperparameter tuning that we envisage is that one has dual privacy constraints, say, $\varepsilon_{\text{standard}} \le 10$ but also $\varepsilon_{\text{auditing}} \le 1$. Our heuristic would help evaluate the latter constraint during the hyperparameter selection phase.
This use case is analogous to scaling laws on the utility side.
That said, we reiterate that we think this work is more about basic science than about immediate applications. Progress on both sides -- provable DP upper bounds for ML and empirical privacy auditing lower bounds -- has been slowing down without converging. The goal of our work is to offer a fresh (if somewhat unorthodox) perspective -- our heuristic is neither a provable upper bound nor a lower bound; it should be somewhere in between.
> if I’ve understood correctly, Theorem 1 implies that the heuristic will always be smaller than the existing $(\epsilon, \delta)$-DP bound.
Yes. This is true by the postprocessing property of DP. (The standard analysis assumes all intermediate iterates are revealed. Our result is tight for when only the sum is revealed, which is a postprocessing.)
> But there doesn’t seem to be a strong connection between privacy auditing and the heuristic apart from the empirical observation that the heuristic is larger than the privacy auditing bound
This is supported by the experimental evidence, plus the observation repeated in the literature that the best privacy auditing results (i.e. worst case for privacy) are when the canary examples have the same gradient in each iteration, which corresponds to linear losses. (I.e., prior work shows that "gradient space attacks" are better than "input space attacks.") The best theoretical justification is the fact that we know that linear losses achieve the worst case in the special case of full batch training (as discussed in Appendix B).
> I hate to say that 33,000 GPU hours aren’t enough to convince me on this, but I do feel that I would need to see the empirical observation validated across a wider range of problems and datasets (i.e., not just CIFAR10)
That is a fair point. But this limitation is also true of the privacy auditing literature more generally. The vast majority of experimentation is on CIFAR10 and smaller datasets. This is simply because privacy auditing generally requires many training runs, which is prohibitive for anything larger.
It's important to note that the claim that our heuristic upper bounds privacy auditing results for "realistic" models & datasets is falsifiable (at least to the extent that "realistic" is clearly-defined). Our hope is that any future privacy auditing work would compare to our heuristic. If they fail to exceed our heuristic, then the claim gains more weight. If they do exceed our heuristic, then that would represent significant progress on privacy auditing. | Summary: The paper proposes a heuristic privacy analysis for DP-SGD that focuses on releasing only the last iterate, as opposed to all intermediate iterates. The authors argue that this approach is more realistic and provides sharper privacy guarantees in practical scenarios. The heuristic is based on a linear structure assumption for the model and is validated experimentally through attacks/privacy auditing.
Strengths: This paper is well-written. The paper introduces a new heuristic analysis of DP-SGD for linear loss functions and also critically examines its limitations, and identifies areas for further research.
Weaknesses: To my understanding, this paper offers a tighter privacy accounting analysis specifically for linear loss functions. However, I find its applicability limited since it cannot be extended to general ML tasks where the loss functions are not linear. Additionally, it is quite well known that assuming the privacy adversary has access to all intermediate iterates of the training process makes the DP-SGD analysis overly conservative. The main challenge remains developing tight privacy accounting analyses for iterative algorithms like SGD.
Technical Quality: 3
Clarity: 2
Questions for Authors: Why does the example provided in Section 4.2 show linear loss is a good approximation of many convex losses? Could you please provide some more insights?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and comments. We address their points below.
> it is quite well known that assuming the privacy adversary has access to all intermediate iterates of the training process makes the DP-SGD analysis overly conservative.
The reviewer’s claim that this phenomenon is well-known is debatable. While there are a handful of papers that prove separations between the standard analysis and the last iterate setting, there are also papers that tell the opposite story by proving that DP-SGD (with the standard analysis) is optimal in some sense. We believe that many practitioners are not aware of this limitation of the standard analysis of DP-SGD.
In any case, the contribution of our work is to not only identify the phenomenon, but also to provide a heuristic that helps explain why this phenomenon arises and to study it experimentally.
> I find its applicability limited since it cannot be extended to general ML tasks where the loss functions are not linear.
Our manuscript is forthcoming about the fact that our analysis is only a provable bound for linear loss functions and is only a heuristic when applied to general ML tasks. Our thesis is that, in terms of privacy of DP-SGD, linear loss functions are a reasonable model for most general ML tasks, which is supported by experimental evidence.
We hope that our work inspires further research into the last iterate setting of DP-SGD.
> Why does the example provided in Section 4.2 show linear loss is a good approximation of many convex losses? Could you please provide some more insights?
Section 4 probes the limitations of our heuristic. In Section 4.2 we tried to find a convex loss function that violates our heuristic. The largest violation we could construct is an $\varepsilon$ that is 3% higher than our heuristic. While this technically violates our heuristic, the violation is quite small, which overall supports our thesis that the heuristic is a good approximation.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. My score remains unchanged. | Summary: The authors provide exact DP guarantees for cases where only the last iterate of DP-SGD is shared with the malicious clients, and linear models with linear loss functions are used. They propose their DP bound to be used as a heuristic that approximates the true DP guarantees for cases where more complex models are used. They show that for normal DP-SGD training, the predictions of their heuristic fall between the standard DP bound computed under the assumption that all intermediate iterations of DP-SGD are shared with the attacker, which is a strict upper bound of the true DP guarantee when only the last iterate is shared and DP-SGD with full batches and only last iterate sharing. They also compare their method against SoTA DP attacks and show that under most circumstances, their heuristic value for the DP is higher. They suggest that this is the result of the attacks not being good enough at precisely estimating the true DP guarantees. Finally, the authors demonstrate that their heuristic under unrealistic circumstances can underestimate the true DP guarantee but argue this only happens under hand-crafted losses and gradient updates, which do not happen in practical circumstances.
Strengths: - The last iterate setting is important
- The linear function DP bound is exact
- The linear function DP bound has interesting properties
- The counter-examples for the DP heuristic themselves seem interesting and probably can be adapted to other settings
Weaknesses: - I am confused by L234. The authors propose to maximize their heuristic over all $t\leq T$, while beforehand (e.g. in Figure 1/Section 2) they advocated computing the heuristic for a single $T$. Which one is the exact heuristic proposed by the paper?
- In Figure 2, I am not sure how we adapt existing techniques to the last-iterate-only setting? Can the authors explain in more details?
- Can the authors explain in Figure 1, what network and dataset were used?
- The authors do not provide code. I am not sure about the reason, but I will give them the benefit of the doubt that the reason is indeed related to anonymity
**Nits:**
- Eq. 8. I assume you do indexing from i = 1. In that case, $A_{T-i}$ should be $A_{T-i+1}$ instead. If you do 0-based indexing, even more fixes to the equation are needed.
- I believe Eq. 7 should be multiplied by $\eta$ on the right-hand side
- Equation at L442, left-hand side should be $m_T$ not $m_t$
- I believe the last equation at L459 should have $(1-q)^{n-k}$ instead of $(1-q)^{k}$. I also believe $n$ is $T$ in this equation
- The definition of $l(m)$ in L109 is confusing as $m$ is considered input to the function, while in the rest of the paper $m$ is used as a parameter. Consider putting $x$ instead.
- Consider defining the hockey-stick divergence in terms of both its pdf and cdf in the appendix to ease unfamiliar readers. I had to read quite a bit on my own to understand it.
- Consider adding some information in the appendix as to how to deal with the mixed discrete-continuous probability for $P$. I assume many readers will be unfamiliar.
- Consider deriving the formulas for $P$ and $Q$ in Section 4.2 in the appendix. They are not obvious.
- Consider having an appendix section that quickly recaps how [NSTPC21] and [NHSBTJCT23] work. Their operation is critical for understanding Section 3. I ended up reading them to get an idea of what was going on there.
Technical Quality: 2
Clarity: 2
Questions for Authors: (See Weaknesses Above)
While I am familiar with DP and DP guarantees, I am not a DP expert. Still, I was able to follow the math in the paper, and I am mostly confident in it. What I am not as confident in is if the heuristic, which by authors' own admission does not provide rigorous upper or lower bounds, will be useful in practice. In addition, there are a few experimental results that require explanation (see above). In particular, I am concerned about why, in the counter-examples in Section 4, the authors propose maximizing the heuristic across $T$, while in the experiments in Figure 1, the heuristic is proposed to be used for a particular $T$. I am, thus, not sure which of the two versions of the heuristic should be used in practical settings as the true heuristic. Further, I am not sure if the structure of the proposed DP counterexamples in Section 4 is novel, but if it is, I think it might have uses outside of this paper.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors acknowledge the limitations of using the heuristic to compute the DP bounds
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough reading of our paper and thoughtful comments. In particular, we appreciate the reviewer spotting several typos, which we will fix. We respond to the main questions:
> * I am confused by L234. The authors propose to maximize their heuristic over all $t \le T$, while beforehand (e.g. in Figure 1/Section 2) they advocated computing the heuristic for a single $T$. Which one is the exact proposed heuristic by the paper?
This is in Section 4, where we probe the limitations of our proposed heuristic.
Section 4.1 illustrates the limitations of computing the heuristic for only the total number of iterations $T$. Thus in Section 4.2 we maximize over $t \le T$ to show limitations that go above and beyond what is in Section 4.1.
The heuristic that we propose is simply to run with a single number of iterations $T$, but with the caveat that it is a heuristic and has known limitations, including non-monotonicity in $T$.
> * In Figure 2, I am not sure how we adapt existing techniques to the last-iterate-only setting? Can the authors explain in more detail?
We are not sure we understand this question. The line labeled “standard” in our figures is *not* adapted to the last iterate setting. The figures illustrate that there is a large gap between the existing upper and lower bounds, which motivates our work, and our heuristic partially explains this gap.
> * Can the authors explain in Figure 1, what network and dataset were used?
Figure 1 does not include NN experiments (unlike the other figures). It simply compares our heuristic to some mathematical baselines in different parameter regimes.
> I am not sure if the structure of the proposed DP counterexamples in Section 4 is novel,
To the best of our knowledge these counterexamples are novel. However, after the NeurIPS submission deadline we have learned of subsequent related work that may reproduce some of these.
> What I am not as confident in is if the heuristic, which by authors' own admission does not provide rigorous upper or lower bounds, will be useful in practice.
This is a very important question. :-)
We believe that our heuristic can be useful in practice as a “sanity check.” E.g., in practice, the standard $\varepsilon$ DP parameter is often uncomfortably large (e.g., $\varepsilon=20$) and additional validation, such as privacy auditing, is used to justify tolerating this. Our heuristic can provide additional validation, which is particularly useful because it is easy to do privacy auditing incorrectly, which gives a false sense of security.
More importantly, beyond immediate practical impact, we hope that our work leads to further scientific exploration of the privacy implications of the last iterate setting. In other words, our work identifies a phenomenon and proposes an explanation (and also challenges our own explanation). Thus we believe that our work is valuable as basic science, even if the heuristic is not immediately useful.
We hope that we have addressed the reviewer’s comments and that they reconsider their recommendation.
---
Rebuttal 2:
Comment: **Re:** I am confused by L234.
**Ans:** My confusion stems from the fact that by maximizing $\epsilon$, the authors take the most sound and least tight (I am using these terms here loosely, referring to their meanings for the empirical and provable DP guarantees cases, respectively) prediction they make across all $t$. Thus, despite the heuristic producing too low $\epsilon$s for many $t$, the authors report that for the “best” $t$ they are close to the counterexamples they find.
**Re:** In Figure 2, I am not sure how we adapt existing techniques to the last-iterate-only setting? Can the authors explain in more detail?
**Ans:** My question is regarding the Empirical attack in Figure 2. The authors say they adapt the empirical attack to the last-iterate DP setting. My question was if the authors could provide details on how this is done?
**Re:** Can the authors explain in Figure 1, what network and dataset were used?
**Ans:** Can the authors describe how the plot and the heuristics themselves are compiled in more detail?
**Re:** I am not sure if the structure of the proposed DP counterexamples in Section 4 is novel.
**Ans:** I would also like to ask other reviewers who are more familiar with SOTA DP research to confirm this. Also, I would request that if they also find it novel, to take this into account in their reviews.
**Re:** What I am not as confident in is if the heuristic, which by authors' own admission does not provide rigorous upper or lower bounds, will be useful in practice.
**Ans:** I find the first part of the author’s answer here very useful. I would suggest that they include it more prominently in the paper. Regarding the second part of the answer --- I think the proposed counterexamples in Section 4 strongly suggest that general last-iterate DP guarantees are likely the same as full DP. That said, they also suggest that they are the same only for very artificial examples that only happen for extremely unrealistic models. Do the authors have any suggestions on how the DP definitions in the last-iteration case can be amended to make their worst case closer to DP-SGD executed on real networks? Maybe something related to progressing the loss during training?
---
Rebuttal Comment 2.1:
Comment: Thank you for your comment. Some responses:
> Thus, despite the heuristic producing too low $\epsilon$s for many $t$, the authors report that for the “best” $t$ they are close to the counterexamples they find.
That is a fair point. We presented the results in Section 4.2 to show that we are capturing something more than Section 4.1, but we should also compare to the original baseline. This is what Figure 5b looks like if we just use a single $T$: [https://i.sstatic.net/IYnSlZwW.png](https://ibb.co/DpC6crS)
The non-monotonicity issue identified in Section 4.1 has a larger effect than the phenomenon captured in Section 4.2. (This is already evident from Figure 1c.)
> My question is regarding the Empirical attack in Figure 2. The authors say they adapt the empirical attack to the last-iterate DP setting. My question was if the authors could provide details on how this is done?
The prior literature on privacy auditing considers different types of attacks, some of which apply to our last iterate setting and some of which don't. We don't significantly adapt the attacks; we just consider those that are applicable to the last iterate setting.
The basic idea for all the membership inference attacks is to run the training procedure both with and without a given example and then to apply a distinguishing test to the final model.
For input space attacks (Figure 4), we use the loss of the malicious input as a distinguisher. For gradient space attacks (Figures 2 and 3) the distinguishing test measures the dot product of the final model checkpoint and the gradient canary.
The true positive rate / false positive rate tradeoff curve for this distinguishing test is then used to calculate the $\varepsilon$ values.
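For readers unfamiliar with this conversion, a minimal sketch of the last step follows. This is generic auditing arithmetic rather than our actual code, and the function name and constants are placeholders:

```python
import math

def empirical_epsilon(tpr, fpr, delta=0.0):
    """Lower bound on epsilon implied by a membership-inference distinguisher.

    (epsilon, delta)-DP forces TPR <= exp(eps) * FPR + delta, and symmetrically
    TNR <= exp(eps) * FNR + delta, so inverting the observed rates at a point
    on the tradeoff curve yields an empirical epsilon estimate.
    """
    candidates = [0.0]
    if fpr > 0 and tpr > delta:
        candidates.append(math.log((tpr - delta) / fpr))
    if (1 - tpr) > 0 and (1 - fpr) > delta:
        candidates.append(math.log((1 - fpr - delta) / (1 - tpr)))
    return max(candidates)

# e.g. an attack achieving TPR 0.8 at FPR 0.1 (with delta = 0)
# certifies eps >= log(0.8 / 0.1) = log(8) ~ 2.08
```

In practice this would be evaluated at many points on the tradeoff curve (with confidence intervals), and the largest certified value reported.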
The difference between Figures 2 and 3 is that in Figure 3 the gradients of the other examples are zeroed out, while in Figure 2 the other examples' gradients are nonzero and effectively act as additional noise that aids privacy. The reason we included Figure 2 is that in this setting, if the adversary has access to intermediate iterates, then prior auditing work has shown that the standard DP-SGD analysis is tight. But it is obviously very far from tight in the last iterate setting and our heuristic only partially explains the gap.
> Can the authors describe how the plot [Figure 1] and the heuristics themselves are compiled in more detail?
The line labelled "Heuristic" is calculated using the method described in Appendix A.1.
The line marked "Standard" is calculated using an open source DP accounting library [[Goo20](https://github.com/google/differential-privacy/tree/main/python/dp_accounting)].
The "Full Batch" line can be calculated using the same open source library or as a special case of our heuristic (we opted for the latter, since the open source library can be slow).
> That said, they also suggest that they are the same only for very artificial examples that only happen for extremely unrealistic models. Do the authors have any suggestions on how the DP definitions in the last-iteration case can be amended to make their worst case closer to DP-SGD executed on real networks? Maybe something related to progressing the loss during training?
This is the big question and it's something that we are continuing to work on.
The only prior work in this direction [[FMTT18](https://arxiv.org/abs/1808.06651); [CYS21](https://arxiv.org/abs/2102.05855); [YS22](https://arxiv.org/abs/2203.05363); [AT22](https://arxiv.org/abs/2205.13710); [BSA24](https://arxiv.org/abs/2403.00278)] assumes contractivity -- i.e., if you perturb the model weights at some point, then, in subsequent steps, the effect of this perturbation does not grow or even shrinks. We can prove contractivity for smooth and (strongly) convex loss functions. But, for deep learning, we have seen experimentally that this contractivity assumption is simply not true.
We speculate that it would be fruitful to understand to what extent real deep learning losses are "locally approximately linear" (however you want to precisely define that) and then to exploit that property in the privacy analysis. The value of our submission is that it tells us the best we can hope for from such an approach. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Token Merging for Training-Free Semantic Binding in Text-to-Image Synthesis | Accept (poster) | Summary: The authors tackle semantic binding, where T2I models often fail to correctly reflect the relations between objects (object binding) or objects and their attributes (attribute binding). To address this, they introduce Token Merging (ToMe), a method that aggregates related tokens to a single composite token, which ensures they share the same cross-attention map. To do that, two training-free optimizations are introduced. The first, Semantic Binding loss, which makes sure the composite token leads to noise prediction that is consistent with the full phrase the token is based on. The second, Entropy loss, which helps the tokens focus exclusively on their designated regions.
Strengths: * Creating a composite token and then optimizing it is a creative and elegant approach to the problem. I'm curious to see what future work can be built on top of this contribution.
* The focus on object binding is indeed missing from the literature, a topic that is well-addressed in this work.
* The idea of end token substitution to remove attribute information is simple.
Weaknesses: * The related work underplays the role that Attend-and-Excite and Linguistic Binding in Diffusion Models (SynGen) have in this paper. The former introduced the idea of semantic guidance, while the latter used dependency graphs to extract syntactically related tokens from prompts and presented a loss that encourages cross-attention maps of syntactically related tokens to agree. Both of these are ideas this paper meaningfully builds on.
* You do not describe how you obtain the syntactic information: entities and their relevant objects nor if it is automatic or an input constraint. This is particularly confusing because in section 3.2 you begin describing exactly that, but do not delve into the specifics. Furthermore, in section 4.1 (line 248), you mention spaCy and that you "identify each object and its corresponding attributes for token merging", but do not provide actual details about your method. Does it mean it can accurately capture *all* syntactic structures? It is possible that I misunderstand, but it feels like the syntactic analysis portion of this work borrows from the SynGen paper. If this is the case, give appropriate credit. If it is not, then add details, as this is part of the method and is quite unclear.
* If I understand correctly, there are no human evaluation experiments; a user study would lend support to ToMe’s superior performance. As a side note, in Table 1, ‘Human-preference’ is a slightly misleading title. If I understand correctly, it is the output of the Image Reward human-preference model, but it sounds as if it is a human-evaluation result.
* I find that despite the claim that ToMe works well on highly complex prompts, there are no such examples or experiments given. For example, Figure 5 depicts prompts that are supposed to be complex, but they are no different from the simplistic prompts in Attend-and-Excite (“a {color_1} {subject_1} and {a color_2} {subject_2}”), only that they address object binding.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. What input text is used in the global encoding part?
2. Do you expect your method to extend to SD-3?
3. I worry that ToMe suffers from too many gradient updates in the event of highly complex prompts (like all training-free approaches), which would push the latent out of distribution.
4. Does ToMe work on long captions, with multiple sentences? How?
5. You say in the appendix that SynGen was modified to work with SDXL. Can you elaborate on the modifications? What would be needed to reproduce this experiment too?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: See questions 3 and 4.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your feedback and will incorporate the discussions mentioned below to enhance the quality of our paper. Note that we utilize the numerical references to cite sources within the main paper.
**W1: Related work**
These two methods generally fall under optimization-based approaches. They primarily adjust noisy signals to enhance attention maps and strengthen semantic binding. Attend-and-Excite improves object presence by increasing the attention score for each object. SynGen performs a syntactic analysis of the prompt to identify entities and their modifiers, using attention loss functions to ensure that cross-attention maps align with the linguistic binding reflected in the syntax. These methods have indeed provided a solid foundation for future semantic binding work. Different from these two works, our training-free method ToMe is based on the text embedding additivity property. Instead of updating the latent space, we update the composite token embeddings. We further introduce two auxiliary losses, i.e. the entropy loss and semantic binding loss, to augment the ToMe performance.
We will include more detailed introductions to these two methods in the introduction and related work sections in any future version.
**W2: Syntactic information**
In Section 4.1 on the implementation details, we mention that we use SpaCy[27] to detect relationships, following the approach outlined in the SynGen. More specifically, we use SpaCy’s transformer-based dependency parser to analyze the prompt and identify all entity nouns, including both proper nouns and common nouns, that are not serving as direct modifiers of other nouns. This process helps pinpoint entity nouns and their corresponding modifiers. We will further detail this process in the method and implementation details section.
**W3: User study**
As discussed in General Response 1, we use the ImageReward[72] model in this paper to evaluate human preference scores, measuring image quality and prompt alignment. However, since quantitative results may not fully capture these aspects, we have now conducted a user study with 20 participants to enrich the evaluation. We compare our method ToMe with SDXL, SynGen, Ranni and ELLA. Each participant evaluates 100 images, generated from the five methods and twenty prompts. The results are presented in Fig.16 of the rebuttal file. In the user study, we ask the participants to rate the semantic binding on 4 levels and then calculate the distribution of each method over these four levels. We observe that ToMe achieves better semantic binding, with its ratings concentrated in the highest level (level 1), while the other methods struggle to obtain satisfactory results.
**W4&Q4: Complex prompts generation**
As illustrated in General Response 2, ToMe is not limited to simple cases. In the main paper, Table 2 shows experiments on the T2I-CompBench benchmark, which contains long captions with various prompt templates; there are also complex-prompt T2I generations in Fig.12. To further show that our method ToMe can be applied to T2I generation with complex prompts, we adopt the reviewer's suggestion and generate with more complicated prompt templates, including longer prompts with multiple objects, multiple attributes, or even multiple sentences. The generation results are shown in Fig.13 in the rebuttal file.
**Q1: Global encoding text**
SDXL uses conditioning concatenation for text embeddings from two prompts. To keep our method ToMe applicable and generalizable to other T2I generative models, we keep both prompts the same. For example, given the prompt “A blue cat and a green dog”, we assign it as both the local and the global encoding prompt, feed it to the two text encoders, and concatenate the text embeddings along the channel dimension. In our method, ToMe updates the concatenated text embedding conditions with our proposed losses. We will add this to the implementation details.
**Q2: Generalizability to other T2I models**
As claimed in General Response 4, T2I models face generation limitations in various specific cases. As a future goal, we aim to generalize our method to various generative models (such as DeepFloyd-IF, SD3, PixArt, etc.) to counter this issue. Current T2I generation models are built on various language models and vision-language models, such as CLIP and T5, and incorporate diverse architectures, including UNet, GAN, and DiT. We are also curious to see whether the additivity property truly holds across all these cases, and whether it is a generalizable feature.
**Q3: Latent Distribution Drifts**
You are correct; large gradients can cause significant changes in the latent space, which is a common issue in training-free semantic binding methods[7,41,56]. The primary goal of these methods is to update the latent or text embeddings to align the semantics between texts and images, often resulting in varying degrees of distribution change. To mitigate this issue, we only update the composite token representations during the first $T_{\text{opt}} = 0.2T$ time steps. This technique helps our method ToMe experience fewer distribution changes from the base model SDXL, as demonstrated in multiple cases in Fig. 5, Fig. 12 and Fig.13.
**Q5: SynGen reproduction details**
The original SynGen paper is based on the SD1.5 model. To adapt it to use SDXL as the base model, we utilize the 32×32 cross-attention map from the first three layers of the UNet-decoder, which serves as the most semantic layer, similar to the 16×16 cross-attention in SD1.5. We also performed a grid search to determine the optimal learning rate for SynGen in this context. All other details remain consistent with the original paper. After tuning the hyperparameters, the SDXL-based SynGen outperforms the SD1.5-based SynGen, as shown in Table 1 and Table 4. We will release the SynGen adapted code in the future for reproduction.
---
Rebuttal Comment 1.1:
Title: Thanks for your comments and look forward to further discussion
Comment: Dear Reviewer 8gpY:
Thank you for your valuable feedback on our paper. We deeply appreciate the time and effort you’ve put into reviewing our work, and your constructive comments are invaluable to us.
Regarding the weaknesses you mentioned, we include the corresponding experiments and discussion as follows:
1. Expanded discussion on related work, particularly Attend-and-Excite and SynGen.
2. Added detailed explanations on syntactic information extraction using SpaCy.
3. Conducted a user study with 20 participants to validate ToMe’s performance.
4. Provided additional examples of complex prompts generation.
5. Clarified the input text used in global encoding.
6. Discussed the generalizability of our method to other T2I models.
7. Discussed the potential latent distribution drifts.
8. Detailed the modifications made to adapt SynGen for SDXL.
Your feedback is incredibly important to us, and we sincerely thank you for considering our rebuttal. Please let us know if there are any further concerns or questions—we are more than happy to discuss them.
Thank you again for your time, and we look forward to your response.
Best regards,
Authors of submission 1763
---
Rebuttal Comment 1.2:
Title: Thank you
Comment: Thank you for taking the time to write back and answering some of my questions.
**W2** Note that the current 4.1 section does not mention the SynGen paper, but I trust that this will be clarified in a revised version.
**Q1** Please add it to the paper.
**Q2** Great. For your consideration: it would be interesting to see a FLUX or SD-3 implementation in a revised manuscript.
**Q3** Thanks, please add that to the paper, as these optimization methods are indeed quite sensitive, and understanding the tradeoffs is important.
**Q5** Please specify the hyperparameters that were used and any other detail that can be used to reproduce your calibration of this baseline.
There are lots of details that were not provided in the discussion, but I trust that you will incorporate everything in great detail in a revised manuscript. I'm raising my score to reflect this.
---
Reply to Comment 1.2.1:
Comment: Thank you very much for reviewing our paper and providing valuable feedback. We're glad our rebuttal addressed your concerns, and we'll include the suggested experiments and detailed discussion in the revised manuscript. Thanks again for your time and effort. | Summary: The authors propose a method to mitigate semantic binding, a common phenomenon in text-to-image models. While previous methods explicitly control the attention maps so that nouns and attributes attend to the same regions, the authors propose combining the nouns and attributes into a single token. This approach enforces the attention maps to concentrate in the same area. Additionally, they introduce an inference-time optimization loss to ensure the attention maps are focused, and that each composed token retains the same semantic meaning as all the different tokens that comprise it. The authors also analyze the information encoded in different tokens at the output of the text encoder.
Strengths: 1. The authors present an interesting analysis of the information encoded tokens, specifically they show that much of the semantic information is also encoded in the EOT tokens.
2. The authors provide a method to mitigate semantic binding in prompts of the form: “A <noun_A> <Attribute_A> and <noun_B> <Attribute_B>”, that outperforms or performs comparably to existing methods across various benchmarking subsets.
Weaknesses: 1. It is not clear how the model performs on more complicated prompts where both the noun and its sub-objects have attributes, such as: “A blue cat wearing sunglasses and a yellow dog wearing a pink hat.”
2. The method is on the slower side compared to other existing methods, due to inference time optimization.
3. This work is missing a user study, as the proposed automatic metrics can be inaccurate, and human evaluation is common in previous works.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. How does the method perform on more complex prompts? For example, for a prompt like “A blue cat wearing sunglasses and a yellow dog wearing a pink hat”. From my understanding, the method should not succeed in separating “pink hat” and “yellow dog” as the whole expression will be combined into a single token.
2. It is not clear to me what prevents semantic binding across nouns, e.g., both the dog* and the cat* tokens having the same appearance (e.g., a cat-dog hybrid), which is a common problem in text-to-image models. What prevents the dog* and cat* attention maps from attending to the same regions?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors discuss limitations relating to the underlying models, but I would be interested to learn in which areas of semantic binding the method still struggles, such as more complex prompts or a larger number of nouns, etc.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your feedback and will incorporate the discussions mentioned below to enhance the quality of our paper. Note that we utilize the numerical references to cite sources within the main paper.
**W1&Q1&L1: Complex prompts generation**
As demonstrated in General Response 2, our method ToMe is not limited to simple cases. In the main paper, Table 2 shows experiments on the T2I-CompBench benchmark, which contains long captions with various prompt templates; there are also complex-prompt T2I generations in Fig.12 in the Appendix. To further show that our method ToMe can be applied to T2I generation with complex prompts, we adopt the complicated long-caption templates suggested by the reviewer. The generation results are shown in Fig.13 in the rebuttal file.
In this case, we use our method ToMe in an iterative updating manner. For instance, we generate the fourth image example in Fig.13 (bottom left) with the prompt “a blue cat wearing sunglasses and a yellow dog wearing a pink hat”. To apply ToMe, we merge ‘yellow dog’ into token1 (dog*) and ‘pink hat’ into token2 (hat*). Following that, we merge token1 and token2 into token3. The noun phrases for computing the semantic binding loss corresponding to these three tokens are ‘a yellow dog’, ‘a pink hat’ and ‘a dog wearing a hat’, respectively. We then update token1 and token2 with the losses proposed in Section 3.2.2.
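To make the merging operation concrete, here is a minimal illustrative sketch. The summation rule follows the text-embedding additivity property the method builds on, but the shapes and helper name are simplifications, and the real method subsequently optimizes the composite embedding rather than using the raw sum:

```python
import numpy as np

def merge_tokens(embeddings, span):
    """Replace a contiguous span of token embeddings (e.g. ['yellow', 'dog'])
    with a single composite token whose embedding is their sum."""
    start, end = span  # half-open interval [start, end)
    composite = embeddings[start:end].sum(axis=0, keepdims=True)
    return np.concatenate([embeddings[:start], composite, embeddings[end:]], axis=0)

# "a yellow dog" -> "a dog*": three token embeddings become two
emb = np.arange(24, dtype=float).reshape(3, 8)   # toy (seq_len, dim) embeddings
merged = merge_tokens(emb, (1, 3))               # merge 'yellow' + 'dog'
assert merged.shape == (2, 8)
```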
**W2: Inference time cost**
As detailed in General Response 3, we present a time cost comparison in Table 3 in Appendix C.4, where we compare performance using 50 inference steps with the float32 SDXL model on an A40 GPU. We demonstrate that our method does not significantly increase inference time while improving semantic binding performance. We further extend this analysis by measuring the time cost with 20 inference steps and various ToMe configurations, as shown in the table below. We report the time cost (in seconds) along with BLIP-VQA scores across the color, texture, and shape attribute-binding subsets. From this table, we observe that using the token merging (ToMe) technique and entropy loss (Config. C in Table 2), our method achieves excellent performance with minimal additional time cost. Additionally, even with only 20 inference steps, our method maintains high performance with very little degradation.
| method | inference steps | Time Cost | Color | Texture | Shape |
| :--------: | :--------: |:--------: |:--------: |:--------: |:--------: |
| SDXL | 20 | 18s | 0.6136 | 0.5449 | 0.5260 |
| *ToMe (Config C)* | 20 | 23s | *0.7419* | *0.6581* | *0.5742* |
| **ToMe (Ours)** | 20 | 45s | **0.7612** | **0.6653** | **0.5974** |
| Ranni (SDXL) | 50 | 87s | 0.6893 | 0.6325 | 0.4934 |
| ELLA (SDXL) | 50 | 51s | 0.7260 | 0.6686 | 0.5634 |
| SynGen (SDXL) | 50 | 67s | 0.7010 | 0.6044 | 0.5069 |
| SDXL | 50 | 42s | 0.6369 | 0.5637 | 0.5408 |
| *ToMe (Config C)* | 50 | 56s | *0.7525* | *0.6775* | *0.5797* |
| **ToMe (Ours)** | 50 | 83s | **0.7656** | **0.6894** | **0.6051** |
**W3: User study**
Please also refer to General Response 1. In this paper, we use the ImageReward[72] model to evaluate human preference scores, which comprehensively measures image quality and prompt alignment. However, since quantitative results may not fully capture these aspects, we have now also conducted a user study with 20 participants to enrich the evaluation. Here we compare our method ToMe with SDXL[51], SynGen[56], Ranni[19] and ELLA[28]. Each participant evaluates 100 images, generated from these five methods and twenty prompts.
The results are presented in Fig.16 of the rebuttal file. Similar to our proposed GPT-4o benchmark (detailed in Appendix C.5 and Fig.10), we ask the participants to rate the semantic binding on 4 levels and calculate the distribution of each comparison method over these four levels. We observe that our method achieves better semantic binding, with its ratings concentrated in the highest level (level 1), while the other methods struggle to obtain satisfactory results from users.
**Q2: Mechanism to avoid semantic misalignment**
In this paper, we propose two losses to avoid the semantic misalignment and cross-attention leakage among objects and attributes.
First, the semantic binding loss assists in purifying the token semantics. Building on the ToMe technique, we minimize the semantic binding loss to enhance the cross-attention. It encourages the newly learned token to induce the same noise prediction as the original corresponding phrase, reinforcing the semantic coherence between the text and the generated image. To ensure that the semantics of the composite tokens accurately correspond to the noun phrases they represent, we use a clean prompt as a supervisory signal. The semantic binding loss is computed separately for each composite token, ensuring no interference among noun words.
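As an illustrative sketch of this loss (placeholder names and shapes; `predict_noise` stands in for the diffusion model's noise prediction, and this is not our exact implementation):

```python
import numpy as np

def semantic_binding_loss(predict_noise, z_t, t, cond_merged, cond_phrase):
    """The noise predicted under the composite-token conditioning should match
    the noise predicted under the clean noun-phrase conditioning, which acts
    as a frozen supervisory target."""
    eps_target = predict_noise(z_t, t, cond_phrase)  # clean prompt supervises
    eps_merged = predict_noise(z_t, t, cond_merged)  # depends on learned token
    return float(np.mean((eps_merged - eps_target) ** 2))
```

In the method, this scalar is minimized with respect to the composite token embedding only, during the early denoising steps.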
In addition, we reduce the entropy loss to help ensure that tokens focus exclusively on their designated regions, preventing the cross-attention maps from becoming overly divergent. The experimental results comparing scenarios with and without entropy loss are presented in Table 2, Fig. 6, and Fig. 7. Both losses function together to improve semantic binding. | Summary: This paper aims to solve the semantic binding problem in T2I models. The authors introduce a semantic binding method by merging tokens of entities and related attributes. Besides, several other tricks like semantic binding loss and entropy loss are introduced to improve the performance of semantic binding.
Strengths: - The idea is straightforward and it is also reasonable that it will work with some other tricks.
- The paper is well-written.
Weaknesses: - The importance of a few tricks like entropy loss is not well-emphasized.
- The motivation for some tricks like substituting the EOT token is not well-explained.
Technical Quality: 3
Clarity: 3
Questions for Authors: - 1. From Table 2, we know that entropy loss + token merging bring the biggest impact. This is reasonable because token merging actually changes nothing, all information still exists in the composition token, and semantic leakage will also happen. However, the reviewer does not fully understand what happens when doing token merging + entropy loss. Why do they work so well? Actually, we don't change the information and the way the information interacts.
- 1.1. The reviewer thinks Figure 7 is quite clear for the motivation, considering entities and their related attributes as one object. The reviewer is confused about what function entropy loss works.
- 2. What is the impact of substituting the EOT token? Is it important in the whole process?
- 3. How about the efficiency compared to the baseline in Table 1?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your feedback and will incorporate the discussions mentioned below to enhance the quality of our paper. Note that we utilize the numerical references to cite sources within the main paper.
**W1&Q1: Entropy loss**
In Appendix C.3, we demonstrate that the information coupling of token embeddings is also reflected in the entropy of cross-attention for each token. In Fig. 9-(c), we calculated the entropy of cross-attention maps for each token and found that tokens appearing later in the sequence generally have higher entropy, indicating that their cross-attention maps are more dispersed.
Based on these observations, in this paper we combine the entropy loss with our proposed ToMe approach. Reducing the entropy of the cross-attention maps of composite tokens helps ensure that these tokens focus exclusively on their designated regions, preventing the cross-attention maps from becoming overly divergent and thus avoiding cross-attention leakage [64]. The experimental results comparing scenarios with and without entropy loss are presented in Table 2, Fig.6, Fig.7 and Fig.14 (in the rebuttal file).
As an example in Fig.14, the original SDXL (Config A) suffers from attribute binding errors due to divergent cross-attention maps. When only token merging is applied (Config B), the co-expression of entities and attributes results in a dog wearing a hat in the image, but the attribute leakage issue remains due to the divergent cross-attention maps. When only the entropy loss is applied (Config E), although the cross-attention maps corresponding to each token are more concentrated, they may focus on wrong regions. Only when both token merging and $\mathcal{L}_{ent}$ are applied (Config C) does the cross-attention map of the composite token become well concentrated on the correct areas, leading to more satisfactory semantic binding of entities and attributes.
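For intuition, the entropy referred to here can be read as the Shannon entropy of a token's normalized cross-attention distribution over spatial locations: a dispersed map has high entropy, a concentrated one low entropy. The sketch below is purely illustrative and is not the paper's implementation:

```python
import math

def attention_entropy(attn_map):
    """Shannon entropy of one token's cross-attention map.

    `attn_map` is a list of non-negative attention weights over
    spatial locations. A dispersed (near-uniform) map has high
    entropy; a map concentrated on few locations has low entropy.
    """
    total = sum(attn_map)
    probs = [a / total for a in attn_map if a > 0]  # normalize, drop zeros
    return -sum(p * math.log(p) for p in probs)

# Minimizing an entropy loss of this kind pushes a composite token's
# attention toward the concentrated case:
dispersed = [0.25, 0.25, 0.25, 0.25]     # entropy = log(4)
concentrated = [0.97, 0.01, 0.01, 0.01]  # much lower entropy
assert attention_entropy(concentrated) < attention_entropy(dispersed)
```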
**W2&Q2: EOT ablation study**
The end token substitution (ETS) technique is proposed to address potential semantic misalignment in the final tokens of long sequences. As the [EOT] token interacts with all tokens, it often encapsulates the entire semantic information, as shown in Fig. 2. Since the semantic information in [EOT] can therefore interfere with attribute expressions, we mitigate this by replacing [EOT], removing the attribute information it carries from the original prompt while retaining only the semantic information for each subject.
For example, as shown by the cross-attention maps and T2I generation results in Fig.15 (in the rebuttal file), when ToMe is not combined with the ETS technique, the ‘sunglasses’ semantics contained in the [EOT] token cause the boy to incorrectly wear sunglasses. However, when combined with ETS, the unwanted semantic binding is relieved.
**Q3: Efficiency**
As detailed in the General Response 3, we present a time cost comparison in Table 3 in Appendix C.4, where we compare performance using 50 inference steps with the float32 SDXL model on an A40 GPU. We demonstrate that our method does not significantly increase inference time while improving semantic binding performance. We further extend this analysis by measuring the time cost with 20 inference steps and various ToMe configurations, as shown in the table below. We report the time cost (in seconds) along with BLIP-VQA scores across the color, texture, and shape attribute binding subsets. From this table, we can observe that using the token merging (ToMe) technique and entropy loss (Config C in Table 2), our method achieves excellent performance with minimal additional time cost. Additionally, even with only 20 inference steps, our method, ToMe, maintains high performance with very little degradation.
| method | inference steps | Time Cost | Color | Texture | Shape |
| :--------: | :--------: |:--------: |:--------: |:--------: |:--------: |
| SDXL | 20 | 18s | 0.6136 | 0.5449 | 0.5260 |
| *ToMe (Config C)* | 20 | 23s | *0.7419* | *0.6581* | *0.5742* |
| **ToMe (Ours)** | 20 | 45s | **0.7612** | **0.6653** | **0.5974** |
| Ranni (SDXL) | 50 | 87s | 0.6893 | 0.6325 | 0.4934 |
| ELLA (SDXL) | 50 | 51s | 0.7260 | 0.6686 | 0.5634 |
| SynGen (SDXL) | 50 | 67s | 0.7010 | 0.6044 | 0.5069 |
| SDXL | 50 | 42s | 0.6369 | 0.5637 | 0.5408 |
| *ToMe (Config C)* | 50 | 56s | *0.7525* | *0.6775* | *0.5797* |
| **ToMe (Ours)** | 50 | 83s | **0.7656** | **0.6894** | **0.6051** |
---
Rebuttal 2:
Title: Thanks for your comments and look forward to further discussion
Comment: Dear Reviewer mo93:
Thank you for your valuable feedback on our work. Your constructive comments on our work are invaluable, and we genuinely hope to get feedback from you.
Regarding the weaknesses you mentioned, we include the corresponding experiments and discussion as follows:
1. Entropy Loss & Token Merging: Our experiments (Fig. 14 in the rebuttal file) show that combining these techniques effectively focuses cross-attention maps, improving semantic binding.
2. End Token Substitution (ETS): As illustrated in Fig. 15 in the rebuttal file, ETS can also prevent unintended semantic interference, leading to more accurate attribute expressions.
3. Efficiency Comparison: We have provided a time cost analysis, showing that our method improves performance with minimal additional time.
Your feedback is incredibly important to us, and we sincerely thank you for considering our rebuttal. We are more than happy to discuss them if you have any further concerns or questions.
Thank you again for your time and effort in reviewing our work; we look forward to your response.
Best Regards,
Authors of submission 1763
---
Rebuttal Comment 2.1:
Title: Discussion
Comment: Dear authors:
Thanks for your response. They address my concerns.
I will raise my score.
Best,
Reviewer mo93
---
Reply to Comment 2.1.1:
Comment: Thank you very much for taking the time to review our paper and offering insightful feedback. We're pleased that our response addressed your concerns, and we will incorporate the recommended experiments and thorough discussion in the revised manuscript. We truly appreciate your time and effort. | Summary: This paper focuses on the problem of lack of semantic binding in text-to-image generation models, and specifically on the misalignment between objects and their sub-objects. The paper introduces a training-free T2I method named ToMe after analyzing the properties of CLIP text embeddings and diffusion models. Utilizing the composition recognition ability of diffusion models, ToMe binds the embeddings of objects with their associated sub-objects together, and retains only the semantic information in the [EOT] tokens. ToMe is shown to be simple yet effective from the experimental results.
Strengths: 1. The paper is well-written and easy to follow.
2. The authors present detailed analysis of the CLIP text embeddings including their coupled and entangled properties and the semantically additive properties of text embeddings shown in diffusion models.
3. The structure of the proposed ToMe model is training-free, simple and mostly clear.
4. The authors provide extensive experimental results including quantitative, qualitative and ablation studies on the T2I-CompBench and GPT-4o Benchmark (introduced in this paper).
Weaknesses: 1. The Iterative composite Token Update (Sec. 3.2.2) is not clear enough to me. The authors introduce two losses: semantic binding loss and entropy loss. And since ToMe is training-free, it is not clear in the paper how these losses help the update the tokens at test time.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can the proposed method generalize to more objects, or background with various properties? It would be great if the authors can explain how generating more objects will change the pipeline, or if this is not feasible, please give an explanation why.
2. Please demonstrate how the two losses are integrated in the T2I pipeline, and how they are used to update the tokens.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors presented the inherent limitation of SDXL that ToMe is based on like producing artifacts in generated images, and unable to generate images with complex layouts. Also the limitation of CLIP to produce text embeddings could restrict the performance of ToMe.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your feedback and will incorporate the discussions mentioned below to enhance the quality of our paper. Note that we utilize the numerical references to cite sources within the main paper.
**W1&Q2: Token Update**
By training-free, we mean that our method ToMe does not involve training over datasets, unlike finetuning-based methods such as Ranni[19], ELLA[28], and CoMat[32]. Like other works (AnE[7], D&B[41], SynGen[56], etc.), methods that only backpropagate during inference time (on the generation of a single image) are considered to be training-free.
ToMe distinguishes itself from other optimization-based methods by updating token embeddings, whereas most existing methods (e.g., AnE[7], D&B[41], SynGen[56]) update the latent space using designed loss functions. Instead of back-propagating the gradients to the T2I model parameters or to the latent variable $z_t$ of the previous timestep, we compute the loss and apply the gradients to the composite token embeddings in each inference step during the first $T_{opt} = 0.2T$ time steps.
More specifically, at time step $t$, the latent variable $z_t$ and the text embedding $\mathcal{C}$ are fed into the diffusion model. After computing the loss, the gradient is back-propagated to update $\mathcal{C}$. The latent variable $z_t$ and the updated text embedding $\mathcal{C}$ are then fed into the diffusion model again to predict the noise and obtain $z_{t-1}$.
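The update loop described above can be sketched schematically as follows. The quadratic loss and its gradient here are toy stand-ins; the actual objectives are the semantic binding and entropy losses of Sec. 3.2.2, evaluated through the diffusion model:

```python
def optimize_token_embedding(C, loss_grad, lr=0.05, steps=20):
    """Gradient steps on the composite token embedding C only.

    Neither the diffusion-model weights nor the latent z_t receive
    updates; only C does. `loss_grad` returns dLoss/dC and is a toy
    stand-in for the gradient of the semantic binding and entropy
    losses computed through the diffusion model.
    """
    for _ in range(steps):
        g = loss_grad(C)
        C = [c - lr * gi for c, gi in zip(C, g)]
    return C

# Toy illustration: a quadratic loss 0.5 * ||C - target||^2 whose
# gradient is (C - target); C drifts toward the target embedding.
target = [1.0, -1.0]
C0 = [0.0, 0.0]
C = optimize_token_embedding(C0, lambda C: [c - t for c, t in zip(C, target)])
assert sum((c - t) ** 2 for c, t in zip(C, target)) < sum(t ** 2 for t in target)
```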
**Q1: Complex prompts generation**
As we have demonstrated in the General Response 2, our method ToMe is not limited to simple cases. In the main paper Table 2, we show experiments on the T2I-CompBench benchmark, which contains long captions with various complex prompts. There are also T2I generations with complex prompts in Fig.12 in the Appendix.
To further show that ToMe can be applied to T2I generation with complex prompts, we adopt the reviewer's suggestion: T2I generation with prompts containing multiple objects or backgrounds with multiple attributes. The first row of T2I generation results in Fig.13 (in the rebuttal file) demonstrates the effectiveness of our method ToMe in such complex scenarios, where more objects and backgrounds with various properties are demanded.
**L1: Generalizability**
As we discuss in the General Response 4, we agree with you on this point. T2I models are facing generation limitations for various specific cases. We aim, as a future goal, to generalize our method to various generative models to counter this issue. Current T2I generation models are built on various language models and vision-language models, such as CLIP and T5, and incorporate diverse architectures, including UNet, GAN, and DiT. We are also curious to see if the additivity property truly exists across all these cases, and whether this property is a generalizable feature. | Rebuttal 1:
Rebuttal: We appreciate all reviewers (**R1**=**jwvK**, **R2**=**mo93**, **R3**=**85Q2**, **R4**=**8gpY**) for their positive feedback. They note that this paper is well-written (**R1,R2**); that the idea is simple and straightforward (**R1, R2, R4**); that we present interesting analysis of the token properties (**R1, R3**); that we provide extensive experiments over various benchmarks (**R1, R3**); and that we address object binding well in this work (**R4**). Below we respond to general questions raised by the reviewers. We use **W** to abbreviate Weaknesses, **Q** to represent Questions and **L** for Limitations. Note that we use numerical references to cite sources from the main paper.
**General Response 1: User study (R3-W3, R4-W3)**
In this paper, we use the ImageReward [72] model to evaluate human preference scores, which comprehensively measures image quality and prompt alignment. However, since quantitative results may not fully capture these aspects, we now also conduct a user study with 20 participants to enrich the evaluation. Here we compare our method ToMe with SDXL[51], SynGen[56], Ranni[19] and ELLA[28]. Each participant evaluates 100 images, generated by these five methods from twenty prompts.
The results are presented in Fig.16 of the rebuttal file. Similar to our proposed GPT-4o benchmark (detailed in Appendix C.5 and Fig.10), we ask the participants (instead of the GPT-4o model) to rate the semantic binding on 4 levels and compute the distribution of each compared method over these four levels. We observe that our method achieves better semantic binding, with most ratings falling in the highest level (level 1), while the other methods struggle to obtain satisfactory results from users.
**General Response 2: Complex prompts generation (R1-Q1, R3-W1&Q1&L1, R4-W4&Q4)**
Actually, our method ToMe is not limited to simple cases. In the main paper Table 2, we show experiments on the T2I-CompBench benchmark, which contains long captions with various prompt templates. There are also T2I generations with complex prompts in Fig.12 in the Appendix.
To further show that ToMe can be applied to T2I generation with complex prompts, we adopt the suggestions from the reviews: *(1) R1 (jwvK) and R3 (85Q2)*: multiple objects or backgrounds with multiple attributes; *(2) R4 (8gpY)*: long captions with multiple sentences. The generation results are shown in Fig.13 in the rebuttal file.
For the second case, we apply the ToMe method directly, as shown in the last example of Fig.13.
For the first case, we use our method ToMe in an iterative updating way. For instance, we generate the fourth image example in Fig.13 (bottom left) with the prompt “a blue cat wearing sunglasses and a yellow dog wearing a pink hat”. To apply ToMe, we merge ‘yellow dog’ into token1 (dog*) and ‘pink hat’ into token2 (hat*). Following that, we merge token1 and token2 into token3. The noun phrases for computing the semantic binding loss corresponding to these three tokens are ‘a yellow dog’, ‘a pink hat’ and ‘a dog wearing a hat’, respectively. Then we update token1 and token2 with the losses proposed in Section 3.2.2.
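The hierarchical merging above can be sketched as follows. Note that the element-wise averaging used here is an illustrative assumption made for this sketch only; the actual ToMe merge rule and the subsequent loss-driven token updates are defined in the paper:

```python
def merge(*embeddings):
    """Hypothetical merge rule: initialize a composite token as the
    element-wise mean of its constituent token embeddings. This
    averaging is an assumption for illustration; the actual ToMe
    merge and loss-driven updates are defined in Sec. 3.2.2.
    """
    n = len(embeddings)
    return [sum(vals) / n for vals in zip(*embeddings)]

# Hierarchical merging for "a yellow dog wearing a pink hat",
# with toy 2-d vectors standing in for CLIP token embeddings:
yellow, dog = [1.0, 0.0], [0.0, 1.0]
pink, hat = [2.0, 0.0], [0.0, 2.0]
token1 = merge(yellow, dog)     # dog*  <- 'yellow dog'
token2 = merge(pink, hat)       # hat*  <- 'pink hat'
token3 = merge(token1, token2)  # composite for 'a dog wearing a hat'
```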
**General Response 3: Time cost (R2-Q3, R3-W2)**
In Appendix C.4, we present a time cost comparison in Table 3, where we compare performance using 50 inference steps with the float32 SDXL model on an A40 GPU. We demonstrate that our method does not significantly increase inference time while improving semantic binding performance. We further extend this analysis by measuring the time cost with 20 inference steps and various ToMe configurations, as shown in the table below. We report the time cost (in seconds) along with BLIP-VQA scores across the color, texture, and shape attribute binding subsets. From this table, we can observe that using the token merging (ToMe) technique and entropy loss (Config C in Table 2), our method achieves excellent performance with minimal additional time cost. Additionally, even with only 20 inference steps, our method, ToMe, maintains high performance with very little degradation.
| method | inference steps | Time Cost | Color | Texture | Shape |
| :--------: | :--------: |:--------: |:--------: |:--------: |:--------: |
| SDXL | 20 | 18s | 0.6136 | 0.5449 | 0.5260 |
| *ToMe (Config C)* | 20 | 23s | *0.7419* | *0.6581* | *0.5742* |
| **ToMe (Ours)** | 20 | 45s | **0.7612** | **0.6653** | **0.5974** |
| Ranni (SDXL) | 50 | 87s | 0.6893 | 0.6325 | 0.4934 |
| ELLA (SDXL) | 50 | 51s | 0.7260 | 0.6686 | 0.5634 |
| SynGen (SDXL) | 50 | 67s | 0.7010 | 0.6044 | 0.5069 |
| SDXL | 50 | 42s | 0.6369 | 0.5637 | 0.5408 |
| *ToMe (Config C)* | 50 | 56s | *0.7525* | *0.6775* | *0.5797* |
| **ToMe (Ours)** | 50 | 83s | **0.7656** | **0.6894** | **0.6051** |
**General Response 4: Generalizability to other T2I models (R1-L1, R4-Q2)**
We agree with you on this point. T2I models are facing generation limitations for various specific cases. We aim, as a future goal, to generalize our method to various generative models to counter this issue. Current T2I generation models are built on various language models and vision-language models, such as CLIP and T5, and incorporate diverse architectures, including UNet, GAN, and DiT. We are also curious to see if the additivity property truly exists across all these cases, and whether this property is a generalizable feature.
Pdf: /pdf/8f369c3f293920e4e45a0482575816b96d7418e7.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Improved Distribution Matching Distillation for Fast Image Synthesis | Accept (oral) | Summary: The authors propose an improved way of distilling image-generating diffusion models into fast models capable of generating high-quality images with as few as 1-4 steps. Compared to prior work using distribution-matching (DMD), they do away with the regression loss term that tied the teacher path to the student path. They also introduce a GAN-style loss in the pipeline, and introduce a few other tricks to squeeze out extra performance. They demonstrate SOTA performance among efficient models, and even beat the teacher model (made possible by training using real images and a GAN loss).
Strengths: Overall, I found this to be an excellent paper. The work is well-motivated, addressing an important problem, and will likely be interesting to a large audience. The writing is excellent, with all concepts being well explained. The experimental coverage is excellent, with evaluation on two datasets and with a user study, leaving nothing more to be desired. Their performance numbers are also convincing. Finally, their attention to detail on experimental parameters also looks very thorough, giving confidence that their results can be reproduced.
On a more detailed level, I personally read the original DMD paper not too long ago, and found their regression loss to be slightly unsatisfactory, since it ties the student generation paths to the teacher paths in a way that seems contrary to the idea of the distribution matching loss. Therefore, I was happy to see in this paper that one can do away with this term.
Weaknesses: There's not so much to say here, since I found most aspects of the paper to be excellent. However, I would have liked to see some more details in how their approach relates to competing approaches. Most notably, the line of work by Sauer et al [23, 24] use SDS and a GAN-style loss. Now that a GAN-style loss is introduced in the DMD framework, the gap between these two approaches gets smaller, and it would be nice to see some more explanation about what the core difference is.
The progress is mostly empirical in nature. For example, the authors note that the training becomes more stable using the two-scale update rule, but they don't present any theoretical convergence guarantees (which is perfectly fine, the empirical progress is definitely good enough in my opinion).
There are a few minor typos (e.g. line 122, should be "gradient of the data log likelihood"), but few enough not to impair the overall understanding (and I trust the authors to do a final proof reading for the camera ready version).
Technical Quality: 4
Clarity: 3
Questions for Authors: What is the most significant difference between this work and the line of work by Sauer et al ([23, 24])? Could one sentence about that be added to the "related work" section?
The original DMD paper [22] demonstrated examples of mode collapse when omitting the regression loss. GANs (at least some) are also known to be prone to mode collapse. Could you mention anything about the mode collapse situation in this method? (Since the FID numbers are good, I assume that this is also good, but since this was a main point of analysis in [22]?)
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: Yes, limitation and potential social impact are well-described in section 6 and A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate Reviewer xaYh's constructive feedback. We will fix all typos. Below, we address the remaining concerns.
**How is DMD2 related to ADD and LADD? What is the most significant difference?**
Thank you for the opportunity to discuss how DMD2 relates to other concurrent GAN-based methods such as ADD and LADD. The ADD paper [23] utilizes a pretrained DINO-based classifier in pixel space, which we found to be less efficient to train and to lead to reduced diversity, as detailed in Table 6. More recent efforts like LADD, UFOGen, and SDXL-Lightning employ latent diffusion GAN-based networks akin to our DMD2. A significant difference, which is often overlooked, is that DMD-style training inherently integrates classifier-free guidance directly into the real score of the DMD gradient. This integration simplifies the training process of our model. In contrast, purely GAN-based methods typically struggle to incorporate classifier-free guidance directly. For example, SDXL-Lightning needs to combine GAN with progressive distillation to enable CFG, while LADD relies on diffusion-generated images with CFG as real data in its GAN discriminator. These approaches often complicate the training process. We will elaborate further on these distinctions in the related work section of our revised paper.
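To make the CFG point concrete: classifier-free guidance forms a guided prediction as a weighted extrapolation from the unconditional toward the conditional prediction, and in DMD-style training this guided prediction stands in for the "real" score inside the distribution-matching gradient. A toy sketch with illustrative values, not tied to any particular implementation:

```python
def cfg_score(s_cond, s_uncond, w):
    """Classifier-free guidance: extrapolate from the unconditional
    prediction toward the conditional one with guidance scale w.
    w = 1 recovers the plain conditional prediction; w > 1 pushes
    further toward the condition.
    """
    return [su + w * (sc - su) for sc, su in zip(s_cond, s_uncond)]

# Because the guided score enters the DMD gradient directly, the
# distilled student needs no separate CFG machinery at inference.
assert cfg_score([1.0, 2.0], [0.5, 1.0], w=1.0) == [1.0, 2.0]
assert cfg_score([1.0, 2.0], [0.5, 1.0], w=3.0) == [2.0, 4.0]
```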
**The original DMD paper [22] demonstrated examples of mode collapse when omitting the regression loss. GANs (at least some) are also known to be prone to mode collapse. Could you mention anything about the mode collapse situation in this method? (Since the FID numbers are good, I assume that this is also good, but since this was a main point of analysis in [22]?)**
Similar to our response to Reviewer xYYF, we assess the mode collapse situation under two scenarios:
- Class-Conditional Image Generation: We observed no mode collapse, as evidenced by the state-of-the-art FID scores for image generation (see Table 1).
- Text-to-Image Generation: This scenario is more complex. We utilize classifier-free guidance, which typically trades diversity for image quality. At high guidance settings, using the SDXL baseline, we achieved superior image quality with diversity comparable to or better than other distillation methods but slightly worse than the teacher model (see Table 6 and the new results in our Response to Reviewer aF5v).
Our current framework trains stably and produces excellent image quality. However, there remains a small gap in output diversity compared to the original teacher model and we are open to exploring future methods that might enhance this diversity/quality tradeoff by better integrating trajectory-preserving techniques (such as the more efficient consistency distillation) with distribution matching methods. We believe this is a promising direction for future research.
---
Rebuttal Comment 1.1:
Comment: I read the rebuttal and thank the authors for good responses to my questions. I have no further comments, and my "strong accept" recommendation stands. I wish the authors good luck, and I'm looking forward to read the final version! | Summary: This work proposed an improved training method for distribution matching distillation, named DMD2. Notably, compared to DMD, it does need the regression loss which relies on constructing the synthetic noise-data pairs. Instead, DMD2 introduces three new features: 1) a two time-scale update rule for fake score and generator training, 2) a GAN loss which extracts the bottleneck features of fake score network for a training prediction head as the discriminator, and 3) backward simulation for few-step distillation that produces inference-time generator inputs during training.
Strengths: - The paper is well written and easy to read.
- The new distribution matching distillation technique (DMD2) significantly improves over DMD with several well-justified innovations, such as TTUR, GAN loss and backward simulation.
- DMD2 achieves SOTA performance on ImageNet-64 and COCO 2014 with one-step generation, and can achieve SOTA performance in distilling SDXL to a 4-step generator, measured by sufficient automatic metrics and human studies.
- Ablation studies have been well executed to highlight the importance of each introduced feature.
Weaknesses: - The proposed method introduces many hyperparameters and seems to be sensitive to these hyperparameters, such as batch size, guidance scale for teacher score, GAN loss weighting and fake score updating frequency. For instance, in different distillation tasks (EDM on ImageNet, SDv1.5 and SDXL), these hyperparameters are different (as shown in Appendix G). I’m not sure if the hyperparameters need to be specifically tuned for good performance in each distillation task.
- The authors claimed that “SDXL remains challenging to distill into a one-step generator because of limited model capacity and a complex optimization landscape to the direct mapping from noise to highly diverse and detailed images”. For the point of “limited model capacity”, does it mean that if the student has the same network as SDXL, by any means, we are not able to achieve a one-step generation that matches SDXL’s performance? For the point of “a complex optimization landscape”, it seems that both SDv1.5 and SDXL are trained on the LAION dataset, which means they are both trying to learn the same mapping from noise to data. Does it mean the higher-quality generation of SDXL (rather than the training data of teacher models) hinders the one-step distillation? On the other hand, I wonder if it is possible to tune the hyperparameters of distilling SDXL for a better one-step generator. For example, in Appendix G, the batch size for distilling SDXL is only 128 while the batch size for distilling SDv1.5 is 2048. If DMD2 is sensitive to batch size in the large-scale text-to-image case, can we increase the batch size for distilling SDXL to 2048 during training for improved performance?
- There are some inconsistencies: 1) In Figure 4, the caption says the teacher uses 50 sampling steps while the main text says “while requiring 25x fewer forward passes (4 vs 100)”. 2) EDM (Teacher ODE) originally reported their FID as 2.22, while this work reports 2.32 in Table 1. 3) Both $\mu_{\text{real}}$ and $\mu_{\text{fake}}$ are introduced without definition.
Technical Quality: 3
Clarity: 3
Questions for Authors: - I’m curious about the extra cost of backward simulation, i.e., producing synthetic images with the current student generator running several steps. Is it possible to compare the training time per iteration with and without backward simulation?
- From the numbers in Table 2, it looks like the 4-step distillation improves Patch FID but gets worse FID and CLIP, compared to 1-step distillation. Any justification?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate Reviewer uheV's constructive feedback. Below, we address the remaining concerns.
**How to select the set of hyperparameters?**
Thank you for your question. Our approach to selecting hyperparameters is straightforward and consistent across all datasets. We utilize the maximum batch size our compute setup allows. The learning rate is determined as the highest value that does not cause divergent loss within the first 500 iterations. We set the number of TTUR iterations to the minimum required for stability. The guidance scale is chosen based on what yields the best results for the teacher model. For the GAN weight, our method works well across a wide range of values, as demonstrated by the following ImageNet FID scores:
| Weight | ImageNet FID |
| - | - |
| 2e-3 | 1.31 |
| 3e-3 | 1.28 |
| 4e-3 | 1.28 |
| 5e-3 | 1.26 |
| 1e-2 | 1.30 |
We will include these guidelines in our updated version.
**For the point of “limited model capacity”, does it mean that if the student has the same network with SDXL, by any means, we are not able to achieve a one-step generation that matches SDXL’s performance?**
We are open to the possibility of a one-step generation that matches the performance of the teacher model. However, we want to emphasize that achieving this in a single-step generation is substantially more challenging than in a multi-step process, especially when distilling complex networks like SDXL. While our one-step generator can match the teacher model in quantitative metrics such as FID, qualitative aspects of image quality present greater challenges. Visually, the one-step process still produces some artifacts that are difficult to eliminate (See Figure 11). An exciting direction for future research could involve refining our training objectives to address these issues more directly, as seen in recent developments like those explored in HyperSD [1].
**For the point of “a complex optimization landscape”, it seems that both SDv1.5 and SDXL are trained on LAION dataset, which means they are both trying to learn the same mapping from noise to data. Does it mean the higher-quality generation of SDXL (rather the training data of teacher models) hinders the one-step distillation?**
While both models are trained on the LAION dataset, they likely utilize different subsets, leading to variations in the noise-to-data mapping. The higher quality and particularly the subtler details found in SDXL are indeed more challenging to capture, potentially requiring more advancements in loss design.
**I wonder if it is possible to tune the hyperparameters of distilling SDXL for a better one-step generator. For example, in Appendix G, the batch size for distilling SDXL is only 128 while the batch size for distilling SDv1.5 is only 2048. If DMD2 is sensitive to batch size in the large-scale text-to-image case, can we increase the batch size for distilling SDXL to 2048 during training for improved performance?**
We have observed improved performance with increased batch sizes and compute. We used a batch size of 128 for SDXL, as this is the maximum our current setup can accommodate within our compute budget of 3 days—especially considering that the SDXL model is three times larger than SDv1.5. Exploring a larger batch size, such as 2048, with additional resources in the future would indeed be an interesting experiment to potentially enhance performance further.
**Some Writing Inconsistency**
Thank you for pointing out these inconsistencies. Regarding the discrepancy between 50 and 100 steps in Figure 4, the teacher model uses 50 steps, but effectively has 100 forward passes due to the application of classifier-free guidance. For EDM, upon reevaluating the released model, we recorded a FID of 2.32, which we will update to 2.22 in our revised version to reflect the most accurate data. We appreciate your attention to detail and will correct the remaining issues. Thank you again for your valuable feedback.
**Extra cost per training iteration of backward simulation**
Thank you for raising the issue of training computational cost. The training iteration time for the model with backward simulation is approximately 9.2 seconds per iteration, compared to 7.8 seconds for the model without backward simulation. We also discovered that enabling backward simulation only during the last 20% of the training epochs yields comparable results. This approach offers a more computationally efficient option when resources are limited.
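For clarity, this schedule can be expressed as a simple flag (the helper name is ours; the 20% figure is the one quoted above):

```python
def backward_sim_enabled(step, total_steps, frac=0.2):
    """Return True only during the final `frac` of training iterations,
    matching the compute-saving variant described above (backward
    simulation enabled for the last 20% of training)."""
    return step >= (1.0 - frac) * total_steps
```

In the training loop, the backward-simulation branch would simply be guarded by this flag.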
**From the numbers in Table 2, it looks like the 4-step distillation improves Patch FID but gets worse FID and CLIP, compared to 1-step distillation. Any justification?**
From the data in Table 2, the difference in FID between the one-step and four-step distillation (0.3) is generally within the range of variability observed across different runs, indicating comparable performance. The significant improvement in Patch FID for the four-step distillation reflects better local image detail, aligning with qualitative improvements we observed. However, the CLIP score did decrease, which may suggest that the four-step method slightly sacrifices prompt alignment for image quality. This also happens for previous approaches with good text-to-image alignment, like SDXL-Turbo.
[1] Ren, Yuxi, et al. "Hyper-sd: Trajectory segmented consistency model for efficient image synthesis." arXiv preprint arXiv:2404.13686 (2024). | Summary: This paper introduces DMD2, a few-step distilled generator to achieve fast sampling while maintaining the decent generation quality of the multi-step diffusion models. DMD2 proposes several new improvements to the training procedure of the original DMD, including (1) replacing the regression loss with the Two-Time scale Update Rule (TTUR) to stabilize the training process, (2) incorporating the standard GAN loss to achieve better quality, (3) and utilizing the backward simulation to alleviate the potential mismatch of training and inference. Built upon these modifications, DMD2 achieves excellent results on few-step image generation.
Strengths: 1. The resource-intensive process of generating noise-image pairs is replaced by a simple TTUR strategy.
2. DMD2 achieves SOTA results on one-step image generation on ImageNet64 and shows its effectiveness on distilling SDXL into few steps.
Weaknesses: 1. It would be better to include a detailed training algorithm to clearly showcase the modifications over the original DMD training process.
2. In practice, the real data used to train the (teacher) diffusion models may not be accessible to the users who hope to distill a small (student) generation model (due to privacy, storage, …). In that case, one limitation of DMD2 is the calculation of GAN loss may become inapplicable. Could you share your opinions about this matter?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In the original DMD paper, the regression loss is capable of mitigating the issue of mode collapse. Would DMD2 also suffer from this issue as the regression loss is removed and an extra GAN loss is introduced?
2. Why are the intermediate outputs of DMD2 shown in the right subfigure of Figure 3 so similar, seeming to follow a certain trajectory? As far as I know, distribution matching-based methods do not guarantee that the specific paths of the teacher diffusion model and the student generation model are aligned (as mentioned in Lines 35-39). Could the authors provide more explanations about this phenomenon (Fig 3)? Note that the samples generated by few-step consistency models may also switch between different paths and generate very different images.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please refer to the weakness and question sections above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate Reviewer aF5v's constructive feedback. Below, we address the remaining concerns.
**It would be better to include a detailed training algorithm to clearly showcase the modifications over the original DMD training process.**
Thank you for your suggestion. As you accurately summarized, our DMD2 model eliminates the need for the regression loss and incorporates a diffusion GAN loss along with backward simulation to mitigate training-inference mismatches. To clearly illustrate these modifications over the original DMD training process, we will include a detailed comparison of the training algorithms in the revised version of our paper.
**In practice, the real data used to train the (teacher) diffusion models may not be accessible to the users who hope to distill a small (student) generation model (due to privacy, storage, …). In that case, one limitation of DMD2 is the calculation of GAN loss may become inapplicable. Could you share your opinions about this matter?**
We acknowledge that DMD2 relies on access to real data to enhance diversity and image quality. However, it is important to note that the exact dataset used to train the teacher model is not required. For our training, we utilized a random set of 500,000 images from the LAION database, which is generally of lower quality than the curated aesthetic dataset of SDXL. This demonstrates that our GAN loss can be effectively applied using an alternative dataset, underscoring its versatility and universal applicability, regardless of the specific model being distilled.
**In the original DMD paper, the regression loss is capable of mitigating the issue of mode collapse. Would DMD2 also suffer from this issue as the regression loss is removed and an extra GAN loss is introduced?**
Thank you for your question. In DMD2, we demonstrate that much of the mode collapse observed in the original DMD method relates more to training issues than to an inherent inability of the DMD loss to support diverse generator training. Practically, we can assess the final performance under two scenarios:
- Class-Conditional Image Generation: We observed no mode collapse, as evidenced by the state-of-the-art FID scores for image generation (see Table 1).
- Text-to-Image Generation: This scenario is more complex. We utilize classifier-free guidance, which typically trades diversity for image quality. At high guidance settings, using the SDXL baseline, we achieved superior image quality with diversity comparable to or better than other distillation methods but slightly worse than the teacher model (see Table 6 and the new results in our Response to Reviewer **aF5v**).
Our current framework trains stably and produces excellent image quality along with comparable diversity. However, we are open to exploring future methods that might enhance this diversity/quality tradeoff by better integrating trajectory-preserving techniques (such as the more efficient consistency distillation) with distribution matching methods. We believe this is a promising direction for further research.
**Why do the intermediate outputs of DMD2 follow a certain trajectory?**
This observation was initially surprising to us as well. Although our generator does not follow the teacher diffusion model’s sampling trajectory, our training methods lead to few-step generators where the first output significantly shapes the general structure. Subsequent images tend to closely resemble this initial output. This effect is likely influenced by the relatively high signal-to-noise ratios in subsequent sampling steps, which preserve much of the structure even after noise injection. | Summary: This work identifies reasons for the training instability of one of the competitive diffusion distillation approaches based on distribution matching, using bi-level optimization, and also adopts a GAN-based feature-space feedback for improved quality. Overall, it demonstrates very good performance on SDXL and SD checkpoints, showing effectiveness on large models too.
Strengths: The paper is well written and easy to understand, with good benchmarking for large-scale models and comparisons to other distillation techniques. Overall, DMD puts fewer constraints on distillation with respect to the underlying map from noise to data space, enabling a more flexible formulation for distillation. And improving stability is useful for broader adoption of the DMD-style formulation for distillation toward practical diffusion-based applications.
Weaknesses: As the authors discuss not using real data within the current formulation and setup, could there be a tendency for the model to exhibit mode collapse?
As the proposed distillation objective is not sampling w.r.t. the teacher's marginal predictive distribution nor the data distribution, it would be useful to get some diversity metric at the per-prompt level, e.g., LPIPS diversity, comparing to other distillation approaches and the teacher model.
Also, it would be useful to understand what stage of training causes this mode collapse, i.e., does small-scale training preserve diversity at the cost of some quality drop, or do we observe a consistent drop in diversity?
Technical Quality: 4
Clarity: 3
Questions for Authors: What is the setup of the ablation without backward simulation in the multi-step setting? Do you apply forward diffusion, query the student generator, and based on that query the fake score function estimator? If so, is the fake score function also trained on the equivalent forward-diffused generator predictive distribution? It is currently unclear whether it is the alignment of the fake score function to the generator or the backward simulation that results in the improved performance.
Also, given that DMD carries the implicit assumption that the fake score function captures the predictive distribution of the student generator, and the authors identify its fitting as one of the reasons for instability, it might be useful to understand how sensitive the alignment of the fake score estimation is to the student's generator. Also, how good is the quality of the fake score function at different stages of training, and what are its implications?
At the implementation level, we are starting with a pretrained model, and in the limit, if the distilled student model matches the pretrained model's weights, we are asking the fake score function to match the pretrained model again.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: This work builds on DMD and improves stability, with good engineering practice as the primary contribution and limited formulation novelty or insights. So more insights on hyperparameter sensitivity, on why and how different design choices affect final performance, diversity, etc., as discussed above, would make it more useful for the community and a stronger contribution!
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate Reviewer aF5v's constructive feedback. Below, we address the remaining concerns.
**As the authors discuss not using real data within the current formulation and setup, could there be a tendency for the model to exhibit mode collapse?**
There may be some misunderstanding regarding our use of real data. In fact, our model does incorporate real data during training via the GAN loss component. This inclusion enhances diversity and helps to prevent mode collapse. The positive impact of integrating GAN loss with real data is evident in the improved performance detailed in Tables 3 and 4.
**As the proposed distillation objective is not sampling w.r.t. the teacher's marginal predictive distribution nor the data distribution, it would be useful to get some diversity metric at the per-prompt level, e.g., LPIPS diversity, comparing to other distillation approaches and the teacher model.**
Thank you for your suggestion! We have indeed conducted a diversity assessment using LPIPS diversity metrics, which is presented in Table 6 of the appendix. Our results indicate that our model's diversity is on par with other distillation approaches, such as the latent consistency model, and significantly exceeds that of purely GAN-based methods like SDXL-Turbo. While our model shows slightly worse diversity compared to the teacher model, it offers substantially better image quality, as illustrated in Figure 5. We believe this often represents a more advantageous trade-off for practical text-to-image applications.
**Also, it would be useful to understand what stage of training causes this mode collapse, i.e., does small-scale training preserve diversity at the cost of some quality drop, or do we observe a consistent drop in diversity?**
Thank you for your suggestion! In response, we retrained our SDXL model and monitored diversity metrics throughout the training process. The results from this new run showed a small diversity improvement over those reported in our paper. Specifically, the model displays higher diversity at the beginning, which gradually diminishes as image quality improves. Despite this trend, the final diversity scores of the model remain closely comparable to those of the teacher model (0.63 vs 0.64 for the teacher), indicating a well-maintained balance between diversity and image quality.
| Train Iter | LPIPS Diversity |
| - | - |
| 0 | 0.25 |
| 4k | 0.65 |
| 8k | 0.64 |
| 12k | 0.65 |
| 16k | 0.64 |
| 20k | 0.63 |
| SD Teacher Baseline | 0.64 |
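For reference, the per-prompt diversity numbers above have the following general shape (an illustrative sketch; the function name is ours, and `dist` stands in for LPIPS, though any pairwise image distance shows the computation):

```python
import numpy as np
from itertools import combinations

def per_prompt_diversity(samples_by_prompt, dist):
    """Average pairwise distance between samples generated for the same
    prompt, then average over prompts. With dist = LPIPS this has the
    shape of the LPIPS-diversity score reported in the table above."""
    per_prompt = []
    for samples in samples_by_prompt:
        pair_dists = [dist(x, y) for x, y in combinations(samples, 2)]
        per_prompt.append(np.mean(pair_dists))
    return float(np.mean(per_prompt))
```

Higher values mean the generator produces more varied images for a fixed prompt.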
**Setting without backward simulation**
The reviewer’s interpretation is correct. In the setting without backward simulation, we add noise to real images and then feed these noisy images into our generator. The output from the generator is then supervised using both the DMD and GAN loss. Additionally, the fake score function is trained based on this generator output. We will clarify this process more thoroughly in the revised version of our paper.
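In schematic pseudocode, this ablated step reads as follows (all callables are placeholders; this is an illustrative sketch, not our actual implementation):

```python
import numpy as np

def train_step_without_backward_sim(x_real, sigma, generator, dmd_loss,
                                    gan_loss, fit_fake_score, rng):
    """Ablated step: noise real images, feed them to the generator,
    supervise the output with DMD + GAN losses, and fit the fake score
    function on the same generator outputs."""
    x_noisy = x_real + sigma * rng.standard_normal(x_real.shape)  # forward diffusion
    x_gen = generator(x_noisy)                                    # query the student generator
    fit_fake_score(x_gen)            # fake score trained on the generator's outputs
    return dmd_loss(x_gen) + gan_loss(x_gen)
```

The key point is that the fake score function is fit on the same generator outputs that receive the DMD and GAN supervision.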
**how sensitive is alignment of fake score estimation to student's generator and how good is the quality of fake score at different stages of training and its implications?**
Thank you for suggesting this set of further analyses. Due to time constraints, these will be included in the revised version of our paper. Regarding the first part, we believe achieving proper alignment between the fake score estimation and the generator’s output distribution is crucial. As shown in Figure 9 of the main paper, more frequent training of the fake score—aimed at enhancing its accuracy—leads to more stable generator training and overall improved performance. We will provide a more comprehensive analysis in the revised version. For the second part, we plan to retrain an ImageNet model and monitor the performance of the fake score model at various training stages by assessing the quality of its sample outputs. We appreciate the reviewers’ valuable suggestions and look forward to incorporating these insights!
---
Rebuttal Comment 1.1:
Title: Thank you for addressing/clarifying most of concerns.
Comment: Happy to raise my score.
One thing I want to understand from the authors is how stable this formulation is when the number of steps goes from 20K to, say, 60K or more. Do you see performance peak around 20K, or does further training improve quality? Unlike the original DMD, the authors report only 20K iterations, so I was curious whether there were any interesting empirical findings which would provide further insights to the community.
This goes back to fake score function alignment: does it help to reinitialize the fake score function with the teacher model again as the student converges well toward the teacher? It seems a bit unclear what the properties/fit of the fake score function are and how it affects overall training stability.
Looking forward for more results in later version of paper.
---
Reply to Comment 1.1.1:
Comment: We are pleased that our responses have addressed your concerns and appreciate your consideration to raise the score!
Regarding your further inquiries, we currently utilize 20K iterations because this represents almost the maximum compute we can afford (64 GPUs over 3 days). However, we have not yet observed the peak performance of our models. For example, in a trial where we extended to 30K iterations, we managed to improve the FID from 19.3 to 18.7. We are eager to explore extending the training duration in our revised version to further confirm the model's stability.
The suggestion to reinitialize the fake diffusion model at a later stage is intriguing, and we look forward to experimenting with this approach. Thank you once again for your invaluable suggestions and insights! | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their constructive feedback. We are grateful for the positive reception of our work, which has been recognized for its well-founded innovations and outstanding quality. Our DMD2 model facilitates the training of a few-step generator that delivers superior image quality and diversity comparable to the teacher SDXL models. We have incorporated additional evaluations concerning human-related and diversity metrics as requested by Reviewer **Yut7** and Reviewer **aF5v**. Detailed responses to each reviewer’s specific concerns are provided below. | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper introduces an upgraded version of Distribution Matching Distillation (DMD), i.e., DMD2, which addresses the limitation and inefficiency of previous DMD and improves the performance of efficient and high-quality image synthesis using diffusion models. Specifically, the authors identify the limitations of the original Distribution Matching Distillation (DMD), such as the need for a regression loss and extensive dataset construction. DMD2 eliminates the regression loss, integrates a Generative Adversarial Network (GAN) loss, and introduces a two-time-scale update rule to stabilize training. Additionally, a new training procedure is implemented to simulate multi-step sampling, addressing the training-inference mismatch. Experimental results demonstrate that DMD2 achieves state-of-the-art performance, surpassing the original DMD and other competitive models in image quality and efficiency.
Strengths: 1. **Elimination of Regression Loss**: By removing the regression loss, DMD2 simplifies the training process and reduces computational costs, making it more scalable and flexible for large-scale applications.
2. **Integration of GAN Loss**: The incorporation of a GAN loss improves the quality of generated images by discriminating between real and generated samples, enhancing the overall distribution matching objective.
3. **Two-Time-Scale Update Rule**: This technique addresses training instability issues, ensuring that the fake score accurately tracks the generator’s output distribution, leading to stable and high-quality image generation.
4. **Multi-Step Sampling**: The introduction of multi-step sampling allows DMD2 to produce high-quality images in fewer steps, addressing the inefficiency of one-step generation while maintaining performance.
5. **Comprehensive Evaluation**: The paper provides extensive experimental results on various benchmarks, demonstrating DMD2's superior performance in both class-conditional and text-to-image synthesis tasks.
Weaknesses: I do not find a specific weakness of this paper.
Technical Quality: 4
Clarity: 4
Questions for Authors: - Training with GAN often entails numerical instability. Does DMD2 have such concerns? If it is true, could the author provide some details in overcoming the instability of DMD2?
- Besides the evaluation metric such as FID and Inception scores, how about some human-related metrics such as ImageReward or aesthetic scores? Does DMD2 show comparable results to teacher SDXL?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The paper includes limitations of the proposed method, e.g., requires multiple steps to generate on par with teacher model, e.g., SDXL.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate Reviewer Yut7's constructive feedback. Below, we address the remaining questions regarding numerical instability and human-related metrics.
**Training with GAN often entails numerical instability. Does DMD2 have such concerns? If it is true, could the author provide some details in overcoming the instability of DMD2?**
The GAN component in DMD2 does not introduce numerical instability. This stability can be attributed to our use of a diffusion GAN framework, where both the generators and discriminators are initialized from a pretrained diffusion model. Before classification, images are also treated with noise injection, enhancing stability. This method has proven more stable than traditional GANs that utilize pixel space discriminators without noise injection, as supported by several recent studies [1, 2]. Furthermore, DMD2 includes a two-time-scale update rule that bolsters stability. Additionally, DMD2 utilizes a weighted combination of DMD and GAN losses. The DMD loss consistently guides the overall structure of the images, preventing mode collapse and ensuring training stability even with the added GAN components.
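To make the noise-injection idea concrete, a minimal NumPy sketch of a discriminator loss with forward-diffusion perturbation might look as follows (illustrative only; our actual discriminator operates on features of the pretrained diffusion backbone, and the function names here are ours):

```python
import numpy as np

def softplus(x):
    return np.logaddexp(0.0, x)  # numerically stable log(1 + e^x)

def noisy_disc_loss(discriminator, x_real, x_fake, alpha_t, sigma_t, rng):
    """Non-saturating GAN discriminator loss where both real and generated
    images are perturbed by the forward diffusion q(x_t | x_0) before
    classification, in the spirit of Diffusion-GAN [1] (sketch only)."""
    x_real_t = alpha_t * x_real + sigma_t * rng.standard_normal(x_real.shape)
    x_fake_t = alpha_t * x_fake + sigma_t * rng.standard_normal(x_fake.shape)
    return float((softplus(-discriminator(x_real_t))
                  + softplus(discriminator(x_fake_t))).mean())
```

The noise injection smooths both distributions the discriminator sees, which is part of what makes this setup more stable than a pixel-space GAN.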
**Besides the evaluation metric such as FID and Inception scores, how about some human-related metrics such as ImageReward or aesthetic scores? Does DMD2 show comparable results to teacher SDXL?**
Thank you for this insightful suggestion! We have extended our evaluation of DMD2 and SDXL to include both ImageReward and aesthetic scores, using PartiPrompts [3] for consistency with our human evaluation. Below is a summary of our findings:
| Method | ImageReward | Aesthetic Score |
| - | - | - |
|DMD2 | 1.07 | 6.30 |
| SDXL | 0.86 | 6.16 |
These results demonstrate that DMD2 consistently outperforms SDXL, corroborating the findings from our main paper's FID and human evaluation results. We will incorporate these additional metrics into the revised version of our paper.
[1] Wang, Zhendong, et al. "Diffusion-gan: Training gans with diffusion." ICLR 2023
[2] Xu, Yanwu, et al. "Ufogen: You forward once large scale text-to-image generation via diffusion gans." CVPR 2024.
[3] Yu, Jiahui, et al. "Scaling autoregressive models for content-rich text-to-image generation." arXiv preprint arXiv:2206.10789 (2022)
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. I will maintain my original score. | null | null | null | null | null | null |
Any2Graph: Deep End-To-End Supervised Graph Prediction With An Optimal Transport Loss | Accept (spotlight) | Summary: This paper presents a generic framework for end-to-end supervised graph prediction. The proposed framework can take different types of data as input and learn to output graphs. The core of the framework is a novel loss function (PMFGW) that enables generalizability in different scenarios. In the end, the paper demonstrates the capability and superior performance of Any2Graph over existing solutions on synthetic datasets as well as real-world datasets.
Strengths: S1: The paper tackles an interesting and important problem: supervised graph predictions, which can potentially benefit many applications.
S2: The paper is very well-written and easy to follow. It effectively motivates the key challenge of the problem and clearly articulates the proposed idea step by step.
S3: The paper's primary contribution, PMFGW, is well-formulated. It also shows good performance across various metrics compared to prior solutions.
S4: The experiments in this paper are comprehensive and solid. They not only showcase PMFGW's high performance but also provide insights into its properties.
Weaknesses: W1: The computational performance results could be more comprehensive. While the asymptotic time complexity is provided, it would be valuable to understand how training time scales with an increase in M. This information would help readers assess the practicality of using this approach for their specific use cases.
W2: It would be interesting to understand the failure cases produced by Any2Graph.
W3: Any2Graph produces continuous outputs; it would be interesting to know their distribution. How will a different threshold, e.g., 0.1 or 0.9, affect the predicted graph?
Technical Quality: 4
Clarity: 4
Questions for Authors: Please check the weaknesses section.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The paper's contributions are unlikely to have any negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper! All your questions are very interesting and answering them helped us to significantly improve the paper.
**Weaknesses:**
> W1: The computational performance results could be more comprehensive. While the asymptotic time complexity is provided, it would be valuable to understand how training time scales with an increase in M. This information would help readers assess the practicality of using this approach for their specific use cases.
We report the training time for the different datasets in table 5 (appendix E.2) but we agree that the complexity analysis with respect to $M$ could be more detailed. Thank you for this important remark, we will fill this gap in the final version of the paper. In short: the cost of the transformer scales with $M^2$ and the cost of PMFGW scales with $kM^3$, where $k$ is the number of iterations required by the solver. We provide a novel plot for an empirical estimation of $k$ in the pdf (figure 1).
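To make the scaling concrete, here is a back-of-the-envelope per-sample cost model (the constants are arbitrary placeholders set to 1, not measured values):

```python
def per_sample_cost(M, k, c_transformer=1.0, c_solver=1.0):
    """O(M^2) transformer term plus O(k * M^3) PMFGW solver term."""
    return c_transformer * M**2 + c_solver * k * M**3

# The cubic solver term dominates quickly: once k * M^3 >> M^2,
# doubling M multiplies the total cost by roughly 8.
ratio = per_sample_cost(40, 10) / per_sample_cost(20, 10)
```

This is why the choice of the maximum graph size $M$ matters much more for the loss computation than for the transformer itself.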
> W2: It would be interesting to understand the failure cases produced by Any2Graph.
This is very important as well, thank you. We think that there are two types of failure cases. On the one hand, it can happen that the training dynamics fail to converge. We discuss this in detail in appendix F.1 along with simple methods to prevent this bad behaviour. On the other hand, there is the case where $M$ is too large, which we already discussed above.
> W3: Any2Graph produces continuous outputs; it would be interesting to know their distribution. How will a different threshold, e.g., 0.1 or 0.9, affect the predicted graph?
Once again this is an interesting point that is not discussed at all in the paper. We provide a few novel plots in the global response pdf. As you can see, the model is very robust to the choice of the threshold (tables 1 and 2)! This is because the model is quite confident in its predictions, as demonstrated in the histograms (figures 3 and 4). | Summary: The authors propose a flexible framework for end-to-end Supervised Graph Prediction (SGP), called Any2Graph, capable of handling various types of input data. This framework leverages a novel, fully differentiable, and node permutation-invariant optimal transport-based loss called the Partially Masked Fused Gromov-Wasserstein (PMFGW) loss. Unlike the Fused Gromov-Wasserstein loss (FGW), which cannot compare graphs of different sizes, the PMFGW is size-agnostic. To satisfy this property, the discrete output space is relaxed to a continuous space. A discrete graph is mapped to a continuous graph using a padding operator. The PMFGW measures the discrepancy between the predicted continuous graph and the target padded graph. Essentially, PMFGW extends FGW by adding an additional term that ensures the padding of a node is well predicted, and the partial mapping of the second and third terms. The authors introduce three propositions demonstrating that PMFGW translates all properties of FGW to the new size-agnostic graph representation. The architecture of the end-to-end SGP framework is a modification of the Relationformer, ensuring versatility in terms of input data (not only images) and modifying the operation for computing the adjacency matrix. The authors showed that Any2Graph achieved state-of-the-art prediction performance at a very low computational inference cost on four real-world datasets characterized by different types of input data.
Strengths: The paper is well-written, and both the description of the PMFGW loss and the architecture of the Any2Graph framework are clearly presented
The flexibility regarding the input data type and the size-agnostic graph representation properties provided by Any2Graph have the potential to significantly impact the supervised graph prediction task, which play an important role in several applications from graphics to neuroscience.
The sound theoretical analysis proves that the novel PMFGW loss translates all the properties of FGW to the new size-agnostic representation based on the idea of mapping each graph to the corresponding continuous padded graph.
The numerical experiments are well-conducted with respect to:
- The comparison of state-of-the-art (SOTA) methods in terms of both accuracy and computational efficiency.
- The study of the robustness of Any2Graph concerning maximum graph size and the weight of the three terms of the PMFGW loss.
Weaknesses: To improve the readability of the paper, I suggest extending the description of graph matching (section 2, paragraph 'Comparing graphs of the same size') by including a formal definition of the graph matching problem.
Technical Quality: 4
Clarity: 4
Questions for Authors: At the end of section 2, the authors state that Unbalanced OT introduces several additional regularization parameters (I believe three) that are difficult to tune, especially in scenarios like SGP, where model predictions exhibit wide variability during training. While I agree, I was wondering if the authors have conducted any preliminary experiments using a UOT-based loss.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors have adequately addressed the limitations of the Any2Graph framework. Their work does not appear to have any potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and positive feedback.
> I think to improve the readability of the paper I suggest to extend the description of the graph matching (section 2, paragraph 'Comparing graphs of the same size')
We will follow your suggestion to provide a more detailed introduction to the graph matching problem in the final version of the paper.
> **(UOT as loss)** At the end of section 2, the authors state that Unbalanced OT introduces several additional regularization parameters (I believe three) that are difficult to tune, especially in scenarios like SGP, where model predictions exhibit wide variability during training. While I agree, I was wondering if the authors have conducted any preliminary experiments using a UOT-based loss.
Thank you for this question! UOT is actually the first thing we tried and we are grateful for this opportunity to share the insights we gained from those initial attempts.
To illustrate the discussion, we can try to keep the exact same framework except that we use FUGW [1] instead of PMFGW, that is, we replace equation (5) with
\begin{align}
\texttt{FUGW}(\hat{y},y) = \min\_{\mathbf{T}\geq 0} \quad &\alpha\_f \sum\_{i,j}T\_{i,j}\ell\_F(\hat{\mathbf{f}}\_{i},\mathbf{f}\_{j})+ \alpha\_A \sum\_{i,j,k,l} T\_{i,j} T\_{k,l}\ell\_A(\hat{A}\_{i,k},A\_{j,l}) \newline &+ \rho \varphi(\mathbf{t}\_{1}, \hat{h} ) + \rho \varphi(\mathbf{t}\_{2}, h )
\end{align}
where $\mathbf{t}\_{1}$ and $\mathbf{t}\_{2}$ are the marginals of $\mathbf{T}$ and $\varphi$ is the divergence that controls the soft marginal constraints, for instance L2. Following [2], the solver solves a sequence of linear UOT problems for which a majorization-minimization (MM) algorithm is provided in [3].
This UOT variant of PMFGW looks sound, but when we tried it we ran into the following issues:
1. The FUGW solver is extremely slow, about 10x slower than the PMFGW solver. We tried to implement a GPU-batched version but the speedup was limited due to the shared convergence criterion.
2. Tuning the hyperparameters is very hard; the soft marginal parameter $\rho$ in particular is very unstable. We provide a simple illustration below for $\alpha_A = 0$ (that is, we are only trying to learn the nodes).
Denoting
$$\mathbf{T}^* = \text{argmin} \quad \alpha_f \sum\_{i,j}T\_{i,j}\ell\_F(\hat{\mathbf{f}}\_{i},\mathbf{f}\_{j})+ \rho \varphi(\mathbf{t}\_{1}, \hat{h} ) + \rho \varphi(\mathbf{t}\_{2}, h ) $$
we know from equation (7) of [3] that $T^*_{i,j} = 0$ whenever $\alpha\_f \ell\_F(\hat{\mathbf{f}}\_{i},\mathbf{f}\_{j}) > \rho (\hat{h}\_i + h\_j)$. In particular:
$$\alpha\_f \ell\_F(\hat{\mathbf{f}}\_{i},\mathbf{f}\_{j}) \geq 2\rho \quad \implies \quad T^*\_{i,j} = 0$$
This means that if $\rho$ is set too low, it can happen that the optimal transport plan is $\mathbf{T}^* = 0$. But in that case the loss backpropagated to the network is
$$\mathcal{L}(\hat{y},y) = \rho \varphi(\mathbf{0},\hat{h}) + \rho \varphi(\mathbf{0},h) = \rho \varphi(\mathbf{0},\hat{h}) + \text{cst}$$
SGD will then push the neural network toward the trivial prediction $\hat{h} = 0$.
Because of the wide variability you mention, we were never able to overcome these instabilities despite searching over a grid of parameters.
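For readers who want to reproduce this failure mode, here is a minimal numerical sketch (ours, not the FUGW solver of [2, 3]): a linear OT problem with L2 soft marginal penalties, solved by naive projected gradient descent on a toy 2x2 cost matrix. With a small $\rho$ the optimal plan collapses to $\mathbf{T}^* = 0$ exactly as derived above, while a large $\rho$ keeps the marginals close to their targets.

```python
import numpy as np

def uot_l2(C, a, b, rho, lr=1e-3, iters=20000):
    """Minimize <C, T> + rho*||T 1 - a||^2 + rho*||T^T 1 - b||^2 over T >= 0
    by projected gradient descent (a toy solver, for illustration only)."""
    T = np.full(C.shape, 0.25)
    for _ in range(iters):
        grad = C + 2 * rho * ((T.sum(1) - a)[:, None] + (T.sum(0) - b)[None, :])
        T = np.maximum(T - lr * grad, 0.0)  # project onto the nonnegative orthant
    return T

C = np.array([[1.0, 2.0], [2.0, 1.0]])  # toy transport cost (alpha_f * l_F)
a = b = np.array([0.5, 0.5])            # target marginals (h_hat and h)

T_small = uot_l2(C, a, b, rho=0.1)    # all costs exceed 2*rho: the plan collapses
T_large = uot_l2(C, a, b, rho=100.0)  # strong penalty: marginals are respected
```

Running this with the two values of $\rho$ shows `T_small` vanishing entirely while `T_large` concentrates almost all of its unit mass on the cheap diagonal.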
[1] Thual, A., Tran, Q. H., Zemskova, T., Courty, N., Flamary, R., Dehaene, S., & Thirion, B. (2022). Aligning individual brains with fused unbalanced Gromov Wasserstein. Advances in neural information processing systems, 35, 21792-21804.
[2] Séjourné, T., Vialard, F. X., & Peyré, G. (2021). The unbalanced gromov wasserstein distance: Conic formulation and relaxation. Advances in Neural Information Processing Systems, 34, 8766-8779.
[3] Chapel, L., Flamary, R., Wu, H., Févotte, C., & Gasso, G. (2021). Unbalanced optimal transport through non-negative penalized linear regression. Advances in Neural Information Processing Systems, 34, 23270-23282.
---
Rebuttal Comment 1.1:
Title: Reply to the rebuttal
Comment: I appreciate the responses, especially the detailed answer regarding UOT as a loss. I will maintain the proposed score. | Summary: This paper presents Any2Graph, a generic framework for end-to-end supervised graph prediction (SGP) with an optimal transport loss. The framework handles various input modalities and output graphs of arbitrary size and node ordering. The novel Partially-Masked Fused Gromov-Wasserstein loss is differentiable and permutation invariant, making it suitable for SGP. Numerical experiments showcase the versatility and superior performance of Any2Graph on a synthetic dataset and several real-world tasks such as map construction and molecule prediction. The paper addresses a practical challenge in deep learning models and offers a promising solution.
Strengths: Innovative Methodology: The paper introduces a novel approach to supervised graph prediction (SGP) that is end-to-end, versatile, and achieves state-of-the-art results. The proposed framework, Any2Graph, leverages a new asymmetric Partially-Masked Fused Gromov-Wasserstein loss that is differentiable and node permutation invariant.
Practical Impact: The method is demonstrated on a wide range of real-world tasks including map construction from satellite images and molecule prediction from fingerprints. The results showcase the effectiveness of the proposed approach, offering a promising solution to practical challenges in deep learning models.
Weaknesses: 1. This paper addresses the SGP problem, characterized by size agnostic, node order insensitivity, and a vast search space. As the number of nodes increases, the computational load may significantly increase. However, the discussion of this issue in the paper is relatively limited.
2. Evaluation and Ablation Studies: The evaluation results are impressive but lack sufficient ablation studies. Additional analysis on the impact of different components (e.g., loss function) would enhance our understanding of the proposed method’s strengths and weaknesses.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can an analysis of computational complexity be provided? Can a computationally feasible solution be proposed for graph generation problems with a large number of nodes (e.g., graphs with more than 100 nodes, which are commonly encountered in various data sets)?
2. Have you considered performing ablation studies or comparisons against popular baselines?
3. Can you discuss potential limitations or failure cases of your method? How might these impact real-world applications? Are there scenarios where traditional methods may outperform Any2Graph?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: As Discussed In Sec. Weakness & Sec. Limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper. We address your many interesting questions below:
> 1. Can an analysis of computational complexity be provided? Can a computationally feasible solution be proposed for graph generation problems with a large number of nodes (e.g., graphs with more than 100 nodes, which are commonly encountered in various data sets)?
We fully agree that the original version of the paper was missing a detailed computational complexity analysis, which we have now added. Thank you for this important remark. In short: the cost of the transformer scales with $M^2$ and the cost of PMFGW scales with $kM^3$, where $k$ is the number of iterations required by the solver. We provide a new plot with an empirical estimation of $k$ in the pdf. Thus, this version of Any2Graph, like Relationformer, cannot scale to graphs with hundreds of nodes. Our opinion is that the supervised prediction of small graphs (a few tens of nodes) is already a very challenging topic with many real-world applications. The goal of this first work is not to scale beyond this order of magnitude but to introduce a novel framework. Yet, we agree that this is a natural question, and it motivated us to augment the conclusion with a detailed plan to scale Any2Graph in future work (using approximate attention [1], heuristics [2] and entropic regularization [3]).
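As a side note on where the $M^3$ comes from: the quadratic (Gromov) term looks like it costs $O(M^4)$ per solver iteration, but for separable losses its gradient factorizes into matrix products. The sketch below illustrates this with the squared-loss cross term, using the standard factorization of Peyré et al.; it is a generic illustration on random matrices, not the PMFGW implementation itself.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 8
A_hat, A, T = rng.random((M, M)), rng.random((M, M)), rng.random((M, M))

# Naive O(M^4): cross[i,j] = sum_{k,l} (-2 * A_hat[i,k] * A[j,l]) * T[k,l],
# the cross term of the squared loss l_A(a, b) = (a - b)^2.
naive = np.einsum('ik,jl,kl->ij', -2 * A_hat, A, T)

# Factorized O(M^3): the same quantity as two M x M matrix products,
# which is why each solver iteration costs O(M^3) rather than O(M^4).
fast = -2 * (A_hat @ T @ A.T)
```

Both expressions agree term by term; the factorized form is what makes the $kM^3$ scaling attainable in practice.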
> 2. Have you considered performing ablation studies or comparisons against popular baselines?
This is a natural question, but we don't think it is possible to remove any part of Any2Graph without fully breaking the pipeline. For instance, if we remove the $l_h$ terms in PMFGW, the model will simply not learn to predict which nodes are activated and the accuracy will drop to 0. Yet, we would like to point out that the comparison with Relationformer can be seen as a form of ablation study, since we use the exact same architecture except for the loss. We also provide the results with and without feature diffusion, which we see as a form of ablation study even if we don't formulate it that way due to space constraints.
> 3. Can you discuss potential limitations or failure cases of your method? How might these impact real-world applications?
This is very important as well, thank you. We think that there are two types of failure cases. On the one hand, it can happen that the training dynamic fails to converge. We discuss this in detail in appendix F.1 along with simple methods to prevent this bad behaviour. On the other hand there is the case where $M$ is too large which we already discussed above.
> 4. Are there scenarios where traditional methods may outperform Any2Graph?
Split from the previous question for readability. To the best of our knowledge, surrogate regression methods are the 'traditional approach' for tackling SGP in a general fashion. Yet they operate in a different data regime than Any2Graph, as they are able to deal with scarce data but come with a high decoding cost. For instance, Any2Graph is ill-suited to the metabolite prediction task [4], which benefits from a known candidate graph set.
[1] Fournier, Q., Caron, G. M., & Aloise, D. (2023). A practical survey on faster and lighter transformers. ACM Computing Surveys, 55(14s), 1-40.
[2] D. B. Blumenthal et al. “Comparing heuristics for graph edit distance computation”. The VLDB journal
[3] Altschuler, J., Niles-Weed, J., & Rigollet, P. (2017). Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration. Advances in neural information processing systems, 30.
[4] Brogat-Motte, L., et al. (2022). Vector-valued least-squares regression under output regularity assumptions. Journal of Machine Learning Research.
---
Rebuttal Comment 1.1:
Title: Thanks!
Comment: I appreciate the response and all generally makes sense. I'll maintain my score. | Summary: This work proposes Any2Graph, an end-to-end deep learning framework for Supervised Graph Prediction (SGP) leveraging a novel OT loss called PMFGW. The model consistently achieves state-of-the-art performances across multiple graph prediction tasks and input modalities.
Strengths: - The author combines Partial Matching and OT methods, using permutation and padding operator to consistently predict and compare graphs of arbitrary sizes.
- The author uses a variety of datasets to illustrate that Any2Graph has consistent and strong performance on different modal inputs, including noisy images, real world satellite images and texts (molecular fingerprints).
Weaknesses: - There are some inconsistent notations in the paper. For example, the loss function for node features is denoted as $l_F$ in Eq. (1), while the author uses $l_f$ in the next sentence.
- The author emphasizes the importance of the maximum node size $M$ that strongly influences efficiency and expressiveness of the model. However, the analysis in Section 5.3 shows $M=16$ is sufficient to tackle the problem, indicating that the datasets may be relatively simple and cannot fully reflect the prediction ability on more complex graphs with higher orders of magnitude of nodes.
Technical Quality: 3
Clarity: 3
Questions for Authors: - The author uses PMFGW as the evaluation metric at the graph level, which is also the training objective of the model. The comparison may be not fair in that case since the loss functions $l_h, l_f, l_A$ and weights $\alpha$ can be arbitrarily chosen. Please correct me if I am wrong.
- In Section 4, the author mentions that threshold 1/2 is chosen. I wonder whether and how the threshold influences the model's capability. Please show me some experimental results on one or more datasets.
- The datasets may be relatively simple since $M=16$ is sufficient to tackle the problem. Can you give some results on more complicated datasets (\emph{i.e.}, with higher orders of magnitude of nodes)?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: - As mentioned in the paper, the main limitation is its scalability to graphs of larger size.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the in-depth review of our paper. We also thank you for having pointed out typos. They will be corrected in the final version of the paper. We answer your questions below:
> The author uses PMFGW as the evaluation metric at the graph level, which is also the training objective of the model. The comparison may be not fair in that case since the loss functions $l_h, l_F, l_A$ and weights can be arbitrarily chosen. Please correct me if I am wrong.
You are absolutely correct. This is why we never use PMFGW as an objective metric in the experiments. That being said, we were very careful to report meaningful values in table 1 by fixing $l_h, l_F, l_A$ and the weights for each task; the values used are those reported in appendix E.2. We will make this clear in the final version of the paper.
> In Section 4, the author mentions that threshold 1/2 is chosen. I wonder whether and how the threshold influences the model's capability. Please show me some experimental results on one or more datasets.
This is an interesting point. We provide a few plots with different thresholds in the global response pdf. As you can see the model is very robust to the choice of the threshold as it is quite confident in its prediction.
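To make the decoding rule concrete, here is a minimal sketch of how a discrete graph can be read off the model's continuous outputs by thresholding at $\tau = 1/2$; the tensor names are ours and this is an illustration of the rule, not the exact implementation.

```python
import numpy as np

def decode_graph(h_hat, A_hat, tau=0.5):
    """Extract a discrete graph from continuous predictions.

    h_hat: (M,) node activation probabilities
    A_hat: (M, M) edge probabilities
    tau:   decision threshold (1/2 in the paper)
    """
    nodes = np.flatnonzero(h_hat > tau)        # keep activated nodes
    A = A_hat[np.ix_(nodes, nodes)] > tau      # threshold edges among them
    np.fill_diagonal(A, False)                 # no self-loops
    return nodes, A.astype(int)

h_hat = np.array([0.95, 0.90, 0.10, 0.85])
A_hat = np.array([[0.0, 0.9, 0.2, 0.1],
                  [0.9, 0.0, 0.3, 0.8],
                  [0.2, 0.3, 0.0, 0.4],
                  [0.1, 0.8, 0.4, 0.0]])
nodes, A = decode_graph(h_hat, A_hat)
```

Robustness to the threshold then simply amounts to the predicted probabilities being saturated near 0 or 1, which is what the plots in the global response show.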
> The datasets may be relatively simple since $M=16$ is sufficient to tackle the problem. Can you give some results on more complicated datasets (i.e., with higher orders of magnitude of nodes)?
Our opinion is that the supervised prediction of small graphs (a few tens of nodes) is already a very challenging problem with many real-world applications. The goal of this first work is not to scale beyond this order of magnitude. That being said, your question motivated us to train Any2Graph on a dataset with graphs of size up to $50$ (figure 2). Still, we understand that scaling to larger graphs is a natural question. Thus we have enriched the conclusion of the paper with more ideas for scaling Any2Graph in future work (using approximate attention [1], heuristics [2] and entropic regularization [3]).
[1] Fournier, Q., Caron, G. M., & Aloise, D. (2023). A practical survey on faster and lighter transformers. ACM Computing Surveys, 55(14s), 1-40.
[2] D. B. Blumenthal et al. “Comparing heuristics for graph edit distance computation”. The VLDB journal
[3] Altschuler, J., Niles-Weed, J., & Rigollet, P. (2017). Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration. Advances in neural information processing systems, 30.
---
Rebuttal Comment 1.1:
Comment: I greatly appreciate the detailed explanations and additional experiments provided in the attachment. The results demonstrate that the method is suitable for more complicated graphs with $M=50$ (though it would be better to show concrete values of the metric in the final version) and is robust to different thresholds as well. After reading the comments of other reviewers and the author's rebuttal, I'd like to change the rating to 7.
Wish you all the best. | Rebuttal 1:
Rebuttal: First we would like to thank the reviewers for their mostly positive reviews with constructive questions.
The majority of the comments are positive, with many reviewers finding the paper well written (**CHfS**, **bW5k**, **ct8H**), well positioned in the literature (**CHfS**) and with a nice potential impact on SGP and applications (**ct8H**, **bW5k**). Reviewers also noted that the numerical experiments are solid (**ct8H**, **bW5k**) and demonstrate very good empirical performance for a wide range of modalities (**CHfS**, **YNCo**, **9w7t**), all this with very low computational inference cost (**bW5k**). The novel proposed loss PMFGW remains simple (**CHfS**) but also relies on a sound theoretical analysis (**bW5k**).
On a slightly more reserved note, **CHfS** wondered about the limited novelty of the PMFGW loss, but we believe that those small changes are precisely what made the large gain in performance possible, which constitutes an important contribution to the community. Reviewers also asked questions about the scalability of the method (**YNCo**, **9w7t**, **ct8H**). We acknowledged that the computational complexity analysis was not detailed enough and thus provided new experiments to fill this gap. Any2Graph is suited to the supervised prediction of graphs with a few tens of nodes (we provide a novel example with 50 nodes) and reviewers agree that this is enough for many practical applications (**ct8H**, **bW5k**). Yet we understand that scaling to larger graphs is a natural question. For that reason, we enriched the conclusion of the paper with additional ideas for scaling Any2Graph in future work.
Pdf: /pdf/07f8d4f667613d4eae8878e93fa7ca0abacf67fe.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This work aims to design an end-to-end pipeline for structured graph prediction (SGP). The proposed loss, PMFGW, is an extension of Fused Gromov Wasserstein to generate graphs with bounded arbitrary sizes, along with a standard pipeline carefully modified from Relationformer. It is empirically verified on public tasks and a novel synthetic dataset, Coloring, with outstanding performance.
Strengths: 1. The solution is straightforward with good empirical performance: the extension is clear as penalizing loss for misalignment of padding dependent on graph sizes, and the empirical performance is outstanding against the related work, Relationformer.
2. The presentation is well written: for each contribution and novel design, the details are closely accompanied with reasons of design and necessary discussion about related works, which makes it easy to position this work in the literature.
Weaknesses: One may argue that a clearly successful paper could have more novelty, than proposing a loss with an additional term, the significance of which I am not sure about. The positive side is that the simple design has good empirical performance, and perhaps could inspire other researchers to simply add such a term to solve graph tasks with different sizes.
Hence, I would like to provide a score of weak accept for now, and see how the other reviewers think about this potential argument.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could you please reveal more why the reported results about Relationformer in Table 1 are largely different from their original ones? I suppose the reason may be as written around line 278 ''use the same architecture for both approaches...'', but I still want to make sure the difference is well understood. Is there any other issue here, like data split?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper! Note that we moved the following discussion in the global response as it might interest the other reviewers:
> One may argue that a clearly successful paper could have more novelty, than proposing a loss with an additional term, the significance of which I am not sure about. The positive side is that the simple design has good empirical performance, and perhaps could inspire other researchers to simply add such a term to solve graph tasks with different sizes.
We would like to further explain our point of view here. We agree that PMFGW is a relatively straightforward extension of an existing OT problem, but using it for SGP is novel and we hope that the significant gain in performance of such a natural framework will help spark more interest in SGP. This is also why we provide tools for benchmarking future methods (synthetic datasets and a set of metrics). Finally, note that we explored significantly more complex approaches with weaker results (see the UOT discussion with reviewer **bW5k**).
> Could you please reveal more why the reported results about Relationformer in Table 1 are largely different from their original ones? I suppose the reason may be as written around line 278 'use the same architecture for both approaches...', but I still want to make sure the difference is well understood. Is there any other issue here, like data split
This is an important point indeed. You are correct, the main difference comes from the architecture: we use a simple ResNet-18 while Relationformer leverages a ResNet-50 and a complex multi-level attention scheme. This is very specific to image inputs and we did not use it, as we wanted to show the generality of our approach across data modalities. The data splits, however, are exactly the same (those of the original Toulouse and USCities papers). Finally, the definition of some metrics differs between the two papers. The graph-level metrics reported in Relationformer rely on the TOPO and SMD scores, which are heavily specific to Sat2Graph tasks. We believe that their node-level and edge-level metrics are similar to ours, but no exact definitions are given in the Relationformer paper. In any case, the final table we provide is a fair comparison relying on the same backbone and a set of well-defined, task-agnostic metrics.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. After reading the response, I will maintain my score for ''Technically solid, moderate-to-high impact paper''. | null | null | null | null | null | null |
Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing | Accept (poster) | Summary: This paper introduces ALPHALLM, an imagination-searching-criticizing framework designed for self-improvement. Inspired by AlphaGo, the authors integrate MCTS and LLMs to establish the self-improvement loop. Additionally, the authors propose eta-MCTS, a decoding method used to reduce the search space. The experiments show that ALPHALLM can noticeably improve LLMs' reasoning ability, especially when combined with eta-MCTS decoding.
Strengths: 1. It’s innovative to use tree search to do self-improvement and it’s good to identify some challenges very clearly.
2. The experiments show that the reasoning ability of LLMs can be improved effectively.
Weaknesses: 1. It could be more clear in some parts and including some examples would be helpful. For example, in the part of 4.2 data synthesizing, I’m curious about how the synthesized data looks like.
2. Authors identify that one of the challenges in working on LLMs' self-improvement is the difficulty of getting clear evaluations, and they propose some methods to obtain them, i.e., the value function, PRM, and ORM. But they do not discuss whether these methods can reliably provide good feedback.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please refer to the part of weaknesses.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Limitations are sufficiently discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer's insightful feedback. Your recognition of our novelty, clear identification of challenges, and the effectiveness of our results is highly encouraging to us.
---
> **[W1]** It could be more clear in some parts and including some examples would be helpful. For example, in the part of 4.2 data synthesizing, I’m curious about how the synthesized data looks like.
We understand that some sections of the paper, such as the data synthesizing part in section 4.2, could benefit from further clarification and examples.
To address your query about the synthesized data, we used the method described in [1] to generate synthesized questions. Here's an example synthesized based on GSM8K:
```
Question: Sandy's monthly phone bill expense is equal to ten times her age now. In two years, Sandy will be three times as old as Kim. If Kim is currently x years old, calculate Sandy's monthly phone bill expense.\nIf we know the answer to the above question is 340, what is the value of unknown variable x?\n
```
We hope this example provides a clearer understanding of how our data synthesizing process works. We are open to further discussions to improve the clarity of our work.
[1]. Yu, Longhui, et al. "Metamath: Bootstrap your own mathematical questions for large language models." arXiv preprint arXiv:2309.12284 (2023).
---
> **[W2]** Authors identify that one of the challenges in working on LLMs’ self-improvement is the difficulty to get clear evaluations, and they proposed some method to get the evaluations, i.e. value function, PRM, ORM. But they do not discuss whether these methods are reliable to get perfect feedback.
We appreciate your insightful comment regarding the reliability of the evaluation methods such as the value function, PRM, and ORM for obtaining ideal feedback in the context of LLMs' self-improvement. In our study, we have utilized a combination of these evaluation methods to ensure better calibration results.
The accuracy obtained on the GSM8K dataset for each critic is as follows:
- Value: 0.832
- PRM: 0.841
- ORM: 0.877
As a combination of them, we have achieved the following calibration scores:
- ECE: 0.1319
- AUROC: 0.958
- Brier Score: 0.0429
These results indicate that our proposed evaluation methods have provided reasonably good feedback for LLMs' self-improvement. We appreciate the suggestion and will continue to explore more reliable and robust evaluation techniques to enhance the self-improvement capabilities of LLMs in future research.
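For completeness, the three calibration metrics reported above can be computed as follows. This is a generic numpy sketch on toy scores, shown only to make the metric definitions explicit; it is not the exact evaluation code used in the paper.

```python
import numpy as np

def brier(y, p):
    """Mean squared error between predicted probabilities and binary labels."""
    return np.mean((p - y) ** 2)

def ece(y, p, n_bins=10):
    """Expected calibration error with equal-width probability bins."""
    bins = np.minimum((p * n_bins).astype(int), n_bins - 1)
    err = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            err += mask.mean() * abs(y[mask].mean() - p[mask].mean())
    return err

def auroc(y, p):
    """Probability a random positive outranks a random negative (ties count 1/2)."""
    pos, neg = p[y == 1], p[y == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

y = np.array([0, 0, 1, 1])          # toy labels (answer correct or not)
p = np.array([0.1, 0.2, 0.8, 0.9])  # toy combined critic scores
```

On these toy scores the Brier score is 0.025 and the AUROC is 1.0, reflecting well-separated critic outputs.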
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply and these address some of the questions. I will maintain my score.
---
Reply to Comment 1.1.1:
Title: Thank you for your feedback
Comment: Thank you and we appreciate your feedback. | Summary: This paper proposes an imagination-searching-criticizing approach called ALPHALLM to enhance the capabilities of large language models (LLMs). ALPHALLM integrates Monte Carlo Tree Search (MCTS) with LLMs to establish a self-improving framework.
Strengths: - This paper introduces a novel approach to LLM self-improvement using MCTS, presenting an interesting concept.
- The paper is well-written, and the proposed imaginative-searching-criticizing approach is clearly explained.
- Using only 7.5k/3k final math answer annotations and after just two iterations of self-improvement, ALPHALLM achieves impressive results: 92.0 on GSM8K and 51.0 on MATH. These results are remarkable.
Weaknesses: - Given only final math answer annotations, ALPHALLM essentially performs the final-label (reward) classification problem and tree search based on the predicted reward to generate rationales for each final answer. It is still unclear how ALPHALLM could outperform other LLM-SFTs that utilize both rationale annotations and final answer annotations.
- ALPHALLM is a general framework. The authors should apply ALPHALLM to other tasks where learning signals are clear to demonstrate the overall effectiveness of this self-improvement framework.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Have you ever tried using your value/reward function to perform RL fine-tuning with PPO?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback and insightful questions. We are encouraged by your approval of the novelty and effectiveness of ALPHALLM, as well as the clarity of this approach.
---
> **[W1]** Given only final math answer annotations, ALPHALLM essentially performs the final-label (reward) classification problem and tree search based on the predicted reward to generate rationales for each final answer. It is still unclear how ALPHALLM could outperform other LLM-SFTs that utilize both rationale annotations and final answer annotations.
Our primary motivation stems from the observation that as LLMs continue to evolve, achieving human parity in numerous tasks, the necessity and quality of explicit rationale annotations in datasets such as GSM8K and MATH become points of consideration.
- The rationale annotations provided in datasets like GSM8K and MATH, while useful, are not necessarily optimal or superior to those that can be generated through advanced methods like MCTS guided approaches. The quality and suitability of these human-provided rationales can vary, and they might not always align with the nuanced reasoning capabilities of advanced LLMs. Our hypothesis is that the rationales generated by ALPHALLM, guided by MCTS, are potentially more robust and aligned with the model’s training process than those annotated in GSM8K and MATH.
- ALPHALLM utilizes final math answer annotations in conjunction with tree search based on predicted rewards to derive rationales. This method allows the model to explore a wider range of reasoning paths and select the most plausible ones based on calculated rewards, rather than being confined to potentially suboptimal human-provided annotations.
- As presented in our experimental results, ALPHALLM demonstrates superior performance compared to WizardMath, which utilizes over 96k data points for math-specific training, including both rationale and final answer annotations
---
> **[W2]** ALPHALLM is a general framework. The authors should apply ALPHALLM to other tasks where learning signals are clear to demonstrate the overall effectiveness of this self-improvement framework.
We appreciate the reviewer’s suggestion regarding the application of ALPHALLM to a broader range of tasks. We are actively working on extending our framework to other STEM reasoning tasks to further validate its effectiveness. Preliminary results are promising, and we are committed to sharing a comprehensive report on these additional tasks in the near future!
---
> **[Q1]** Have you ever tried using your value/reward function to perform RL fine-tuning with PPO?
We have not yet explored vanilla PPO but optimizing LLMs using trajectories from ALPHALLM with PPO/DPO is indeed on our roadmap for future work. To provide a comparative analysis, we included the results of Best-of-N ranked by our ORM in the current study. Best-of-N is often considered an alternative to PPO, particularly when N is sufficiently large. As demonstrated in Table 5 in the submission, ALPHALLM significantly outperforms Best-of-N, highlighting the effectiveness of our MCTS approach. This substantial margin of improvement underscores the potential of our method over traditional approaches like PPO, even before we integrate them.
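For clarity, the Best-of-N baseline amounts to the following few lines; `generate` and `orm_score` below are toy deterministic stand-ins for the LLM sampler and the trained ORM, not real components.

```python
from itertools import count

def best_of_n(prompt, generate, orm_score, n=16):
    """Sample n candidate solutions and keep the one the ORM ranks highest."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=orm_score)

# Toy deterministic stand-ins for the LLM sampler and the trained ORM.
_step = count(1)
generate = lambda prompt: prompt + " step" * (next(_step) % 4 + 1)
orm_score = len  # a real ORM would score correctness, not length

best = best_of_n("Q:", generate, orm_score, n=8)
```

With N large enough, this reranking scheme is often treated as a strong alternative to PPO, which is why it serves as the comparison point in Table 5.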
---
Rebuttal 2:
Comment: Thank you for your response. Below is my feedback:
- To support the hypothesis that the rationales generated by ALPHALLM are more robust and aligned, the author should provide additional examples of these rationales and analyze the differences between human-annotated rationales and those generated by ALPHALLM. Without this, the theoretical strength of ALPHALLM remains unclear.
- After reading comments from Reviewers SX7Z and PPNK, I agree with their concerns about the implementation details. I also believe that the code should be provided to allow for verification of each component of your method.
Due to the main concerns raised above, I have decided to lower my score. However, given that the experimental results are indeed remarkable, I still support the acceptance of this paper.
---
Rebuttal Comment 2.1:
Title: Thank you for your feedback
Comment: Thank you for the constructive feedback! We agree that a specific analysis between human-annotated rationales with those generated by ALPHALLM is valuable. Indeed, we have already provided evidence in the fine-tuning results. For instance, in Table 2 (around line 313), we show that ALPHALLM, trained with self-generated rationales, outperforms LLaMA-2 70B SFT, which uses human-annotated rationales. This empirical evidence could support the hypothesis that the rationales generated by ALPHALLM are more effective. We also agree that providing additional examples of these rationales and analyze the differences could provide deep insights. We will definitely include this analysis in the appendix in the next version.
Regarding the implementation details, we have included additional ablations and details in our response to Reviewer PPNK and Reviewer SX7Z. We kindly ask you to review those additional results to see if they address your concerns. We understand that some details (e.g. Appendix A.6) might be overlooked without a thorough review of the appendix. To further ensure the reproducibility of our results and to facilitate the verification of each component of our method, we are also committed to providing the codebase associated with ALPHALLM. Thank you for your understanding! | Summary: The paper proposes AlphaLLM, a tree-search enhanced framework with a few improvements over Data Synthesizing, option-level MCTS, Importance-Based Adaptive Branching, state merging, fast rollout, critic function and policy improvement process. Experimental results verify the framework's effectiveness on GSM8k and MATH.
Strengths: 1. The technical contribution seems solid. The paper proposes a comprehensive framework and addresses modifications/improvements across the full pipeline of tree search.
2. Experimental results demonstrate great potential for the proposed algorithms.
Weaknesses: 1. The writing is not clear enough. It is not clear to me how the option level is implemented to separate sentences (i.e., how the termination function is determined).
2. There are many different components in the pipeline design, but the ablation studies are not sufficient to validate them one by one. For example, they should at least include:
2.1 Comparison with other heuristic search methods, for example: (1) the beam search + value function described in [1, 2]; (2) majority vote or reranking with ORM.
2.2 Wall time or token consumption comparison between different methods.
2.3 The ablation studies seem too simple to show how the different components really influence the performance. Pure with/without results are not enough. More experiments on how to control the hyperparameters of these components are required: for example, how different heuristic functions influence State Merge, how different fast-rollout models influence efficiency, how the threshold of the ORM during data generation influences the improvement process, etc.
Reference
[1] Feng, Xidong, et al. "Alphazero-like tree-search can guide large language model decoding and training." arXiv preprint arXiv:2309.17179 (2023).
[2] Yu, Fei, Anningzhe Gao, and Benyou Wang. "Outcome-supervised verifiers for planning in mathematical reasoning." arXiv preprint arXiv:2311.09724 (2023).
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. What is the difference between value function training and PRM training? From the description it seems that the PRM's training is exactly the same as value function training in Monte-Carlo estimation (except you tend to take more simulations starting from a fixed state).
Also, it seems that the technique described in the critic section has largely already been described in many previous works: value function training in [1], PRM training in [2, 3], and ORM in [4]; the authors need to clarify more about their own contributions and make proper citations.
2. One difference between your work and [1] is that they leverage the value function itself to backward (like AlphaZero) while you are using the fast model to roll out to the terminal state, which is like the initial version of AlphaGo (which had a fast-move network) and [5]. What do you think of these differences, and why did you make this design choice?
Reference
[1] Feng, Xidong, et al. "Alphazero-like tree-search can guide large language model decoding and training." arXiv preprint arXiv:2309.17179 (2023).
[2] Wang, Peiyi, et al. "Math-shepherd: A label-free step-by-step verifier for llms in mathematical reasoning." arXiv preprint arXiv:2312.08935 (2023).
[3] Lightman, Hunter, et al. "Let's verify step by step." arXiv preprint arXiv:2305.20050 (2023).
[4] Uesato, Jonathan, et al. "Solving math word problems with process-and outcome-based feedback." arXiv preprint arXiv:2211.14275 (2022).
[5] Hao, Shibo, et al. "Reasoning with language model is planning with world model." arXiv preprint arXiv:2305.14992 (2023).
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See questions and weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **[W1]** It is not that clear how the option-level is implemented to separate sentences
Thank you for your feedback regarding the clarity of our writing. To clarify, the termination function for options operates differently depending on the dataset:
- For the GSM8K dataset, the termination condition occurs at the end of each line. This is based on the typical structure of this dataset where each line represents a distinct step or point.
- For the MATH dataset, due to its complexity and the base model's tendency to generate many '\n\n' line breaks with some less meaningful content between them, termination occurs at the end of a line if a formula pattern is detected. During inference, if '\n\n' is encountered, we perform a rule-based check for formula patterns. It terminates if a pattern is found or continues generating until the next '\n\n'.
We will include the clarifications in the revision.
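As an illustration of the rule-based termination check described above, the following is a minimal sketch; the formula-detection regex is a hypothetical simplification, not the authors' actual rule set:

```python
import re

# Hypothetical formula pattern: an operand followed by a math operator or '=',
# e.g. "3x = 30" or "36 = 3(x + 2)". Illustrative only.
FORMULA_PATTERN = re.compile(r"[0-9a-zA-Z)\]]\s*[=+\-*/^]\s*[0-9a-zA-Z(\[]")

def should_terminate_math(segment: str) -> bool:
    """Terminate an option at '\\n\\n' only if the segment contains a formula."""
    return bool(FORMULA_PATTERN.search(segment))

def split_into_options(generation: str) -> list[str]:
    """Greedily merge '\\n\\n'-separated chunks until a formula is seen."""
    options, current = [], []
    for chunk in generation.split("\n\n"):
        current.append(chunk)
        if should_terminate_math(chunk):
            options.append("\n\n".join(current))
            current = []
    if current:  # trailing text with no formula remains one open option
        options.append("\n\n".join(current))
    return options
```

For GSM8K the analogous check would simply terminate at every newline, per the dataset's one-step-per-line structure.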
---
> **[W2.1, W2.2]** The ablation studies are not enough. 2.1 Other heuristic search methods; 2.2 Token consumption
In Table 5 of the original submission, a basic comparison of accuracy and the number of rollouts for self-consistency (majority-vote) and re-ranking methods for both GSM8K and MATH datasets was included. Following the suggestion, beam search (BFS) has also been incorporated into the experiments on GSM8K, as shown in the table below:
Method | #Responses | #Tokens | Acc
---|---|---|---
Greedy | 1 | 127 | 57.8
Maj-Vote | 10 | 1272 | 67.4
Maj-Vote | 30 | 3788 | 74.2
Maj-Vote | 50 | 6332 | 75.4
Re-ranking | 10 | 1272 | 80.8
Re-ranking | 30 | 3788 | 86.3
Re-ranking | 50 | 6332 | 87.7
BFS | - | 2430 | 80.6
ηMCTS | - | 1521 | 87.0
ηMCTS | - | 6360 | 88.9
Token consumption is estimated by tracking MCTS rollouts and multiplying by the average tokens per rollout for Llama2-70b on GSM8K. The table indicates that ηMCTS outperforms and is more efficient than the other baselines.
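For reference, the majority-vote (self-consistency) baseline in the table above reduces to sampling several reasoning paths and returning the most frequent final answer; a minimal sketch:

```python
from collections import Counter

def majority_vote(final_answers: list[str]) -> str:
    """Self-consistency baseline: return the most common final answer
    across several independently sampled chains of thought."""
    return Counter(final_answers).most_common(1)[0][0]
```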
---
> **[W2.3]** 2.3 The ablation studies seem to be too simple
In addition, we have conducted ablation studies on GSM8K with base Llama2-70b to further demonstrate the influence of various components on the performance:
**State Merge**: The impact of the choice of heuristic functions (w/ hyperparameters) or model-based state merge is not very significant.
Method | Threshold | Acc
---|---|---
Edit distance | 20 | 86.8
Edit distance | 50 | 87.0
Cosine Similarity (TF-IDF) | 0.7 | 86.3
Model-based (Llama-2-70b-chat) | N/A | 86.7
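A minimal sketch of the edit-distance variant of state merging shown in the table; the Levenshtein implementation and the keep-or-merge policy are illustrative, not the authors' exact procedure:

```python
def levenshtein(a: str, b: str) -> int:
    """Standard dynamic-programming (Wagner-Fischer) edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def merge_states(candidates: list[str], threshold: int = 20) -> list[str]:
    """Keep a new node only if it is more than `threshold` edits away from
    every node kept so far; near-duplicates are merged into the kept node."""
    kept: list[str] = []
    for text in candidates:
        if all(levenshtein(text, k) > threshold for k in kept):
            kept.append(text)
    return kept
```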
**Fast-rollout**: Number of rollouts: as the number of fast rollouts increases, there is a corresponding improvement in performance, due to the reduction in the variance of the estimates. We used n=4 in our experiments for a better trade-off between performance and efficiency.
#rollout| Acc
---|---
1|85.9
4|86.5
8|86.7
**Fast-rollout model**: Using Llama-2-70b instead of Abel-7B-002 improves performance by reducing bias from a smaller model, but Abel-002-7B is faster with similar computational resources due to higher concurrency and quicker processing.
Model | Acc| Speed (s)
---|---|---
Abel-002-7B|87.0|16.8
Llama-2-70b|87.3|38.1
**ORM**: During training, each question is sampled multiple times, and the ORM predicts the correctness of each trajectory as True or False. The ORM's score (probability of outputting True) is used in the data generation process, along with the scores from value and PRM, eliminating the need for a threshold.
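The ORM score described above (the probability of the model outputting 'True') can be sketched as a two-way softmax over the 'True' and 'False' verdict tokens; the assumption that only these two logits are compared is illustrative:

```python
import math

def orm_score(logit_true: float, logit_false: float) -> float:
    """Probability mass on 'True' under a softmax over the two verdict tokens."""
    m = max(logit_true, logit_false)  # subtract max for numerical stability
    e_t = math.exp(logit_true - m)
    e_f = math.exp(logit_false - m)
    return e_t / (e_t + e_f)
```

This continuous score can then be combined with the value and PRM scores directly, which is why no hard threshold is needed.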
**Adaptive branching**: With adaptive branching, the performance score is 84.9, whereas without it, using a fixed number of 4 branches, the score drops to 79.5.
---
> **[Q1]** What is the difference between value function and PRM training? Also, please clarify your contributions and provide proper citations in the critic section.
Value function training and PRM training share some similarities, but they differ in task formulation, model architecture, and training loss.
Value function training focuses on estimating the expected FUTURE return of a given state or state-action pair. In contrast, PRM evaluates the quality of the current state or node. While PRM ideally requires quality labels for each state, due to the high cost and time involved in obtaining these, MC estimation is used as a proxy.
The Value Function model extends from a policy model by adding a value head and is trained using regression loss to predict expected returns, a continuous variable. PRM, trained using LM loss, evaluates the quality of states/nodes and benefits from the contextual understanding of language models. The differences in architecture and training result in distinct behaviors: the Value Function model has better precision (0.82 vs. 0.62) and calibration (ECE: 0.032 vs. 0.375), while PRM has superior recall (0.90 vs. 0.79).
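As a toy illustration of the two objectives contrasted here (scalar values and probabilities stand in for the actual per-state model outputs; this is not the authors' training code):

```python
import math

def value_loss(pred_return: float, mc_return: float) -> float:
    """Value head: regression (squared error) against the MC-estimated
    expected future return of the state."""
    return (pred_return - mc_return) ** 2

def prm_lm_loss(p_true: float, label_is_true: bool) -> float:
    """PRM: LM-style cross-entropy on the 'True'/'False' verdict token
    that judges the quality of the current step."""
    p = p_true if label_is_true else 1.0 - p_true
    return -math.log(p)
```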
Finally, our contribution lies in utilizing a trio of critics, each with strengths in different aspects, to provide reliable signals for guiding MCTS. We will restructure the critic section to clarify these points further, and will also ensure to appropriately cite the references [1,2,3,4] in that section to acknowledge their contributions to this field.
---
> **[Q2]** Why choose to include the fast-rollout?
We use rollouts to the terminal state, because empirical evidence shows that the value network's accuracy improves with detailed information closer to the terminal state. For example, an early version of the value network had 89.5% accuracy at the terminal state and 83.2% accuracy three steps before, indicating more reliable predictions near the terminal state. Estimations at the end of rollout trajectories have lower bias but higher variance, so using the mean score from multiple rollouts offers low-bias, low-variance estimations. To mitigate bias from fast rollouts of a smaller model, we also keep value estimations at the actual progress point. Additionally, using the outcome reward model through fast rollouts provides extra signals that enhance learning, making our approach more robust and effective.
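The estimate described here (averaging outcome scores over several fast rollouts, while also keeping the value estimation at the current node) might be sketched as follows; the equal mixing weight is a hypothetical choice:

```python
def node_value(value_estimate: float, rollout_scores: list[float],
               alpha: float = 0.5) -> float:
    """Blend the value network's estimate at the current node with the mean
    outcome score of several fast rollouts to the terminal state: the mean
    over rollouts lowers variance, keeping the value estimate limits the bias
    introduced by the smaller fast-rollout model."""
    mean_outcome = sum(rollout_scores) / len(rollout_scores)
    return alpha * value_estimate + (1 - alpha) * mean_outcome
```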
---
Rebuttal 2:
Title: Rebuttal Supplement: References
Comment: Due to the word limit, we included the references used in the rebuttal for reviewer PPNk in this comment.
We would also like to greatly thank the reviewer PPNk for the valuable feedback and insightful questions. We appreciate the recognition of our technical contributions and the potential of our algorithm. The reviewer's questions and suggestions have been crucial in improving the clarity of our work.
## Reference
[1] Feng, Xidong, et al. "Alphazero-like tree-search can guide large language model decoding and training." arXiv preprint arXiv:2309.17179 (2023).
[2] Wang, Peiyi, et al. "Math-shepherd: A label-free step-by-step verifier for llms in mathematical reasoning." arXiv preprint arXiv:2312.08935 (2023).
[3] Lightman, Hunter, et al. "Let's verify step by step." arXiv preprint arXiv:2305.20050 (2023).
[4] Uesato, Jonathan, et al. "Solving math word problems with process-and outcome-based feedback." arXiv preprint arXiv:2211.14275 (2022).
---
Rebuttal Comment 2.1:
Comment: Despite the differences in architecture and training loss, the clarification of the formulation difference still leaves me confused:
"Value function training focuses on estimating the expected FUTURE return of a given state or state-action pair. In contrast, PRM evaluates the quality of the current state or node. While PRM ideally requires quality labels for each state, due to the high cost and time involved in obtaining these, MC estimation is used as a proxy."
Since you are using MC estimation, this PRM is doing exactly the same thing as estimating the expected future return of a given state or state-action pair, so it should fall back to value function training.
---
Reply to Comment 2.1.1:
Title: Thank you for your feedback
Comment: Thank you for your insightful comment! We agree that the training objective of the MC proxy is indeed the same as that of the value function, and both estimated values should converge to the same quantity as the number of MC samples and the amount of data used for training the value function go to infinity. The reason we adopted this approach is the high cost and time involved in obtaining quality labels like [1] for each state. Therefore, we adapted the PRM used in [2,3] as a proxy.
In practice, this approach has proven to be useful. The value function exhibits better precision and calibration, while PRM has superior recall. By integrating these models, we observed an overall performance improvement from 84.9% to 85.9% on GSM8K.
In future work, we plan to explore additional methods to obtain more accurate estimations of the quality of the current step. We appreciate your valuable feedback and thank you once again for your contribution to our research.
[1] Lightman, Hunter, et al. "Let's verify step by step." arXiv preprint arXiv:2305.20050 (2023).
[2] Wang, Peiyi, et al. "Math-shepherd: A label-free step-by-step verifier for llms in mathematical reasoning." arXiv preprint arXiv:2312.08935 (2023).
[3] Jiao, Fangkai, et al. "Learning planning-based reasoning by trajectories collection and process reward synthesizing." arXiv preprint arXiv:2402.00658 (2024). | Summary: The paper introduces a method for self-improvement of LLMs called AlphaLLM. The method consists of three components:
Generation of expert trajectories
Effective Monte-Carlo Tree Search over the LLM outputs (ηMCTS)
A series of critics providing reliable reward signals (value function, Process Reward Model, Outcome Reward Model)
The method can be used as a rollout mechanism to generate high quality outputs. To help deal with the large branching factor, the authors propose three subcomponents:
State space is set to Options, which is neither token space nor sentence space, but a variable length sequence which terminates based on an auxiliary heuristic beta.
States are merged using another auxiliary function p_vM
Fast rollouts are used with a smaller LLM to help estimate.
The authors then demonstrate incredibly strong empirical results on both MATH and GSM8K. Training with the proposed method on LLaMA-2-70b is able to achieve 51% on the MATH dataset.
Strengths: The method seems to work well!
The discussion on state merge and options helps express difficulties in applying MCTS.
The paper seems to combine many subcomponents to produce a strong algorithm.
The branching factor is well-motivated.
Weaknesses: Lots of experimental details are missing. There is very little discussion of how AlphaLLM is trained or what base model it is (it's only mentioned in the introduction!). In particular, no information is given about the actual training of either the policy (LLM), the Critics, or the heuristic functions p_vM.
The explanation of the options is lacking - even looking at A.2 the explanation of options is unclear. Is Beta learnt as a function or generated in some other way? Are the set of options I, learnt before a rollout or generated on the fly?
Ablations are missing - in particular using different state methods, removing the Importance-Based Adaptive Branching, State Merge or Option-Level MCTS.
The prompts in the appendix are reduced so we can’t replicate.
No Code provided.
Weird choice of baselines - surely, to keep evaluations consistent with other LLMs, you should allow models to generate the same amount of output tokens with access to the PRM or ORM? I find it hard to believe CoT on Claude-2 is a fair comparison, given the amount of optimisation pressure applied to multiple components of the method (including gradient updates).
There is no example of a rollout provided.
Technical Quality: 2
Clarity: 3
Questions for Authors: - What policy was used in the end for the fast rollout?
- The prompt templates are missing key parts - [A detailed rubric that specifies how to evaluate a step of a task] would be helpful if shared. - In particular, when comparing to GPT-4 or other models, was an equally informative prompt used? (Or at least a prompt with an equal number of input tokens.)
- If the synthetic data generated is used few-shot by llama2, how does your model perform?
- I think the related work is missing some key papers such as: [https://arxiv.org/abs/2203.14465] (the STAR family of papers)
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The paper in its current form is not replicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Your suggestions have helped us a lot to improve the work, though we believe there may be some misunderstandings about our method. We hope the following clarifications and additional evidence will make a re-evaluation possible.
---
> **[W1]** Lots of experimental details are missing. There is very little discussion of how AlphaLLM is trained or what base model it is (it's only mentioned in the introduction!). In particular, no information is given about the actual training of either the policy, the Critics, or the heuristic functions p_vM
Model and training details: As detailed in **Appendix A.6** (line 609-624) in the original submission, the base models used are Llama-2-70b for GSM8K and WizardMath-70B-V1.0 for MATH, respectively. More details regarding the training (LR, warm-up, etc.), critics (data and model), and the fast-rollout are also provided in **A.6**.
Heuristic functions: As mentioned in **Sec 4.3.3**, the heuristic function p_vM can either be a faster rule-based measurement (e.g. edit distance) or a model-based method (e.g. prompting a LLM). For our experiments, we used edit distance. We also include additional ablation studies demonstrating that the performance is not very sensitive to the choice of p_vM:
Method | Threshold | Acc
---|---|---
Edit distance | 20 | 86.8
Edit distance | 50 | 87.0
Cosine Similarity (TF-IDF) | 0.7 | 86.3
Model-based (Llama-2-70b-chat )| N/A | 86.7
---
> **[W2]** The explanation of the options is lacking. Is Beta learnt or generated in some other way? Are the set of options I, learnt before a rollout or generated on the fly?
The termination function $\beta$ can be either learnt or rule-based. In practice, for GSM8K, we used a newline character (\n) as the termination condition, as shown in **Table 4**. For MATH, the termination condition occurs when a line with a formula pattern is found, with line breaks indicated by '\n\n'. During inference, if the generated output contains '\n\n', we check for formula patterns using rules. It terminates if a pattern is detected or continues until another '\n\n' appears.
The set $I \subseteq S$ represents a set of initial states (not a set of options). It can be the state of a question or a previously terminated step.
---
> **[W3]** Ablations are missing - in particular using different state methods, removing the Adaptive Branching, State Merge or Option-Level MCTS.
We have included ablations for different state methods, including PRM, fast-rollout with ORM, state merge, a large number of rollouts, tool-augmented ORM, and option-level MCTS in **Table 3**, with a discussion in **Sec 5.3**. Additional ablations on adaptive branching are included here:
- With adaptive branching: 84.9
- Without adaptive branching: 79.5
---
> **[W4]** No Code provided.
We plan to release the code after it undergoes a legal review by our entity.
---
> **[W5]** The prompts in the appendix are reduced so we can’t replicate.
All the prompt templates will be included with the code release. These templates are detailed in Appendix A.5, and the specific rubrics for PRM and ORM are provided in Section A of the comment below.
---
> **[W6]** Weird choice of baselines - surely to keep evaluations consistent with other LLMs, you should allow models to generate the same amount of output tokens with access to the PRM or ORM? I find it hard to believe COT on Claude-2 is a fair comparison, when the amount of optimisation pressure applied to multiple components of the method.
Our primary focus in this paper is to explore the self-improvement capabilities of LLMs. The experimental results demonstrate that our method, AlphaLLM, can achieve significant self-improvement. Moreover, we observed that the performance of Llama-2-70b, when it employs our proposed ηMCTS, is comparable to that of Claude-2 and GPT-4, highlighting the potential for self-improvement.
We included proprietary models mainly to illustrate the potential of our self-improvement method. The definition of a "fair comparison" can vary. Proprietary models, such as the latest versions of Claude-2 and GPT-4, are trained with unknown data quantities and types, often iteratively refined. By default, these models already incorporate chain-of-thought processes. It is plausible that proprietary models leverage System-2 results to enhance System-1 outputs, complicating direct comparisons.
In addition, we have demonstrated superior performance compared with recent work [1] that also uses MCTS with a similar token budget. For example, AlphaLLM achieved 51.0 on MATH while [1] achieved 34.3.
We hope this clarifies our approach and rationale behind the baseline choices. We welcome further suggestions to improve our evaluation methodology.
---
> **[W7]** There is no example of a rollout provided.
Please refer to the example in the Section B in the comment.
---
> **[Q1]** What policy was used in the end for the fast rollout?
As mentioned in **Appendix A.6**, the fast-rollout model is Abel-002-7B.
---
> **[Q2]** The prompt templates are missing key parts - In particular when comparing to GPT-4 or other model perform was an equally informative prompt used? (Or at least a prompt of equal input tokens).
Please refer to the responses for [W5] and [W6].
---
> **[Q3]** If the synthetic data generated is used few-shot by llama2, how does your model perform?
The synthetic data would still have the same question format as [2] but with answer steps from MCTS. Therefore, Llama2 with few-shot prompting would exhibit similar performance to that observed on the questions in [2].
---
> **[Q4]** I think the related work is missing some key papers such as the STAR family of papers
We acknowledge that we missed the STaR papers and will include them in the related work section. Although we have cited some similar papers [3,4], we appreciate the importance of adding this relevant literature.
---
Rebuttal 2:
Title: Rebuttal Supplement: Templates, Examples and References
Comment: Due to the word limit, I am providing the prompts, examples, and references used in our rebuttal for reviewer SX7Z in this comment.
## A. Rubrics in Prompt Templates
As mentioned in our response to **[W5]**, the prompt templates for PRM and ORM are detailed in Appendix A.5; the specific rubrics for PRM and ORM are:
- PRM:
```
You are given a math problem, followed by a step-by-step reasoning process. Your task is to read the problem carefully, understand the solving steps, and check the correctness of the last reasoning step. Output 'True' if the last step is correct, and 'False' otherwise.
```
- ORM:
```
Assess a solution including final answer to a given math problem by following below steps.\n- Evaluate the method used for solving the problem.\n- Review each calculation step for accuracy. Check for computational errors, incorrect formula applications, or arithmetic mistakes.\n- The solution should use all the information provided in the question.\n- Examine the final answer for correctness, considering the calculations and method used.\n.
```
## B. Rollout Example
Here's a detailed rollout example addressing **[W7]**:
Consider the following GSM-like question:
```
Question: Sandy's monthly phone bill expense is equal to ten times her age now. In two years, Sandy will be three times as old as Kim. If Kim is currently x years old, calculate Sandy's monthly phone bill expense.\nIf we know the answer to the above question is 340, what is the value of the unknown variable x?\n
```
A node in the second layer could have the following content:
```
Answer: We know that Sandy's monthly phone bill is 10 times her age. In two years, Sandy will be 3 times as old as Kim. The sum of Sandy's age now and 2 years is 3 times the sum of Kim's age now and two years.\nSandy's age now is 340/10 = <<340/10=34>>34. In two years, Sandy's age will be 34 + 2 = <<34+2=36>>36.\n
```
The parent of this node has the content:
```
Answer: We know that Sandy's monthly phone bill is 10 times her age. In two years, Sandy will be 3 times as old as Kim. The sum of Sandy's age now and 2 years is 3 times the sum of Kim's age now and two years.\n
```
And one of its fast-rollout paths could be:
```
The sum of Sandy's age now and 2 years is 36. The sum of Kim's age now and two years is x + 2.\n36 = 3(x + 2)\n36 = 3x + 6\n3x = 30\nx = 10\n#### 10
```
## C. Reference
[1]. Zhang, Dan, et al. "ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search." arXiv preprint arXiv:2406.03816 (2024).
[2]. Yu, Longhui, et al. "Metamath: Bootstrap your own mathematical questions for large language models." arXiv preprint arXiv:2309.12284 (2023).
[3]. Li, Xian, et al. "Self-alignment with instruction backtranslation." arXiv preprint arXiv:2308.06259 (2023).
[4]. Guo, Hongyi, et al. "Human-instruction-free llm self-alignment with limited samples." arXiv preprint arXiv:2401.06785 (2024).
---
Rebuttal Comment 2.1:
Title: Thank you for your comments.
Comment: I thank the authors for these comments and apologise for not seeing the training section earlier.
My concerns about the writing and the lack of proper ablations have been reinforced by the other reviewers and, I believe, still stand.
I still think your baselines are wrong; you want to show that, relative to other methods, your method either generates more informative datapoints (e.g., compared to Lightman's process-based supervision) or that this is just a method for bootstrapping from a weak learner (in which case I imagine using BoN with these new value functions should get a similar result).
The paper is still of weak quality in my opinion, but the results are truly exciting. I think the paper would be much stronger if the authors made a deliberate effort to tidy up the writing and compare to baselines with their base models.
I'm updating my score to an accept because the result is really good, but this paper still lacks clear communication and does not generate much knowledge about what is happening here.
---
Reply to Comment 2.1.1:
Title: Thank you for your feedback
Comment: Thank you for your constructive suggestions! We truly appreciate your feedback and will work on improving the writing to make the paper clearer and more accessible.
Regarding the ablations, could you please specify any particular ablations you feel may be missing? In addition to the ablations we included in our submission (with and without certain components), we have also provided more detailed ablations, including hyperparameter controls, in our response to Reviewer PPNk. We would be happy to consider incorporating any additional specific ablations you might suggest.
As for the baselines, we have included a similar discussion in Section 5.3 and Figure 2. The results demonstrate that, for self-improvement, using ηMCTS for data collection outperforms Reranking (BoN) in various aspects, including the improved policy itself, as well as when combined with reranking and ηMCTS in both iterations 1 and 2. We believe this evidence shows that our method generates more informative data points.
Thank you again for your valuable feedback. We will keep refining our paper to enhance its clarity and overall presentation. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
An Image is Worth 32 Tokens for Reconstruction and Generation | Accept (poster) | Summary: This paper proposes a novel way to tokenize images to benefit image reconstruction and generation. This paper argues that the convention of compressing images into 2D latent spaces in the VQVAE/VQGAN setting limits the VQ model’s ability to fully exploit the redundancies present in images.
During encoding, a fixed number of latent tokens are concatenated with the image patches. A vision transformer (ViT) is used to distill the representation from the image patches into the latent tokens. During decoding, the latent tokens are concatenated with a set of mask tokens. Another ViT is used to reconstruct image patches from the latent representation. However, instead of training the encoder and decoder end-to-end, this paper first trains the model with the discrete codes generated by an off-the-shelf MaskGIT-VQGAN model, and then only fine-tunes the decoder with RGB pixels. During image generation, MaskGIT is used to generate the latent tokens.
The model is tested on 256x256 ImageNet for image reconstruction, generation and classification.
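The encode/decode flow summarized above can be sketched in terms of sequence shapes; the transformer internals, token representations, and concatenation order are illustrative assumptions, not the paper's implementation:

```python
def encode(patch_tokens: list, num_latent: int, vit_encoder) -> list:
    """Concatenate K learnable latent tokens to the patch sequence, run the
    encoder, and keep only the K latent positions as the 1D representation
    (which is then quantized against the codebook)."""
    latent_init = [f"latent_{i}" for i in range(num_latent)]
    out = vit_encoder(patch_tokens + latent_init)
    return out[len(patch_tokens):]          # K latent tokens

def decode(latent_tokens: list, num_patches: int, vit_decoder) -> list:
    """Concatenate one mask token per patch position with the latent tokens
    and reconstruct the patch sequence from the decoder output."""
    mask_tokens = ["mask"] * num_patches
    out = vit_decoder(mask_tokens + latent_tokens)
    return out[:num_patches]                # reconstructed patches
```

The key point of the 1D design is visible in the shapes: the image is carried entirely by the K latent tokens (e.g., K=32), not by a 2D grid of patch tokens.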
Strengths: 1. The proposed method is simple yet effective.
2. Scaling up the tokenizer seems able to achieve even more compact image representations.
3. The experiments on image classification is very interesting. As the size of the latent representation decreases, the tokenizer increasingly learns semantically rich representations.
Weaknesses: 1. More effort should be put into writing the Two-Stage Training section. It is not clear how exactly the "warmup" works. Does it train the whole encoder-decoder but with the frozen codebook used in VQGAN? Or does it mean the model is trained on the quantized values of VQGAN instead of on the RGB pixels, i.e., the input and output of the model are the quantized values of VQGAN?
2. In the ablation test of 2D variant of TiTok-B64, how many tokens are used in this 2D representation? What if we use the same number of tokens for 1D and 2D variants? Will the performance be similar?
Technical Quality: 4
Clarity: 4
Questions for Authors: How important is the off-the-shelf codebook? If the MaskGIT-VQGAN codebook is replaced by some other codebook, say FSQ, what would the performance look like? Can we use a continuous autoencoder instead of VQGAN as the outside compressor?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The limitation and potential negative societal impact have been discussed in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1: More details on two-stage training?**
Please see ***"General Questions and Concerns - Details of two-stage training."*** At the warm-up stage, the model's input is still RGB images, and output is the proxy codes, with cross-entropy loss for supervision.
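The warm-up objective stated here (RGB image in, proxy codes out, cross-entropy supervision) could be sketched as follows; the per-token logit lists and the use of a frozen MaskGIT-VQGAN to produce the proxy codes are illustrative assumptions:

```python
import math

def warmup_loss(pred_logits: list[list[float]], proxy_codes: list[int]) -> float:
    """Mean cross-entropy between TiTok's predicted distribution over codebook
    indices and the proxy codes produced by a frozen MaskGIT-VQGAN on the
    same image (one logit vector and one target code per output token)."""
    total = 0.0
    for logits, target in zip(pred_logits, proxy_codes):
        m = max(logits)  # log-sum-exp with max subtraction for stability
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        total += log_z - logits[target]
    return total / len(proxy_codes)
```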
> **W2: Ablation test of 2D variant of TiTok-B64?**
We thank the reviewer for the question. Both TiTok-B-64 (2D) and TiTok-B-64 (1D) use the same number (64) of tokens, except that the 2D variant keeps the patch tokens as the latent grid representation and directly reconstructs images from these 2D tokens (similar to a common 2D tokenizer), whereas the 1D variant uses the latent tokens to guide the reconstruction from the mask token sequences. This experiment demonstrates that the 1D formulation shows superior performance over its 2D counterpart, especially when the number of tokens is limited.
> **Q1: How important is the off-the-shelf codebook? Continuous autoencoder instead of VQGAN as the outside compressor?**
Please see ***"General Questions and Concerns - Reviewer HiP7 Q1: How important is the off-the-shelf codebook? Can we use a continuous autoencoder?"***
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification. Since most reviewers were confused about the two-stage training, this part is apparently very poorly written. I strongly suggest that the authors revise the manuscript.
---
Reply to Comment 1.1.1:
Title: Thanks for Your Review and Support
Comment: Thank you so much for the valuable suggestions and for considering our responses! We will revise and refine the confusing parts in the two-stage training as discussed in the rebuttal as suggested. If you need any further information or clarification, please feel free to contact us! | Summary: This paper introduces TiTok (Transformer-based 1-Dimensional Tokenizer) that can tokenize images as compact 1D sequences instead of 2D latent grids. Accompanied with bidirectional non-autoregressive image generator, TiTok achieves SOTA performance on imagenet 256x256 benchmark with much faster generation process. The key contributions are:
1) A 1D tokenization method that can represent images using significantly fewer tokens than traditional 2D approaches without compromising performance (as few as 32 tokens);
2) Scaling up the size of the tokenizer model helps to learn more compact and semantically rich latent representations;
3) Significant speed-up for both training and inference with competitive performance compared with 2D tokenizers.
Strengths: 1) The idea of using a 1D representation instead of the conventional 2D one is novel. It shows that as few as 32 tokens can reconstruct an image well. The significant reduction in the number of tokens makes training and inference much more efficient. Furthermore, scaling model sizes enables a more compact representation, which provides insights into scaling laws for image tokenization.
2) The experiments are comprehensive, including different model sizes and token numbers. The ablations of different design choices are well designed. The evaluations on both image reconstruction and generation show superior performance compared with MaskGIT and VQGAN.
3) The paper is well written and easy to follow. The code and models are publicly available.
Weaknesses: A major concern is the two-stage training method. From Table 3(c), the gap with and without proxy codes is big: 1.7 vs 5.1 for rFID, and 195 vs 120 for IS. However, the proxy codes come from MaskGIT. The authors emphasize that, due to the lack of a strong training recipe, the single-stage method lags behind. Thus, it seems the training method matters more than the architecture designs.
BTW, recently there is an open-source implementation for MAGVIT2 including training codes: https://github.com/TencentARC/Open-MAGVIT2
It will be great if the authors can adopt their training method and make the single-stage training work.
Technical Quality: 3
Clarity: 3
Questions for Authors: Another related work is Seed, which also proposes a 1D tokenizer with semantic meaningful tokens (32 tokens). It's better to cite it:
Ge, Yuying, Yixiao Ge, Ziyun Zeng, Xintao Wang, and Ying Shan. "Planting a seed of vision in large language model." arXiv preprint arXiv:2307.08041 (2023).
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No negative societal impact
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1&2: two-stage training method, Improved public available tokenizer training recipe?**
Please see ***"General Questions and Concerns - Reviewer Mn4Y W1&2: Two-stage training and better single-stage recipe from Open-MAGVIT2?"***
> **Q1: Comparison to SEED?**
We thank the reviewer for the suggestion. SEED is already cited and discussed in the Related Work, L102 to L109. Specifically, BLIP, SEED, EMU, and other similar multi-modal LLMs build a tokenizer on top of a CLIP encoder, which gives highly semantic tokens. However, because CLIP models focus on high-level information, these methods can only feed the tokens into a diffusion model and reconstruct an image with high-level semantic similarity, while the layouts and details may not be well reconstructed (see Fig. 3 in the SEED paper as a reference).
---
Rebuttal Comment 1.1:
Comment: Thanks for providing the detailed rebuttal, especially running the single-stage training experiment by borrowing the training recipe from open-magvit2. I will raise my score.
---
Reply to Comment 1.1.1:
Title: Thanks for Your Review and Support
Comment: Thank you very much for your insightful feedback and for considering our responses. We're glad we could address all your concerns satisfactorily. Your support in recommending our paper for acceptance is greatly appreciated. If you have any further questions, please feel free to let us know. | Summary: This paper introduces a transformer-based image tokenizer (ViT) designed to convert 2D images into a 1D discrete sequence, named TiTok. The authors demonstrate that an image of size 256x256 can be discretized into a compact space of only 32 discrete tokens. This new tokenizer encodes semantic information, in contrast to other local 2D tokenizers. TiTok is trained using a two-stage process: initially, proxy codes are used to avoid the complexity of VQGAN loss, followed by a fine-tuning stage. Finally, they train a MaskGIT model to generate the discrete tokens, achieving an FID score of less than 2.
Strengths: - The proposed method not only demonstrates increased speed compared to previous approaches but also showcases improved quality, as indicated by the FID score and the visualization.
- Thanks to the compact space, the tokenizer learns semantic features rather than local features, which suggests potential for interesting interpolation and image manipulation applications in the future.
- The method is original and presents a novel approach to addressing the generative task using discrete tokens in a 1D sequence.
Weaknesses: - The method relies on proxy training for the tokenizer, but the motivation for this choice is unclear. The VQGAN loss (Reco, LPIPS, and GAN) is not overly complex to train, as evidenced by the availability of many open-source implementations. Utilizing proxy tokens is cumbersome since it requires a pre-trained network with the VQGAN loss.
- The paper may lack sufficient ablation studies on hyperparameters for sampling, such as the number of steps, the role of temperature, or the CFG.
- Providing uncurated visualizations of a few samples from one or multiple classes could help readers better assess the diversity generated by the model.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Figure 4 shows better gFID compared to rFID when using only 32 tokens. How is this possible? How can you achieve a better FID than the actual reconstruction?
- Given that the number of tokens to predict is low (<128) and the structure is 1D, why not use a decoder-only transformer with next-token prediction? This approach would eliminate the burden of the sampling strategy and maintain a manageable number of forward passes thanks to the tokenizer's compactness.
- I might have missed something, but how do you train TiTok with the proxy codes when the number of tokens output by the VQGAN is more than 32 and the codebook sizes are not the same?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: /
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1: Why proxy code (two-stage training) instead of VQGAN loss (recon + LPIPS + GAN)?**
It is noteworthy that the two-stage training is ***not*** necessary, as demonstrated in ***"General Questions and Concerns - Single-stage training for TiTok"***. We also show that TiTok works well with the commonly used Taming-VQGAN recipe, as shown in Tab. 3c. However, as discussed in L345 - 352, there is a performance gap between the Taming-VQGAN recipe and the MaskGIT-VQGAN recipe, where the latter has significantly better performance but no public reference or access. We adopt the two-stage training mainly to bridge the gap between the Taming-VQGAN recipe and other state-of-the-art tokenizers.
> **W2: Ablation studies on hyperparameters for sampling**
Following prior work [10, 67], we did a grid search (with step 0.5) over the sampling hyper-parameters and reported the optimal ones. We add a further ablation study based on TiTok-L-32 as suggested below (the final setting is labeled in bold; each grid entry is organized as IS/FID). The effect of using CFG or not is already provided in Tab. 6 in the submission ("w/o guidance" refers to no CFG and "w/ guidance" refers to CFG).
| guidance_scale \ temperature | 8 | 8.5 | 9 | 9.5 | 10 | 10.5 | 11 |
|------------------------------|--------------|--------------|--------------|---------------|---------------|---------------|---------------|
| **3** | 197.9/2.78 | 188.8/2.77 | 178.0/2.77 | 169.9/2.84 | 161.6/2.94 | 156.1/3.10 | 150.2/3.27 |
| **3.5** | 207.5/2.92 | 199.4/2.82 | 191.2/2.74 | 182.2/2.76 | 174.3/2.84 | 166.2/2.93 | 159.6/3.04 |
| **4** | 217.9/3.02 | 209.8/2.89 | 200.6/2.80 | 192.7/2.77 | 184.1/2.75 | 176.1/2.80 | 170.5/2.89 |
| **4.5** | 226.2/3.11 | 217.5/3.00 | 209.7/2.87 | **199.8/2.77** | 193.7/2.77 | 184.7/2.78 | 177.8/2.80 |
| **5** | 234.0/3.28 | 225.0/3.09 | 217.4/2.98 | 208.6/2.87 | 202.0/2.81 | 194.4/2.79 | 187.8/2.77 |
| **5.5** | 241.9/3.42 | 233.0/3.23 | 222.4/3.04 | 215.7/2.96 | 208.7/2.88 | 200.7/2.83 | 194.7/2.79 |
| **6** | 247.7/3.65 | 237.0/3.41 | 230.8/3.19 | 220.9/3.04 | 214.5/2.94 | 208.0/2.88 | 202.0/2.83 |
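The selection rule behind such a grid, picking the configuration with the best FID (breaking ties by higher IS), can be sketched as follows. The numbers below are made up for illustration, not the actual grid above.

```python
# Hypothetical sketch of the sampling-hyperparameter grid search described
# above: each (guidance_scale, temperature) pair maps to an (IS, FID)
# result, and we select by lowest FID, breaking ties by higher IS.
# The values here are illustrative, not the paper's numbers.
results = {
    (3.0, 8.0): (197.9, 2.78),
    (4.5, 9.5): (199.8, 2.77),
    (5.0, 11.0): (187.8, 2.77),
}

def best_config(results):
    # Compare (FID, -IS) tuples: lower FID first, then higher IS.
    return min(results, key=lambda k: (results[k][1], -results[k][0]))

print(best_config(results))  # (4.5, 9.5) under this toy grid
```

In practice one would also weigh IS against FID by eye, as the bolded setting in the table above suggests a joint trade-off rather than a single-metric optimum.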
> **W3: Uncurated visualizations**
Thanks for the valuable comments; we provide uncurated visualizations in the attached PDF file.
> **Q1: Why is gFID better than rFID?**
We appreciate the good question. This is because rFID is computed against the real ImageNet val set, i.e., we compute the FID score between the reconstructed val set and the real ImageNet val set. However, gFID is computed against the virtual ImageNet reference statistics from OpenAI's ADM [D] (see "Sec 2.2 Sample Quality Metrics" in their paper). This is also why the gFID of the real val set is 1.78 instead of 0. rFID and gFID are usually correlated, but they are not directly comparable. Note also that the evaluation protocols we follow are widely used by most prior works [4, 10, 35, 48, 54, 67].
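The key point of this answer is that FID depends on which reference statistics are used. A minimal sketch of the underlying Fréchet distance, restricted to diagonal covariances so no matrix square root is needed (the full metric uses covariance matrices), illustrates this; it is not the authors' evaluation code.

```python
import numpy as np

# Frechet distance between two Gaussians with diagonal covariance:
# ||mu1 - mu2||^2 + sum(var1 + var2 - 2*sqrt(var1*var2)).
# Swapping the reference statistics (as rFID vs. gFID do) changes the
# score even for the same generated/reconstructed images.
def frechet_distance_diag(mu1, var1, mu2, var2):
    mu1, var1 = np.asarray(mu1, float), np.asarray(var1, float)
    mu2, var2 = np.asarray(mu2, float), np.asarray(var2, float)
    mean_term = np.sum((mu1 - mu2) ** 2)
    cov_term = np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2))
    return mean_term + cov_term

# Identical statistics -> distance 0; a shifted reference -> nonzero.
print(frechet_distance_diag([0, 0], [1, 1], [0, 0], [1, 1]))  # 0.0
print(frechet_distance_diag([0, 0], [1, 1], [1, 0], [1, 1]))  # 1.0
```

This is why the real val set itself scores gFID 1.78 rather than 0: its statistics differ from the ADM reference statistics.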
> **Q2: Why not use auto-regressive transformer for generation?**
As discussed in the limitation section, we use the MaskGIT generation framework because it is much more efficient than diffusion models or auto-regressive models; e.g., DiT-XL/2 is a 675M model, ViT-VQGAN is a 1.7B model, and the recent VAR requires training a 2.0B model for SOTA performance, while our model using MaskGIT only trains a 177M model and achieves better or comparable performance to the aforementioned counterparts.
> **Q3: "Token number mismatch" in the proxy code training?**
Please see ***"General Questions and Concerns - Reviewer 8XiP Q3: How is TiTok trained with Proxy codes?"***
[D] Dhariwal, et.al. "Diffusion models beat gans on image synthesis." NeurIPS 2021 | Summary: Background: VQ-GAN with 2D grid of latent tokens and fixed downsampling factors.
This paper proposes to use 1D tokens instead of 2D tokens.
Key ideas:
* Redundancies: adjacent regions are similar. 2D grid explicitly couples the latents and the pixels in the same relative coordinates.
* 1D tokens are enough for discriminative tasks (classification, etc.). Maybe they are enough for generative tasks too.
Architecture:
* Transformer-based encoder (tokenizer) and decoder (de-tokenizer)
Training:
* Warm-up stage: train the 1D VQ model with the proxy codes (= discrete codes generated by an off-the-shelf MaskGIT-VQGAN model)
* Decoder fine-tuning stage: finetune the decoder with frozen encoder and quantizer
Strengths: Technical advantages:
* More freedom in designing the architecture of 1D tokenizer
* More semantic-rich image embedding // -> definition of semantic-rich? grounding experiment?
* Improvement in FID
* Faster (70x ~ 410x) generation
Thorough analyses:
* Scaling experiment -> 32 tokens are sufficient for reconstruction.
* Larger tokenizer enables smaller number of latent tokens.
* A smaller number of latent tokens has clearer semantics than a larger one, according to linear probing.
Weaknesses: W1. There is no analysis of 1D tokens. What are the advantages of 1D tokens that match the drawbacks of 2D tokens mentioned in the introduction section? What are the effects of masking a subset of 1D tokens compared to 2D tokens? The paper emphasizes the 1D latent sequence but does not analyze whether the observed characteristics (scaling, compact latent, etc.) are due to this 1D nature. A comparison with methods that further reduce the size of 2D tokens is needed.
W2. For reconstruction results, it is necessary to compare using MSE, PSNR, and SSIM. FID is not sufficient to evaluate “reconstruction.” Why do we need rFID instead of typical reconstruction metrics such as PSNR, SSIM, and LPIPS?
W3. The explanation of decoder fine-tuning is too sparse. Although not a novel contribution of this paper, it plays a significant role in performance improvement as shown in the ablation study, requiring more detailed explanation beyond just the “VQGAN training recipe.”
Misc.
Sentences could be easier to read.
- e.g., L60 ... the ViT decoder (is utilized to reconstruct -> reconstructs) the input images ...
- I recommend reading "Style, the basics of clarity and grace".
Technical Quality: 1
Clarity: 3
Questions for Authors: Q1. What are the advantages of 1D compared to 2D, and what are the potential disadvantages? Please provide a brief explanation.
Q2. Is TiTok compatible with diffusion models? What is the reason for choosing MaskGIT over diffusion models or VAR (Scalable Image Generation via Next-Scale Prediction)? Appendix mentions computational burden. Is MaskGIT cheaper than the others?
Q3. Could you discuss the relationship between TiTok and "Vision transformers need registers"?
https://arxiv.org/abs/2309.16588
Confidence: 5
Soundness: 1
Presentation: 3
Contribution: 2
Limitations: Yes in the appendix sections.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1.1: Grounding experiment and analysis of 1D tokens, advantages of 1D tokens against 2D tokens. What if we mask a subset?**
The main advantages of 1D tokens are being "semantically meaningful" and "more compact". Being "semantically meaningful" is grounded by the experiments in Fig. 4c, the FID scores in Tab. 1 and 2, and the visualization in Fig. 7, where we show that TiTok tends to learn a more semantically meaningful representation with a limited number of tokens. As for "more compact", as evidenced by Fig. 4d and Tab. 1 and 2, TiTok uses much fewer tokens, leading to a substantial generation speed-up.
Masking a subset could be challenging. Since there is no concept of a "pad token" or "placeholder" in the learned codebook (regardless of 1D or 2D), it is not feasible to "mask a subset" and then reconstruct an image. The best we can do is to randomly replace a subset of tokens with random tokens, yet it is still challenging to draw meaningful insights or comparisons between 1D and 2D tokenizers this way.
> **W1.2: Reducing sizes with 2D tokens**
We emphasize that the scaling experiments aim at studying the model's behavior under different numbers of tokens, which is not necessarily related to 1D or 2D tokens. However, the 1D token representation provides a flexible design that can use an arbitrary number of tokens, while 2D tokenizers often limit the token number to choices of $k^2$ (e.g., one of [1024, 256, 64]), making them unsuitable for studying scaling properties across different numbers of latent tokens. Besides, as shown in Tab. 3c, the 2D variant of TiTok-B using 64 tokens ($8^2$) achieves significantly worse performance than the 1D variant using the same number of tokens, indicating that 2D tokenizers are not a good choice when the number of tokens is limited.
> **W2: More evaluation metrics for reconstruction**
FID and IS are the main metrics to evaluate the reconstruction in the context of tokenizer and generator [10, 35, 54, 64]. As suggested, we report additional metrics obtained using the same code-base to ensure fairness.
| | num_tokens | rFID↓ | IS↑ | PSNR↑ | SSIM↑ | MAE↓ | MSE↓ |
|--------------|------------|------|-------|-------|--------|--------|--------|
| MaskGIT-VQ | 256 | 2.28 | 180.4 | 18.14 | 0.4386 | 0.0878 | 0.0188 |
| TiTok-L-32 | 32 | 2.21 | **195.5** | 15.88 | 0.3635 | 0.1189 | 0.0300 |
| TiTok-B-64 | 64 | **1.70** | 195.2 | 17.06 | 0.4023 | 0.1011 | 0.0234 |
| TiTok-B-64 w/ improved training recipe | 64 | 2.43 | 179.3 | **19.01** | **0.4479** | **0.0782** | **0.0154** |
| TiTok-S-128 | 128 | 1.71 | 177.3 | 17.73 | 0.4255 | 0.0929 | 0.0202 |
The slightly worse PSNR/SSIM scores of the TiTok variants match our observation that although TiTok retains the most important/salient information of the image, high-frequency details may be ignored or made up, as a trade-off for using fewer tokens. However, an improved training recipe (see ***"General Questions and Concerns - Single-stage training for TiTok"***) can significantly boost all these metrics.
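For reference, the pixel-level metrics in the table above (MSE, MAE, PSNR) can be sketched on images scaled to [0, 1]; this is an illustrative toy, not the paper's evaluation code.

```python
import numpy as np

# Toy reconstruction metrics for images in [0, 1]:
# MSE  = mean squared error
# MAE  = mean absolute error
# PSNR = 10 * log10(MAX^2 / MSE), with MAX = 1.0 here
def recon_metrics(x, x_hat):
    err = x - x_hat
    mse = float(np.mean(err ** 2))
    mae = float(np.mean(np.abs(err)))
    psnr = float("inf") if mse == 0 else 10.0 * np.log10(1.0 / mse)
    return mse, mae, psnr

# A uniform reconstruction error of 0.1 on a toy 8x8 image:
x = np.full((8, 8), 0.5)
mse, mae, psnr = recon_metrics(x, x + 0.1)
print(round(mse, 4), round(mae, 4), round(psnr, 2))  # 0.01 0.1 20.0
```

Unlike rFID/IS, these metrics reward exact pixel agreement, which explains why a tokenizer that "makes up" plausible high-frequency detail can score well on FID yet worse on PSNR/SSIM.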
> **W3: More details on decoder-finetuning:**
See ***“General Questions and Concerns - Reviewer Pxnc W3: Explanation of decoder fine-tuning?”***
> **Q1: Potential advantages and disadvantages of 1D v.s. 2D**
A 1D tokenizer allows a more flexible design and more semantically meaningful and compact tokens than a 2D one, significantly speeding up the generation process while maintaining competitive scores. It is noteworthy that one may also use the same large number of tokens in a 1D tokenizer, making it a direct alternative to existing 2D tokenizers with similar or better performance.
The main disadvantage is that 1D tokenizers are far more under-explored than 2D tokenizers, demanding more research effort to study their applications to different tasks (e.g., image editing, diffusion models, video generation, multi-modal LLMs, etc.). Besides, the training paradigm of 1D tokenizers could be further refined; e.g., while we adopt the two-stage training in the paper, we currently observe promising results from an improved single-stage training recipe, as suggested by Reviewer Mn4Y, which could be improved further.
> **Q2: Other generation models (e.g., diffusion)?**
As discussed in the limitation section, we use the MaskGIT generation framework because it is much more efficient than diffusion models or auto-regressive models; e.g., DiT-XL/2 is a 675M model, ViT-VQGAN is a 1.7B model, and the recent VAR [D] requires a 2.0B model to attain SOTA performance, while our model with MaskGIT only requires a 177M model and achieves better or comparable performance to the aforementioned counterparts.
The TiTok framework is totally compatible with the KL-VAE formulation that is widely used by diffusion models. As this paper mainly focuses on the tokenizer part, we believe the combination with diffusion models can be a promising future direction.
> **Q3: Comparison to Vision transformers need registers?**
Thanks for the suggestion. We will cite the paper (denoted ViTreg below) and discuss it in a revision. TiTok uses latent tokens as the image representation, similar to ViTreg and other prior works (a detailed discussion is available in L102-109). However, the two works have significant differences. ViTreg uses a set of latent tokens to allow a cleaner and more interpretable attention map, as the added latent tokens help *alleviate the artifacts/outliers in the original self-attention map*. In contrast, TiTok explicitly uses latent tokens to encode all the information needed to reconstruct the original image, where the latent tokens can be regarded as an information bottleneck between the tokenizer and de-tokenizer. The two works have distinctly different motivations and focuses.
> **W4: Writing improvement**
Thanks for the valuable suggestions and we will revise accordingly.
[D] Tian, Keyu, et al. "Visual autoregressive modeling: Scalable image generation via next-scale prediction." arXiv:2404.02905 (2024).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the rebuttal. Sorry for coming in late.
I am lowering my rating to BR because this paper reads as:
* We did this and that. It results in this and that.
* *We do not know why. It works.*
The core argument of this paper is using 1D tokens instead of 2D tokens. However, the paper does not explain the principle behind the advantage of 1D tokens over 2D tokens. 2D tokens are capable of representing anything that can be represented by 1D tokens, because 2D tokens have more operational freedom. I value the analysis showing various behaviors. But the advantage of 1D tokens is supported only by the behavior, not by principle or theory.
In short, I think using 1D tokens is worth publishing. But the content should be largely revised so as not to mislead about the underlying principle.
(I am open to discussion during reviewer-ac discussion period.) | Rebuttal 1:
Rebuttal: # General Questions and Concerns
We thank all reviewers for the initial positive scores and acknowledgements. We address the shared concerns below and upload additional visualization in the PDF attachment.
> **Details of two-stage training:**
To begin with, we describe the two-stage training in detail as follows:
1. In the first stage (warm-up stage), we use an off-the-shelf ImageNet-pretrained MaskGIT-VQ tokenizer to tokenize the input image into 256 tokens, which we refer to as proxy codes.
2. In the first-stage training, instead of regressing the original RGB values, we use the proxy codes as reconstruction targets. Specifically, the workflow is: RGB images are patchified and flattened into a sequence, concatenated with 32 latent tokens, and fed into TiTok-Enc (the encoder of TiTok). The latent tokens are then kept as the token representation and go through the quantizer. The quantized latent tokens are concatenated with 256 mask tokens and go through TiTok-Dec (the decoder of TiTok). The final output mask tokens are ***supervised by the proxy codes using a cross-entropy loss***.
3. After step 2 is finished, we freeze both TiTok-Enc and the quantizer, and then fine-tune only TiTok-Dec (responsible for reconstructing proxy codes) and MaskGIT-Dec (responsible for reconstructing RGB values from proxy codes) end-to-end towards pixel space, where the training losses include L2 loss, perceptual loss, and GAN loss, following the common VQGAN paradigm.
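The token flow of the warm-up stage can be sketched at the shape level as follows. Random linear maps stand in for the real ViT encoder/decoder, and the small codebook size (64) and embedding width (16) are toy choices for illustration; the token counts (256 patches, 32 latents, 256 proxy targets) follow the description above.

```python
import numpy as np

# Shape-level sketch of the stage-1 (warm-up) token flow; the "networks"
# here are placeholder linear maps, not TiTok itself.
rng = np.random.default_rng(0)
D, n_patch, n_latent, n_proxy, vocab = 16, 256, 32, 256, 1024

patch_tokens = rng.normal(size=(n_patch, D))            # patchified image
latent_tokens = rng.normal(size=(n_latent, D))          # learnable latents
enc_in = np.concatenate([patch_tokens, latent_tokens])  # (288, D)

enc_out = enc_in @ rng.normal(size=(D, D))              # stand-in encoder
latents = enc_out[-n_latent:]                           # keep only latents

codebook = rng.normal(size=(64, D))                     # toy VQ codebook
ids = np.argmin(((latents[:, None] - codebook) ** 2).sum(-1), axis=1)
quantized = codebook[ids]                               # quantized latents

mask_tokens = np.zeros((n_proxy, D))                    # 256 mask tokens
dec_in = np.concatenate([quantized, mask_tokens])       # (288, D)
dec_out = dec_in @ rng.normal(size=(D, vocab))          # stand-in decoder
logits = dec_out[-n_proxy:]                             # one per proxy code

# The stage-1 loss would be cross-entropy between `logits` and the 256
# proxy codes produced by the frozen MaskGIT-VQ tokenizer.
print(logits.shape)  # (256, 1024)
```

Note how the 32 latent tokens act as the only channel between encoder and decoder: all 256 output predictions must be produced from the quantized latents plus mask tokens, which is the information bottleneck described above.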
> **Single-stage training for TiTok:**
As shown in Tab. 3c and discussed in L345-352, two-stage training is ***not*** necessary for TiTok, which works fine with the commonly used and publicly available Taming-VQGAN recipe. In this case, the whole workflow is straightforward: TiTok-Dec directly reconstructs the images in pixel space.
However, the Taming-VQGAN recipe (developed more than 3 years ago) leads to an inferior FID score compared to state-of-the-art tokenizers, putting TiTok at a disadvantage when compared against other methods. Therefore, we propose the two-stage training so that TiTok can benefit from the state-of-the-art MaskGIT-VQGAN tokenizer, which shares a similar architecture with Taming-VQGAN but has a significantly better score (rFID 2.28 vs. 7.94).
We appreciate the reference to Open-MAGVIT2 per Reviewer Mn4Y's suggestion, as well as other recently open-sourced tokenizer training code bases such as LlamaGen (note that both were made publicly available after this submission). With these improved training recipes, we obtain a much better score under single-stage training, as shown below:
| Model | rFID | IS |
|------------------------------------------------|------|-------|
| Taming-VQGAN (256 tokens) | 7.94 | - |
| MaskGIT-VQ (256 tokens) | 2.28 | 180.4 |
| TiTok-B-64 (single-stage w/ Taming-VQGAN recipe)| 5.15 | 120.5 |
| TiTok-B-64 (two-stages recipe) | 1.70 | 195.2 |
| TiTok-B-64 (***single-stage w/ improved recipe***) | 2.43 | 179.3 |
While some minor gap to the two-stage recipe still exists, we believe it would be bridged further with better hyper-parameter tuning, etc., which we did not have the time or resources to fully explore during the short rebuttal period.
> **Reviewer Pxnc W3: Explanation of decoder fine-tuning?**
We appreciate the question; please see the detailed two-stage workflow above in ***Details of two-stage training***.
> **Reviewer 8XiP Q3: How is TiTok trained with Proxy codes?**
We appreciate the question; please see the detailed two-stage workflow above in ***Details of two-stage training***. Since the proxy codes are used as reconstruction targets and the mask token sequence formulates the de-tokenization process, the number of proxy codes and the number of latent tokens need not match in this training stage.
> **Reviewer Mn4Y W1&2: Two-stage training and better single-stage recipe from Open-MAGVIT2?**
We appreciate the feedback and reference to Open-MAGVIT2. Please see the table above in ***Single-stage training for TiTok*** for single-stage TiTok with the updated training recipe.
> **Reviewer HiP7 Q1: How important is the off-the-shelf codebook? Can we use a continuous autoencoder?**
We thank the reviewer for the valuable questions. The off-the-shelf tokenizer is important for state-of-the-art performance but not required for a usable tokenizer, as demonstrated in ***Single-stage training for TiTok***. To the best of our knowledge, MaskGIT-VQ offers both strong tokenizer performance and an open-source checkpoint (though no training recipe), which is why we use MaskGIT-VQ as the off-the-shelf tokenizer for two-stage training. Unfortunately, FSQ has not released their weights, making it challenging for us to experiment with it. In the table above in ***Single-stage training for TiTok***, we demonstrate that the off-the-shelf tokenizer can be removed while maintaining similar performance, given a training recipe similar to those used by state-of-the-art tokenizers. We hope this addresses the concerns regarding the off-the-shelf codebook.
Regarding using an off-the-shelf autoencoder such as KL-VAE, we believe it is entirely feasible. However, regressing the continuous embedding requires more careful designs (e.g., a GMM prediction head [A] or a diffusion head [B]), while using VQ models simplifies the problem to classifying proxy codes. Thus, we believe this is an interesting problem to explore in the near future.
[A] Tschannen, Michael, et al. "Givt: Generative infinite-vocabulary transformers." arXiv preprint arXiv:2312.02116 (2023).
[B] Li, Tianhong, et al. "Autoregressive Image Generation without Vector Quantization." arXiv preprint arXiv:2406.11838 (2024).
Pdf: /pdf/4341831adf0e9dffb6e3fafcd5a636df7aa9d22d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper introduces TiTok, a 1D image tokenization method that can represent images using significantly fewer tokens compared to existing 2D approaches. The key contributions are:
1. A 1D tokenization scheme that breaks the fixed grid constraints of 2D tokenizers, allowing more flexible and compact image representations.
2. A dual-stage training strategy using proxy codes to improve tokenizer performance.
3. Extensive experiments demonstrating state-of-the-art performance on ImageNet generation benchmarks while using 8-64x fewer tokens and achieving 74-410x faster inference compared to leading diffusion models.
Strengths: 1. The conclusion is shocking. 32 tokens are enough to construct a picture. Although I know that the information in the picture is very redundant, this conclusion still shocks me.
2. The effect is still good.
Weaknesses: 1. Compared with 2D tokens, the main advantage of 1D image tokens (in my opinion) is not efficiency or training data. Although 2D tokens are cumbersome, they support expansion along the 2D dimensions and have many other advantages. For applications targeting higher-resolution images and better quality, 2D tokens are still mainstream and difficult for 1D tokens to displace. Image generation may not be the main application of 1D tokens. For me, the most important value of 1D image tokens lies in their potential combination with multimodal technology. Due to their two-dimensional encoding and limited information compression, 2D tokens are difficult to combine with language models to form a multimodal language model, and the encoders of current multimodal language models have no decoder available. 1D tokens may be able to support multimodal language models and allow language models to directly output images. This is very promising. But this article has no corresponding discussion, which is a pity.
2. Although an image formed from 32 tokens is shocking, there are still many questions to answer. Compared with the case of more tokens, can the information encoded by each token be estimated? Can we know what specific information is forgotten in the process of token compression? Can 1D tokens encode 2D relationships? How do 1D tokens understand translation and rotation? Are 1D tokens similar to the concept of a high-level dictionary? These important questions are not studied in depth in this article.
3. There are too few pictures shown in this article to allow further judgment.
Technical Quality: 3
Clarity: 3
Questions for Authors: I have raised many question in weaknesses.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1.1: 2D v.s. 1D in image generation**
We appreciate the valuable insights and comments. We note that the "2D advantages" of "higher resolution images and better effects" mainly stem from more tokens being used. While the 1D formulation in this paper mainly aims at a compact tokenization, we note that when using a similar number of (perhaps slightly fewer) tokens, it is expected to work on par with, if not better than, its 2D counterparts.
> **W1.2: 1D tokenization for multi-modal large language model**
We appreciate the suggestion and strongly agree that TiTok has great potential for multi-modal large language models. However, due to limits on computational resources (e.g., the VQ-tokenizer-based multi-modal large language model Chameleon [C] by Meta requires training on 1024 A100-80G GPUs for 1 month), we mainly verify the tokenizer's effectiveness on generation tasks, following prior work [19, 35, 64, 66, 67]. We leave the application to multi-modal models as a promising future direction.
> **W2.1: Measure the information encoded by tokens**
We thank the reviewer for the valuable questions. We tried our best to provide quantitative and qualitative measurements of the information carried by 1D tokens. For example, in Fig. 4c, we show the linear probing accuracy (which correlates with the "semantics" in the tokens) as the number of tokens varies. We also provide visualizations for different numbers of tokens in Fig. 7, where we observe that the tokens tend to carry the important/salient regions when the number of tokens is limited. Although we agree it would be intriguing to measure each token individually, this may require further research effort, and we leave it for future work.
> **W2.2: What specific information is forgotten? 2D relationship, translation, rotation? High-level dictionary?**
It is noteworthy that our work is a very simple, initial attempt in this promising direction. As mentioned in W2.1, it is very challenging to directly "visualize" which role each token is responsible for, or how 2D relationships are modeled in the tokenization, although these are interesting topics for better explainability. Based on the experiments in Fig. 4 and the visualizations in Fig. 7, we observe that the model tends to forget the low-level, high-frequency details in the background. For a similar reason, although the tokenizer can handle certain levels of translation/rotation in the input images, it is tricky to figure out which tokens are responsible for this. A high-level dictionary may not be capable of faithfully reconstructing the original images, while TiTok tokens aim at a better trade-off between compact, semantic tokenization and faithful reconstruction (please also see our response to Reviewer Mn4Y's Q1 for detailed examples of reconstructing images from CLIP high-level features, which fail to reconstruct the image layout or details).
> **W3: More figures?**
We appreciate the suggestions. We provided more visualization in the attached PDF file. As promised in the paper, we will also open-source the code & model for the community to examine.
[C] Team, Chameleon. "Chameleon: Mixed-modal early-fusion foundation models." arXiv preprint arXiv:2405.09818 (2024).
---
Rebuttal Comment 1.1:
Title: Post-rebuttal review
Comment: I have read the author response and the review from other reviewers. I will raise my score to "weak accept". Thanks.
---
Reply to Comment 1.1.1:
Title: Thanks for Your Review and Support
Comment: Thank you sincerely for your valuable feedback and for taking our responses into account. We're pleased that we were able to address your concerns. We greatly appreciate your support in recommending our paper for acceptance. Should you have any additional questions, please feel free to let us know. | null | null | null | null | null | null |
Microstructures and Accuracy of Graph Recall by Large Language Models | Accept (poster) | Summary: This paper conducts a comprehensive evaluation of LLMs' capability to recall graph (sub)structures when using natural languages as the interface. Through extensive experiments on diverse graphs from different domains, it points out interesting phenomena like LLMs' underperformance in graph recall tasks. It sheds light on novel insights and inspires further research in relevant fields.
Strengths: 1. This paper is well-written and well-motivated. The idea of integrating computational social science and LLMs, especially comparing the behavior of both LLMs and human beings, is interesting.
2. This paper tackles an important problem. Despite various papers on evaluating the effectiveness of LLMs' reasoning capability for graphs, few evaluate the first step, such as LLMs' capability to accurately memorize graph structures. This paper strengthens this point and makes a valuable contribution. In section 5, the authors also show that LLMs' capability on structure-related tasks is highly correlated to their capability to recall substructures.
3. The evaluation is well-designed, with support from social science.
4. Code implementation is provided, ensuring the reproducibility of experiments.
5. Several potential directions are provided to inspire future research.
Weaknesses: 1. This paper primarily studies the case where natural language is used as the interface to represent graphs. However, many recent works [1, 2] demonstrate that using a graph encoder and multi-modal projector may be a better choice to inject graph-related knowledge into LLMs. I wonder whether we would arrive at different conclusions when using such graph language models.
2. The authors focus on the case with k=5 (line 101) in this paper, which somewhat limits the generalizability of the conclusions. I think LLMs' recalling capability may also be closely related to the length of inputs. As a result, those LLMs designed with long-context capabilities may perform better for larger graphs. Moreover, larger graphs may present more complicated substructure patterns and lead to somewhat different results.
3. I cannot find the formulas used to obtain the results for Table 1.
4. Some phenomena in this paper do not have corresponding explanations (line 327).
[1] Perozzi, B., Fatemi, B., Zelle, D., Tsitsulin, A., Kazemi, M., Al-Rfou, R., & Halcrow, J. (2024). Let Your Graph Do the Talking: Encoding Structured Data for LLMs. ArXiv, abs/2402.05862.
[2] Chen, R., Zhao, T., Jaiswal, A., Shah, N., & Wang, Z. (2024). LLaGA: Large Language and Graph Assistant. ArXiv, abs/2402.08170.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Will the conclusions be consistent when we use a multi-modal LLMs and using tokens encoded by graph models like GNNs?
2. How will the conclusion change when we vary the size of the graphs?
3. How to calculate the bias score in Table 1?
4. May you provide more explanations for some phenomena, like one in line 327?
5. There's one recent paper [1] which provides theoretical analysis of transformers' reasoning capability on graphs. It relates transformers' capabilities on graph reasoning to the computational complexity of the related tasks. In this case, I think graph recall (edge level) is also a retrieval task, and it would be interesting to discuss the relationships between the findings in these two papers.
[1] Sanford, C., Fatemi, B., Hall, E., Tsitsulin, A., Kazemi, S.M., Halcrow, J., Perozzi, B., & Mirrokni, V.S. (2024). Understanding Transformer Reasoning Capabilities via Graph Algorithms. ArXiv, abs/2405.18512.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors have discussed the limitations and potential social impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the review!
**Question 1**
We believe the conclusions will be relatively close to those of our results obtained when using domain-free narrative style to describe the graph (i.e. naming nodes as "node 1", "node 2", etc.). Our work has used this narrative style to test “Erdos-Renyi” graphs in Table 1, and to test topology structures extracted from other graph datasets in Figure 3 (a) – (c) “random” columns.
This is because the graph encoder [1,2] projects all nodes/edges to the same token space of LLMs, while leaving out all domain-relevant information. This is similar to, though not exactly the same as, one of our settings where the narrative style is domain-free, in which case the node tokens are simply extracted by dictionary lookup.
This work still makes meaningful contributions in this context, for two reasons:
- In terms of the proposed graph recall task and evaluation pipeline, they can be directly used to evaluate the “LLM+graph encoder” type of models. In fact, they are model-agnostic.
- The conclusions derived, in addition to the transferability we discussed in the first paragraph, also inform the many use cases where LLMs are used to process relational data that haven’t been (or can’t be) encoded by a graph encoder. “LLM + graph encoder” is certainly an important architectural innovation and could be increasingly popular in graph-based RAG, where an explicit graph structure has to be ready in place as part of the prompt.
**Question 2**
Please see Figure 2, 3 in the new pdf uploaded. We group the tested graphs by the number of nodes they have, into three intervals: [5, 10), [10, 20), and [20, 30]. We then report accuracy (Figure 2) and bias scores (Figure 3) for graphs in each of the three groups. These two figures essentially provide more fine-grained results for Table 1 in our paper. We observe:
- In terms of accuracy, we can see that larger graphs indeed have worse recall accuracy.
- In terms of bias scores for microstructures, the strongest trend exists with the "edge" pattern, showing that LLMs forget significantly more edges for larger graphs. For the other patterns, we do not observe a strong, consistent trend.
**Question 3**
Please refer to Section 2.1 for this. Briefly speaking, we fit a random graph model (ERGM) to the recalled graph output by LLMs. The model parameters ($\theta$) have the physical meaning of being the bias scores.
**Question 4**
Yes. Line 327 is explained by lines 328 - 332. Below are our further explanations.
We observe that (1) LLMs tend to hallucinate triangles and “Alt-2-Paths” (i.e., the 5th pattern in Figure 2 Step 6) in both graph recall and link prediction tasks; (2) LLMs tend to hallucinate “Alt-Triangles” (i.e., the 4th pattern) in link prediction task, while forgetting this pattern in graph recall task.
These indicate that LLMs do not always exhibit consistent biases between graph recall and link prediction: on some patterns, the bias across the two tasks is consistent; on other patterns, the bias across the two tasks can be opposite to each other.
We also found this observation interesting, or even counterintuitive to some extent, and that's why we mentioned that we do not have a perfect explanation yet: because of triadic closure in link prediction, we had expected LLMs to favor fewer "Alt-Triangles" in link prediction. Our observation is the opposite, however. We conjectured that the increase in "Alt-Triangles" might come from more spurious edges that get hallucinated, rather than from triadic closure over existing dense structures.
**Question 5**
Below please find our discussions of this interesting latest work (even though it came out after the submission deadline). We will also substantiate them further in our final version.
[1]’s theory indicates that graph recall, as a fundamental graph reasoning task, should be solvable by a dedicated "small" transformer-based LLM. Meanwhile, our work shows through rigorous empirical tests that the general-purpose transformer-based LLMs achieve a performance much lower than the theoretical upper bound. Therefore, there exist huge opportunities for improvement. More concretely:
- Theoretically, our proposed task of graph recall is indeed a graph retrieval task, which belongs to the simplest “D1” complexity class of problems for transformers. In other words, it is theoretically solvable by a “small” transformer (single-layer single-headed transformer with embedding dimension O(log N) where N is the length of the input node/edges sequence). This is consistent with our statement that graph recall is one of the simplest and most fundamental problems in graph reasoning.
- Empirically, [1] shows in its Table 4 that there still exists a huge gap between the best empirical performance and theoretical upper bound. For example, even fine-tuned Palm has an accuracy of computing node degree < 0.7 most of the time. This suggests that there is still huge room and opportunities to improve LLM’s graph recall performance, and that it may not be necessary to switch to new architectures beyond transformers.
Please let us know if we have addressed your concerns. Thank you!
---
Rebuttal Comment 1.1:
Comment: Thanks for your response, which addresses my concerns. I have raised my score.
---
Reply to Comment 1.1.1:
Comment: Thank you. We are glad that your concerns have been addressed. | Summary: The article discusses the issue of graph recall in large language models (LLMs) and conducts experiments on this topic. It tests the ability of LLMs to recall graphs, identifies factors influencing this ability, and examines the impact of this ability on subsequent tasks.
Strengths: 1. This article is the first to propose the scenario of graph recall. Existing methods do not yield good results for large models on graph tasks, making it difficult to identify the underlying issues.
2. It introduces an innovative method to verify the ability of graph recall. Using a heuristic approach, it evaluates the model's ability to recall graph structures.
3. The problem statement and method presentation are clear and well-organized.
Weaknesses: 1. The ability of graph recall should encompass many aspects, not just the recall of different microstructures.
2. The article mentions that the ability to recall is crucial for LLMs to complete graph tasks. However, the experimental section only examines the relationship between recall ability and link prediction.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Are there other evaluation metrics to measure recall ability? Why can't direct edge prediction be considered a manifestation of recall?
2. For some nodes with rich text information, could this also be a manifestation of recall ability, potentially affecting prediction results?
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: 1. The article provides limited explanation of how the graph recall ability of LLMs affects downstream tasks, focusing only on link prediction analysis.
2. Whether large models are sensitive to microstructures when processing graphs needs to be substantiated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the review!
**Weakness 1**
We not only study the recall's microstructures, but also study:
- the recall's accuracy (as the title suggests, and throughout the paper),
- its correlation with other prediction tasks such as link prediction (Section 5) and node classification (new experiment, see Weakness 2),
- its comparison with human behaviors (Section 3.3),
- the many factors that can potentially affect it (Section 4).
**Question 1**
> Are there other evaluation metrics to measure recall ability?
Please see "Weakness 2 & Limitation 1" below.
> Why can't direct edge prediction be considered a manifestation of recall?
Section 1, paragraphs 2-4 provide the answer. Here we summarize it as follows.
We agree with the reviewer that edge prediction is related to graph recall, and we’ve conducted correlation studies on this (Section 5).
However, edge prediction should not be considered as an inherent part of the graph recall task. Graph recall is a very different task from the downstream prediction tasks, including edge prediction, graph prediction, etc. This is because the correct answer for graph recall always exists in the prompt and can be directly extracted, which is not true for any prediction task. Please see our definition of graph recall task in paragraph 3, which also follows references [8, 33, 24, 11, 41, 7, 21] that experimented with humans.
We've followed the existing definition and focused on graph recall as a standalone task for studying LLMs because:
graph recall is both the most straightforward and fundamental step for graph reasoning; as lines 38 - 40 explain, if an LLM can’t even remember what edges it has seen, it is likely to suffer greatly in those downstream graph reasoning tasks.
Our experiment indeed shows LLMs suffer at graph recall, which is consistent with LLMs' poor performance at prediction tasks as well as other graph reasoning tasks [46, 34].
Besides, the rich literature [8, 33, 24, 11, 41, 7, 21] in human studies also suggests that graph recall, under its current definition, is a scientifically meaningful and important topic to study.
We have added an experiment on the correlation between graph recall and a node classification. Please see Figure 1 in our uploaded pdf.
**Weakness 2 & Limitation 1**
We have added a new experiment on the correlation between graph recall and node classification; please see Figure 1 in our uploaded pdf for more details. Our new experiment again shows that there exists a positive correlation (r>0) between performance in graph recall and in the node classification task, though the correlation is relatively weaker. This is mainly due to the larger difference between the nature of the two tasks (graph recall and node classification).
>The article mentions that the ability to recall is crucial for LLMs to complete graph tasks. However, the experimental section only examines ...
This argument is based on the simple reasoning that graph recall is among the most straightforward and fundamental steps for graph reasoning: if an LLM can't even remember what edges it has seen, it is very likely to suffer in downstream graph reasoning tasks (note that this trend can also be empirically observed from plots in both correlation studies just discussed). Therefore, we need to understand LLM's graph recall first, before proceeding to downstream tasks.
Besides, since LLM’s graph recall has never been studied, “relationship with downstream tasks” is only one of the many aspects of this topic that we need to examine (other aspects include accuracy, biased microstructures, comparison with humans, and factors that can affect it).
**Question 2**
We are not certain about the reviewer's definition of “rich text information”, so we have the following discussion.
- If the reviewer considers semantic information of nodes as “rich text information”, then most of the nodes in our dataset are already described with concrete semantic meanings (Appendix D). In terms of results, which have been extensively discussed in Section 4.1, we've shown that different semantics in narration (i.e., what we call “narrative styles”) have interesting effects on recall accuracy.
- If the reviewer has a different definition of “rich text information” (e.g., every node needs to have a feature vector or be associated with a long document), please let us know. We would be happy to discuss this further and/or provide more experiments.
**Limitation 2**
We are not certain about the exact meaning of LLMs being “sensitive to microstructures” as raised by the reviewer. Here are our answers based on two possible interpretations:
- *Interpretation 1: "LLM's graph recall performance is affected by microstructures of the input graph".* We have not made such a claim in our paper, which is why it has not been "substantiated". Here is further discussion: we conjecture this to be true, because if we look at the microstructures of graph samples from different datasets (Figures 5 - 8 in Appendix), we can see that they have great variations. It does seem that either having too many cliques (e.g., the protein dataset) or too many spurious edges (e.g., the random graph dataset) can lead to low performance. We would be happy to provide more experiments if the reviewer is referring to this interpretation.
- *Interpretation 2: “LLM’s recalled graphs have variations in microstructures”.* This is exactly one of the main themes which has been discussed throughout Section 3, with rigorous test procedures and numerical results that are statistically significant. If the reviewer could kindly let us know which particular statement needs to be substantiated, we would be happy to provide more discussion and/or experiment.
Please let us know if we have addressed your concerns. Thank you!
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
Please let us know if you have any further questions or concerns. Thank you very much for your time and consideration.
Warm regards,
Authors of Submission 4790
---
Rebuttal 2:
Comment: Thanks for your rebuttal which has addressed some of my concerns, and thus I would like to raise the original score to reflect it.
---
Rebuttal Comment 2.1:
Comment: Thank you for raising the score. We are glad to know that we have addressed your concerns. | Summary: This paper studies how well LLM models can recall graph structured information they have been provided with. While the core of it is a fairly straightforward evaluation of LLM graph recall, it has an interesting experimental design inspired by psychology that adds substantially to the paper. Some of the graphs used for evaluation come from real networks, which could bias the results (if, for example, an LLM's parametric knowledge had information about the nodes already).
Strengths: + interesting questions about recall ability of complex structure, with interesting results
+ interesting psychology inspired experimental design
+ nice presentation
Weaknesses: - I was left with some questions about disentangling parametric knowledge from recall (see questions below)
- The (perhaps well intentioned) sex priming assignment experiment seems out of place with the rest of the work and could raise a yellow flag for some readers. In its current form, the experiment and discussion adds little scientific value
Technical Quality: 3
Clarity: 4
Questions for Authors: One of the biggest questions I have concerns how much parametric knowledge is being used for the "in-domain" narrative setting -- Are the node names random (when the "in-domain Graph Narratives" are used?) or e.g. are they actual proteins? (if real protein names, are you assigning the correct name to each node?). Related work on temporal graph reasoning (Test of Time: A Benchmark for Evaluating LLMs on Temporal Reasoning, https://arxiv.org/pdf/2406.09170) shows a significant effect from "masking" node ids.
Any results or discussion in this area would be interesting.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the review!
**Weakness 1**
Please see Question 1.
**Weakness 2**
We agree with the reviewer and will remove this mini-study or move it to the appendix. As Section 4.3 explains, this mini-study was inspired by the previous human experiment “Sex and network recall accuracy” (our reference [7]). We included it for a comprehensive evaluation. However, the hypothesis was found to be not statistically significant.
**Question 1**
> Are the node names random (when the "in-domain Graph Narratives" are used?) or e.g. are they actual proteins?
For the in-domain narrative setting, our datasets cover both cases: (1) where parametric knowledge is potentially useful, and (2) where parametric knowledge is less useful:
- For protein networks, the protein names are not random. They are unique protein identifiers (known as the “UniProtKB/Swiss-Prot accession number” or NCBI index). Each node is assigned its real name. We also confirmed that LLMs know and precisely understand those protein identifiers. The same is true for DBLP coauthorship networks.
- For all other networks, the node names are generated and randomly assigned. For example, in traffic (geographical) networks, the node names are “bank”, “townhall”, “high school”, etc.
To further disentangle the effect of parametric knowledge and pure graph topology on graph recall, Section 4.1 introduces tests on cross-domain narratives (i.e. node names are randomly assigned from other domains or are plain indices like node 1, node 2, etc.). These tests essentially serve as an ablation study on parametric knowledge.
> Any results or discussion in this area would be interesting.
Regarding results and discussion in this area, we observe the following general trend in our numerical results:
“in-domain narratives with real names ” $\approx$ “in-domain narratives with permuted/generated names” > “cross-domain narratives”
For example, see heatmaps in Figure 3 (a) – (c), where the diagonals are the in-domain cases; the off-diagonal cells show the cross-domain ablations.
We observe that even when the LLM hasn't seen the graph in training, it does better at recalling when the narrative style hints at the true domain from which the graph is sampled. This corresponds to the "no parametric knowledge" case (or we might say the parametric knowledge factors in through a very implicit and subtle way, via the LLM's general understanding of the domain).
That said, when the LLM happens to know about the graph to recall, parametric knowledge can certainly help with graph recall. We decided not to fully exclude this case in our test because many real-world graphs do come from domains familiar to LLMs.
If the reviewer is particularly interested in the effect of parametric knowledge in protein networks, we would be happy to provide further experiments in which *real* protein names are *randomly* assigned to nodes.
Please let us know if we have addressed your concerns. Thank you!
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
Please let us know if you have any further questions or concerns. Thank you very much for your time and consideration.
Warm regards,
Authors of Submission 4790
---
Rebuttal Comment 1.2:
Comment: Thanks to the authors for their reply, I have raised my score.
> If the reviewer is particularly interested in the effect of parametric knowledge in protein networks, we would be happy to provide further experiments in which real protein names are randomly assigned to nodes.
This would be an interesting experiment to add to the appendix if it isn't too costly to run.
- An additional thought while re-reading this paper -- recent theoretical results ("Understanding Transformer Reasoning Capabilities via Graph Algorithms", Sanford et al) categorize recall (in particular, edge recall iirc) as one of the simplest operations (it doesn't require a very "large" transformer network to perform it). Any thoughts about how your results connect to this kind of theoretical framework would be insightful.
---
Reply to Comment 1.2.1:
Comment: Thank you for raising the score. We are glad that your concerns have been addressed.
We will follow your advice to add the interesting experiment to appendix. Regarding the latest theoretical results, please see our response to Reviewer QYAY in Question 5. We will also include this in our final version. | null | null | Rebuttal 1:
Rebuttal: Dear Reviewers,
We deeply appreciate your reviews. New experimental results have been provided in the attached pdf to this post. Our response to each reviewer has been posted separately.
We very much look forward to further interacting with you in the discussion period.
Warm regards,
Authors of Submission 4790
Pdf: /pdf/ab781f4932b73dbeb774a9fe8cf3cb739500899c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
TinyLUT: Tiny Look-Up Table for Efficient Image Restoration at the Edge | Accept (poster) | Summary: To address the storage explosion challenge of LUTs, this paper proposes a separable mapping strategy (SMS) and a dynamic discretization mechanism (DDM) to decompose the kernel and activation in order to reduce storage consumption. Specifically, the SMS decomposes the convolution into independent sub-operations to reduce the input entries, consequently reducing the dimension of the LUT. Additionally, the DDM explicitly compresses the quantization scales with learnable clipping parameters to decrease LUT scales.
Strengths: 1. This paper analyzes the storage explosion challenge of LUT and provides a solution by decomposing the convolution kernel and compressing the quantization scale.
2. The proposed image restoration method is efficient and effective for edge devices, as demonstrated in the experiments.
Weaknesses: 1. Although the separable mapping strategy (SMS) could significantly reduce the storage, it may lead to information loss and performance drop. As shown in Figure 3, the SMS obtains the final result using individually mapped indexes and neglects the relationship between the local indexes, which is important for convolutions.
2. The motivations and benefits of activation decomposition in the DDM are unclear. Moreover, this paper does not sufficiently discuss the difference between DDM and former activation decomposition used in SPLUT.
3. The discussion of LUT-based methods in the related works is insufficient. The key ideas and limitations of these methods should be elaborated.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. To my understanding, the proposed strategy can not be applied to transformer-based model like SwinIR. Why is SwinIR introduced in the related works, and why is it categorized under CNN methods?
2. Different from previous methods, the proposed method is not constrained by the storage. Can the proposed method achieve better performance by increasing the RF and channels?
3. Why is the denoising experiment placed after the ablation experiments? The reviewer suggests combining it with the SR evaluation.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations have been discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Weakness 1** We thank the reviewers for their comprehensive review. It must be clarified that the proposed SMS strategy will not result in information loss or performance degradation due to neglecting the relationships between local indexes. The original depthwise convolution operation can be written as follows:
\begin{equation}
F_{out} = \sum_{i=0}^{n-1}\sum_{j=0}^{n-1} {I_{i,j}*w_{i,j}}
\end{equation}
The reconstruction of local index relationships is achieved through a summation process. This process involves the multiplication of each pixel within the receptive field with its corresponding weight. The resulting products are then aggregated to form the final output.
After training the CNN model, we store the result of each pixel within the receptive field multiplied by the corresponding trained weight in the corresponding LUT.
\begin{equation}
LUT_{(i,j)}[I_{(i,j)}] = I_{i,j}*w_{i,j}
\end{equation}
Similarly, after obtaining individual input-corresponding results through separately mapped indices in SMS, the final result is constructed via summation of all outputs. The relationships between local indices are reconstructed in this process. This process is formally expressed in the next equation of our manuscript:
\begin{equation}
\hat F_{out} = \frac{1}{n^{2}} \sum_{i=0}^{n-1}\sum_{j=0}^{n-1} LUT_{(i,j)}[x_{(i,j)}]
\end{equation}
Following the discrete retrieval in SMS, a summation of the individual results is required to reconstruct the relationships among local indices. To prevent misinterpretation, we will refine Figure 3 to explicitly illustrate this crucial step in the process.
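To make the reconstruction argument above concrete, here is a minimal numerical sketch (our own illustration, not the authors' implementation; the kernel values, `n = 3` receptive field, and 8-bit input range are all illustrative assumptions). Each per-pixel LUT caches `I * w_ij` for every possible input value, and summing the lookups reproduces the direct depthwise convolution:

```python
import numpy as np

n = 3                                   # receptive-field side length (assumed)
rng = np.random.default_rng(0)
w = rng.standard_normal((n, n))         # stand-in for the trained kernel

# After training: cache I * w_ij for every possible 8-bit input value.
# Each LUT_(i,j) holds only 256 entries, so storage is n * n * 256 values.
luts = np.array([[w[i, j] * np.arange(256) for j in range(n)] for i in range(n)])

# At inference: replace multiply-accumulate with lookups plus a summation,
# which reconstructs the relationships among the local indices.
patch = rng.integers(0, 256, size=(n, n))   # one receptive field of inputs
lut_out = sum(luts[i, j, patch[i, j]] for i in range(n) for j in range(n))

direct = (patch * w).sum()              # the ordinary depthwise convolution
assert np.isclose(lut_out, direct)      # lookups + summation lose nothing
```

Note the sketch uses the exact summation from the first equation; in the paper the stored per-pixel results are additionally quantized/averaged, so the real LUT output approximates rather than exactly equals the convolution.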
**Response to Weakness 2** We appreciate the reviewer's insightful question. Our motivation for DDM stems from the limitations of the previous SPLUT approach, which was constrained to a 4 MSBs and 4 LSBs symmetric decomposition scheme for 8-bit inputs. SPLUT uses a fixed quantization scale for each layer. These constraints inherently limited model performance.
Our DDM proposes an innovative asymmetric activation decomposition strategy and adaptively finds the right quantization scale for each layer, thus significantly boosting inference accuracy and achieving a better balance between accuracy and storage.
Notably, the DDM strategy is not exclusive to our method. It can be applied to other LUT-based methods to further reduce the storage consumption.
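For context on the decomposition being discussed, the following toy sketch (our own illustration, not code from either paper) shows the fixed symmetric 4-MSB/4-LSB split that SPLUT applies to an 8-bit activation; DDM generalizes this by making the split asymmetric with learnable clipping parameters:

```python
x = 0b1011_0110        # one 8-bit activation, value 182

msb = x >> 4           # high 4 bits -> index into a 16-entry MSB LUT
lsb = x & 0x0F         # low 4 bits  -> index into a 16-entry LSB LUT

# A single LUT over the full activation needs 2**8 = 256 entries per position;
# the decomposed pair needs only 2 * 2**4 = 32, trading storage for accuracy.
assert (msb, lsb) == (0b1011, 0b0110)
assert (msb << 4) | lsb == x        # the index pair itself is lossless
```

The storage saving comes from indexing two small tables instead of one large one; the accuracy cost arises because the two partial lookups are combined after quantization rather than before.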
**Response to Weakness 3** The reviewer's reminder is very important and insightful. In response, we will augment the related work section with an explication of the methodologies presented in [1] and [2]; their key ideas and limitations will also be elaborated. In addition to summarizing the key ideas and limitations of SRLUT, SPLUT and MULUT, we will also summarize RCLUT's proposal of a plugin module that increases the RF of the model while only slightly increasing storage overhead. SPFLUT [1] proposed a LUT compression framework to balance performance improvement and storage growth. The incorporation of these references will enhance the comprehensiveness of our literature review.
**Response to Question 1** We extend our gratitude for the reviewer's meticulous examination. The inclusion of SwinIR in the related work section was motivated by its significant performance. However, upon careful consideration of the reviewer's astute observation, SwinIR does not align with the CNN-centric theme of our work, and the inappropriate description has been removed from the manuscript.
Furthermore, we restructure the categorization of methods in the related work section. Specifically, we will supplement [3] and [4] under the CNN methods subsection to provide a more comprehensive overview of CNN approaches.
We reiterate our appreciation for the reviewer's thorough and insightful feedback, which has significantly contributed to enhancing the coherence and relevance of our manuscript.
**Response to Question 2** We appreciate the reviewer's suggestion for improvement. We validated the reviewer's suggestion on the 4x SR task. As shown in Table 1, using a 5x5 RF or 32 feature channels increases accuracy with only a minor linear growth in storage overhead compared to the baseline.
We extend our sincere gratitude to the reviewers for their insightful suggestions, which have illuminated additional avenues for exploration in our method.
**Table 1 Quantitative comparisons on 4x SR**
| | Storage | Set5 | BSD100 | Manga109 |
|---------------------|---------|-------|--------|----------|
| TinyLUT-S | 37KB | 30.22 | 26.71 | 27.21 |
| TinyLUT-5x5 RF | 60KB | 30.30 | 26.75 | 27.24 |
| TinyLUT-32 channels | 136KB | 30.35 | 26.77 | 27.39 |
**Response to Question 3** The reviewer's suggestion is very important and insightful. We will revise the structure of the experimental section and combine the denoising experiments with the SR evaluation. This reorganization will significantly improve the clarity and flow of our research presentation.
[1] Li et al. "Look-Up Table Compression for Efficient Image Restoration", CVPR2024
[2] Liu et al. "Reconstructed Convolution Module Based Look-Up Tables for Efficient Image Super-Resolution", ICCV2023
[3] Lai et al. "Fast and Accurate Image Super-Resolution with Deep Laplacian Pyramid Networks", TPAMI 2019
[4] Tassano et al. "FastDVDnet: Towards Real-Time Deep Video Denoising Without Flow Estimation", CVPR 2020
---
Rebuttal Comment 1.1:
Title: Post Rebuttal Comments
Comment: Thank you for the careful explanation. After reviewing the rebuttal and considering the feedback from other reviewers, my concerns have been addressed. I have raised my score to Accept, as this work effectively tackles the storage explosion of LUTs and provides a valuable image restoration method for edge devices.
---
Reply to Comment 1.1.1:
Title: Thank you for engaging in the discussion!
Comment: We would like to thank the reviewer for engaging in the discussion and increasing the score. We will ensure that all edits mentioned in the rebuttal are incorporated when revising our paper. Thanks again for your participation in the discussion! | Summary: This paper introduced a separable mapping strategy that solves the storage issue of LUT-based methods. In addition, a dynamic discretization mechanism is designed to decompose the activation and compression quantization scales. Experimental results show the potential of this work for image restoration tasks.
Strengths: 1. The paper was well written and organized.
2. The performance improvement of SR task is remarkable.
3. The proposed method is valuable for advancing image restoration on edge devices.
Weaknesses: 1. There is a lack of comparison with the state-of-the-art methods [1, 2], especially the literature [1] which also addresses the storage problem of LUT-based methods.
2. This paper claims to work on the image restoration problem, however only two restoration tasks were included in the experiments. Validation on more image restoration tasks is needed.
3. The proposed method does not perform well enough on the image denoising task. With similar inference efficiency, the performance of the proposed method lags far behind MULUT-SDY-X2 even more than 1 dB. I am concerned about the effectiveness of this work on a wider range of image restoration tasks.
> 1. Look-Up Table Compression for Efficient Image Restoration. CVPR 24.
>2. Reconstructed Convolution Module Based Look-Up Tables for Efficient Image Super-Resolution. ICCV 23.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see Weaknesses.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations were included.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to W1** The reviewer's reminder is very important and constructive. To address this concern, we conducted a comparative study with the two aforementioned methods [1,2]. The comparison indicates that our TinyLUT is more effective than the latest storage-saving LUT methods. Literature [1] proposed a LUT compression framework named DFC for efficient image restoration, following a different technical route from our TinyLUT: DFC preserves diagonal LQHQ pairs to maintain representation capacity, while non-diagonal pairs are aggressively subsampled to save storage. Method [2] proposed a reconstructed convolution that decouples the computation of spatial and channel-wise features, maintaining a large RF with a smaller LUT size.
As shown in Table 1, TinyLUT-F achieves much higher PSNR with 9× and 12× lower storage consumption than RCLUT [2] and SPFLUT+DFC [1], respectively.
**Table 1 Quantitative comparisons on 4x SR**
| | storage | Set5 | Set14 | BSD100 | Manga109 | Urban100 |
|----------------|----------|-------|-------|--------|----------|----------|
| RCLUT [2] | 1.513MB | 30.72 | 27.67 | 26.95 | 28.05 | 24.57 |
| SPFLUT [1] | 17.284MB | 31.11 | 27.92 | 27.10 | 28.68 | 24.87 |
| SPFLUT+DFC [1] | 2.018MB | 31.05 | 27.88 | 27.08 | 28.58 | 24.81 |
| TinyLUT-F | 171KB | 31.18 | 28.01 | 27.13 | 28.83 | 24.92 |
In addition, we compare SPFLUT+DFC [1] and TinyLUT on image denoising with noise level 15. Note that RCLUT [2] did not report results on the image denoising task. As shown in Table 2, TinyLUT still yields about 0.2dB higher PSNR with 3× storage reduction compared to SPFLUT+DFC. The results further demonstrate that TinyLUT retains notable advantages in other image restoration tasks. These experiments corroborate the effectiveness of the proposed method, as noted by reviewers (G5Ti, MFFN).
**Table 2 Quantitative comparisons on image denoising**
| | storage | Set12 | BSD68 |
|------------|---------|-------|-------|
| SPFLUT | 3017KB | 32.11 | 31.17 |
| SPFLUT+DFC | 595KB | 32.01 | 31.09 |
| TinyLUT-F | 187KB | 32.22 | 31.20 |
**Response to W2** Thanks for the reviewer's insightful suggestion. As noted in [3,4], image denoising and SR are classical yet still active topics in low-level vision, since they are indispensable steps in many practical applications; most image restoration models [3,4] adopt denoising and super-resolution as representative tasks. That said, we fully agree with the reviewer's insightful comment that focusing solely on SR and image denoising cannot comprehensively demonstrate generality in image restoration.
To address this concern, we incorporate the deblocking task widely employed in recent literature [1] into our experiment. The results of this experiment demonstrate the value of our method for advancing image restoration on edge devices. As shown in Table 3, TinyLUT-F still achieves remarkably higher PSNR-B than other LUT methods with much lower storage consumption.
**Table 3 Quantitative comparisons on image deblocking**
| | storage | Classic5 | LIVE1 |
|------------|---------|----------|-------|
| SRLUT | 81KB | 27.58 | 27.69 |
| MULUT | 489KB | 28.29 | 28.39 |
| SPFLUT | 3017KB | 28.63 | 28.62 |
| SPFLUT_DFC | 595KB | 28.62 | 28.61 |
| TinyLUT-F | 187KB | 28.74 | 28.67 |
**Response to W3** The reviewer's reminder is very important. In the image denoising task, our TinyLUT-S does lag behind MULUT-SDY-X2 in accuracy, which results from the significant reduction in storage needed for deployment on edge devices with very limited resource budgets, especially memory and storage [5,6]. As shown in Table 4, our proposed TinyLUT-S occupies merely 7.6% of the storage required by MULUT-SDY-X2, which is the most lightweight version of MULUT.
As for inference efficiency, inspired by comments from reviewers G5Ti and 2kHn, we found that some operations can still be merged to reduce inference latency, such as the residual addition between blocks. Meanwhile, we have introduced pthread and omp to accelerate inference in a multi-threaded, parallel manner. As a result, the inference latency of TinyLUT-S is reduced to 27ms on a Raspberry Pi 4B, yielding real-time inference efficiency comparable to the currently lightest SRLUT, with about 1dB PSNR improvement and 4× lower storage. Hence, our TinyLUT achieves a better trade-off between accuracy and latency, as noted by reviewer G5Ti.
**Table 4 Quantitative comparisons on denoise and latency**
| | Runtime | Storage | Set12 (σ=15) | BSD68 (σ=15) | Set12 (σ=25) | BSD68 (σ=25) | Set12 (σ=50) | BSD68 (σ=50) | Average |
|-----------------|---------|---------|-------|-------|-------|-------|-------|-------|---------|
| SRLUT | 21ms | 82KB | 30.42 | 29.78 | 27.19 | 26.85 | 22.62 | 22.39 | 26.54 |
| MULUT-SDY-X2 | 44ms | 289KB | 31.50 | 30.63 | 28.94 | 28.18 | 25.46 | 24.97 | 28.28 |
| MULUT-SDYEHO-X2 | 89ms | 978KB | 31.77 | 30.89 | 29.18 | 28.34 | 25.47 | 24.96 | 28.44 |
| TinyLUT-S | **27ms** | 22KB | 31.10 | 30.24 | 28.26 | 27.48 | 24.29 | 23.83 | 27.53 |
In addition, we compare our method and other LUT-based methods in image deblocking task under a quality factor of 10 in Table 3. The results demonstrate the effectiveness of our work on other image restoration tasks.
[1] Li et al. "Look-Up Table Compression for Efficient Image Restoration", CVPR 2024
[2] Liu et al. "Reconstructed Convolution Module Based Look-Up Tables for Efficient Image Super-Resolution", ICCV 2023
[3] Zhang et al. "Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising", TIP 2017
[4] Kim et al. "Accurate Image Super-Resolution Using Very Deep Convolutional Networks", CVPR 2016
[5] Lin et al. "MCUNet: Tiny Deep Learning on IoT Devices", NeurIPS 2020
[6] Lin et al. "MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning", NeurIPS 2021
---
Rebuttal Comment 1.1:
Title: Reply
Comment: Thanks to your response, some of my concerns were addressed. The authors' response addresses the ethical issues well. However, I remain concerned about the performance of this work on a wider range of image restoration tasks. The performance of the proposed method lags too far behind on the denoising task, even though it is more efficient. It is difficult to argue that the method achieves a better trade-off between performance and efficiency, especially with respect to the MULUT-SDY-X2 method. Therefore, I would like to keep the previous rating.
---
Reply to Comment 1.1.1:
Comment: Thanks for the reviewer's reply. As the reviewer notes, the accuracy of TinyLUT-S in denoising is indeed lower than MULUT-SDY-X2, although TinyLUT-S is more efficient in storage and inference latency. However, it should also be noted that the denoising accuracy of the larger TinyLUT-F version is significantly higher than all existing LUT models, with an average increase of 0.6dB over MULUT-SDY-X2 at 35% less storage. Meanwhile, TinyLUT-F achieves a 0.49dB accuracy increase over MULUT-SDYEHO-X2 with only 19% of its storage consumption, whereas the accuracy of methods such as MULUT hits its upper limit due to storage constraints. This indicates that our method addresses the storage explosion challenge of LUT-based methods, which is the most important contribution of this paper, and achieves a better balance between storage and accuracy.
Following the reviewer's insightful suggestions, we used multithreading and residual-merging operations to reduce inference latency from 383ms to 254ms, and we note that TinyLUT-F still has room for speed optimization, for example by adopting the proven LayerMerge [1] technique and other merging methods [2,3,4]. We thank the reviewer for the suggestions and questions regarding inference efficiency; they helped us identify TinyLUT's shortcomings in balancing inference efficiency and accuracy, which will be an important direction for optimizing TinyLUT models in the future.
[1] Kim et al. "LayerMerge: Neural Network Depth Compression through Layer Pruning and Merging", ICML 2024
[2] Kim et al. "Efficient Latency-Aware CNN Depth Compression via Two-Stage Dynamic Programming", ICML 2023
[3] Dror et al. "Layer Folding: Neural Network Depth Reduction Using Activation Linearization", BMVC 2022
[4] Fu et al. "DepthShrinker: A New Compression Paradigm towards Boosting Real-Hardware Efficiency of Compact Neural Networks", ICML 2022
---
Reply to Comment 1.1.2:
Title: Thank you for engaging in the discussion!
Comment: We would like to thank the reviewer for engaging in the discussion. We will ensure that all edits mentioned in the rebuttal are incorporated when revising our paper. Regarding the inference efficiency you mentioned, it will be one of the directions for our future research. Thanks again for your participation in the discussion! | Summary: The paper presents TinyLUT, a method that significantly reduces LUT-based image restoration storage requirements for edge devices through separable mapping strategy and dynamic discretization mechanism, achieving competitive accuracy with over 5 times faster inference.
Strengths: 1. The proposed separable mapping strategy (SMS) and dynamic discretization mechanism (DDM) are efficient and effective for restoration tasks.
2. The proposed TinyLUT achieves significant reduction in memory storage and also gains better accuracy-latency tradeoff over other methods.
3. The paper demonstrates the great potential of LUT-based methods.
Weaknesses: 1. The proposed SMS and DDM are used to build the TinyLUT model, so how can these two modules help improve other models? That is, it would be better if you could validate their generalization as general compression modules.
2. Should compare to more recent works, like [a].
[a]. Look-Up Table Compression for Efficient Image Restoration. Li. CVPR 2024
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How do the models deploy in edge devices? Which computing engine or accelerating library is used? I think it is helpful for readers to understand the efficiency of the LUT-based methods.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Weakness 1** The reviewer's comment is very insightful. The proposed SMS and DDM methods are applicable to other LUT-based models such as SRLUT[1], SPLUT[2] and MULUT[3].
Our proposed SMS decomposes the input along the RF and channels of these models to lower LUT storage overhead. For example, SMS can reduce the storage of SRLUT-S from 1.27MB to 16KB. Moreover, adding DDM further reduces the storage from 16KB to about 4.25KB, suiting deployment on edge devices. Due to time constraints during the rebuttal period, comprehensive validation of this aspect will be presented in a subsequent appendix.
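For intuition, the storage figures quoted above can be roughly reproduced with back-of-the-envelope arithmetic. The sketch below is our reading of SR-LUT-style table sizes, not the authors' exact accounting, and the 68-level figure attributed to DDM is purely an assumption chosen to match the quoted 4.25KB:

```python
BYTES_PER_ENTRY = 1      # 8-bit output values
UPSCALE_OUT = 4 ** 2     # a 4x SR LUT emits 16 output pixels per query

# Vanilla joint 4D LUT (SR-LUT style): 17 sampled levels per input pixel,
# so the table size grows exponentially in the number of input pixels.
vanilla = 17 ** 4 * UPSCALE_OUT * BYTES_PER_ENTRY
print(f"{vanilla / 2**20:.2f} MB")    # 1.27 MB

# Separable mapping: one full-range 256-entry table per input pixel,
# so the size grows only linearly in the number of inputs.
separable = 4 * 256 * UPSCALE_OUT * BYTES_PER_ENTRY
print(f"{separable / 2**10:.0f} KB")  # 16 KB

# Hypothetical DDM effect: coarsen the 256 input levels to 68 per table.
with_ddm = 4 * 68 * UPSCALE_OUT * BYTES_PER_ENTRY
print(f"{with_ddm / 2**10:.2f} KB")   # 4.25 KB
```

The key point is the shift from exponential to linear scaling in the number of input pixels; the exact constants depend on implementation details not given in the rebuttal.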
**Response to Weakness 2** The reviewer's suggestion is very valuable. To demonstrate the advantages of our method, we have conducted comparisons with method [4] on image super-resolution and image denoising tasks in Table 1. As described in your 'Strengths' section, these experimental outcomes further substantiate the considerable potential of LUT-based approaches. Method [4], another LUT compression framework, was published after the submission of our manuscript; its DFC strategy preserves diagonal pairs to maintain representation capacity, while non-diagonal pairs are aggressively subsampled to save storage. As can be seen, TinyLUT-F achieves the highest PSNR with about 12x storage reduction compared to SPFLUT+DFC.
Since DFC and TinyLUT are different technological routes, we can combine them to achieve greater storage reduction.
**Table 1 Quantitative comparisons on 4x SR task**
| | storage | Set5 | Set14 | BSD100 | Manga109 | Urban100 |
|------------|----------|-------|-------|--------|----------|----------|
| SPFLUT | 17.284MB | 31.11 | 27.92 | 27.10 | 28.68 | 24.87 |
| SPFLUT+DFC | 2.018MB | 31.05 | 27.88 | 27.08 | 28.58 | 24.81 |
| TinyLUT-F | 171KB | 31.18 | 28.01 | 27.13 | 28.83 | 24.92 |
In addition, we compare SPFLUT+DFC and our TinyLUT in the image denoising task with noise level 15 on Set12 and BSD68. As shown in Table 2, TinyLUT still yields about 0.2dB higher PSNR with 3× storage reduction compared to SPFLUT+DFC. The results further demonstrate that TinyLUT exhibits notable advantages in other image restoration tasks. These experiments corroborate the effectiveness of the proposed method, as noted by reviewers (G5Ti, MFFN).
**Table 2 Quantitative comparisons on image denoising**
| | storage | Set12 | BSD68 |
|------------|---------|-------|-------|
| SPFLUT | 3017KB | 32.11 | 31.17 |
| SPFLUT+DFC | 595KB | 32.01 | 31.09 |
| TinyLUT-F | 187KB | 32.22 | 31.20 |
**Response to Question 1** The inference speed evaluation was conducted by deploying TinyLUT on two distinct platforms: Xiaomi 11 smartphone and Raspberry Pi 4B.
For the Xiaomi 11 platform, the TinyLUT was implemented in C++ and invoked by Java Native Interface. Multi-thread parallel acceleration for look-up table was achieved through the Stream API.
For the Raspberry Pi 4B implementation, the TinyLUT model was also coded in C++. We employed the "pthread" and "omp" libraries for multi-thread look-up table acceleration.
The source code will be made publicly available on GitHub to help the readers to understand the efficiency of the LUT-based methods.
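For readers less familiar with LUT-based inference, the core idea these deployments rely on is that the network's response is precomputed offline for every quantized input, so on-device inference degenerates to table indexing with no convolutions at runtime. The Python sketch below is purely illustrative (the actual TinyLUT implementation is in C++, and real LUT-SR methods index multi-pixel patches with interpolation), using a stand-in mapping in place of a trained network:

```python
import numpy as np

def build_lut(f):
    """Precompute f for every 8-bit input level (done offline, once)."""
    levels = np.arange(256, dtype=np.float64)
    return np.clip(np.rint(f(levels)), 0, 255).astype(np.uint8)

def lut_infer(lut, img):
    """On-device 'inference': a pure per-pixel table read."""
    return lut[img]

# Stand-in for a trained network's per-level response (illustrative only).
lut = build_lut(lambda x: 1.2 * x)
img = np.array([[0, 100], [200, 255]], dtype=np.uint8)
out = lut_infer(lut, img)
print(out)  # values 0, 120, 240, 255 (306 is clipped to 255)
```

Multi-threading such a lookup (as done here via pthread/omp) is straightforward because each pixel's table read is independent.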
[1] Jo et al. "Practical Single-Image Super-Resolution Using Look-Up Table", CVPR2021
[2] Ma et al. "Learning Series-Parallel Lookup Tables for Efficient Image Super-Resolution", ECCV2022
[3] Li et al. "MuLUT: Cooperating Multiple Look-Up Tables for Efficient Image Super-Resolution", ECCV2022
[4] Li et al. "Look-Up Table Compression for Efficient Image Restoration", CVPR2024
---
Rebuttal 2:
Title: Thank you for engaging in the discussion!
Comment: We greatly appreciate your time and dedication to providing us with your valuable feedback. We hope we have addressed the concerns, but if there is anything else that needs clarification or further discussion, please do not hesitate to let us know. We will ensure that all edits mentioned in the rebuttal are incorporated when revising our paper. Thanks again for your participation in the discussion! | Summary: This paper proposes to reduce the size of LUT for image restoration and to make it applicable on edge devices. The main idea is using depthwise separable convolution to replace the vanilla convolution. When transfer to LUT, the storage is significantly reduced. Experiments show the effectiveness and efficiency of the proposed method.
Strengths: The approach is reasonable and the experiment result is provable.
The evaluation is complete. The efficiency of the proposed method on edge device is proved.
Weaknesses: The baseline DnCNN is somewhat outdated. As described in the Limitations, the proposed method based on depthwise separable convolution is suitable only for CNN-based architectures, and LUT-based compression of attention-based methods remains to be explored.
Quantitative comparisons on color image restoration tasks are needed, e.g., CBSD68. It is better to conduct experiments on more testsets.
In runtime comparison, it is necessary to compare the complexity of the comparison algorithms. I am interested in the processing time of the proposed method on high-resolution images.
The description of the method is hard to understand. The writing should be improved.
Technical Quality: 3
Clarity: 2
Questions for Authors: See the weaknesses above.
Please follow the code of ethics in [1]; the Lena image in Fig. 2 may violate it.
[1] https://conferences.ieeeauthorcenter.ieee.org/write-your-paper/improve-your-graphics/
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Weakness 1** We appreciate the reviewer's meticulous work. Indeed, our proposed TinyLUT, like existing LUT-based methods, is mainly designed for CNN-based architectures that are widely deployed on resource-constrained edge devices, primarily because the convolution operation is computationally efficient [1][2][3]. CNNs compete favorably with Transformers in terms of accuracy and scalability, and convolution remains much desired and has never faded [4][5]. In recent years, many CNN methods have still been proposed [4][5][6].
Meanwhile, as highlighted by the reviewer and acknowledged in our Limitations section, LUT-based compression for attention-based methods represents a critical direction for future research. This area will be a focal point in our subsequent investigations.
**Response to Weakness 2** We thank the reviewer for the constructive suggestion. To address the reviewer's concern, we have conducted supplementary denoising experiments at noise level 50 on additional color image test sets, including Kodak, CBSD68 and McMaster. The results, shown below, further indicate the effectiveness of our proposed TinyLUT, corroborating the assertions made by reviewers MFFN and G5Ti:
**Table 1 Quantitative comparisons on color image denoising with noise level 50**
| | Storage | Kodak | CBSD68 | McMaster | Average |
|-----------------|---------|-------|--------|----------|---------|
| SRLUT | 82KB | 22.64 | 22.45 | 23.41 | 22.83 |
| MULUT-SDY-X2 | 489KB | 25.64 | 24.95 | 26.38 | 25.66 |
| MULUT-SDYEHO-X2 | 979KB | 25.80 | 25.07 | 26.57 | 25.81 |
| TinyLUT-F | 189KB | 26.43 | 25.53 | 27.14 | 26.36 |
As illustrated in Table 1, our TinyLUT-F achieves an average accuracy improvement of over 0.5dB along with about 5x storage reduction compared to MULUT-SDYEHO-X2. As with the performance validation on color images in the super-resolution task, quantitative comparisons on color image denoising are needed for a comprehensive assessment; this additional investigation also expands the scope of our work.
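Since all the comparisons above are reported in PSNR (dB), a brief reference implementation may help readers interpret the deltas. This is the standard definition for 8-bit images, not code from the paper:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 128, dtype=np.uint8)
noisy = ref + np.uint8(1)          # uniform error of 1 level -> MSE = 1
print(round(psnr(ref, noisy), 2))  # 48.13
```

Because the scale is logarithmic, a 0.5dB gain such as the one in Table 1 corresponds to roughly an 11% reduction in MSE (10^(0.5/10) ≈ 1.12).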
**Response to Weakness 3** We fully agree with the reviewer's insightful comment. Following the complexity evaluation in Ref. [7], the counts of addition and multiplication operations are employed as the metric to assess model complexity. Following the existing literature, we conduct the complexity comparison on the 4x super-resolution task with an output resolution of 1280x720. The results are presented in Table 2. As can be seen, our proposed TinyLUT-S achieves the lowest computational complexity compared to other competitors.
**Table 2 The statistics of addition and multiplication operations**
| | Add | Mul |
|--------------|--------|-------|
| SRLUT-S | 44.3M | 19.4M |
| MULUT-SDY | 165M | 69.2M |
| MULUT-SDY-X2 | 145.5M | 73.6M |
| TinyLUT-S | 10.28M | 5.3M |
| TinyLUT-F | 46.28M | 26.3M |
To address the reviewer's concern about latency, we conducted 4x super-resolution tests on a Raspberry Pi 4B platform. At present, mainstream high-resolution formats mainly include 1920×1080 (FHD) and 3840×2160 (UHD). Given this, we report the processing times of our TinyLUT and competitive methods in Table 3. As can be seen, TinyLUT-S yields the lowest inference latency, while the latency of TinyLUT-F is comparable to SRLUT-S and far superior to MULUT on high-resolution images. This further validates the high efficiency of TinyLUT on edge devices mentioned by reviewers G5Ti and MFFN.
**Table 3 Latency of high-resolution image on Raspberry Pi 4B**
| | 1920x1080(FHD) | 3840x2160(UHD) |
|--------------|----------------|----------------|
| SRLUT-S | 554ms | 2224ms |
| MULUT-SDY | 853ms | 3566ms |
| MULUT-SDY-X2 | 906ms | 3834ms |
| TinyLUT-S | 163ms | 777ms |
| TinyLUT-F | 614ms | 2797ms |
**Response to Weakness 4** Thanks for the reviewer's suggestion. Although our article received positive feedback (2kHn) on its writing style and organization, some grammar issues remained. We have reviewed the paper again and corrected these errors to improve its quality.
**Response to Question 1** Thanks for the reviewer's important reminder. We comply with the reviewer's suggestion and replace the Lena image in Fig. 2 with the Baby image in the Set5 test dataset.
Meanwhile, we confirmed that Lena was not included in the training set. We also removed the Lena image from the Set14 test dataset and validated the accuracy of each method again. As the PSNR values in Table 4 show, after removing the Lena image our TinyLUT still achieves competitive results, and the ranking of our method is unaffected.
**Table 4 The 4x SR experiment for Set14 without Lena**
| | Set14 without Lena | Set14 |
|--------------|---------------------|-------|
| SRLUT-S | 26.80 | 27.01 |
| SPLUT-L | 27.26 | 27.54 |
| MULUT-SDY-X2 | 27.31 | 27.60 |
| TinyLUT-S | 27.02 | 27.33 |
| TinyLUT-F | 27.72 | 28.01 |
[1] Shaker et al. "SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications", ICCV 2023
[2] Howard et al. "Searching for MobileNetV3", CVPR 2019
[3] Sandler et al. "MobileNetV2: Inverted Residuals and Linear Bottlenecks", CVPR 2018
[4] Woo et al. "ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders", CVPR 2023
[5] Liu et al. "A ConvNet for the 2020s", CVPR 2022
[6] Zhang et al. "Efficient CNN Architecture Design Guided by Visualization", ICME 2022
[7] Li et al. "MuLUT: Cooperating Multiple Look-Up Tables for Efficient Image Super-Resolution", ECCV 2022
---
Rebuttal 2:
Title: reply to the author's rebuttal
Comment: Thanks for the authors' reply. Part of my concerns are addressed. Thanks for the efforts of the Ethics reviewers. Although the authors conducted detailed experiments to demonstrate the advantages of the proposed TinyLUT and its variants, I still believe that accelerating CNN-based image restoration methods is not very convincing, even considering the edge-device scenario, where domain-specific accelerators (DSAs) are still an option. In high-level vision tasks, inferring ViT-based models on edge devices has been investigated [1, 2, 3].
[1] Junting Pan, Adrian Bulat, Fuwen Tan, Xiatian Zhu, Lukasz Dudziak, Hongsheng Li, Georgios Tzimiropoulos, and Brais Martinez. Edgevits: Competing light-weight cnns on mobile devices with vision transformers. In European Conference on Computer Vision, 2022
[2] Abdelrahman Shaker, Muhammad Maaz, Hanoona Rasheed, Salman Khan, Ming-Hsuan Yang, and Fahad Shahbaz Khan. Swiftformer: Efficient additive attention for transformerbased real-time mobile vision applications. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023
[3] Shashank Nag, Logan Liberty, Aishwarya Sivakumar, Neeraja J Yadwadkar, Lizy Kurian John. Lightweight Vision Transformers for Low Energy Edge Inference. ISCA 2024 workshop MLArchSys (Machine Learning for Computer Architecture and Systems).
---
Rebuttal Comment 2.1:
Title: Thanks for the reviewer's reply!
Comment: Thanks for the reviewer's reply. We fully agree that ViT in edge device scenarios is a very promising emerging direction! The raised papers indeed demonstrate the growing efforts in academia to break into this new field over the last two years.
In this NeurIPS submission, we aim to share our recent progress on strategies that we believe, and as also echoed by reviewers, can directly benefit the existing edge computing community where CNN is still the majority.
Thanks to the kind suggestion of reviewer JnS6, we are pleased to cite the raised papers in the discussion section, as they not only enrich our manuscript but also serve to highlight and encourage further research efforts on ViT methods for edge devices.
---
Rebuttal Comment 2.2:
Title: Thank you for engaging in the discussion!
Comment: We would like to thank the reviewer for engaging in the discussion and for reminding us of the violation regarding the Lena image. We will ensure that all edits mentioned in the rebuttal are incorporated when revising our paper. Thanks again for your participation in the discussion! | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for their valuable and professional comments. We are thankful that most of the reviewers shared positive feedback on our paper. Meanwhile, we are glad to read that our work was deemed efficient (JnS6, G5Ti, MFFN), effective (G5Ti, MFFN), valuable (2kHn), promising (G5Ti), and well written and organized (2kHn).
We appreciate the reviewers' acknowledgement of our proposed method's potential and contribution to the LUT-based approaches. As reviewer MFFN noted, our study "analyzes the storage explosion challenge of LUT and provides a solution by decomposing the convolution kernel and compressing the quantization scale". Our method solves the storage explosion problem encountered in recently proposed SRLUT[1], SPLUT[2] and MULUT[3]. Our work has promoted the development of LUT-based methods, as expressed by reviewer G5Ti: "The paper demonstrates the great potential of LUT-based methods," and as reviewer 2kHn also expressed: "The proposed method is valuable for advancing image restoration on edge devices."
However, the reviewers have some concerns about our paper. One concern is that our proposed method is only suitable for CNN-based architectures (JnS6). Meanwhile, the reviewers suggest that we compare with the latest LUT-based methods, especially SPFLUT [4], mentioned by G5Ti and 2kHn. The last concern is the violation involving the Lena image, mentioned by reviewer JnS6.
Regarding the concern that our method is only suitable for CNN-based architectures: indeed, our proposed TinyLUT, like existing LUT-based methods, is mainly designed for CNN-based architectures, because CNNs are still the preferred choice for real-time deployment on mobile devices, primarily owing to the computational efficiency of the convolution operation [5,6,7]. CNNs compete favorably with Transformers in terms of accuracy and scalability, and convolution remains much desired and has never faded [8,9]. In recent years, many CNN methods have still been proposed [8,9,10].
Regarding the comparison with the latest LUT-based methods: the SPFLUT+DFC method proposed in [4] was published at CVPR 2024, after we submitted our manuscript. To demonstrate the advantages of our method, we have conducted comparisons with SPFLUT+DFC on image super-resolution and image denoising tasks in Table 1. As can be seen, TinyLUT-F achieves the highest PSNR with about 12x storage reduction compared to SPFLUT+DFC. As described in the 'Strengths' section by G5Ti, these experiments further demonstrate the value of our method for image restoration on edge devices.
**Table 1 Quantitative comparisons on 4x SR task**
| | storage | Set5 | Set14 | BSD100 | Manga109 | Urban100 |
|------------|----------|-------|-------|--------|----------|----------|
| SPFLUT | 17.284MB | 31.11 | 27.92 | 27.10 | 28.68 | 24.87 |
| SPFLUT+DFC | 2.018MB | 31.05 | 27.88 | 27.08 | 28.58 | 24.81 |
| TinyLUT-F | 171KB | 31.18 | 28.01 | 27.13 | 28.83 | 24.92 |
In addition, we compare SPFLUT+DFC and our TinyLUT on the image denoising task with noise level 15 on Set12 and BSD68. As shown in Table 2, TinyLUT still yields about 0.2dB higher PSNR with 3× storage reduction compared to SPFLUT+DFC. The results further demonstrate that TinyLUT exhibits notable advantages in other image restoration tasks. These experiments corroborate the effectiveness of the proposed method, as noted by reviewers (G5Ti, MFFN).
**Table 2 Quantitative comparisons on image denoising**
| | storage | Set12 | BSD68 |
|------------|---------|-------|-------|
| SPFLUT | 3017KB | 32.11 | 31.17 |
| SPFLUT+DFC | 595KB | 32.01 | 31.09 |
| TinyLUT-F | 187KB | 32.22 | 31.20 |
Regarding the violation involving the Lena image mentioned by reviewer JnS6, we will replace the Lena image in Figure 2 with the Baby image from the Set5 test set. We have ensured that Lena was not used during training, and removing it from the test sets does not affect the conclusions of our paper.
[1] Jo et al. "Practical Single-Image Super-Resolution Using Look-Up Table", CVPR 2021
[2] Ma et al. "Learning Series-Parallel Lookup Tables for Efficient Image Super-Resolution", ECCV 2022
[3] Li et al. "MuLUT: Cooperating Multiple Look-Up Tables for Efficient Image Super-Resolution", ECCV 2022
[4] Li et al. "Look-Up Table Compression for Efficient Image Restoration", CVPR 2024
[5] Shaker et al. "SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications", ICCV 2023
[6] Howard et al. "Searching for MobileNetV3", CVPR 2019
[7] Sandler et al. "MobileNetV2: Inverted Residuals and Linear Bottlenecks", CVPR 2018
[8] Liu et al. "A ConvNet for the 2020s", CVPR 2022
[9] Woo et al. "ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders", CVPR 2023
[10] Zhang et al. "Efficient CNN Architecture Design Guided by Visualization", ICME 2022 | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null
SCube: Instant Large-Scale Scene Reconstruction using VoxSplats | Accept (poster) | Summary: This paper presents a method to reconstruct large-scale 3D scenes from a sparse set of posed images. The method has two stages: the first stage is for voxel grid reconstruction, which is based on the XCube [38], a 3D generative model, and the authors introduce the image condition to the model. The second stage is for appearance reconstruction, which is a feedforward appearance prediction model to predict a certain number of Gaussians for each voxel. The authors also propose a GAN-based post-processing step to boost the appearance of the reconstructed scene. The proposed method is evaluated on the Waymo self-driving dataset, showing the superiority of the proposed method compared to prior art on 3D reconstruction tasks. Besides, the authors also demonstrate the applications of the proposed method, such as LiDAR simulation and text-to-scene generation.
Strengths: - The proposed method addresses an important problem in 3D vision, i.e., reconstructing large-scale 3D scenes from a sparse set of posed images. The authors cleverly take advantage of the pre-trained 3D diffusion generative model to relieve the ill-posedness of the problem.
- Although technically the proposed method is not entirely new, the paper presents a framework to combine the voxel grid reconstruction and appearance prediction, and the presented result looks promising to me.
- The paper is well-written and easy to follow. The authors provide a clear explanation of the proposed method and the experiments.
Weaknesses: - The proposed representation's abbreviation "VoxSplats" is somewhat confusing to me, since the term "splatting" in original 3DGS refers to the rendering technique that splats the 3D Gaussians onto the image plane, rather than the 3D representation. The proposed representation is more like a hybrid of voxel grids and Gaussian representation. The authors may consider using a more accurate term to describe the representation.
- It's unclear how the foreground and background are separated in the proposed method. Using a panorama to represent the sky makes sense as the sky is usually far away from the scene, but some buildings are treated as the background and seem to be represented by the panorama as well in the supplementary video. A more detailed explanation and discussion of the foreground-background separation is needed.
- There are some lighting/exposure/white-balance inconsistencies in the reconstructed scenes in the supplementary video. The authors should discuss the limitations of the proposed method in handling the lighting/exposure/white-balance variations in the input images.
- For SCube+, how to guarantee consistency among the rendered views? Also, rendering with the proposed GAN-based post-processing step is time-consuming, which would destroy the valuable real-time rendering performance of the Gaussian Splatting. The authors should discuss these issues in the paper.
- Missing the evaluation of the geometric accuracy of the reconstructed scenes. As the rendering quality of SCube is not so good, geometric accuracy is crucial to the applications of the reconstructed scenes. The authors should provide a detailed evaluation of the geometric accuracy of the reconstructed scenes.
Technical Quality: 3
Clarity: 3
Questions for Authors: The most important questions that I would like the authors to address are:
- Geometric accuracy evaluation of the reconstructed scenes.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes, the authors have discussed the limitations of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for your detailed review. We are glad that you found the results promising and the paper well-written. We quote and answer the raised questions below:
> **Naming of VoxSplat.** The authors may consider using a more accurate term to describe the representation.
Thank you for the constructive feedback; we are actively considering renaming our representation to, e.g., `VoxGaussian` for accuracy. We will still use the term `VoxSplat` in this rebuttal to avoid confusion, but will change the name in the revised version.
> **Foreground-background separation.** It's unclear how the foreground and background are separated. Using a panorama to represent the sky makes sense as the sky is usually far away from the scene, but some buildings are treated as the background and seem to be represented by the panorama.
Conceptually the scene to be reconstructed could be decomposed into three parts:
1. *Foreground*: This refers to the sparse voxel grids within the 102.4m region enclosing the camera (as visualized in Appendix A). This is the main part we reconstruct using our VoxSplat representation.
2. *Background*: This is the sky panorama that we explained in L160-165, which is assumed to be at infinity.
3. *Midground*: This is the region between *Foreground* and *Background* which, as the reviewer has pointed out, we do not explicitly tackle in our framework.
For the novel view synthesis task where we render the sky panorama, buildings in the *Midground* are baked into the *Background*. We humbly argue that this is a sufficient approximation for the task at hand: since the novel viewpoints to be rendered are not drastically far from the input views, faraway buildings can be approximated by the panorama. For geometry reconstruction or bird's-eye-view visualization purposes, both the *Midground* and the *Background* are cropped.
Notably, since our ultimate goal is to build *true 3D* geometry, developing ad-hoc ways for modeling the midground is not a principled solution for us. To solve this in a principled manner, we choose to apply our framework to a longer sequence input (as shown in `Fig.A` in the rebuttal PDF), in which case the *Midground* for the beginning frames is just *Foreground* in the subsequent frames and could be modeled explicitly by the VoxSplat representation.
> **Handling white-balance inconsistencies.** There are some lighting/exposure/white-balance inconsistencies in the reconstructed scenes in the supplementary video.
We agree that this is not explicitly handled currently and will add more discussion to the limitations. In fact, we note that our image-space postprocessing module (L178-186, i.e., **SCube+**) can already mitigate this issue. Notably, we have improved this part of the pipeline to remove the need for per-scene training by introducing a new postprocessing renderer that is jointly trained on *all* the dataset samples. As before, we take the rendered images (with potential artifacts) and the ground-truth images as training pairs, and find that this simple module can already resolve the ISP inconsistencies within the image. Please refer to `Fig.B` in the rebuttal PDF for a visualization.
> **Rendering in SCube+.** For SCube+, how to guarantee consistency among the rendered views? Also, rendering with the proposed GAN-based post-processing step is time-consuming.
The GAN post-processing module in SCube+ only aims to remove the quantization artifacts and refine the details (ref. Fig. 8), but will leave the general scene structure untouched. This means that our renderings are still *grounded/supported* by the underlying VoxSplat representation that is guaranteed to be consistent. Empirically we observe the recovered fine details in SCube+ are also decently consistent as shown in the supplementary video (e.g. 00:27 lower-right). We agree that enforcing consistency in a more principled way is meaningful future work and note that one may apply a video model instead to enforce temporal consistency.
After enabling this module, the FPS drops from 138 to 20, which can still be visualized interactively. We will discuss this in our limitations section and aim to improve the runtime in future work. Please note that this step is only needed when higher-quality rendering is required; one can alternatively finetune the 3DGS representation with optimization as demonstrated in $\S$ 4.3.
> **Evaluation of geometric accuracy.** geometric accuracy is crucial to the applications of the reconstructed scenes. The authors should provide a detailed evaluation of the geometric accuracy of the reconstructed scenes.
Thank you for the suggestion. We already have a light measurement of voxel IoU at L278-283 in terms of semantic voxel accuracy and we will include a more detailed evaluation of the geometric accuracy in the revision.
Specifically, we compute an additional metric called 'voxel Chamfer distance' that measures the L2 Chamfer distance between predicted voxels and ground-truth voxels, divided by the voxel size. This metric reflects the geometric accuracy of our prediction by measuring, on average, how many voxels apart the prediction is from the ground truth. The results on the Waymo Open Dataset are as follows:
| Quantile | 0.5 (median) | 0.6 | 0.7 | 0.8 | 0.9 |
| ---------------------- | ------------ | ---- | ---- | ---- | ---- |
| Voxel Chamfer distance | **0.26** | 0.28 | 0.32 | 0.37 | 0.51 |
This reflects that the geometry is highly accurate compared to the ground truth, deviating by no more than about half a voxel for 90% of the data samples. | Summary: - The paper proposes a new method for sparse-view 3D reconstruction using 3DGS.
- The framework uses two stages:
1) it learns a latent voxel grid (based on XCube) to represent the geometry.
2) it trains an appearance model to decode the latent voxel grid into a set of Gaussians
- The method further uses a background model to handle the sky.
- The authors propose an (optional) GAN postprocessing which improves the visual quality at the cost of longer (20m) per-scene optimization.
- The method was evaluated on the Waymo Open Dataset, and it outperforms compared methods.
Strengths: - The paper is very well written and easy to follow.
- The method is novel and the results look good.
- I really like the idea of having probabilistic latent space representation (VAE).
- The paper contains an ablation study illustrating the tradeoff between model complexity and performance as various hyperparameters are changed.
Weaknesses: - Section Method "Training and Inference" is not very clear. It would be preferable to expand the diffusion loss from Appendix A, and clearly describe the input/targets.
- I would also like to have more details on the Sky Panorama model.
- The method was evaluated on a single dataset. It would be interesting to see how the method performs on other datasets.
Technical Quality: 3
Clarity: 4
Questions for Authors: - How does the method compare to other methods like MVSplatting, MVSGaussians? How does it differ? How do you think it would compare in terms of performance?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: - Limitations are properly discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your encouraging feedback on SCube. We are glad that you found our method easy to follow and enjoyed the idea and the results of our paper. In the following, we will try to address your concerns.
> **Details about the diffusion loss.** Section "Training and Inference" is not very clear. It would be preferable to expand the diffusion loss from Appendix A, and clearly describe the input/targets.
Thank you for your suggestion, and we will expand the descriptions and clarify the input and targets in the revised paper.
The training loss $\mathcal{L}_\text{diffusion}$ (in Eq. 7) is derived by maximizing the log-likelihood of the data distribution of $\mathbf{X}$. It can also be viewed as a score-matching process whose loss encourages alignment between the predicted score and the ground-truth score. In each training iteration, we sample a random variable $\mathbf{X}$ (in our case the sparse voxel hierarchy) and pass its noised version to the sparse U-Net, which aims to predict the added Gaussian noise $\epsilon$. In practice, we re-parametrize the network with the v-parametrization technique introduced in [a].
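For clarity, the v-parametrization target can be sketched as below; this is a simplified NumPy illustration with a cosine noise schedule (an assumption for exposition), whereas our actual model operates on sparse voxel hierarchies with a sparse U-Net:

```python
import numpy as np

def v_target(x, eps, t):
    """Noised sample and v-prediction target for time t in [0, 1].

    Uses a variance-preserving cosine schedule (illustrative choice),
    so alpha_t^2 + sigma_t^2 = 1 and v = alpha_t * eps - sigma_t * x.
    """
    alpha_t = np.cos(0.5 * np.pi * t)
    sigma_t = np.sin(0.5 * np.pi * t)
    x_t = alpha_t * x + sigma_t * eps  # input fed to the denoising network
    v = alpha_t * eps - sigma_t * x    # regression target for the network
    return x_t, v

# Sanity check: both the clean sample and the noise are linear
# combinations of (x_t, v), which makes v-prediction well-behaved.
rng = np.random.default_rng(0)
x, eps, t = rng.normal(size=(4, 3)), rng.normal(size=(4, 3)), 0.3
x_t, v = v_target(x, eps, t)
a, s = np.cos(0.5 * np.pi * t), np.sin(0.5 * np.pi * t)
assert np.allclose(a * x_t - s * v, x)    # recovers the clean sample
assert np.allclose(s * x_t + a * v, eps)  # recovers the noise
```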
> **Details about sky panorama model.** I would also like to have more details on the Sky Panorama model.
Modeling the sky independently plays an important role in outdoor scene reconstruction [b], as it helps the model disentangle the foreground from the sky at infinite depth. We adopt a 2D panorama $\mathbf{L} \in \mathbb{R}^{H\times W \times C}$ as the sky representation, and sky modeling consists of the following two steps:
1) *Encode the 2D sky panorama*: Since the 2D panorama is an unwrapped (hemi-)sphere, via the inverse equirectangular projection we can obtain a Cartesian coordinate $P=(x,y,z)$ on a unit sphere for each pixel $(u, v)$ in the panorama $\mathbf{L}$. We then project $P$ onto the image plane to retrieve the image feature (note that we zero out the translation part of the camera pose since only the view direction determines the sky color). We also leverage the sky mask, computed as the inverse of the rendered sparse voxel grid mask, to ensure that only sky areas are retrieved. After this step, sky features are stored in $\mathbf{L}$.
2) *Sample and decode features from the panorama*: Given a camera with image resolution $(H', W')$, intrinsics $K$ and camera pose $\xi$, we generate camera rays and compute their hit points on the unit sphere. With the equirectangular projection, we obtain their pixel positions on the 2D sky panorama and query its features via bilinear interpolation, resulting in a 2D feature map $\mathbf{F} \in \mathbb{R}^{H'\times W' \times C}$. We finally decode it with a 2D CNN to obtain the RGB sky image $\mathbf{I} \in \mathbb{R}^{H'\times W' \times 3}$, which is alpha-composited with the foreground.
We use $H=1024, W=2048, C=32$ for the 2D sky panorama in our implementation. In the sampling and decoding step, the 2D CNN consists of several Convolution--BatchNorm--ReLU layers with stride 1, reducing the channel count from 32 to 16 to 3. We will make these descriptions as clear as possible in our revision.
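To make the two projection steps concrete, a minimal sketch of the pixel-to-direction and direction-to-pixel mappings is given below; the axis convention (y-up) and the omission of pixel-center offsets are simplifying assumptions for illustration:

```python
import numpy as np

H, W = 1024, 2048  # sky panorama resolution

def pixel_to_direction(u, v):
    """Inverse equirectangular projection: panorama pixel (u: column,
    v: row) -> unit direction on the sphere (step 1, encoding)."""
    lon = (u / W) * 2.0 * np.pi - np.pi   # longitude in [-pi, pi)
    lat = 0.5 * np.pi - (v / H) * np.pi   # latitude in [-pi/2, pi/2]
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

def direction_to_pixel(d):
    """Equirectangular projection: view direction -> panorama pixel,
    used to sample sky features for each camera ray (step 2, decoding)."""
    d = d / np.linalg.norm(d)
    lon = np.arctan2(d[0], d[2])
    lat = np.arcsin(np.clip(d[1], -1.0, 1.0))
    u = (lon + np.pi) / (2.0 * np.pi) * W
    v = (0.5 * np.pi - lat) / np.pi * H
    return u, v

# Round trip: a pixel maps to a direction and back to itself.
u, v = direction_to_pixel(pixel_to_direction(512.0, 300.0))
assert abs(u - 512.0) < 1e-6 and abs(v - 300.0) < 1e-6
```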
> **Results on other datasets.** The method was evaluated on a single dataset. It would be interesting to see how the method performs on other datasets.
Our method is able to generalize well to novel inputs, as already demonstrated in the Text-to-Scene generation experiment (Fig. 7), where the images are taken from a multi-view diffusion generative model. Furthermore, we demonstrate the general applicability of our method on a new larger-scale internal dataset consisting of 1.1k clips. Each clip contains about 200 frames and has dense point clouds for the GT voxel generation. We show a reconstruction and novel view synthesis sample on this dataset in `Fig. C` in the PDF file. The result shows that our model is capable of learning over a larger dataset and obtaining better reconstructions. Notably, in this example, we demonstrate the flexibility of our model that also supports back-view inputs (REAR-LEFT and REAR-RIGHT) for a 360-degree novel view rendering.
> **Comparison to MVSplat and MVSGaussians.** How does the method compare to other methods like MVSplatting, MVSGaussians? How does it differ? How do you think it would compare in terms of performance?
The main difference between our model and multi-view stereo-based methods such as MVSplat or MVSGaussian is the use of true 3D priors. Specifically, MVSplat uses cross-attention to learn image-space correspondences and infer pixel-aligned Gaussians, while MVSGaussian lifts image features into a cost volume that is strongly correlated with the target view for rendering. While these methods achieve good rendering quality, they cannot learn the distribution of the 3D geometry. In comparison:
- Quality-wise, MVS-based methods cannot enable *extreme* novel view synthesis such as the bird's-eye view of the scene shown in Fig. 1 and the supplementary video. They also cannot recover *highly-occluded regions* as shown in Fig. 5.
- Performance-wise, with the support of the fast sparse convolution infrastructure and the Gaussian renderer, our reconstruction is built within seconds and rendered in real-time. This is comparable to the baselines.
Thinking ahead, the availability of the 3D prior and latent spaces could allow more explicit 3D editing or control capability (e.g. for the traffic) in future works.
***Reference:***
[a] Salimans et al. Progressive Distillation for Fast Sampling of Diffusion Models. ICLR 2022.
[b] Wang et al. Neural Fields meet Explicit Geometric Representations for Inverse Rendering of Urban Scenes. CVPR 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal and the nice explanation of the diffusion loss and the panorama model. I feel that including them in the paper will improve the quality of the method section. As for my main question, I don't feel it has been properly addressed. I would have liked a quantitative comparison (even a limited one) with the suggested baselines. While I still like the paper, I've decided to drop my rating to weak accept.
---
Rebuttal 2:
Comment: Dear reviewer PK9U, thank you for your comment. To further resolve your raised question, we provide the quantitative comparison with MVSplat [a] and MVSGaussians [b] as below:
- PSNR (↑ The higher the better)
| | Reconstruction (T) | Prediction (T + 5) | Prediction (T + 10) |
| --------------- | ------------------ | -------------------| ------------------- |
| **MVSplat** | 21.84 | 20.14 | 18.78 |
| **MVSGaussian** | 21.25 | 16.49 | 16.42 |
| **SCube (Ours)**| 25.90 | 19.90 | 18.78 |
| **SCube+ (Ours)**| **28.01** | **22.32** | **21.09** |
- SSIM (↑ The higher the better)
| | Reconstruction (T) | Prediction (T + 5) | Prediction (T + 10) |
| --------------- | ------------------ | -------------------| ------------------- |
| **MVSplat** | 0.71 | 0.71 | 0.69 |
| **MVSGaussian** | 0.80 | 0.70 | 0.60 |
| **SCube (Ours)**| 0.77 | 0.72 | 0.70 |
| **SCube+ (Ours)**| **0.81** | **0.74** | **0.72** |
- LPIPS (↓ The lower the better)
| | Reconstruction (T) | Prediction (T + 5) | Prediction (T + 10) |
| --------------- | ------------------ | -------------------| ------------------- |
| **MVSplat** | 0.46 | 0.48 | 0.52 |
| **MVSGaussian** | 0.51 | 0.60 | 0.59 |
| **SCube (Ours)**| 0.45 | 0.47 | 0.49 |
| **SCube+ (Ours)**| **0.25** | **0.34** | **0.38** |
For both methods, we use their respective official codebases and retrain the models using the same dataset split as our method on two 8-GPU nodes. We follow the recommended training hyperparameters and strategies. Please note that these methods were originally demonstrated only on indoor datasets with a smaller scale and without sky modeling.
The above quantitative results echo our reasoning as posted before: While both methods show good reconstruction quality, they fail to model the true 3D geometry and the occluded parts of the scene and hence have unsatisfactory generalization capability to large-scale outdoor scenes.
Geometry-wise, the geometry of MVSplat degenerates to multiple planes, and the geometry of MVSGaussian contains many outliers along the rays of the corresponding pixels. To measure geometric accuracy, we compute the L2 Chamfer distance between predicted voxels and ground-truth voxel centers (note that this metric is in `meters`). The results are as follows:
| Quantile | 0.5 (median) | 0.6 | 0.7 | 0.8 | 0.9 |
| ---------------------- | ------------ | -------- | -------- | -------- | -------- |
| **MVSplat** | 43.61 | 45.54 | 47.47 | 49.67 | 53.77 |
| **Ours** | **0.10** | **0.11** | **0.13** | **0.15** | **0.20** |
*(Note that we do not include the results of MVSGaussian due to the large number of outliers in its reconstructions.)*
Due to the restriction of NeurIPS, we have sent an anonymous link to the AC containing **qualitative** results of these baselines. Please ask the AC for the link if necessary. We will include the new comparisons to the final version of our paper and add corresponding citations.
**We sincerely hope the supplementary experiments comparing to MVSGaussian and MVSplat have addressed your concerns, and we really appreciate it if you could reconsider our method and raise the score back.**
***References***:
[a] Chen, Yuedong, et al. "Mvsplat: Efficient 3d gaussian splatting from sparse multi-view images." ECCV 2024.
[b] Liu, Tianqi, et al. "Fast Generalizable Gaussian Splatting Reconstruction from Multi-View Stereo." arXiv preprint arXiv:2405.12218 (2024).
---
Rebuttal Comment 2.1:
Comment: Dear reviewer,
we sincerely hope the supplementary experiments comparing to MVSGaussian and MVSplat have addressed your concerns; and we really appreciate it if you could reconsider our method and raise the score back. | Summary: The paper introduces a pipeline for street scene reconstruction given a sparse set of images as input. The reconstruction process follows a feed-forward manner. The method builds upon the existing XCube work. First, it generates sparse voxels of the scene to represent the geometries, then each voxel feature is decoded into Gaussian primitives for appearance rendering. Experimental results, both quantitative and qualitative, show that SCube can reconstruct the scene with high quality and provides benefits for downstream applications such as autonomous driving simulation.
Strengths: (1) The results are impressive. Given the sparse images with little overlap, the reconstructed Gaussians are of high quality and show good generalization ability in novel views.
(2) Overall, the method sounds reasonable. Technical details are provided, and I believe the results are reproducible.
Weaknesses: (1) I am curious if we really need a diffusion model for this task. Since there are several images as conditions, the uncertainty (or randomness) of the output should be very small. Why not just train a regressor for the sparse voxel reconstruction? Is there any specific motivation for using a diffusion model?
Technical Quality: 3
Clarity: 3
Questions for Authors: (1) Why is the GAN loss used independently for each scene in the inference stage, instead of using it in the training stage?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Currently, the results are presented in the sparse view settings of a small region of the street scene. How can we reconstruct a very large scene using a long sequence of images in a feed-forward manner? This is an interesting topic for future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the constructive feedback and your positive comments on the results and technical contribution. We appreciate your effort in this process. In what follows we will quote your questions and try to resolve them.
> **Necessity of a Diffusion Model.** I am curious if we really need a diffusion model for this task. Since there are several images as conditions, the uncertainty (or randomness) of the output should be very small. Why not just train a regressor for the sparse voxel reconstruction?
We note that the uncertainty of the scene geometry given our input images is still large, and the problem the model tackles is non-trivial and sometimes even ill-posed. To demonstrate this, we compute the percentage of occluded voxels (i.e., those invisible from the input images) w.r.t. all the ground-truth voxels, and the number is around **80%**. Moreover, beyond their generative modeling capability, multi-step denoising diffusion models are empirically verified to have more data-fitting power than a simple regressor [a].
As a comparison specific to our task, we trained a single-step model that is conditioned on the input images and directly regresses the desired sparse voxel grid. To measure geometric accuracy, we use a metric called 'voxel Chamfer distance' that measures the L2 Chamfer distance between predicted voxels and ground-truth voxels, divided by the voxel size. The results on the Waymo Open Dataset are as follows:
| Quantile | 0.5 (median) | 0.6 | 0.7 | 0.8 | 0.9 |
| ---------------------- | ------------ | -------- | -------- | -------- | -------- |
| Simple regressor model | 15.46 | 19.12 | 22.61 | 34.81 | 52.57 |
| Our diffusion model | **0.26** | **0.28** | **0.32** | **0.37** | **0.51** |
As demonstrated in the above table, the diffusion model significantly outperforms a simple regressor.
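For reference, the voxel Chamfer distance can be sketched as below; this brute-force version (with the symmetric-averaging convention, one common choice) is for illustration, whereas a KD-tree-based nearest-neighbor search would be used at scale:

```python
import numpy as np

def voxel_chamfer_distance(pred, gt, voxel_size):
    """Symmetric L2 Chamfer distance between two sets of voxel centers,
    normalized by the voxel size so the result is measured in voxels.

    Brute-force O(N*M) pairing for clarity; pred and gt are (N, 3) and
    (M, 3) arrays of voxel-center coordinates in meters.
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    chamfer = 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
    return chamfer / voxel_size

# A prediction offset by half a voxel yields a distance of 0.5 voxels.
assert np.isclose(voxel_chamfer_distance([[0.2, 0.0, 0.0]],
                                         [[0.0, 0.0, 0.0]], 0.4), 0.5)
```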
> **Sharing postprocessing GANs across datasets.** Why is the GAN loss used independently for each scene in the inference stage, instead of using it in the training stage?
Thank you for the constructive suggestion -- we hereby present a postprocessing module that is jointly trained on the full dataset, without the need for per-scene finetuning. Specifically, we replace the original GAN with a more modern Img2img-Turbo model [b] and show the results in `Fig. B` in the PDF file. Note that while the essential idea of applying neural postprocessing to the rendered images stays the same, the new module removes the need for per-scene training for high-quality rendering. We will adopt this module in our revised version; thank you again for the suggestion!
> **Extending to longer input sequences.** Currently, the results are presented in the sparse view settings of a small region of the street scene. How can we reconstruct a very large scene using a long sequence of images in a feed-forward manner? This is an interesting topic for future work.
We agree that this is an interesting and meaningful direction of future work and we present some preliminary results in `Fig. A` in the rebuttal PDF. We feed multiple frames of input images independently into our model and stitch the inference results from multiple timesteps together, without any finetuning. Results show that the stitched 3D scenes are temporally consistent despite minor artifacts, and one can similarly apply a longer LiDAR simulation session over the reconstruction. We also note that extending to even longer sequences could be implemented by recursively conditioning on past (latent) reconstructions, and we are actively exploring its feasibility.
***Reference:***
[a] Yang et al. Diffusion models: A comprehensive survey of methods and applications. ACM Computing Surveys 2023.
[b] Parmar et al. One-step image translation with text-to-image models. arXiv:2403.12036.
---
Rebuttal 2:
Title: Comment
Comment: Thank you for the answers! They addressed my questions.
---
Rebuttal Comment 2.1:
Comment: Thank you for taking the time to review our rebuttal! | Summary: The paper proposes a new method to reconstruct 3D outdoor scenes from a sparse set of posed images. The key idea is to utilize a new hybrid 3D representation, which assigns 3D Gaussians to each sparse voxel. Given input images, the paper first adapts XCube to condition on sparse view images. Along with the generated fine-level voxels, the paper then designs a feed-forward model to reconstruct the appearance. This proposed pipeline can also be used in text-to-scene generation. Experiments show that the proposed method outperforms existing approaches.
Strengths: Originality:
The paper is trying to solve a challenging problem: efficient large-scale scene reconstruction from sparse posed images. The motivation is to utilize a learned data prior to complement the sparse images. The proposed representation is a straightforward combination of XCube with 3D Gaussians (simply assigning 3D Gaussians to each voxel), which has also been widely applied (e.g., Scaffold-GS). While combining these two representations is acceptable, the concern is that the overall pipeline design seems more like a simple combination of two representations: first generating XCube, then generating 3D Gaussians with geometry & images. However, the problem to be solved is important, and this paper might inspire more researchers to work on this problem.
Quality:
The submission seems technically sound.
Clarity:
The paper is mostly clear and easy to read.
Significance:
Though there are some concerns with the methods and results, the paper is aiming at solving a very good problem.
Weaknesses: - One concern is that the novelties of this paper are not very clear. As stated above (Originality), the paper seems to simply use two stages to combine the two representations in a simple way: one for XCube, and one for 3D Gaussians. It would be better if the author can clearly state the contributions.
- As a two-stage method, it would be better to discuss more about: (1) First stage: how to evaluate whether the reconstructed voxels align with the input images? (2) Second stage: how robust is the model when the input images and the voxels are not aligned?
- L187: Considering the temporal inputs, can the methods give consistent results temporally? Adding results might be better.
- L244-245: In Table 1, SCube seems to have very similar results to PixelSplat for future prediction. Why? An obviously better reconstruction result alone seems not able to show the effectiveness of the proposed method. The appearance reconstruction uses generated geometry (fine-level voxels) as input, and adds some tricks such as sky modeling. It is unknown how they affect the results.
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback, which helps us improve SCube. We are happy that you agree that the problem we solve is challenging and important, and the techniques are sound and well-written. In the following, we will quote your original comments and try to resolve them.
> **Concerns about novelty.** The paper seems to simply use two stages to combine the two representations in a simple way: one for XCube, and one for 3D Gaussians. It would be better if the author can clearly state the contributions.
Our main contribution is a novel framework that brings 3D priors to fast, large-scale scene reconstruction, which no prior work achieves. We humbly believe that our work adds new insights and knowledge to the literature, achieving 'promising results' as echoed by reviewer NKmV.
Although at a high level our method could be decomposed into two parts, we have made non-trivial technical contributions to synergize and tame them for the sparse-image-based 3D reconstruction task. This includes the image-based conditioning techniques to handle occlusions and 2D-3D alignments in the sparse diffusion architecture ($\S$ 3.1), the hybrid scene representation that facilitates both learning and fast rendering ($\S$ 3.2), along with effective training procedures ($\S$ 4.1). We remark that our framework significantly outperforms baseline approaches on the same tasks, enabling reconstruction of scenes $102.4\,\text{m}$ in scale with accurate geometry within seconds. We also demonstrate a couple of useful applications available only with our proposed method ($\S$ 3.3).
Hence, a re-evaluation of our novelty would be greatly appreciated.
> **Alignments of voxels w.r.t. input images.** As a two-stage method, it would be better to discuss more about: (1) First stage: how to evaluate whether the reconstructed voxels align with the input images? (2) Second stage: how robust is the model when the input images and the voxels are not aligned?
To quantitatively evaluate the pixel-voxel alignments (question (1)), we compute an additional metric called 'voxel Chamfer distance' that measures the L2 Chamfer distance between predicted voxels and ground-truth voxels (which are pixel-aligned), divided by the voxel size. This metric reflects the geometric accuracy of our prediction by measuring, on average, how many voxels apart the prediction is from the ground truth. The results on the Waymo Open Dataset are as follows:
| Quantile | 0.5 (median) | 0.6 | 0.7 | 0.8 | 0.9 |
| ---------------------- | ------------ | ---- | ---- | ---- | ---- |
| Voxel Chamfer distance | **0.26** | 0.28 | 0.32 | 0.37 | 0.51 |
The table indicates that on 90% of the test samples, the predicted voxel grid is at most about half a voxel off from the ground truth. We note that during our data curation process, there could be errors in the ground-truth voxels (due to, e.g., COLMAP failures), which account for the outliers in the above metric.
To answer question (2), we visualize the sample with the worst voxel Chamfer distance in `Fig. D` of the rebuttal PDF: we show that the predicted results are decent even though the ground truth is corrupted due to the lack of motion in the ego car. This also indicates that the voxels and the images are rarely misaligned, demonstrating the robustness of our method.
> **Temporal inputs.** Considering the temporal inputs, can the methods give consistent results temporally? Adding results might be better.
Thank you for your suggestion. We apply our full pipeline to a set of temporal inputs comprising consecutive frames ($T$, $T+5$, $T+10$, $T+15$, $T+20$), and show the reconstruction and LiDAR simulation results in `Fig. A` of the rebuttal PDF. Our method produces consistent 3D reconstructions along the driving trajectory even though the frames are processed independently.
> **Future prediction results vs. PixelSplat.** In Table 1, SCube seems to have very similar results to PixelSplat for future prediction. Why? The appearance reconstruction uses generated geometry (fine-level voxels) as input and adds some tricks such as sky modeling. It is unknown how they affect the results.
We kindly refer to Fig. 4 (2nd row) in the paper and the supplementary video (at 00:43) for a 3D visualization of the reconstruction results of PixelSplat. The predicted geometry of PixelSplat degenerates into planar patches, and the future rendering exhibits many black-hole artifacts. In this particular scenario, PSNR is less sensitive to these artifacts and is admittedly not an ideal indicator of quality, as echoed in [a, b], while LPIPS can capture such artifacts and faithfully reflect the results (Ours = 0.47 vs. PixelSplat = 0.60 at $T+5$; lower is better).
The appearance reconstruction module does require fine-level voxels as input; otherwise, it cannot determine the correct positions at which to generate the Gaussians. As our geometry of interest is limited to a square region around the ego vehicle, the VoxSplat representation cannot cover faraway regions such as the sky. Without sky modeling we cannot render the sky regions; we show qualitative results in the supplementary video at, e.g., 00:09.
***Reference:***
[a] Rockwell et al. Pixelsynth: Generating a 3d-consistent experience from a single image. ICCV 2021.
[b] Ren et al. Look outside the room: Synthesizing a consistent long-term 3d scene video from a single image. CVPR 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. Most of my concerns are addressed but I am still concerned about the novelty. I don't have follow-up questions within this rebuttal, and will have more discussion with other reviewers for the final decision.
---
Rebuttal 2:
Comment: Thank you for taking the time to review our rebuttal. We are pleased that most of your concerns have been addressed.
While our approach incorporates various state-of-the-art techniques such as Gaussian splatting and diffusion models, we recognize that many well-justified and published works also leverage one or more of these building blocks to formulate their reconstruction models. Our primary contribution lies in the novel way of synergizing these foundational techniques into a fully integrated pipeline that achieves state-of-the-art results for large-scale outdoor scene reconstruction encompassing true 3D geometric priors, while also introducing the significant technical contributions necessary to make the method effective (as noted in the rebuttal above).
Importantly, our method introduces the first possible way to tackle the challenging task of reconstructing large-scale 3D scenes from low-overlapping outward-facing input images. We are able to reach a much better and more robust performance on this task than the baselines (e.g. PixelSplat, DUSt3R, MVSGaussian, MVSplat, etc.) where a faithful reconstruction of the underlying 3D geometry is missing.
We would greatly appreciate it if SCube's contribution could be reconsidered during the discussion phase. | Rebuttal 1:
Rebuttal: We appreciate the insightful comments provided by the reviewers who all agree that SCube is technically sound and easy to understand, the results are impressive, and the problem solved is challenging. We post responses to reviewers individually in the corresponding section, while referenced figures are jointly included in the PDF file.
Pdf: /pdf/dcc3adfcae80ccb3164094bb98be5573b749700e.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Consistency Purification: Effective and Efficient Diffusion Purification towards Certified Robustness | Accept (poster) | Summary: This paper presents an innovative approach to image purification using diffusion models, known as Consistency Purification. Traditional methods like the Denoising Diffusion Probabilistic Model (DDPM) and the Stochastic Diffusion Model face challenges in balancing efficiency and effectiveness. While DDPM provides efficient purification, it fails to ensure that purified images lie on the data manifold. On the other hand, the Stochastic Diffusion Model successfully places images on the data manifold but is computationally intensive. The proposed Consistency Purification leverages a consistency model distilled from the Probability Flow Ordinary Differential Equation (PF-ODE), achieving on-manifold purification in a single step. This approach is further refined through Consistency Fine-tuning with LPIPS loss, enhancing semantic alignment between purified and original images. Comprehensive experiments demonstrate that this framework achieves state-of-the-art certified robustness and efficiency compared to baseline methods.
Strengths: 1. Introducing the consistency model to the purification domain is an interesting idea because it utilizes the consistency model's ability to map noised images back to their origin, and experimental results show significant improvements in classification efficiency.
2. The writing and logical flow are clear and well-structured.
Weaknesses: 1. There is no comparison with multi-step DDPM methods. Providing these results and the corresponding speedup would make the work more solid.
2. The impact of noise intensity on experimental results is not mentioned. Could you please show this in the rebuttal phase?
3. It is unclear how the model performs if the noise is not Gaussian. Providing results for different noise distributions would enhance the robustness and applicability of the method.
4. Including visualizations of the purified images before and after the purification process would make the results more intuitive and compelling.
Technical Quality: 3
Clarity: 3
Questions for Authors: refer to weakness
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: refer to weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Question A: Comparison with multistep DDPM methods.
We conducted an experiment with multistep DDPM using 25 sampling steps, and the results are shown in the table below. We found that multistep DDPM has lower certified accuracy than onestep DDPM. This aligns with Carlini et al.'s [1] finding that multistep DDPM sampling adds unwanted details to the image, altering its semantic meaning. Our method provides a higher certified radius and is 25 times faster than multistep DDPM.
| Method | 0.0 | 0.25 | 0.50 | 0.75 | 1.00 |
|---------------------------|------|------|------|------|------|
| onestep-DDPM | 87.6 | 73.6 | 55.6 | 39.2 | 39.6 |
| multistep-DDPM | 86.2 | 72.4 | 55.8 | 40.6 | 31.8 |
| Consistency Purification | **90.4** | 77.2 | 59.8 | 42.8 | 33.2 |
| + Consistency Fine-tuning | 90.2 | **79.4** | **62.4** | **43.8** | **35.4** |
> Question B: Impact of noise intensity on experimental results.
In all experiments presented in our paper, we compute the certified radius for each test example at three distinct noise intensities, with $\sigma \in \{0.25, 0.5, 1.00\}$ for CIFAR-10 and $\sigma \in \{0.05, 0.15, 0.25\}$ for ImageNet-64. We then calculate the proportion of test examples whose radius exceeds a specific threshold $\epsilon$. The highest accuracy among these noise intensities is reported as the certified accuracy at $\epsilon$. For more detailed results of the certified accuracy under various noise intensities, we have included the results for each $\sigma$ in the tables below. The results clearly show that our method consistently outperforms other methods across all noise levels.
Certified Accuracy under various $\sigma$ for CIFAR-10
| Method | $\sigma$ | 0.00 | 0.25 | 0.50 | 0.75 | 1.00 |
|---------------------------|------|------|------|------|------|------|
| onestep-DDPM | 0.25 | 87.6 | 73.6 | 55.6 | 37.8 | 0.0 |
| | 0.5 | 73.6 | 61.0 | 49.8 | 39.2 | 29.6 |
| | 1.0 | 49.2 | 40.6 | 33.2 | 26.6 | 21.4 |
|Consistency Purification | 0.25 | 90.4 | 77.2 | 59.8 | 37.8 | 0.0 |
| | 0.5 | 77.2 | 65.0 | 52.6 | 42.8 | 33.2 |
| | 1.0 | 52.0 | 44.2 | 36.4 | 29.4 | 23.8 |
| + Consistency Fine-tuning | 0.25 | **90.2** | **79.4** | **62.4** | **43.2** | 0.0 |
| | 0.5 |**76.4** | **66.4** | **53.8** | **43.8** | **35.4** |
| | 1.0 | **55.4** | **47.2** | **41.0** | **32.0** | **26.0** |
Certified Accuracy under various $\sigma$ for ImageNet-64
| Method | $\sigma$ | 0.00 | 0.05 | 0.15 | 0.25 | 0.35 |
|---------------------------|------|------|------|------|------|------|
| onestep-DDPM | 0.05 | 55.2 | 44.8 | 33.4 | 0.0 | 0.0 |
| | 0.15 | 38.2 | 35.4 | 21.8 | 15.2 | 8.8 |
| | 0.25 | 13.2 | 10.8 | 8.4 | 5.4 | 4.0 |
| Consistency Purification| 0.05 | 62.4 | 54.2 | 35.2 | 0.0 | 0.0 |
| | 0.15 | 41.8 | 37.2 | 25.4 | 19.8 | 13.0 |
| | 0.25 | 16.2 | 13.8 | 13.0 | 6.2 | 5.8 |
| + Consistency Fine-tuning | 0.05 | **68.6** | **58.0** | **37.4** | 0.0 | 0.0 |
| | 0.15 | **51.0** | **43.6** | **32.4** | **23.4** | **17.4** |
| | 0.25 | **18.2** | **15.4** | **13.2** | **8.0** | **7.2** |
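The aggregation described above (at each threshold $\epsilon$, report the best accuracy over noise levels) can be sketched as follows; `certified_accuracy` and its input layout are illustrative names of our own, not the paper's actual code:

```python
import numpy as np

def certified_accuracy(radii_per_sigma, eps):
    """For each noise level sigma, compute the fraction of test examples
    whose certified radius exceeds eps, then report the highest fraction --
    the aggregation rule described above (illustrative sketch)."""
    accs = [float(np.mean(np.asarray(r) > eps))
            for r in radii_per_sigma.values()]
    return max(accs)
```

For example, with per-example radii collected separately for each $\sigma$, calling this at $\epsilon = 0.25$ reproduces one column of the aggregated table.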
> Question C: Model performances under non-Gaussian noise.
In the randomized smoothing scheme, the noise added to the image is chosen to be Gaussian for simplicity of the certified radius calculation, which is the standard practice in previous work [1, 2, 3]. We agree that applying non-Gaussian noise within the randomized smoothing approach would be an interesting direction for future work.
> Question D: Visualizations of the purified images before and after the purification process
Please refer to the visualization of images before and after purification via the following anonymous link: https://anonymous.4open.science/r/Consistency-Purification-9B5F/README.md. We have included purified images for CIFAR-10 at a noise level of 0.5 and for ImageNet-64 at a noise level of 0.25. For both datasets, our method produces images that are more detailed and accurate than those processed by One-Step DDPM [1]. This enhancement in image quality partially explains why our method achieves significantly higher certified accuracy compared to the One-Step DDPM method.
[1] Nicholas Carlini et al. (certified!!) adversarial robustness for free! arXiv preprint arXiv:2206.10550, 2022.
[2] Nie, Weili, et al. Diffusion models for adversarial purification. arXiv preprint arXiv:2205.07460 (2022).
[3] Xiao, Chaowei, et al. Densepure: Understanding diffusion models towards adversarial robustness. arXiv preprint arXiv:2211.00322 (2022).
---
Rebuttal Comment 1.1:
Title: Look forward to your reply
Comment: Dear Reviewer aFWW,
The deadline for the discussion period is approaching. We have provided our rebuttal material and hopefully could address your concerns. Your feedback is highly valuable to us, and we would greatly appreciate it if you could take some time to review our response.
Best Regards,
Authors | Summary: The paper introduces the Consistency Model for one-step purification, ensuring the data manifold of the purified samples while maintaining the efficiency of one-step purification. At the same time, the paper proposes Consistency Finetuning to fine tune the Consistency Model to ensure semantic consistency of the purified samples.
Strengths: 1. The paper achieves a trade-off between efficiency and effectiveness, realizing Consistency Purification in a one-step manner.
2. The authors conduct both theoretical and experimental verification of the proposed method.
3. The paper achieves consistent performance improvement in the experiment.
Weaknesses: 1. The paper's innovation is limited, using existing Consistency Models and LPIPS loss to form the diffpure framework.
2. Some descriptions are not easy to understand. We suggest the authors provide a brief explanation and clarification when 'transport' (Page 3, line 77) first appears in the Introduction. Does Remark 3.4 (line 202) need to be bolded?
3. The paper's validation set on ImageNet-64 is too small, and 100 images are difficult to accurately reflect its validity. Meanwhile, one of the purposes of this paper is to enhance the effectiveness, but the experimental dataset does not align with this original intention.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In Example 3.1, the authors state in lines 177-178 that if distribution consistency is already ensured, this should already be a relatively strong constraint. Why is this only enough to ensure generation quality but not enough for denoising?
2. The paper's statements in lines 188-191 and the proof of Theorem 3.3 demonstrate the importance of small transport. However, ideal denoising should transform $d$ into $x$, and for a defined transport it is natural and reasonable to require it to be small. What the authors further need to show is whether lower transport under Gaussian noise transfers to adversarial attacks, rather than just that lower transport is better.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The paper has addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Concerns about the limited innovation in our paper
We summarize our contributions and novelty in three points:
1. Using Consistency Models (CM) is efficient and effective:
We provide theoretical (section 3, lines 143-174) and empirical support (Table 2) showing that CM enables us to achieve a Pareto-optimal purification framework compared to previous frameworks. First, CM significantly improves purification efficiency through its one-step property, compared to multi-step models like DDPM [1], Score-SDE [2], and EDM [3]. Second, CM produces on-manifold purified samples, enhancing purification effectiveness over other one-step models, such as one-step DDPM [1, 4], which can generate off-manifold samples that have semantic ambiguity (leading to low classifier confidence or misclassification). Third, compared to non-diffusion-based models, we can leverage off-the-shelf CM without requiring additional training, such as adversarial classifiers.
2. Leveraging CM alone is insufficient:
We provide theoretical (section 3, lines 175-207) and empirical results (Table 1, Figure 2) to show this and further consistency-finetuning with LPIPS loss is necessary. The original CM cannot guarantee semantic alignment and can potentially recover a noisy bookstore image as a market image. Our consistency-finetuning aims to mitigate this issue.
3. Why LPIPS instead of L1 & L2?
We provide theoretical support (section 3, lines 208-216) and experimental results (Table 1) to explain why typical L1 and L2 losses do not work and why LPIPS loss is appropriate for consistency fine-tuning. In particular, since the LPIPS loss measures the semantic difference rather than just the distance between samples, it ensures correct classification without requiring the purified sample to be identical to the original image. Conversely, minimizing L1 or L2 loss could potentially disrupt the CM structure and lead to off-manifold samples and semantic ambiguity.
[1] Ho et al. Denoising diffusion probabilistic models. NeurIPS, 2020.
[2] Song et al. Score-based generative modeling through stochastic differential equations. ICLR, 2021.
[3] Karras et al. Elucidating the design space of diffusion-based generative models. NeurIPS, 2022.
[4] Carlini & Tramer et al.(Certified!!) Adversarial Robustness for Free! ICLR, 2023.
> Brief explanation and clarification of the transport definition
Additionally, our transport (Definition 3.2) aligns with the standard definition in optimal transport theory [1]. Remark 3.4 suggests that if the recovered sample distribution is closer to the original and the classifier is more robust, purification performance improves; we can bold it in the final version.
[1] Peyré et al. Computational optimal transport: With applications to data science.
> Small validation set on ImageNet-64
To accurately evaluate the certified radius in our experiments on ImageNet-64, we expanded our test dataset to 500 samples. We also report the variance across five subsets of 100 samples each. The results, shown in the following table, illustrate that consistency purification with consistency fine-tuning continues to achieve the highest certified accuracy. Additionally, the small variance observed across the five subsets further demonstrates the validity of our evaluation.
| Method | 0.0 | 0.05 | 0.15 | 0.25 | 0.35 |
|-------------|------|------|------|------|------|
| onestep-DDPM | 55.2±1.17 | 44.8±0.40 | 33.4±0.75 | 15.2±0.37 | 8.8±0.75 |
| Consistency Purification | 62.4±1.02 | 54.2±1.16 | 35.2±0.75 | 19.8±0.40 | 13.0±0.63 |
| + Consistency Fine-tuning| **68.6**±1.20 | **58.0**±1.10 | **37.4**±1.33 | **23.4**±1.20 | **17.4**±1.02 |
Additionally, while our framework enhances the effectiveness of the one-step purification process, the underlying randomized smoothing method [1] still necessitates extensive sampling times N (e.g. $N=10,000$) for each test example evaluation. This extensive sampling is required to construct a reliable confidence interval for the certified robustness radius statistically. Furthermore, evaluations at various noise levels are required for each line of results presented in our Table 2. Consequently, evaluating the certified accuracy across the entire 50,000 test examples of ImageNet-64 remains a significant time cost.
[1] Cohen et al. Certified adversarial robustness via randomized smoothing. international conference on machine learning. PMLR, 2019.
> Why is ensuring generation not enough for denoising?
Even if the recovered sample is on-manifold, it can have a different semantic meaning from the original. The on-manifold property benefits generation but not denoising, where semantic consistency is crucial. For example, with a two-point data distribution at $\{-1, 1\}$ where $p(x=1)=p(x=-1)=0.5$, and the noisy distribution is similar, if a purification pipeline maps $-1$ to $1$ and $1$ to $-1$, the generation quality is perfect, but denoising fails.
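The two-point example can be checked numerically with a toy sketch of our own:

```python
import numpy as np

# Toy check of the two-point example above: x in {-1, +1} with equal
# probability. A "purifier" that swaps the two points leaves the marginal
# distribution intact (perfect generation) while denoising every single
# sample incorrectly.
rng = np.random.default_rng(0)
x = rng.choice([-1, 1], size=10_000)
purified = -x  # maps -1 -> 1 and 1 -> -1

# The means are exactly negated, so the induced marginal is statistically
# identical to the original symmetric distribution...
assert abs(np.mean(purified) + np.mean(x)) < 1e-12
# ...yet no sample keeps its original semantic label: denoising fails.
assert np.all(purified != x)
```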
> Concerns about the importance of small transport
As shown in lines 188-191 and Theorem 3.3, given an attack $\epsilon$ on a data point $x$, for a purification framework $d$, we obtain the recovered data $\hat{x} = d(x+\epsilon)$. The lower the transport from $\hat{x}$ to $x$, the more likely successful purification is. Figure 2 and Table 1 demonstrate that our purification framework $d$ reduces this transport compared to baselines through fine-tuning. Specifically, Figure 2 shows uniform transport reduction for recovered data across all $\sigma$ with CM, and consistency fine-tuning further reduces transport compared to the original model.
We do not claim the transport between the data and the noisy distribution after the attack is small. If such transport is small, the attack is minimal, and successful purification is easier. Our theory applies to attacks causing large transport, such as Gaussian noise attacks.
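As an illustration, an empirical proxy for this transport can be computed as the average distance between recovered samples and their originals. This is our own sketch, not the paper's exact estimator:

```python
import numpy as np

def empirical_transport(recovered, original):
    """Average L2 distance between recovered samples x_hat = d(x + eps)
    and their originals x -- an empirical proxy for the transport cost
    discussed above (illustrative sketch, not the paper's estimator)."""
    return float(np.mean(np.linalg.norm(recovered - original, axis=-1)))
```

Lower values of this proxy correspond to recovered samples lying closer to the originals, which is the quantity our fine-tuning aims to reduce.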
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. Regarding the model evaluation using a subset of 500 images, it is better to use 512 images for evaluation to align with many diffusion purification settings ( "Diffusion Models for Adversarial Purification" and "Robust Evaluation of Diffusion-Based Adversarial Purification"). I appreciate your detailed explanation of the contribution, and the rating has been adjusted accordingly.
---
Reply to Comment 1.1.1:
Title: Response to your valuable feedback
Comment: Thank you for your response! We have expanded the 500 test images to 512 for evaluation to align with common diffusion purification settings ('Diffusion Models for Adversarial Purification' and 'Robust Evaluation of Diffusion-Based Adversarial Purification'). The results in the following table show that consistency purification with consistency fine-tuning, consistently achieves the highest certified accuracy, demonstrating the effectiveness of our approach.
| Method | 0.0 | 0.05 | 0.15 | 0.25 | 0.35 |
|----|----|----|----|----|----|
| onestep-DDPM | 55.3 | 44.7 | 33.4 | 15.2 | 8.8 |
| Consistency Purification | 62.3 | 54.3 | 35.2 | 19.7 | 13.1 |
| + Consistency Fine-tuning | **68.8** | **58.0** | **37.5** | **23.4** | **17.4** | | Summary: This paper introduces Consistency Purification, a novel framework that integrates consistency models with diffusion purification to enhance the certified robustness of deep neural networks, achieving efficient and effective image purification. The framework is further refined through Consistency Fine-tuning with LPIPS loss to ensure semantic alignment between the original and purified images, demonstrating state-of-the-art performance in both certified robustness and efficiency compared to existing methods.
Strengths: The motivation is agreeable, and the paper is well-structured.
Theoretical explanations on the advantages of Consistency Purification are provided and clearly written.
Weaknesses: It's noteworthy that there have been non-diffusion-based prior studies for enhancing certified robustness. A comparative analysis with those works would add depth to this study.
The experimental setting, including baseline selection and the selection of evaluation data, is not consistent with the baseline method One-Step DDPM. In this paper, only 100 samples from ImageNet are selected for evaluation. It would be beneficial to evaluate the method across multiple benchmarks and with more test samples.
Technical Quality: 3
Clarity: 2
Questions for Authors: See Weakness.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Question A: Comparative analysis with non-diffusion-based prior studies.
To compare our consistency purification method with various non-diffusion-based approaches, we conducted additional experiments to compute the certified accuracy under three non-diffusion-based methods [1,2,3]. Cohen et al. [1] first proposed training a classifier with noisy images to ensure certified robustness. Subsequent works [2,3] build on Cohen et al.'s methodology, attempting to enhance the smoothed classifier by adding prediction consistency regularization, or incorporating per-sample bias.
| Method | 0.0 | 0.25 | 0.50 | 0.75 | 1.00 |
|-------|------|------|------|------|------|
| RS [1] | 74.8 | 59.2 | 42.0 | 31.8 | 22.0 |
| Regularization [2] | 74.4 | 66.0 | 56.2 | 41.4 | 32.8 |
| ACES [3] | 74.6 | 66.4 | 57.0 | 43.6 | 32.8 |
| Consistency Purification | **90.4** | 77.2 | 59.8 | 42.8 | 33.2 |
| + Consistency Finetune | 90.2 | **79.4** | **62.4** | **43.8** | **35.4** |
The experimental results presented in the table show that our method surpasses all previous non-diffusion-based methods in achieving higher certified accuracy, particularly with a significantly high clean performance at $\sigma=0.0$.
Furthermore, we would like to highlight that, in contrast to non-diffusion-based methods which incur significant costs by requiring additional fine-tuning of robust classifiers for each specific noise level, our method can be applied directly to any off-the-shelf classifiers. This significantly broadens its practical applications.
[1] Jeremy Cohen, Elan Rosenfeld, and Zico Kolter. Certified adversarial robustness via randomized smoothing. In international conference on machine learning, pp. 1310-1320. PMLR, 2019.
[2] Jongheon Jeong, and Jinwoo Shin. Consistency regularization for certified robustness of smoothed classifiers. Advances in Neural Information Processing Systems 33 (2020): 10558-10570.
[3] Miklós Z. Horváth, Mark Niklas Müller, Marc Fischer, and Martin Vechev. Robust and Accurate--Compositional Architectures for Randomized Smoothing. arXiv preprint arXiv:2204.00487 (2022).
> Question B: Experimental setting of evaluation data selection.
All experiments in our paper utilize the same selection criteria for evaluation data. For the CIFAR-10 dataset, we selected 500 test examples from the 10,000 CIFAR-10 test set, choosing every 20th example in sequence (e.g., the 1st, 21st, 41st, etc.). Similarly, for the ImageNet-64 dataset, we selected 100 test examples from its 50,000 test examples using a fixed interval of 500. This consistent approach ensures that all evaluation datasets, including the baseline method One-Step DDPM, are identical across all experiments, thereby guaranteeing fair comparisons with our proposed method.
> Question C: Limited evaluation examples for ImageNet-64 dataset.
Here we include an additional experiment on ImageNet-64 using 500 samples, selecting every 100th example in sequence from the 50,000 ImageNet-64 test set. We present the certified accuracy of Consistency Purification in comparison with One-Step DDPM in the table below. The results consistently show that our method, with consistency fine-tuning, achieves the highest certified accuracy across the 500 test set of ImageNet-64, demonstrating the effectiveness of our approach.
| Method | 0.0 | 0.05 | 0.15 | 0.25 | 0.35 |
|-----------------------|------|------|------|------|------|
| onestep-DDPM | 55.2 | 44.8 | 33.4 | 15.2 | 8.8 |
| Consistency Purification | 62.4 | 54.2 | 35.2 | 19.8 | 13.0 |
| + Consistency Fine-tuning | **68.6** | **58.0** | **37.4** | **23.4** | **17.4** |
---
Rebuttal Comment 1.1:
Title: Look forward to your reply
Comment: Dear Reviewer 43Xy,
The deadline for the discussion period is approaching. We have provided our rebuttal material and hopefully could address your concerns. Your feedback is highly valuable to us, and we would greatly appreciate it if you could take some time to review our response.
Best Regards,
Authors | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SILENCE: Protecting privacy in offloaded speech understanding on resource-constrained devices | Accept (poster) | Summary: This paper investigates the perturbation of audio signals to impair the accuracy of automatic speech recognition (ASR) models, thus enhancing privacy protection, while ensuring that spoken language understanding (SLU) models maintain high accuracy for interpreting user intentions. The key insight is that ASR models depend on local utterances, whereas SLU models utilize global features. The authors propose masking certain utterances to break local dependencies and explore the development of a learnable mask generator.
Strengths: 1. The proposed method outperforms existing baselines significantly in terms of computational efficiency and memory usage, which are based on disentangled encoders.
2. The evaluation includes both black-box adversaries, who only have access to the masked signal, and white-box adversaries, who also have access to the encoder.
3. The paper is well-organized, making it an easy read.
Weaknesses: 1. My primary concern is the paper's limitation to passive adversaries. An active adversary might develop models specifically to reconstruct the raw signal from the masked signal. Such a threat is not investigated in this paper.
2. While the disruption of local dependencies does not affect the performance of user intent classification, this approach may not apply to all scenarios. Certain tasks, such as detecting key phrases or specific commands such as “set my clock at 10 AM” rely on local utterances.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Could the authors elaborate on the potential ASR accuracy when facing an active adversary that trains a model to reconstruct raw signals from the masked signals?
2. Is it possible to visualize some masks generated by the trained mask generator to verify that they effectively disrupt local utterances, aligning with the main motivation of the study?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for comments and questions. Here are the answers to them:
> *W1. Concern about limitation to passive adversaries.*
We clarify this and add an evaluation of defense against reconstruction adversaries in **Q1**.
> *W2. This approach may not apply to all scenarios*
We admit that *detecting key phrases or specific commands* is not the focus of this work.
In most of the experiments, we treat these as sensitive words to preserve.
However, we want to emphasize that long-term dependent intent classification remains a major objective of the SLU literature [4] and has a wide range of applications.
For example, it can be used for command recognition, sensor activation monitoring, and more.
These applications are particularly relevant for resource-constrained devices.
We have added this clarification into the updated manuscript.
> *Q1. Could the authors elaborate on the potential ASR accuracy when facing an active adversary that trains a model to reconstruct raw signals from the masked signals?*
Certainly, we conducted further experiments and offer additional clarification to better elaborate on the system's performance when facing active adversaries. We also have Figure 1 in the uploaded pdf to better illustrate the different attack scenarios.
First of all, we want to clarify a possible misunderstanding: the white-box adversary, `Whisper (White-box)`, not only has access to the encoder but also collects a large amount of data from malicious users to learn how to predict the missing transcript.
Thus, it might be considered a form of active predictive attack.
The raw description can be found in lines 251-253 of the manuscript:
```
Whisper (White-box) then utilizes this collected data from malicious users to adapt the pre-trained Whisper.medium.en model to the specific masking pattern.
```
Besides, we have implemented two more active reconstruction adversaries and demonstrated our efficiency in defending against them.
- `U-Net` is a traditional inpainting model based on convolutional U-Net structure, commonly used in literature to reconstruct missing audio signals [1,2].
We utilize the SLURP training set and their masked counterparts to train the inpainting model from scratch to reconstruct the missing audio.
- `CQT-Diff` is a neural diffusion model with an invertible Constant-Q Transform (CQT) to leverage pitch-equivariant symmetries [3], allowing it to effectively reconstruct audio without retraining.
The reconstructed audio is sent to Whisper for automatic recognition. The visualizations of reconstructed waveforms have also been included in the global PDF for your reference.
The updated evaluation results under attacks are summarized in the table below.
| **Metric** | **PlainText** | **Azure** | **Whisper** | **U-Net** | **CQTdiff** | **Whisper (white box)** |
|:-----------------:|:-------------:|:---------:|:-----------:|:---------:|:-----------:|:-----------------------:|
| **WER-SLU (%)** | 14.7 | 81.6 | 78.6 | 82.5 | 74.3 | 67.3 |
| **WER-ASR (%)** | 12.3 | 71.6 | 68.1 | 71.4 | 65.9 | 64.4 |
*Table: Potential attack Word Error Rate (WER) of different attack scenarios. PlainText means AllOffloaded without any privacy protection.*
As shown in the table, `U-Net` can barely reconstruct the masked audio.
Even worse, it introduces incorrect noisy signals, degrading the attack success rate.
`CQT-Diff` inpainting can fill the missing waveforms but cannot successfully reconstruct the content because it is designed to reconstruct background music, such as piano concertos.
SLU audio, which includes human intent and conversation, is difficult to reconstruct.
`Whisper (white-box)` is fine-tuned to predict the missing transcript directly and shows the best attack performance among all attack methods.
However, with our system enabled, it still results in a word error rate above 60\%, more than 50 percentage points higher than without our protection.
Thus, our system can successfully protect the private content in command audio under a wide range of attacks, including passive and active attacks.
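For reference, the WER numbers reported throughout are word-level edit distance normalized by reference length. Below is a minimal stdlib sketch of the metric (illustrative only; the actual evaluation presumably uses a standard ASR toolkit):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[-1][-1] / max(len(ref), 1)
```

A WER of 100% thus means no reference word was recovered, as for the fully local (nothing-uploaded) baseline.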
> *Q2. Is it possible to visualize some masks generated by the trained mask generator to verify that they effectively disrupt local utterances, aligning with the main motivation of the study?*
Sure! We have visualized some masks generated by the trained mask generator.
The figure has been added to the uploaded PDF in the global rebuttal.
It can be seen that the mask generator can, to some extent, match the mask granularity to the granularity of the speech.
Around utterances with richer semantics, the mask becomes finer-grained, with the slices distributed accordingly.
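To make the masking operation itself concrete, here is a hypothetical frame-level sketch (the paper's generator is a learned differential model; the frame representation and zero-filling are our assumptions, not the authors' implementation):

```python
def apply_mask(frames, mask):
    """Obscure audio frames flagged as private before upload.

    frames: list of per-frame sample lists; mask: parallel list where
    1 keeps a frame and 0 zeroes it out (illustrative sketch only).
    """
    assert len(frames) == len(mask), "mask must cover every frame"
    return [list(f) if keep else [0.0] * len(f) for f, keep in zip(frames, mask)]
```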
References are listed in the global rebuttal.
---
Rebuttal Comment 1.1:
Comment: Thank you for the reply and for adding the new experiments. They address some of my concerns and I have increased my score to 5.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer,
We sincerely appreciate your thorough review of our rebuttal and the increased score. We kindly request further guidance on any remaining concerns you may have.
We are more than willing to provide additional clarification to enhance your understanding and satisfaction with our work.
Thank you once again for your recognition and assistance.
Best regards,
Authors
---
Rebuttal 2:
Comment: Dear Reviewer,
Thank you again for reviewing our manuscript. We have tried our best to address your concerns and questions (see our rebuttal in the top-level comment and above), and revised our paper by following suggestions from all reviewers.
Additionally, we have included more references to underscore the importance of our focus on long-dependent intent classification in SLU literature [1-9].
Please kindly let us know if you have any follow-up questions or areas needing further clarification. Your insights are valuable to us, and we stand ready to provide any additional information that could be helpful.
Best regards,
Authors
```
[1]. Rongxiang Wang and Felix Xiaozhu Lin. Turbocharge Deep Speech Understanding on the Edge. to appear at Proc. ACM Int. Conf. Mobile Computing and Networking (MobiCom), 2024.
[2]. Soham Deshmukh, Benjamin Elizalde, Rita Singh, and Huaming Wang. Pengi: An audio language model for audio tasks. Advances in Neural Information Processing Systems (NeurIPS), 2023.
[3]. Jixuan Wang, Martin Radfar, Kai Wei and Clement Chung. End-to-end spoken language understanding using joint CTC loss and self-supervised, pretrained acoustic encoder. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023.
[4]. Bhuvan Agrawal, Markus Muller, Samridhi Choudhary, Martin Radfar, Athanasios Mouchtaris, Ross McGowan, Nathan Susanj, and Siegfried Kunzmann. Tie your embeddings down: Cross-modal latent spaces for end-to-end spoken language understanding. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022.
[5]. Libo Qin, Tianbao Xie, Wanxiang Che, and Ting Liu. A Survey on Spoken Language Understanding: Recent Advances and New Frontiers. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI), 2021.
[6]. Emanuele Bastianelli, Andrea Vanzo, Pawel Swietojanski, and Verena Rieser. SLURP: A Spoken Language Understanding Resource Package. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
[7]. Shinji Watanabe, Takaaki Hori, Suyoun Kim, John R. Hershey, and Tomoki Hayashi. Hybrid ctc/attention architecture for end-to-end speech recognition. IEEE Journal of Selected Topics in Signal Processing, 11(8):1240–1253, 2017.
[8]. Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. Attention-based models for speech recognition. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015.
[9]. Renato De Mori. Spoken language understanding: A survey. In 2007 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU), pages 365–376. IEEE, 2007.
``` | Summary: The paper presents SILENCE, a method designed to address privacy concerns in cloud-based speech services by selectively obscuring short-term dependencies in speech signals. This technique preserves speech understanding functionality while protecting sensitive information. Implemented on the STM32H7 microcontroller, SILENCE significantly enhances speed and memory efficiency compared to existing solutions, effectively protecting privacy against various attack scenarios. Key contributions of the paper include an innovative encoder design based on asymmetric dependencies, the integration of automated masking configuration, and the demonstration of SILENCE's practical feasibility on low-resource devices.
Strengths: - Innovative use of differential mask generators for privacy-preserving speech understanding on constrained devices.
- Thorough experimental validation demonstrating significant performance and efficiency gains.
- Clear explanations of the key concepts and methodology, though some technical details could be more detailed.
Weaknesses: - While the paper evaluates the system under black-box and white-box attacks, it does not thoroughly address the robustness of the masking approach against adaptive adversaries who might use advanced techniques to reverse-engineer the mask patterns.
- Masking short-term dependent frames to protect privacy could lead to losing crucial phonetic information necessary for accurate SLU. There is a lack of detailed analysis on how the granularity of the mask affects both privacy and SLU performance at different levels of speech granularity.
- The method relies heavily on the assumption that SLU tasks are primarily long-term dependent while ASR tasks are short-term dependent. This binary distinction might not hold for all SLU tasks, especially those involving contextually rich and intricate utterances.
- The experiments are primarily conducted on a single dataset. Evaluating the method on a broader range of datasets would better demonstrate its generalizability and robustness.
Technical Quality: 3
Clarity: 2
Questions for Authors: - How does SILENCE perform against adaptive adversaries using advanced reverse-engineering techniques?
- How does mask granularity affect both privacy protection and SLU performance at various levels of speech granularity?
- How does SILENCE handle SLU tasks that require short-term dependencies?
- How does SILENCE perform on additional datasets to ensure its generalizability and robustness across different speech scenarios?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and questions. Here are the answers to the questions:
> *Q1. How does SILENCE perform against adaptive adversaries using advanced reverse-engineering techniques?*
Our system can still preserve content privacy under advanced reconstruction attacks, leaving attackers with recognized word error rates above 64\%.
We have implemented two more active reconstruction adversaries and demonstrated the effectiveness of our defense against them.
- `U-Net` is a traditional inpainting model based on convolutional U-Net structure, commonly used in literature to reconstruct missing audio signals [1,2].
We utilize the SLURP training set and their masked counterparts to train the inpainting model from scratch to reconstruct the missing audio.
- `CQT-Diff` is a neural diffusion model with an invertible Constant-Q Transform (CQT) to leverage pitch-equivariant symmetries [3], allowing it to effectively reconstruct audio without retraining.
The reconstructed audio is sent to Whisper for automatic recognition. The visualizations of reconstructed waveforms have also been included in the global PDF for your reference.
The updated evaluation results under attacks are summarized in the table below.
| | **PlainText** | **Azure** | **Whisper** | **U-Net** | **CQTdiff** | **Whisper (white box)** |
|:-----------------:|:-------------:|:---------:|:-----------:|:---------:|:-----------:|:-----------------------:|
| **WER-SLU (%)** | 14.7 | 81.6 | 78.6 | 82.5 | 74.3 | 67.3 |
| **WER-ASR (%)** | 12.3 | 71.6 | 68.1 | 71.4 | 65.9 | 64.4 |
*Table: Potential attack Word Error Rate (WER) of different attack scenarios. PlainText means AllOffloaded without any privacy protection.*
As shown in the table, `U-Net` can rarely reconstruct the masked audio.
Even worse, it introduces spurious noisy signals that further degrade the attack success rate.
`CQT-Diff` inpainting can fill in the missing waveforms but cannot recover the content, because it is designed to reconstruct background music, such as piano concertos.
SLU audio, which conveys human intent and conversation, is much harder to reconstruct.
> *Q2. How does mask granularity affect both privacy protection and SLU performance at various levels of speech granularity?*
We included two more fine-grained speech understanding tasks: action recognition and combined intent (scenario_action) recognition.
For your reference, there are 18 different scenarios and 46 defined actions, resulting in 828 possible intent combinations.
| | AllOffloaded | VAE | PPSLU | OnDevice | Ours |
|:------------:|:------------:|:----:|:-----:|:--------:|:----:|
| ACC-Scenario (%) | 88.2 | 72.8 | 73.9 | 88.2 | 80.2 |
| ACC-Action (%) | 77.1 | / | / | 77.1 | 76.4 |
| ACC-Intent (%) | 83.3 | / | / | 83.3 | 76.8 |
| WER-SLU (%) | 14.7 | / | / | 100 | 68.6 |
| WER-ASR (%) | 12.3 | 69.3 | 75.3 | 100 | 68.1 |
*Table: Comparison between privacy preservation and SLU performance at different speech granularities. ‘/’ means not supported. OnDevice leaks no words as nothing is uploaded.*
It can be seen that our method can recognize speech intent at different granularities.
For example, we can correctly recognize 76.8\% of the combined intent.
In comparison, disentanglement-based methods need to re-entangle representations for each semantic granularity.
Thus, the classifier trained for scenario classification cannot be applied to other intents, and these methods are not designed to preserve the sensitive information within command audio.
This highlights a significant advantage of our approach: it does not require retraining the model for different intent granularities.
> *Q3. How does SILENCE handle SLU tasks that require short-term dependencies?*
We admit that our work primarily focuses on long-term dependent SLU classification tasks.
In most of our experiments, we treat entities requiring short-term dependencies as sensitive words to preserve.
However, we want to emphasize that long-term dependent intent recognition is a major objective of the SLU literature [4] and has a wide range of applications.
For example, it can be used for command recognition, sensor activation monitoring, and more.
These applications are particularly relevant for resource-constrained devices.
> *Q4. How does SILENCE perform on additional datasets to ensure its generalizability and robustness across different speech scenarios?*
We conducted further experiments on the Fluent Speech Commands (FSC) dataset [5], another widely used dataset for spoken language understanding research.
The FSC dataset includes 97 speakers and 30,043 relevant utterances.
We split the data, using 20\% for testing and the remaining 80\% for training.
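The 80/20 split can be sketched as follows (the shuffling strategy and seed are assumptions, not taken from the paper):

```python
import random

def split_dataset(utterance_ids, test_frac=0.2, seed=0):
    """Shuffle utterance IDs and hold out a test fraction.

    Sketch of the 80/20 split described above; the exact protocol
    (e.g. speaker-level splitting) may differ in the paper.
    """
    ids = list(utterance_ids)
    random.Random(seed).shuffle(ids)
    n_test = int(len(ids) * test_frac)
    return ids[n_test:], ids[:n_test]  # (train, test)
```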
The results are shown below.
| | AllOffloaded | VAE | PPSLU | Local | Random | SILENCE |
|:-------------:|:------------:|:----:|:-----:|:-----:|:------:|:-------:|
| ACC-SLU (%) | 99.7 | 98.3 | 99.2 | 99.7 | 86.4 | 99.1 |
| WER-ASR (%) | 1.2 | 65.5 | 78.5 | 100 | 76.6 | 81.4 |
*Table: Evaluation of privacy preservation and SLU performance on FSC dataset.*
Our system demonstrates accurate intent understanding (more than 99\%, similar to all the baselines) and effectively defends against sensitive word recognition attacks (achieving more than 80\% WER, outperforming all disentanglement-based protections).
Additionally, our method significantly outperforms existing baselines in terms of computational efficiency and memory usage, allowing for a wider range of deployment scenarios.
References are listed in the global rebuttal.
---
Rebuttal 2:
Comment: Dear Reviewer,
Thank you again for reviewing our manuscript. We have tried our best to address your concerns and questions (see our rebuttal in the top-level comment and above), and revised our paper by following suggestions from all reviewers.
Additionally, we have included more references to underscore the importance of our focus on long-dependent intent classification in SLU literature [1-9].
Please kindly let us know if you have any follow-up questions or areas needing further clarification. Your insights are valuable to us, and we stand ready to provide any additional information that could be helpful.
Best regards,
Authors
```
[1]. Rongxiang Wang and Felix Xiaozhu Lin. Turbocharge Deep Speech Understanding on the Edge. to appear at Proc. ACM Int. Conf. Mobile Computing and Networking (MobiCom), 2024.
[2]. Soham Deshmukh, Benjamin Elizalde, Rita Singh, and Huaming Wang. Pengi: An audio language model for audio tasks. Advances in Neural Information Processing Systems (NeurIPS), 2023.
[3]. Jixuan Wang, Martin Radfar, Kai Wei and Clement Chung. End-to-end spoken language understanding using joint CTC loss and self-supervised, pretrained acoustic encoder. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023.
[4]. Bhuvan Agrawal, Markus Muller, Samridhi Choudhary, Martin Radfar, Athanasios Mouchtaris, Ross McGowan, Nathan Susanj, and Siegfried Kunzmann. Tie your embeddings down: Cross-modal latent spaces for end-to-end spoken language understanding. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022.
[5]. Libo Qin, Tianbao Xie, Wanxiang Che, and Ting Liu. A Survey on Spoken Language Understanding: Recent Advances and New Frontiers. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI), 2021.
[6]. Emanuele Bastianelli, Andrea Vanzo, Pawel Swietojanski, and Verena Rieser. SLURP: A Spoken Language Understanding Resource Package. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
[7]. Shinji Watanabe, Takaaki Hori, Suyoun Kim, John R. Hershey, and Tomoki Hayashi. Hybrid ctc/attention architecture for end-to-end speech recognition. IEEE Journal of Selected Topics in Signal Processing, 11(8):1240–1253, 2017.
[8]. Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. Attention-based models for speech recognition. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015.
[9]. Renato De Mori. Spoken language understanding: A survey. In 2007 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU), pages 365–376. IEEE, 2007.
``` | Summary: This paper presents a lightweight speech intent understanding paradigm for wimpy devices, with heavy concern on the privacy. It targets the recently-popular disentanglement-based approaches for speech processing. It reached a decent balance between efficiency and privacy preservation, and make the SLU system work on wimpy devices.
Strengths: 1. The paper addresses real-time problem with its own novelty, which is quite rare for recent papers who mostly prefer large-scale solution with massive and massive data. The novelty here is thus quite high.
2. The paper has a lot of measurements and concerns about hardware-wise perspective, which echoes (1).
Weaknesses: Since this is more of an engineering-oriented work, the reviewer does not have strong problems or weaknesses from a technical point of view.
But the reviewer does have a big problem with the architecture of SILENCE. It seems from Figure 5 that it still relies on cloud technology to maintain part of the framework. However, wimpy devices will sometimes (if not mostly) meet offline conditions. In such cases, the paper does not show an intention of making the system work.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Compared to conventional SLU systems, do you think end-to-end systems preserve the nature of enhanced privacy? Do you think your algorithm will work for both the end-to-end system and the conventional modularized system?
2. The reviewer has a problem with the definition of wimpy devices, since a Raspberry Pi shall not be considered "wimpy". Chips such as the ARM Cortex V7 have much lower memory, and a lot of start-ups are still using them for their offline solutions. The reviewer thinks "resource-constrained conditions" might be a better term. Or, if there is an official or clear definition of "wimpy" devices with detailed stats, the reviewer is happy to keep their head down and learn.
3. The reviewer is interested in the proposed framework combined with conventional methods.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 4
Limitations: The reviewer does not see any limitation or ethical concern about the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments, and we greatly appreciate your high praise regarding the novelty of our work.
We hope that our rebuttal response will further enhance the soundness of our research.
Below are the answers to your concern and questions:
> *W1. The paper does not have an intention of making the system work under offline conditions.*
We completely agree that offline conditions occur periodically for resource-constrained devices.
However, we want to emphasize that our system can be easily integrated into an orchestration of small on-device SLU models and robust cloud models.
This orchestration has been officially adopted by many off-the-shelf products, such as Apple Intelligence in iOS 18.
Our system remains indispensable in such circumstances because small on-device SLU models may not generate satisfactory intent understanding due to their restricted model size.
Even when on-device SLU models produce correct intent understanding, they cannot always operate due to limited device energy.
As a result, online procedures are still the main components of current SLU solutions.
The on-device functionality can be used as an alternative in offline conditions.
With our system, the cloud-based SLU component is both privacy-preserving and efficient.
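The orchestration described above can be caricatured as a simple dispatch rule (the rule and energy numbers are purely illustrative, not part of the paper):

```python
def dispatch(online, battery_mj, local_cost_mj, offload_cost_mj):
    """Toy dispatch between on-device SLU and masked cloud offloading.

    Prefer the cheaper cloud path (with privacy masking) when connected;
    fall back to the small on-device model offline; defer when the battery
    cannot even cover local inference. Illustrative sketch only.
    """
    if online and offload_cost_mj < local_cost_mj:
        return "cloud (masked upload)"
    if battery_mj >= local_cost_mj:
        return "on-device"
    return "defer"
```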
> *Q1.1. Compared to conventional SLU systems, do you think end-to-end systems preserve the nature of enhanced privacy?*
We think the end-to-end system preserves enhanced privacy during both training and inference.
During training, the end-to-end system does not require raw transcripts to improve SLU performance, as illustrated in Figure 5 of the paper.
During inference, skipping the generation of intermediate transcripts can also enhance privacy and directly improve the final SLU performance.
> *Q1.2. Do you think your algorithm will work for both the end-to-end system and the conventional modularized system?*
Yes, our algorithm can correctly detect SLU intent in both the end-to-end system and the conventional modularized system.
The conventional modularized system can recognize intent with nearly 90\% accuracy because it uses the correct intermediate transcripts as extra supervision for the first ASR module.
The added implementation and experiment results are detailed in **Q5**.
> *Q2. The reviewer has a problem with the definition of wimpy devices, since a Raspberry Pi shall not be considered "wimpy". Chips such as the ARM Cortex V7 have much lower memory, and a lot of start-ups are still using them for their offline solutions. The reviewer thinks "resource-constrained conditions" might be a better term. Or, if there is an official or clear definition of "wimpy" devices with detailed stats, the reviewer is happy to keep their head down and learn.*
We agree that the term "resource-constrained conditions" is more explicit to a wider range of readers.
Our initial use of "wimpy" was intended to include the following types of devices:
- Devices that do not have enough power to run the model, such as the STM32H7 microcontroller.
- Devices that have enough power to run the model but can save energy by offloading computations to the cloud, e.g., Raspberry Pi. Reasons to include those devices have been discussed in **W1**.
Thank you again for helping us select a clearer term.
> *Q3. The reviewer is interested in the proposed framework combined with conventional methods.*
We applied our algorithm to conventional modularized SLU models.
The experimental results demonstrate that when both the ASR and NLU modules are fine-tuned as required, the conventional modularized SLU model can recognize intent correctly when fed with masked audio.
The detailed results are summarized in the table below:
| | Plaintext | VAE | PPSLU | NLU only (Ours) | Decoupled SLU (Ours) | E2E SLU (Ours) |
|:-----------------:|:---------:|:-------:|:-------:|:------------------:|:-----------------------:|:-----------------:|
| **SLU-ACC (%)** | 87.2 | 72.5 | 74.5 | 12.6 | 89.1 | 81.1 |
*Table: System performance on conventional modularized SLU. Plaintext equals to AllOffloaded or OnDevice.*
Here, the ASR module of the modularized SLU is Whisper.medium.en, and the NLU module consists of two LSTM layers.
For the column labeled "NLU only," the ASR module uses pre-trained weights downloaded directly from Hugging Face without any fine-tuning.
During training, the real transcript is used as the input and the audio intent as the label for the training loss.
During inference, the masked audio is processed through the pre-trained ASR and fine-tuned NLU model sequentially to obtain the final intents.
However, it cannot achieve accurate intent understanding because the intermediate transcripts are corrupted.
For the column labeled "Decoupled SLU," the ASR module is also fine-tuned to better match the masked audio and real transcript as the first step.
As a result, the final intent recognition is greatly improved, because the system obtains real transcripts from users and preserves the intent information beforehand.
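For concreteness, the modularized pipeline compared here can be sketched as follows (the callables are hypothetical stand-ins for the Whisper ASR module and the LSTM NLU module, not the authors' code; the two settings differ only in whether the first stage is fine-tuned):

```python
def modularized_slu(masked_audio, asr_model, nlu_model):
    """Conventional two-stage SLU: ASR produces a transcript, NLU maps it
    to an intent label.

    In the "NLU only" setting asr_model is the frozen pre-trained ASR, so
    the intermediate transcript of masked audio may be corrupted; in
    "Decoupled SLU" it is first fine-tuned on masked audio.
    """
    transcript = asr_model(masked_audio)
    return nlu_model(transcript)
```

Any callables with these signatures work, e.g. a Whisper wrapper for `asr_model` and an LSTM classifier for `nlu_model`.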
---
Rebuttal 2:
Comment: Dear Reviewer,
Thank you again for reviewing our manuscript. We have tried our best to address your concerns and questions (see our rebuttal in the top-level comment and above), and revised our paper by following suggestions from all reviewers.
Additionally, we have included more references to underscore the importance of our focus on long-dependent intent classification in SLU literature [1-9].
Please kindly let us know if you have any follow-up questions or areas needing further clarification. Your insights are valuable to us, and we stand ready to provide any additional information that could be helpful.
Best regards,
Authors
```
[1]. Rongxiang Wang and Felix Xiaozhu Lin. Turbocharge Deep Speech Understanding on the Edge. to appear at Proc. ACM Int. Conf. Mobile Computing and Networking (MobiCom), 2024.
[2]. Soham Deshmukh, Benjamin Elizalde, Rita Singh, and Huaming Wang. Pengi: An audio language model for audio tasks. Advances in Neural Information Processing Systems (NeurIPS), 2023.
[3]. Jixuan Wang, Martin Radfar, Kai Wei and Clement Chung. End-to-end spoken language understanding using joint CTC loss and self-supervised, pretrained acoustic encoder. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023.
[4]. Bhuvan Agrawal, Markus Muller, Samridhi Choudhary, Martin Radfar, Athanasios Mouchtaris, Ross McGowan, Nathan Susanj, and Siegfried Kunzmann. Tie your embeddings down: Cross-modal latent spaces for end-to-end spoken language understanding. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022.
[5]. Libo Qin, Tianbao Xie, Wanxiang Che, and Ting Liu. A Survey on Spoken Language Understanding: Recent Advances and New Frontiers. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI), 2021.
[6]. Emanuele Bastianelli, Andrea Vanzo, Pawel Swietojanski, and Verena Rieser. SLURP: A Spoken Language Understanding Resource Package. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
[7]. Shinji Watanabe, Takaaki Hori, Suyoun Kim, John R. Hershey, and Tomoki Hayashi. Hybrid ctc/attention architecture for end-to-end speech recognition. IEEE Journal of Selected Topics in Signal Processing, 11(8):1240–1253, 2017.
[8]. Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. Attention-based models for speech recognition. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015.
[9]. Renato De Mori. Spoken language understanding: A survey. In 2007 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU), pages 365–376. IEEE, 2007.
```
---
Rebuttal 3:
Title: Correction to Reference in Response to Q1.2
Comment: Dear Reviewer,
We apologize for the typo in our previous communication regarding the reference for “Added implementation and experiments results of conventional modularized system.” The details are provided in **Q3** instead of **Q5**.
Best regards,
Authors | Summary: This paper proposed a private speech processing system that selectively obscures short-term details to reduce privacy leakage in cloud-based Spoken Language Understanding (SLU). A differential mask generator is learned to automatically mask out portions of audio signals along with online cloud inference with the generated masks. Empirical results show that the proposed system offers comparable speech understanding performance and privacy protection capacity with high memory efficiency.
Strengths: + Tackled a practical and crucial topic of cloud-based SLU privacy protection.
+ Detailed system illustration and rationale interpretation.
+ Comprehensive experiment details.
Weaknesses: Could provide more background info for attack scenarios.
Technical Quality: 3
Clarity: 3
Questions for Authors: What types of devices can be defined as 'wimpy' devices?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and questions. Here are the answers to the points:
> *W1. Could provide more background info for attack scenarios.*
Attacks may involve passive and active adversaries attempting to automatically recognize private entities in uploaded command audio files.
Passive adversaries use off-the-shelf automatic speech recognition (ASR) models to conduct recognition on the masked audio.
Active adversaries may collect data from malicious users to build reverse or predictive models that can either reconstruct the raw audio waveform or directly reconstruct the correct final transcript.
**We have provided an illustration of several attack scenarios in Figure 1 of the global rebuttal PDF.
Please refer to it for a more detailed description.**
> *Q1. What types of devices can be defined as 'wimpy' devices?*
Wimpy devices refer to devices that either lack the compute power to run robust transformer-based spoken language understanding (SLU) models, such as the STM32H7 microcontroller, or IoT devices that can run these models but have limited energy, such as the Raspberry Pi 4.
Offloading computation to the cloud can help these devices save energy.
As suggested by Reviewer BMwq, we will change the term to "resource-constrained conditions" to improve clarity.
---
Rebuttal 2:
Comment: Dear Reviewer,
Thank you again for reviewing our manuscript. We have tried our best to address your concerns and questions (see our rebuttal in the top-level comment and above), and revised our paper by following suggestions from all reviewers.
Additionally, we have included more references to underscore the importance of our focus on long-dependent intent classification in SLU literature [1-9].
Please kindly let us know if you have any follow-up questions or areas needing further clarification. Your insights are valuable to us, and we stand ready to provide any additional information that could be helpful.
Best regards,
Authors
```
[1]. Rongxiang Wang and Felix Xiaozhu Lin. Turbocharge Deep Speech Understanding on the Edge. to appear at Proc. ACM Int. Conf. Mobile Computing and Networking (MobiCom), 2024.
[2]. Soham Deshmukh, Benjamin Elizalde, Rita Singh, and Huaming Wang. Pengi: An audio language model for audio tasks. Advances in Neural Information Processing Systems (NeurIPS), 2023.
[3]. Jixuan Wang, Martin Radfar, Kai Wei and Clement Chung. End-to-end spoken language understanding using joint CTC loss and self-supervised, pretrained acoustic encoder. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023.
[4]. Bhuvan Agrawal, Markus Muller, Samridhi Choudhary, Martin Radfar, Athanasios Mouchtaris, Ross McGowan, Nathan Susanj, and Siegfried Kunzmann. Tie your embeddings down: Cross-modal latent spaces for end-to-end spoken language understanding. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022.
[5]. Libo Qin, Tianbao Xie, Wanxiang Che, and Ting Liu. A Survey on Spoken Language Understanding: Recent Advances and New Frontiers. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI), 2021.
[6]. Emanuele Bastianelli, Andrea Vanzo, Pawel Swietojanski, and Verena Rieser. SLURP: A Spoken Language Understanding Resource Package. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
[7]. Shinji Watanabe, Takaaki Hori, Suyoun Kim, John R. Hershey, and Tomoki Hayashi. Hybrid ctc/attention architecture for end-to-end speech recognition. IEEE Journal of Selected Topics in Signal Processing, 11(8):1240–1253, 2017.
[8]. Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. Attention-based models for speech recognition. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015.
[9]. Renato De Mori. Spoken language understanding: A survey. In 2007 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU), pages 365–376. IEEE, 2007.
``` | Rebuttal 1:
Rebuttal: Dear Reviewers,
**Please see the attached PDF for a one-page summary with an illustrative description of different attack scenarios, visualizations of generated masks and reconstructed waveforms, and additional experiment results against advanced active reconstruction attacks.**
We would like to thank all reviewers for providing constructive feedback that helped us improve the paper. We are encouraged that reviews think our paper:
- “addresses real-time problem with its own novelty, which is quite rare for recent papers who mostly prefer large-scale solution with massive and massive data. The novelty here is thus quite high.” (Reviewer BMwq)
- provides “comprehensive experiment details” (Reviewer rbWb) with “significant performance and efficiency gains” (Reviewer ubkn) and “outperforms existing baselines significantly” (Reviewer YHpd)
- provides “detailed system illustration and rationale interpretation” (Reviewer rbWb), “making it an easy read.” (Reviewer YHpd)
We have been working diligently on improving the paper in several aspects, addressing your concerns and problems. Below, we summarise the changes that we have made in an updated draft.
**1. Experiment results against advanced reconstruction attacks**
- We first illustrated passive and active adversaries in the uploaded PDF, including the advanced reconstruction attacks.
- We implemented two advanced inpainting techniques to reconstruct the missing waveforms. (1) U-Net: a traditional inpainting method based on convolutional U-Net [1, 2]. (2) CQT-Diff: a neural diffusion model preconditioned with an invertible Constant-Q Transform (CQT) [3].
- The reconstructed waveforms are visualised in the attached PDF to see their effects.
- We conducted experiments showing that our system still preserves content privacy under these advanced reconstruction attacks, with recognised word error rates above 64%. Detailed results are summarised in the attached PDF.
**2. Evaluation of the additional dataset**
- We have conducted further experiments on the Fluent Speech Commands (FSC) dataset [5], another widely used dataset for spoken language understanding research.
- Our system can still give accurate intent understanding (more than 99%, a level similar to all the baselines) and defend against the sensitive word recognition attack (more than 80% WER, better than all the disentanglement-based protections).
| | AllOffloaded | VAE | PPSLU | Local | Random | SILENCE |
|:-------------:|:------------:|:----:|:-----:|:-----:|:------:|:-------:|
| **ACC-SLU (\%)** | 99.7 | 98.3 | 99.2 | 99.7 | 86.4 | 99.1 |
| **WER-ASR (\%)** | 1.2 | 65.5 | 78.5 | 100 | 76.6 | 81.4 |
*Table: Evaluation of privacy preservation and SLU performance on FSC dataset.*
**3. Evaluation of the conventional modularised SLU**
- We implemented a conventional modularised SLU system with Whisper.medium.en as the ASR module and two LSTM layers as the NLU module.
- We conducted experiments on both `NLU only` and `Decoupled SLU` settings to demonstrate that our system can still generate accurate intent understanding if we fine-tune the ASR module.
| | Plaintext | VAE | PPSLU | NLU only (Ours) | Decoupled SLU (Ours) | E2E SLU (Ours) |
|:-----------------:|:---------:|:-------:|:-------:|:------------------:|:-----------------------:|:-----------------:|
| **SLU-ACC(%)** | 87.2 | 72.5 | 74.5 | 12.6 | 89.1 | 81.1 |
*Table: System performance on conventional modularised SLU. Plaintext is equivalent to AllOffloaded.*
**4. Evaluation on different speech granularity**
- We selected two additional speech granularities: Action (46 classes) and Intent (828 classes) classification.
- We conducted experiments to show that our system can understand the speech intent on different speech granularities.
| | AllOffloaded | VAE | PPSLU | OnDevice | Ours |
|:------------:|:------------:|:----:|:-----:|:--------:|:----:|
| **ACC-Scenario (\%)** | 88.2 | 72.8 | 73.9 | 88.2 | 80.2 |
| **ACC-Action (%)** | 77.1 | / | / | 77.1 | 76.4 |
| **ACC-Intent (%)** | 83.3 | / | / | 83.3 | 76.8 |
| **WER-SLU (%)** | 14.7 | / | / | 100 | 68.6 |
| **WER-ASR (\%)** | 12.3 | 69.3 | 75.3 | 100 | 68.1 |
*Table: Comparison between privacy-preservation capacity and speech understanding performance at different speech granularities. ‘/’ means not supported. OnDevice leaks no words as nothing is uploaded.*
**5. More helpful visualisation and analysis**
We added visualisation of generated masks in the uploaded pdf to verify that they effectively disrupt certain local utterances.
**6. Clearer statements and more insightful discussion**
- We added a clear definition of wimpy devices.
- We added a discussion about the integration with small on-device SLU models to address the occasional offline conditions.
- We clarified that detecting short-dependency key phrases or specific commands is not the focus of this work, while emphasising the importance of long-dependency intent classification, which is currently the main objective of the SLU literature [4] and has a wide range of application scenarios.
Please see our reviewer-specific feedback for more information.
---
[1]. Masking and inpainting: A two-stage speech enhancement approach for low SNR and non-stationary noise, ICASSP’23
[2]. Deep speech inpainting of time-frequency masks, interspeech’19
[3]. CQTDiff: Solving audio inverse problems with a diffusion model, ICASSP’23
[4]. A Survey on Spoken Language Understanding: Recent Advances and New Frontiers, IJCAI’21.
[5]. Fluent Speech Commands: A dataset for spoken language understanding research, Fluent.ai.
Pdf: /pdf/3cf00aaed6a1631e12afb7d70d6ab439aba5d3c0.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
What Matters in Graph Class Incremental Learning? An Information Preservation Perspective | Accept (poster) | Summary: This paper studies graph class incremental learning (GCIL), which requires the model to classify emerging nodes of new classes while remembering old classes. The paper provides a theoretical analysis of GCIL and finds that preserving old graph information that corresponds to local-global low-frequency and high-frequency components in the spatial domain can calibrate semantic and structural shifts to reduce catastrophic forgetting risk. Then, the paper proposes a method to utilise node representations on old and new models to preserve node features, graphs, and neighbor distances.
Strengths: 1. The paper studies graph class incremental learning (GCIL), addressing the problem of supervised node classification within the context of an expanding graph. The motivation is interesting, and the challenge really exists in the real-world graph tasks.
2. The paper gives a theoretical analysis of GCIL and divides the framework into low and high-frequency modules to preserve old information.
3. The paper evaluates the proposed method on three public datasets and the better results compared with baselines show the effectiveness of the proposed method.
Weaknesses: 1. The motivation for preserving global low-frequency information about the graph is unclear. Why only separate local and global components for the low-frequency part and not for the high-frequency part? More theoretical analysis and studies are needed.
2. Why is the experimental setting for each dataset's tasks configured as described in the paper? Is it following previous research works? It is highly recommended to provide results for different task settings.
3. It is highly recommended to add the implementation details, especially the hyper-parameter for each dataset in the paper.
4. More hyper-parameter experiments about the method are needed. For example, the parameter sensitivity analysis about the \alpha.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please see the above weakness.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The paper presents several limitations: The primary limitation of GSIP lies in its focus on replayed designs and the lack of connection to other methods. Also, the paper does not examine other GCIL settings on the graph, such as the lack of a clear task boundary, which would be an interesting direction to explore in the future.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your positive comments!
>**Q1: Why only separate local and global components for low-frequency and not separate local and global components for high-frequency?**
A1: The rationale for maintaining low-frequency global information (LG) without high-frequency global information (HG) is that the design of HG is redundant, incurs high computational costs, and does not yield performance improvements.
1. **Redundant method design and higher time complexity**.
* We need to clarify that LG is meaningful, as it represents the overall graph information contained in each category dimension. It can be instantiated using graph pooling at little additional cost.
* The general formula for HG preservation is
$$\left\|\Delta{\mathbf{z}^{\widehat{h}}_ i}\right\|_ 2^2 = \left\|\left({z}_ i^{old} - \sum_ {j \in \mathcal{M}} \frac{{z}_ j^{old}}{\sqrt{\left|\mathcal{M}\right|^2}}\right) - \left({z}_ i^{new} - \sum_ {j \in \mathcal{M}} \frac{{z}_ j^{new}}{\sqrt{\left|\mathcal{M}\right|^2}}\right)\right\|_ 2^2.$$ HG implies that every node in the replayed graph must compute its disparity with all other nodes. Since GNNs assume that neighboring nodes have similar representations, penalizing the distance between nodes and their neighbors multiple hops away is redundant and does not contribute to structural preservation. Moreover, including this term imposes an optimization burden and exhibits high time complexity ($O(|\mathcal{M}|^2 \cdot {k})$, where ${k}$ is the dimension of the hidden space).
2. **Lower experimental result.**
* According to the ablation experiment in Table 2, the preservation of LG in ERGNN-GSIP on CoraFull resulted in increases of 0.75\% in AP and 1.02\% in AF. On Reddit, the preservation of LG in ERGNN-GSIP led to increases in AP of 4.58\%, 2.91\%, and 3.82\%, and in AF of 5.12\%, 2.92\%, and 3.85\% across the three task divisions.
* We conducted experiments on CoraFull and found that adding the HG preservation loss does not improve performance.
| |Unequally| |Equally (10)| |Equally (2)| |
|:----|:----|:----|:----|:----|:----|:----|
| |AP|AF|AP|AF|AP|AF|
|GSIP|**67.22±0.44** |-10.91±0.62|**71.15±0.98**|**-11.37±0.74**|**44.79±1.77**|**-44.60±1.67**|
|+HG|66.63±0.49|**-10.07±1.75**|69.96±0.85|-12.85±0.94|24.66±1.47|-68.48±1.35|
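For concreteness, the HG preservation term above can be written as the following NumPy sketch (illustrative only, not the authors' implementation; as displayed, the inner sum $\sum_{j \in \mathcal{M}} {z}_ j / \sqrt{|\mathcal{M}|^2}$ reduces to the mean over the replayed set $\mathcal{M}$):

```python
import numpy as np

def hg_preservation_loss(z_old: np.ndarray, z_new: np.ndarray) -> float:
    """Sketch of the HG term: z_old, z_new are (|M|, k) node representations
    from the old and new models on the replayed nodes."""
    # As written, the inner sum over j in M is the mean over M, so each node's
    # high-frequency global component is its deviation from that mean.
    d_old = z_old - z_old.mean(axis=0)
    d_new = z_new - z_new.mean(axis=0)
    # Sum of squared L2 norms of the per-node differences.
    return float(np.sum((d_old - d_new) ** 2))
```

Note that a uniform shift of all new representations leaves this term unchanged, since only deviations from the mean are penalized.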
>**Q2: Why is the experimental setting for each dataset's tasks configured as described in the paper?**
A2: We adopt standard configuration from prior studies [1][2][3][4], which is the current mainstream setup, and integrate two new task setups. Additional configurations have contributed to a reduction in variance. Subsequently, we conduct tests on two new datasets within the general setup.
1. **Task settings as previous works + New task configuration.** We clarify that existing works are set with 2 classes per task. We have followed the setup of previous works [1][2][3][4], corresponding to our setup of Equally (2). We introduce two new settings: Equally (10) and Unequally.
2. **New task configuration helps to reduce variance.** We believe that the new task configurations bring a certain level of robustness. As shown in Table 5 on Page 16, on Reddit the variances of AP and AF for Equally (2) reach 3.87 and 4.11 for ERGNN-GSIP, while for Unequally they are only 0.70 and 0.68, a reduction of more than 3.
3. **New experimental results.** As shown in Table R1 of PDF file, we have added results on the Cora and Citeseer datasets with an increment of 2 and a task number of 3, validating the excellence of our method.
[1] Xikun Zhang et al. CGLB: Benchmark tasks for continual graph learning. NeurIPS 2022.
[2] Yilun Liu et al. Cat: Balanced continual graph learning with graph condensation. ICDM 2023.
[3] Junwei Su et al. Towards robust graph incremental learning on evolving graphs. ICML 2023.
[4] Zeyang Zhang et al. Disentangled Continual Graph Neural Architecture Search with Invariant Modular Supernet. ICML 2024.
>**Q3: It is highly recommended to add the implementation details, especially the hyper-parameter for each dataset in the paper.**
A3: **We summarize experimental setup and hyper-parameters used for each dataset.**
1. **Experimental setup.** Each task undergoes a training regimen of 200 epochs. For optimization, we employ the Adam algorithm with weight decay, setting the learning rate to a value of 0.005. The model's architecture is anchored by a two-layer Graph Convolutional Network (GCN) featuring a hidden dimension of 256. The datasets are partitioned into train-validation-test splits with ratios of 60\%, 20\%, and 20\%, respectively.
2. **Hyper-parameter.** The settings of three scaling factors $\alpha_{gip}$ (the weights of $\mathcal{L}_{gip}$), $\beta$, and $\gamma$ on each dataset are shown in Table R2 of PDF file, where U, E(10), and E(2) are three task partitioning modes, the first/middle/last three rows are ERGNN, SSM, and CaT.
>**Q4: More hyper-parameter experiments about the method are needed. For example, the parameter sensitivity analysis about the $\alpha$.**
A4: As shown in Figure R3 of PDF file, we conduct the analysis for $\alpha$ (has changed to $\alpha_{replay}$) with ERGNN, SSM, and CaT on three datasets.
$\alpha_{replay,1}$ is set to $\alpha_{replay,o} \times 0.1$ for the three methods; the hyper-parameters we used are all $\alpha_{replay,1} \times 10$ (i.e., $\alpha_{replay,o}$, the dark blue bars in Figure R3). $\alpha_{replay}$ is the hyper-parameter designed in the baseline, and $\alpha_{replay,o}$ is its original setting.
Although this setting may not be optimal for performance, we still followed the baseline's setting for a **fair comparison**. In ERGNN, the greater the loss weight, the better the performance, possibly because ERGNN focuses too much on these independent replay nodes. In SSM and CaT, except when $\alpha_{replay,1}$ is used, performance hardly changes with an increase in the loss weight, indicating that $\alpha_{replay}$ is not sensitive. | Summary: This paper studies the graph class incremental learning problem, and specially focuses on theoretically investigating what matters in preserving the information from the old classes.
The authors theoretically demonstrate that maintaining the graph information can preserve information of the old model, such that the node semantic and graph structure shift can be overcome. Based on this finding, the authors split the graph information into low- and high frequency parts, and designed alignment based techniques to preserve these two types of information.
The proposed GSIP can be integrated with different baseline methods, and the empirical results on three datasets demonstrated the effectiveness of GSIP.
Strengths: 1. The proposed work includes both theoretical foundations and empirical improvements.
2. The semantic shift and structural shift are quantified and demonstrated on CoraFull dataset, which makes the mechanism of the forgetting more tangible.
Weaknesses: 1. The writing requires improvements. The abstract and introduction do not provide a clear overview of the work, since there are multiple unclear terms like graph information and local-global parts. I would recommend the authors to revise this part. At least these terms should be briefly introduced in Introduction before being used.
2. The notations are not consistent. In 4.1, the bold Z-old and Z-new seem to be same as the Z-old and Z-new above in Section 3, but are in different fonts.
Technical Quality: 3
Clarity: 1
Questions for Authors: 1. What is the rationale behind the definition of the structural shift score? Does this implies the difference between each node and its neighbors?
2. The forgetting is demonstrated through the node semantic shift and the structural shift, why is the following analysis conducted from the low- and high-frequency perspectives?
Confidence: 4
Soundness: 3
Presentation: 1
Contribution: 3
Limitations: The authors have discussed limitations regarding the method design, positioning of the work against other baselines, and empirical settings.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your positive comments!
>**Q1: The writing requires improvements. The abstract and introduction do not provide a clear overview of the work, since there are multiple unclear terms like graph information and local-global parts. I would recommend the authors to revise this part. At least these terms should be briefly introduced in the Introduction before being used.**
A1: Thanks for your comments. We will provide a brief introduction to some terms before using them, such as defining "graph information" as "Information in the graph data containing node semantic information and graph topological information", and changing "local-global parts" to "local-global information".
>**Q2: The notations are not consistent. In 4.1, the bold Z-old and Z-new seem to be the same as the Z-old and Z-new above in Section 3 but are in different fonts.**
A2: Thank you for the comments. We will modify the font format of bold Z-old and Z-new in Section 4.1 for easier distinction. They have different meanings, the bold Z-old and Z-new in Section 4.1 represent old graph information and new graph information, and Z-old and Z-new in Section 3 represent the node representations of the old model and the new models.
>**Q3: What is the rationale behind the definition of the structural shift score? Does this imply the difference between each node and its neighbors?**
A3: We utilize the structural shift score to quantitatively measure how much the topological structure learned by the new model has drifted from that learned by the old model. We emphasize that the structural shift score captures the gap between the node-neighbor differences under the new and old models, rather than merely measuring the differences between nodes and their neighbors themselves.
**The rationale behind the definition of structural shift score comes from two aspects:**
1. **Structural shift score is an effective way to quantitatively measure catastrophic forgetting from topological structure when learning new tasks.** If the new model can effectively mimic the topological structure of the old model, then forgetting can be mitigated. Firstly, we utilize node representations from the old and new models to infer the topological structure; similar features suggest potential edges connecting nodes. Subsequently, we employ the Anonymous Walk Embedding (AWE) [1], which reflects topological information, to obtain representations of the inferred old and new graphs. Finally, we measure the difference between the graph structure representations using cosine similarity as the structural shift score; the greater the difference, the higher the score. We do not target the real topological structure directly because it may contain noise [2].
2. **The structure shift score is used as a metric to evaluate structural shift.** As shown in Figure 2 on Page 2, structural shift does indeed exist during the incremental process, and the structural shift becomes increasingly larger as new tasks are learned. As shown in Figure 5(c) on Page 8, the structural shift is reduced to almost near 0, indicating that our method has well-calibrated the structural shift.
[1] Sergey Ivanov et al. Anonymous Walk Embeddings. ICML 2018.
[2] Seungyoon Choi et al. DSLR: diversity enhancement and structure learning for rehearsal-based graph continual learning. WWW 2024.
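As a simplified illustration of the idea above (a hedged sketch that replaces the AWE graph embedding with a direct comparison of node-neighbor similarities; the function name is hypothetical and this is not the paper's metric):

```python
import numpy as np

def structural_shift_proxy(z_old: np.ndarray, z_new: np.ndarray,
                           adj: np.ndarray) -> float:
    """Proxy for the structural shift score: compare the node-neighbour cosine
    similarities induced by the old vs. new representations.
    z_old, z_new: (n, k) node representations; adj: (n, n) 0/1 adjacency."""
    def edge_similarities(z):
        zn = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine-normalise rows
        return (zn @ zn.T)[adj > 0]  # similarities only along existing edges
    s_old = edge_similarities(z_old)
    s_new = edge_similarities(z_new)
    # A larger mean gap indicates a larger shift of the inferred structure.
    return float(np.mean(np.abs(s_old - s_new)))
```

A score of 0 means the new model reproduces the old model's node-neighbor similarity pattern exactly, matching the intuition that the score measures the gap between the two models' induced structures rather than the similarities themselves.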
>**Q4: The forgetting is demonstrated through the node semantic shift and the structural shift, why is the following analysis conducted from the low- and high-frequency perspectives?**
A4: Graph information encompasses both feature information and topological information. We utilize the analysis of graph information preservation and the decomposition of graph information to map the preservation of graph information to the spatial domain and instantiate it. Low-frequency information preservation mitigates feature shift, while high-frequency information preservation alleviates topological shift, which has been validated through experimental verification.
1. **Low-frequency information corresponds to feature information.** Low-frequency information represents the sum of a node and its neighbors, which mirrors the message passing and aggregation process in graph neural networks that produces higher-order node features. Preserving low-frequency information is therefore equivalent to aligning the higher-order features formed by a general GNN, and thus effectively mitigates node semantic shift.
2. **High-frequency information corresponds to topological information.** High-frequency information represents the differences between nodes and their neighbors, maintaining the graph structure information through the support of the old model. The distance between a node and its neighbor serves as a measure of whether an edge exists: a smaller distance indicates a high probability that an edge is present, and vice versa.
The old model implicitly encodes the graph information of the old data because it has seen the old graph during training. We utilize the similarity between nodes and their neighboring nodes in the old model's output and then optimize the new model to imitate the old model's structural similarity distribution. High-frequency information preservation thus addresses the challenge of topological shift.
3. **Experimental verification.** As can be seen from Figure 5(c) on Page 8, the semantic shift score and the structure shift score are almost close to 0, thus semantic and topological shifts are well calibrated through the preservation of low-frequency and high-frequency information. | Summary: This paper proposes an innovative framework named graph spatial information preservation (GSIP), which alleviates catastrophic forgetting in graph class incremental learning (GCIL), by preserving low-frequency local-global information and high-frequency information in both the feature space and the topological space.
Strengths: Strength 1: This paper has good originality to identify a unique challenge in GCIL task, which is the lack of theoretical understanding of information preservation.
Strength 2: This paper provides detailed mathematical derivations to explain how catastrophic forgetting can be alleviated by preserving graph information in the spatial domain.
Strength 3: This paper conducts comprehensive experiments, along with ablation studies, hyper-parameter tuning analyses, and case studies, to demonstrate the effectiveness of the proposed framework.
Weaknesses: Weakness 1: The generalizability regarding Figure 1 could be further improved.
Weakness 2: The writing in terms of clarity and conciseness could be further improved.
Weakness 3: It is a bit confusing to take another average (MSE) immediately after an average (mean pooling). The reasoning for Equation 14 could be better clarified with more details.
Technical Quality: 3
Clarity: 2
Questions for Authors: Question 1: From line 50 to line 52, how to quantitatively understand “a larger distortion”? According to Figure 1, does “a larger distortion” indicate that the black dotted ellipse in Figure 1 (a) has a longer major axis than that in Figure 1 (c)? If so, since the five nodes are randomly selected and connected, then what about other red nodes having the same class but not included within the black dotted ellipse?
Question 2: Do the parameter isolation methods, the replay methods, and the regulation methods mentioned in the Introduction section (from line 28 to line 37), have the same meaning as those mentioned in the Related Work section (from line 83 to line 89)?
Question 3: From line 216 to line 217, based on Equation 14, since the mean pooling has been applied, what is the motivation and reasoning to introduce the mean squared error (MSE) loss to evaluate global representation gaps?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have not adequately addressed the limitations and the potential negative societal impact of their work.
Suggestion for improvement: Data privacy should be discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable suggestions!
>**Q1: From line 50 to line 52, how to quantitatively understand “a larger distortion”? According to Figure 1, does “a larger distortion” indicate that the black dotted ellipse in Figure 1 (a) has a longer major axis than that in Figure 1 (c)? If so, since the five nodes are randomly selected and connected, then what about other red nodes having the same class but not included within the black dotted ellipse?**
A1: We need to clarify that larger distortion does not mean that the major axis is longer. We present a deeper understanding of larger distortion, generalizability regarding Figure 1, and quantitative results.
1. **The understanding of larger distortion.** Distortion indicates that the feature distribution on the new model deviates from that of the old model. The two categories can be well separated in the target's feature distribution, but not in the baseline model's, which leads to incorrect predictions and catastrophic forgetting.
2. **Generalizability regarding Figure 1.** We have included a new figure to further illustrate the generalizability, it can be seen from Figure R1 of PDF file that the distribution of nodes predicted incorrectly (grey nodes in the figure) also includes some nodes outside the black dotted ellipse. We have supplemented another black dotted ellipse to mark incorrectly predicted nodes to increase clarity.
3. **Quantitative analysis.** We use the structural shift score defined in Equation (4) on Page 4 to quantitatively understand the degree of distortion. The overall structural shift score is 0.1359, the structural shift score for nodes predicted correctly is 0.1431, and the structural shift score for nodes predicted incorrectly is 0.1646.
>**Q2: Do the parameter isolation methods, the replay methods, and the regulation methods mentioned in the Introduction section (from line 28 to line 37), have the same meaning as those mentioned in the Related Work section (from line 83 to line 89)?**
A2: **Although both are classifications of existing methods and use the same classification names, there are still some differences in terms of the perspective of review and the level of detail.** In the Introduction section, we summarize existing methods from the perspective of information preservation as maintaining the previous model or graph data. We provide a detailed introduction to the form of information preservation for each type of method. The Related Work section introduces the definition and relevant papers for each type of method according to the existing classification.
>**Q3: From line 216 to line 217, based on Equation 14, since the mean pooling has been applied, what is the motivation and reasoning to introduce the mean squared error (MSE) loss to evaluate global representation gaps?**
A3: Our motivation for using another MSE loss stems from the instantiation of low-frequency global information preservation. The MSE loss is employed to measure the differences in the global features obtained after pooling. It is more stable in practice, and our ablation studies demonstrate the effectiveness of this term.
1. **We have indeed stated that Equation (14) is an instantiation of the low-frequency global information preservation in Equation (11).** As we stated in Lines 184-187 on Page 5, the conclusion drawn from Equation (11) is that to maintain low-frequency global information, it is necessary to reduce the difference in representation of the full replay graph formed by nodes and their multi-order neighbors between new and old models. The two terms of Equation (11) represent the representations of the replay graph on the model, and we instantiate each term through mean-pooling. Additionally, the full graph representation is also meaningful, as it represents the overall graph information contained in each category dimension.
2. **The reasons for using MSE loss to measure the distribution difference between the representations of old replay graph and new replay graph can be encapsulated in three aspects:**
* The MSE loss is more stable compared to other distribution difference measure functions [1].
* The use of MSE loss in Equation (14) does not duplicate that of MSE loss in Equation (13). Equation (13) is an instantiation of the low-frequency local information formed by Equation (10). Both work together to maintain low-frequency graph information.
* From the ablation study in Table 2, after applying low-frequency local information preservation (Equation 13) on the Reddit dataset, ERGNN-GSIP achieved AP and AF of 84.63\% and -12.09\%. Upon incorporating low-frequency global information preservation (Equation 14), the AP and AF increased significantly to 89.21\% and -6.97\%, representing an improvement of up to 4\%. This confirms the effectiveness of utilizing another MSE loss.
[1] Chaitanya K Joshi et al. On representation knowledge distillation for graph neural networks. TNNLS 2022.
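The pooling-then-MSE instantiation described above can be sketched as follows (illustrative only, assuming mean pooling over the replayed nodes; not the authors' exact code):

```python
import numpy as np

def lg_preservation_loss(z_old: np.ndarray, z_new: np.ndarray) -> float:
    """Sketch of Equation (14): mean-pool node representations into global
    graph representations, then measure their gap with an MSE loss."""
    g_old = z_old.mean(axis=0)  # (k,) global representation of the old model
    g_new = z_new.mean(axis=0)  # (k,) global representation of the new model
    return float(np.mean((g_old - g_new) ** 2))
```

The two averages play different roles: the first is a pooling over nodes that produces one global vector per model, while the second is the MSE averaging over hidden dimensions, so applying an MSE after mean pooling is not redundant.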
---
Rebuttal Comment 1.1:
Title: Acknowledgment of rebuttal
Comment: Thank you for answering my questions regarding the distortion and the motivation of using mean squared error loss. I acknowledge that I have read the rebuttal and have no further questions.
---
Reply to Comment 1.1.1:
Title: Thanks for Response
Comment: We greatly appreciate your comments and recognition. We also look forward to receiving any further suggestions. | Summary: The paper focuses on the challenge of graph class incremental learning (GCIL), where a model must classify new nodes of emerging classes while retaining knowledge of previously learned classes. The primary issue of GCIL is identified as catastrophic forgetting, where new learning overwrites old knowledge. To address this, the author introduce the concept of information preservation, suggesting that preserving graph information can help mitigate semantic and structural shifts that cause forgetting. The paper proposes the Graph Spatial Information Preservation (GSIP) framework, which preserves both low-frequency (local-global) and high-frequency information in the graph's spatial domain. GSIP aligns old and new node representations, ensuring old graph information is retained. Experimental results show that GSIP significantly reduces forgetting by up to 10% on large datasets compared to existing methods.The framework is shown to be effective across various benchmark datasets.
Strengths: - The proposed method GSIP preserves both low-frequency and high frequency information of a graph, which can alleivate the catastrophic forgetting issue of GCIL.
- The proposed method is proved to be effective across various benchmark datasets.
Weaknesses: - The design of the method, which only capture low and high frequency information is not comprehensive. There are more complicated signals to capture, e.g., many spectral GNNs are designed for that.
- The method is only evaluated on 3 datasets, corafull, arxiv and reddit. It would be great to be evaluated on more datasets.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Why the method only consider the low and high frequency information in the graph’s spatial domain? Except for these two frequency, there are still other frequency information worth to be captured, e.g., medium, and more to capture complicated signals.
- Why for the loss function (Equation 19), there is only one scaling factor \alpha for the second loss term?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive comments!
>**Q1: The design of the method, which only capture low and high frequency information is not comprehensive. Why does the method only consider the low-/high- frequency information in the graph’s spatial domain?**
A1: Considering the preservation of low-/high- frequency information is essential for addressing the challenge of catastrophic forgetting. We conduct analysis and experiments and conclude that high-frequency information preservation sufficiently calibrates structural shift.
1. **The preservation of low-/high- frequency information has effectively calibrated feature and structural shifts, yielding satisfactory results.**
* Figure 5(c) on Page 8 shows that the semantic and topological shifts are well-calibrated with the final semantic shift score and structural shift score almost reduced to 0.
* Table 1 on Page 8 shows that catastrophic forgetting is greatly alleviated after using GSIP, the forgetting rate on ERGNN-GSIP is reduced by up to 57.94\%.
2. **The analysis and experiments of mid-frequency information preservation.**
* **Method analysis.**
We conduct a brief analysis of mid-frequency information preservation. According to the definition of mid-pass filtering graph convolutional networks [1][2], the mid-frequency convolution can be expressed as
$\mathcal{F}^m=(I_n-\widetilde{D}^{-\frac{1}{2}} \widetilde{A} \widetilde{D}^{-\frac{1}{2}})(I_n+\widetilde{D}^{-\frac{1}{2}} \widetilde{A} \widetilde{D}^{-\frac{1}{2}})$. Following a similar analysis, mid-frequency information preservation is defined as
$$
\left\| \Delta\mathbf{z}^{m}_i\right\|_2^2 = \left\|\left(z_i^{old}-\sum_{j \in \mathcal{N}^2_i} \frac{z_j^{old}}{\sqrt{\left|\mathcal{N}_i\right|\left|\mathcal{N}_j\right|}}\right)- \left(z_i^{new}-\sum_{j \in \mathcal{N}^2_i} \frac{z_j^{new}}{\sqrt{\left|\mathcal{N}_i\right|\left|\mathcal{N}_j\right|}}\right) \right\|_2^2,
$$
where $\mathcal{N}^2_i$ is the set of second-order neighbors of node $i$. The distinction between mid- and high-frequency information preservation is that the mid-frequency term computes differences between a node and its second-order neighbors.
* **Experimental result.**
We have added mid-frequency information preservation **(M)** to the method. It yields a slight performance improvement in some cases, but does not lead to consistent or substantial gains. The possible reason is that preserving first-order neighbor information is already sufficient to calibrate the structural shift effectively. In the future, we will investigate comprehensive analyses and instantiations for preserving more complicated signals.
| | |CoraFull| |Arxiv| |Reddit| |
|:----|:----|:----|:----|:----|:----|:----|:----|
| | |AP|AF|AP|AF|AP|AF|
|Unequally|GSIP| **55.32±0.75**| **-2.50±1.13**|63.86±0.85|0.08±0.76|**79.31±0.50**|0.70±0.25|
| |+M|55.28±0.65|-2.61±1.04| **63.89±0.86**| **0.20±0.68**|79.29±0.67| **0.75±0.52**|
|Equally (10)|GSIP|63.36±1.13| -7.27±0.82| **61.34±0.77**| **-6.34±0.70**| **64.16±0.37**| **-8.87±0.58**|
| |+M|**63.48±1.16**|**-7.21±1.06**| 61.28±0.71|-6.50±0.79| 64.08±0.12| -9.41±0.71|
|Equally (2)|GSIP|**90.74±0.44**| **-3.97±0.40**| **87.41±1.60**|0.13±0.91| **96.25±0.37**| **-0.65±0.64**|
| |+M|90.65±0.29|-4.01±0.40| 87.34±1.69|**0.18±0.92**|96.11±0.33|-0.82±0.44|
[1] Jincheng Huang et al. Robust Mid-Pass Filtering Graph Convolutional Networks. WWW 2023.
[2] Haitong Luo et al. Spectral-Based Graph Neural Networks for Complementary Item Recommendation. AAAI 2024.
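For concreteness, the mid-frequency preservation term above can be sketched in a few lines of NumPy. This is only an illustrative sketch, not our implementation: the function name, the dense-adjacency representation, and the exactly-two-hop neighbor computation are simplifying assumptions, and every node is assumed to have at least one neighbor.

```python
import numpy as np

def mid_freq_shift(z_old, z_new, adj):
    """z_old, z_new: (n, d) node representations before/after the update;
    adj: (n, n) binary adjacency matrix. Returns the per-node squared
    mid-frequency shift ||Delta z_i^m||_2^2 as defined above."""
    deg = adj.sum(axis=1)                        # |N_i|, assumed > 0
    adj2 = ((adj @ adj) > 0).astype(float)       # nodes reachable in 2 hops
    np.fill_diagonal(adj2, 0.0)                  # exclude the node itself
    # normalization 1 / sqrt(|N_i| |N_j|) for each second-order pair (i, j)
    norm = adj2 / np.sqrt(np.outer(deg, deg))
    diff_old = z_old - norm @ z_old              # z_i - normalized 2-hop sum
    diff_new = z_new - norm @ z_new
    return np.sum((diff_old - diff_new) ** 2, axis=1)
```

By construction the term is zero when the representations do not change, so minimizing it encourages the new model to keep each node's contrast with its second-order neighborhood.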
>**Q2: The method is only evaluated on 3 datasets, Corafull, Arxiv, and Reddit. It would be great to be evaluated on more datasets.**
A2: **We have supplemented results from the newly added Cora and Citeseer datasets.** As shown in Table R1 of the rebuttal PDF, each dataset has 3 tasks with two categories per task. GSIP achieves significant performance improvements, validating its effectiveness. For ERGNN-GSIP, the AP and AF increase by 5.81\% and 9.14\% on Cora, and by 13.64\% and 21.74\% on Citeseer. The AF of SSM-GSIP improves by more than 5\% on both datasets. CaT-GSIP achieves the highest performance in most cases, with the AP approaching the value of Joint.
As for implementation details, we use the same experimental setup as for the three existing datasets; the experiments involve three hyper-parameters $\alpha_{gip}$, $\beta$, and $\gamma$. On Cora, for ERGNN, SSM, and CaT, the hyper-parameters are [5e-3, 2e1, 2], [1e-1, 1, 5e-2], and [1e-2, 1e3, 1e2], respectively. On Citeseer, for ERGNN, SSM, and CaT, the hyper-parameters are [5e-2, 5e3, 1e1], [1e-3, 1e2, 5], and [1e-3, 1, 1e-3], respectively. The budget $\|\mathcal{M}\|$ is 10\% of the budget used for the existing datasets.
>**Q3: Why for the loss function (Equation 19), there is only one scaling factor $\alpha$ for the second loss term?**
A3: **We have added a new hyper-parameter $\alpha_{gip}$ to adjust the weight of $\mathcal{L}_{gip}$**, and Equation (19) is modified to $\mathcal{L}=\mathcal{L}_{nc}+\alpha_{replay}\mathcal{L}_{replay}+\alpha_{gip}\mathcal{L}_{gip}$.
**We analyze the impact of $\alpha_{gip}$ on the AP and AF of ERGNN, SSM, and CaT across the three datasets, with increments of 2, in Figure R2 of the rebuttal PDF.** For ERGNN/SSM/CaT, $\alpha_{gip,1}$ is set to [1, 1, 0.1], [0.01, 0.01, 0.01], and [0.1, 0.01, 0.01] for the three datasets. Performance varies little with $\alpha_{gip}$ for SSM-GSIP and CaT-GSIP, whereas different $\alpha_{gip}$ values have a greater impact on ERGNN-GSIP. A possible reason is that ERGNN selects representative nodes for replay, which may cause class imbalance and discard structure. For ERGNN, SSM, and CaT, the optimal hyper-parameters $\alpha_{gip}$ on the three datasets are [50, 10, 1], [0.1, 0.1, 0.1], and [1, 0.5, 0.5].
---
Rebuttal Comment 1.1:
Title: increase rating from 5 to 6
Comment: Thanks for answering my concerns. I have increased the rating of the paper from 5 to 6 accordingly.
---
Reply to Comment 1.1.1:
Title: Thanks for Response
Comment: Thank you for increasing your rating and your support for the paper. Please let us know if you have any additional questions or concerns. | Rebuttal 1:
Rebuttal: We extend our gratitude for the valuable feedback and insightful suggestions provided by all the reviewers. We have diligently addressed the questions and suggestions raised during the official review process and have provided a comprehensive response to each reviewer in the corresponding rebuttals.
We are delighted to be recognized for our efforts in this research. We would like to thank Reviewer Za5b for acknowledging the motivation and effectiveness of our model in avoiding catastrophic forgetting on graph class incremental learning. We also extend our appreciation to Reviewer 9jHn for their strong recognition of our novelty, theoretical contributions, and comprehensive experiments. Furthermore, we appreciate Reviewer BwJ9's acknowledgment of our theoretical foundations, empirical improvements, and quantitative analysis. Additionally, we are grateful to Reviewer T3gk for their strong endorsement of the motivation, theoretical contributions, and effectiveness of our work.
We have conducted analysis and experiments on mid-frequency information preservation, validating our results on new datasets and analyzing the weights of graph information preservation loss.
We have advanced an understanding of larger distortion and performed a quantitative analysis. We have clarified the differences between the overview of recent papers in the introduction and related work sections and elucidated the motivation and reasons for using MSE loss to measure the discrepancy in global representations.
We have explained certain terms before their use and modified some notations. The rationale behind the definition of the structural shift score has been expounded. We have elucidated the reasons for modeling from low-/high- frequency information preservation.
We present the reasons for not employing high-frequency global information from the perspective of method design and experimental outcomes, explain the existing task division, list detailed experimental details and hyperparameters, and conclude with an analysis of the replay loss weights.
**We also provide a rebuttal PDF attached below in this global response section to exhibit additional results.** We have supplemented Table R1 with results from the newly added Cora and Citeseer datasets. Figure R1 illustrates the visualization of node representation on node feature shift and its generalizability. Table R2 shows the hyperparameter settings used in the experiments. In Figure R2 and Figure R3, we analyze the effect of loss weights of graph information preservation loss and replay loss on performance.
Pdf: /pdf/fea20cbc627835e2c5529b095d1614aeea3c00a3.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
One-to-Normal: Anomaly Personalization for Few-shot Anomaly Detection | Accept (poster) | Summary: This paper provides a novel method to tackle the issue of precision loss in more complex domains. They introduce an anomaly personalization method with a diffusion model to utilize the diffusions to obtain the normal sample distribution and exchange the anomaly image into normal ones. Finally, a triplet contrastive strategy is designed to obtain the score. Their approach obtains a SOTA result compared to the recent methods.
Strengths: 1. This paper provides novel approaches that utilize the diffusion model for both distribution modeling and recovering the anomaly images into the real one for comparison.
2. The triplet contrastive strategy introduces a multi-level way to obtain the anomaly map from different views which is more robust than the previous methods.
3. The experiments are comprehensive and the result is SOTA compared to the recent methods.
Weaknesses: 1. Will the use of the diffusion model lead to a long inference time, which may not be suitable for real-time applications?
2. Will the choice of the text influence the result? Is there any comparison result with different prompt choices?
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weakness.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed the limitations in zero-shot scenarios.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful reviews and constructive suggestions. We answer the questions as follows.
***
>W#1: Will the use of the diffusion model lead to a long inference time, which may not be suitable for real-time applications?
**A#1:** Thank you for the constructive question. 1) Our approach has a relatively short diffusion process and inference time compared to most diffusion-based anomaly detection methods, because our diffusion step ratio is only 0.3. 2) Compared with very recent anomaly detection methods that do not utilize diffusion models (e.g., WinCLIP and InCTRL), the inference time of our proposed method is slightly higher (+200-300ms per query image) than that of WinCLIP (389ms) and InCTRL (276ms). 3) If necessary, we can further decrease the inference time by reducing the number of generated samples or the memory bank size. When using a single prompt (i.e., generating only one personalized image), the required inference time (326ms) is slightly lower than that of WinCLIP, while still outperforming other methods across the three domains.
To improve the application in real-time scenarios, the following strategies may be considered: 1) Performing feature-level comparisons to reduce the steps involved in encoding and generating images. 2) Exploring the possibility of model pruning to lower computational complexity. 3) Employing more efficient diffusion model architectures (e.g., AT-EDM). Further exploration can be pursued in our future work.
***
***
>W#2: Will the choice of the text influence the result? Is there any comparison result with different prompt choices?
**A#2:** Thank you for your insightful question. Yes, the choice of text will influence the results. In our preliminary research, we conducted a comparison experiment. Our experiments explored both the category and quantity of prompts:
1) Exploration by Category: We categorized text prompts based on physical-level attributes related to the image, such as global quality (e.g., "a good photo of a/the [c]"), object size (e.g., "a photo of a/the small [c]"), and image resolution (e.g., "a low resolution photo of a/the [c]"). We tested the performance using prompts from a single category, two categories, and all three categories. We found that using prompts from all three categories resulted in the best performance. For your convenience, we have provided some experimental results on the MVTec-AD dataset with three prompts as a reference below.
| Datasets | Global quality | Object size | Image resolution | AUROC |
|------------|----------------|-------------|------------------|-------|
| | | ✅ | | 95.8 |
| **MVTec-AD** | | | ✅ | 95.9 |
| | ✅ | | | 95.9 |
| | | ✅ | ✅ | 96.0 |
| | ✅ | ✅ | ✅ | 96.2 |
2) Exploration by quantity: We examined scenarios with 1, 3, 5, and 10 prompts. We observed that performance was lowest with a single prompt, whereas using all ten prompts yielded the highest performance. Generally, we find no significant difference in performance between using three and five prompts.
| Numbers of prompt | AUROC |
|:-----------------:|:-----:|
| 1 | 95.9 |
| 3 | 96.2 |
| 5 | 96.2 |
| 10 | 96.4 |
Our results suggest that the more comprehensive the inclusion of these three categories, the better the performance, with the optimal scenario using all 10 prompts. However, considering efficiency, we aimed to achieve the best results with the fewest prompts possible. We selected one prompt from each category and discovered through experiments that the combination of "a good photo of a/the [c]," "a close-up photo of a/the [c]," and "a low resolution photo of a/the [c]" yielded the best results in most cases. This combination of three prompts is what we present in the main figure describing our method in the manuscript.
***
---
Rebuttal Comment 1.1:
Title: Thank you for your feedback!
Comment: Thank you for your feedback. I think your reply solves my concerns. I will keep my rating as Accept.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate you taking the time to review our rebuttal and for your helpful feedback on our work. We are very glad that our response has addressed your concerns. | Summary: This paper proposes a new few-shot anomaly detection method based on one-to-normal personalization of query images using a diffusion model and a triplet contrastive inference process. It leverages a diffusion-based generation model to transform a query image into a personalized image towards the distribution of normal images, for use in the triplet contrastive inference. Experimental results on datasets from industrial, medical, and semantic domains demonstrate that the proposed method outperforms existing models.
Strengths: - Unlike common augmentation-based methods that generate pseudo-anomalies for AD, this approach transforms a query image towards the normal distribution, which is interesting and effective. The proposed triplet contrastive inference is compelling.
- Experiments conducted across various domains show its state-of-the-art performance in few-shot scenarios.
Weaknesses: - Details about the experimental settings are not fully provided, which limits reproducibility. For instance, there is no information on the selection of hyperparameters such as alpha and beta for A_score in the implementation details. It raises concerns about the model's performance. If alpha and beta were optimized based on test data, the claim of state-of-the-art performance is questionable. Details about the memory bank M such as its size and sensitivity of the performance to its hyperparameters are also missing.
- The paper lacks theoretical analysis, relying solely on empirical evidence.
- Figure 1 does not provide sufficient information about the overall process. Particularly, it omits the training process of the diffusion model and the composition of the anomaly-free pool, which can cause confusion before reading the detailed explanations in the main text.
- The definition of S_text in Sec 3.4 is unclear. It does not seem logical to apply the softmax function directly to the paired feature of F_q,l and F^T_text.
- The lack of equation numbers in the Method section makes it difficult to follow.
- There is no discussion of Table 3 in the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can you provide visualizations or information about the distribution of the individual anomaly scores (S_n, S_p, S_text)? It would be helpful to understand whether these scores focus on different parts or the same regions.
- The sensitivity of the model to text prompts should be further explored. Specifically, I would like to know the number of selected text prompts when generating personalized image and the rationale or justification for setting physical-level text prompts in the chosen template.
- In Table 4, using all the scores does not always correspond to the highest performance. A discussion on the possible reasons for this would be valuable.
- Could you provide results from InCTRL in Figure 4, and also add lines or bars to compare each performance against the performance of the proposed model?
- Why is the performance on MVTec-AD different in Table 5 (96.8) and Table 4 (96.2)?
Minor issues:
- In Table 4, the top value for AFID is 85.2, not 84.7 as bold-faced.
- Tables 4 and 5 lack information on the few-shot setting.
- Typo: "semantice" should be "semantic" in the caption of Table 3.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - As mentioned in the paper, the method is not applicable in zero-shot scenarios.
- The theoretical exploration is lacking; there is only empirical evidence.
- The computational cost is likely to be high, which may not be suitable for real-world scenarios
- The approach depends on the performance of the generative model. And it is limited if the generated images are abnormal (potentially vulnerable to attacks).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***
>W#1: Details about the experimental settings are not fully provided, which limits reproducibility, e.g., the hyperparameters such as alpha and beta for $A_{score}$ and details about the memory bank M.
**A#1:** Thank you for your constructive comments. 1) We set the parameters $\alpha$ and $\beta$ for $A_{\text{score}}$ to 1 and 0.5, respectively. This configuration remains consistent across all datasets and is not optimized on the test data of any dataset. To show the robustness of our method to different choices of these hyperparameters, we present detailed results in Table 1 of **the uploaded PDF**. We will also include more details in the revision.
2) The memory bank size has been set to 30. Our preliminary experiments (Figure 1 in **the uploaded PDF**) indicate that larger memory bank sizes tend to improve results, but the improvement saturates beyond a certain threshold. To balance model efficiency and performance, we selected $M = 30$. For clarity, we have updated the manuscript to include the specific values of these parameters.
***
***
>W#2: Figure 1 does not provide sufficient information about the overall process. Particularly, it omits the training process of the diffusion model and the composition of the anomaly-free pool, which can cause confusion before reading the detailed explanations in the main text.
**A#2:** We apologize for any confusion caused. 1) The training process of the diffusion model follows the DreamBooth fine-tuning method to customize a diffusion model. We will provide more details in the revision. 2) Our anomaly-free sample pool comprises a set of normal reference images and generated normal images, as mentioned in Section 3.4 of the manuscript. To avoid confusion, we will include a detailed explanation in the revision.
***
***
>W#3: The definition of S_text in Sec 3.4 is unclear. It does not seem logical to apply the softmax function directly to the paired feature of F_q,l and F^T_text.
**A#3:** Thank you for pointing this out and identifying the typo in our manuscript. $S_{text}$ determines the anomaly score by assessing the similarity between the text prompt and the query image. The image and text features are in fact combined using a dot product, and a softmax function calculates the probability, which serves as the anomaly score. Thank you for your correction; we have updated the manuscript accordingly.
***
***
>Q#1: Can you provide visualizations or information about the distribution of the individual anomaly scores (S_n, S_p, S_text)? It would be helpful to understand whether these scores focus on different parts or the same regions.
**A#4:** Thank you for the constructive questions. Following your suggestion, we have provided visualizations of the distribution of the individual anomaly scores. As illustrated in the figure of **the uploaded PDF**, combining both personalization and text prompts leads to the best performance, with each focusing on different regions.
***
***
>Q#2: The sensitivity of the model to text prompts should be further explored. Specifically, I would like to know the number of selected text prompts when generating personalized image and the rationale or justification for setting physical-level text prompts in the chosen template.
**A#5:** We thank you for your constructive suggestion. In this study, we utilized three text prompts, derived from our prior experiments. Our experimental approach initially focused on the number of prompts, investigating scenarios with 1, 3, 5, and 10 prompts. We found that performance was at its lowest with a single prompt, whereas using all ten prompts resulted in the highest performance.
| Numbers of prompt | AUROC |
|:-----------------:|:-----:|
| 1 | 95.9 |
| 3 | 96.2 |
| 5 | 96.2 |
| 10 | 96.4 |
The rationale for employing physical-level text prompts is predicated on the assumption that these prompts possess attributes directly related to the image, categorized into three main groups: global quality (e.g., 'a good photo of a/the [c]'), object size (e.g., 'a photo of a/the small [c]'), and image resolution (e.g., 'a low resolution photo of a/the [c]'). We assessed performance using prompts from one, two, or all three categories. Our results demonstrated that the prompts from all three categories yielded the best performance, leading us to adopt three prompts encompassing these categories for our analysis.
***
***
>Q#3: In Table 4, using all the scores does not always correspond to the highest performance. A discussion on the possible reasons for this would be valuable.
**A#6:** Yes, thank you for your valuable suggestion. Using all scores achieves the highest performance on most datasets, but not on a few, for example, the KSDD surface defect inspection dataset. One possible explanation is that the distribution of normal images is highly diverse, which makes it harder for the diffusion model to learn. In contrast, the BrainMRI medical dataset consists of grayscale images and features highly symmetrical normal samples. Such symmetry is not fully manifested in the personalized images, leading to better results when only the generated images and text are used.
***
***
>Q#4: Could you provide results from InCTRL in Figure 4, and also add lines or bars to compare each performance against the performance of the proposed model?
**A#7:** Thank you for your helpful suggestion. In Figure 2 of **the uploaded PDF**, we present results from InCTRL and include lines or bars to compare each performance against the performance of the proposed method.
***
***
>Q#5: The lack of equation number, MVTec-AD performance typo, and other minor issues:
**A#8:** We greatly appreciate your careful review to improve our manuscript. We will address all issues in the revision. | Summary: This paper addresses the issue of few-shot anomaly detection, which introduces an anomaly personalization method by using an anomaly-free customized generation model and performing a triplet contrastive anomaly inference strategy. Experiment evaluations across eleven datasets in three domains demonstrate its superior performance compared to the latest AD methods.
Strengths: 1. The paper is generally well-organized and well-written.
2. The ideas of anomaly personalization and triplet contrastive anomaly inference are well-motivated with solid theoretical support.
3. The superior performances on various evaluations demonstrate the effectiveness of the proposed method.
Weaknesses: - There is a lack of discussion of the computation cost and inference speed of the proposed method since there are many generation steps. And the explicit description and exact number of prompts and images for generation are not clearly stated.
- The method introduces several hyperparameters (e.g., α, β) which require careful tuning. It's unclear how sensitive the results are to these hyperparameters and whether the paper provides sufficient guidance on setting them. Regarding the process of multi-level feature comparison, it would be better to clarify the effect of the number of multi-feature extraction blocks.
- The equations are not numbered and the line numbers are incomplete which makes it hard to reference.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What are the exact components of $C_0$ and $\overline{C_0}$? As for the normal-state prompts, is the number $n$ below Line 125 equal to 13? And what is the size of the memory bank $M$?
2. The "Triplet Contrastive Anomaly Inference" seems to work as a weighted prediction from three comparison aspects during testing. Where can it reflect the concept of "contrastive"? What is the full training objective?
3. Does $S_{text}$ reflect the degree of anomaly score? It seems that there are two types (both normal and abnormal) objects in the text prompts. How do they work in the same way as the equations of $S_{text}$ and $A_{score}$?
4. Is there any limitation for the proposed method to handle the open-vocabulary scenarios? And what is the computation cost and inference cost of the proposed method compared with other methods since there are many images for generation?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors provide a brief discussion of the limitations on zero-shot anomaly detection.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful reviews and constructive suggestions. We answer the questions as follows.
***
>W#1: There is a lack of discussion of the computation cost and inference speed of the proposed method since there are many generation steps. And the explicit description and exact number of prompts and images for generation are not clearly stated.
**A#1:** We thank the reviewer for the constructive question. In this work, we employ three prompts to generate three images (one per prompt), and the inference time of our proposed method is slightly higher (+200-300ms per query image) than that of WinCLIP (389ms) and InCTRL (276ms). If necessary, we can further reduce the inference time by decreasing the number of generated samples or the memory bank size. When using a single prompt (i.e., generating only one personalized image), the required inference time (326ms) is slightly lower than that of WinCLIP, while still outperforming other methods across the three domains.
***
***
>W#2: The method introduces several hyperparameters (e.g., α, β) which require careful tuning. It's unclear how sensitive the results are to these hyperparameters and whether the paper provides sufficient guidance on setting them. Regarding the process of multi-level feature comparison, it would be better to clarify the effect of the number of multi-feature extraction blocks
**A#2:** Thank you for your detailed question. The hyperparameters α and β are set to 1 and 0.5, respectively, across all experiments and datasets. This setting was determined based on our preliminary experiments. For your convenience, we have presented these results in Table 1 of **the uploaded PDF**. As shown in the table, our method is quite robust to different values of these hyperparameters (e.g., α, β). We have updated the manuscript with the settings and specific values of these parameters.
2) The use of multiple multi-feature extraction blocks has proven to be effective in enhancing performance [1]. The results in the following table also indicate that increasing the number of these blocks leads to further performance improvements. However, considering computational costs, we have decided to utilize four blocks in our final implementation to achieve an optimal balance between performance and efficiency.
| Number | 2 | 3 | 4 | 5 |
|:------:|:-----:|:-----:|:-----:|:-----:|
| AUROC | 95.7 | 96.0 | 96.2 | 96.4 |
***
***
>W#3: The equations are not numbered and the line numbers are incomplete which makes it hard to reference.
**A#3:** Thank you for your helpful suggestion. We will incorporate these modifications in the revised version.
***
***
>Q#1: What are the exact components of $C_0$ and $\overline{C_0}$? As for the normal-state prompts, is the number $n$ below Line 125 equal to 13? And what is the size of the memory bank $M$?
**A#4:** 1) We follow the notation used in the DreamBooth method to customize a diffusion model. $C_0$ consists of a set of text-image pairs, $C_0 = \{(x_k, c_k)\}$, centered around the target object. The components of $C_0$ are reference images (i.e., few-shot normal samples) of the same object (e.g., cable, candle, brain, etc.) and their corresponding prompts. Additionally, $\overline{C_0}$ contains different images (i.e., regularization images) of the same object category for prior preservation and regularization purposes. The regularization images are generated using the Stable Diffusion (SD) model, as is common in most methods. These images use a prompt that is a coarse class descriptor to prevent language drift and reduce output diversity. 2) The number $n$ in our method is 3, and the memory bank size is 30.
***
***
>Q#2: The "Triplet Contrastive Anomaly Inference" seems to work as a weighted prediction from three comparison aspects during testing. Where can it reflect the concept of "contrastive"? What is the full training objective?
**A#5:** Thank you for your detailed question on the "Triplet Contrastive Anomaly Inference" part. Yes, the "Triplet Contrastive Anomaly Inference" method we propose involves a weighted prediction from three comparative aspects during the testing phase. The term "contrastive" in our context refers to the dissimilarity comparisons among three branches: the query image in comparison with the personalized image, anomaly-free samples, and text prompts. This is designed not to interfere with the training process.
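As a minimal illustration of this weighted combination, using the $\alpha = 1$ and $\beta = 0.5$ reported in A#2 above (the function name, argument order, and the assumption of a plain weighted sum are ours, not the manuscript's exact formula):

```python
# Hedged sketch of combining the three branch scores into a final anomaly
# score; the exact form of A_score in the manuscript may differ.
def anomaly_score(s_personalized, s_normal_pool, s_text_score,
                  alpha=1.0, beta=0.5):
    # alpha weighs the anomaly-free pool branch, beta the text-prompt branch
    return s_personalized + alpha * s_normal_pool + beta * s_text_score
```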
***
***
>Q#3: Does $S_{text}$ reflect the degree of anomaly score? It seems that there are two types of objects (both normal and abnormal) in the text prompts. How do they work in the same way as the equations of $S_{text}$ and $A_{score}$?
**A#6:** Indeed, in our method, text prompts contain two types of objects: normal and abnormal. During the computation of $S_{\text{text}}$, the query image is compared with prompts of both types, yielding two probabilities: $p_{\text{normal}}$ and $p_{\text{abnormal}}$, which represent the likelihood of the image being normal and abnormal, respectively. Subsequently, the anomaly score $S_{\text{text}}$ is calculated by summing $p_{\text{abnormal}}$ and $1 - p_{\text{normal}}$, which is then utilized for subsequent calculations of $A_{\text{score}}$.
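The computation described above can be illustrated with a short sketch. This is only an illustrative example, not our actual code: the function name, the use of a single normal and a single abnormal prompt feature, and the two-way softmax are simplifying assumptions.

```python
import numpy as np

def s_text(f_query, f_normal, f_abnormal):
    """f_query, f_normal, f_abnormal: L2-normalized feature vectors.
    Returns p_abnormal + (1 - p_normal) as described above."""
    logits = np.array([f_query @ f_normal, f_query @ f_abnormal])
    exp = np.exp(logits - logits.max())          # numerically stable softmax
    p_normal, p_abnormal = exp / exp.sum()
    return p_abnormal + (1.0 - p_normal)         # higher => more anomalous
```

With two prompt classes, $p_{normal} + p_{abnormal} = 1$, so in this sketch the score grows monotonically with $p_{abnormal}$ and lies in $(0, 2)$.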
***
***
>Q#4: Is there any limitation for the proposed method to handle the open-vocabulary scenarios? And what is the computation cost and inference cost of the proposed method compared with other methods since there are many images for generation?
**A#7:** Thank you for your good suggestion. 1) Our method relies on learning a representative distribution of normal samples and therefore has to see few-shot normal examples. We have not yet considered the direction of open-vocabulary scenarios. This could be a potential area for further exploration in future work. 2) Please refer to A#1.
***
[1] Toward Generalist Anomaly Detection via In-context Residual Learning with Few-shot Sample Prompts, CVPR 2024 | Summary: The paper focuses on a practical yet challenging anomaly detection in a few-shot-normal-image setting. Instead of directly matching features between the query image and a few normal reference images, the core insight is to replace the reference image with a personalized normal image generated by an anomaly-free custom model. The authors also propose a triplet contrastive anomaly inference strategy, incorporating the original/generated anomaly-free samples and text prompts. Extensive experiments are extensively conducted across 11 datasets.
Strengths: 1. The paper is well-written and easy to follow.
2. State-of-the-art results are achieved across 11 datasets.
Weaknesses: 1. It is essentially a memory-augmented reconstruction-based anomaly detection (AD) method [1], which attempts to reconstruct the query image to its most similar anomaly-free counterpart. However, the reconstruction-based AD method also explores the Stable Diffusion (SD) denoising network [1]. Could you clarify the differences? If I understand correctly, the core difference is the inputs to SD, which are pairs of object text prompts and few-shot normal images.
2. Though Figure 4 demonstrates the effectiveness of the generated anomaly-free samples on three AD methods, how do these generated samples enhance the authors’ method? Can we use more or fewer generated normal samples instead of 100? An ablation study is required here.
[1] Gong et al. Memorizing normality to detect anomaly: memory-augmented deep autoencoder for unsupervised anomaly detection. In ICCV, 2019.
[2] He et al. Diad: A diffusion-based framework for multi-class anomaly detection. In AAAI, 2024.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Which data augmentations are used to augment few-shot normal images? Do these data augmentations vary across different categories?
2. It would be interesting to investigate how text prompts affect the anomaly-free customized model. For example, what if a specific category name (e.g., “cable”) is used to replace "object"?
3. How are \alpha and \beta determined in the final prediction? How about the ratio of the t-step?
4. It would be favorable to report F1-max, AP and PRO along with AUROC.
5. What about the one-shot setting?
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: The limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful reviews and constructive suggestions. We answer the questions as follows.
***
>W#1: It is essentially a memory-augmented reconstruction-based anomaly detection (AD) method [1], which attempts to reconstruct the query image to its most similar anomaly-free counterpart. However, the reconstruction-based AD method also explores the Stable Diffusion (SD) denoising network [1]. Could you clarify the differences? If I understand correctly, the core difference is the inputs to SD, which are pairs of object text prompts and few-shot normal images.
**A#1:** Thanks for your constructive question. Indeed, both our approach and memory-augmented reconstruction-based anomaly detection attempt to reconstruct the query image to its most similar anomaly-free counterpart. There are several core differences: (1) Our reconstruction process does not use a memory bank. As you noted, it relies on Stable Diffusion (SD) conditioned on pairs of object text prompts and few-shot normal images. (2) Memory-augmented reconstruction-based anomaly detection is based entirely on memory-bank features, with no feature information from the original input image. In that setting, the most similar features retrieved from the memory bank may still have low similarity to the input, because no information from the original image is used for reconstruction and the stored features may not fully match it, which can increase reconstruction error or noise. (3) In contrast, we explore a Stable Diffusion process that allows control over generation and retains most of the original image information, converting anomalous regions of the query image to normal and thereby reconstructing the query image to its most similar anomaly-free counterpart. Preserving most of the normal regions of the original image in the reconstruction reduces error. (4) The reconstruction-based methods in [1] and [2] detect anomalies entirely by computing the difference between the original and reconstructed images, whereas our personalized image reconstruction is only one of three branches used to compute the anomaly score.
***
>W#2: Figure 4 demonstrates the effectiveness of the generated anomaly-free samples on three AD methods; how do these enhance the authors’ method? Can we use more or fewer generated normal samples instead of 100?
**A#2:** Thank you for the detailed question. 1) The generated samples are involved in calculating the anomaly score, as discussed in the Triplet Contrastive Anomaly Inference section. These generated normal samples are compared with the query image at multiple feature levels, contributing one component of the anomaly score, denoted $S_{N}$. Additionally, the ablation study in Table 5 of the manuscript confirms that these generated samples enhance performance. 2) Yes, we validated the effectiveness of the proposed method by generating between 10 and 300 samples. For your convenience, the results are in Figure 1 of **the uploaded PDF**. Increasing the number of generated samples continues to improve the results, likely because a larger sample set better represents the distribution of normal samples. However, for computational efficiency, we do not recommend generating too many samples (e.g., exceeding 200).
***
>Q#1: Which data augmentations are used to augment few-shot normal images? Do these data augmentations vary across different categories?
**A#3:** Thank you for the detailed question. We performed only simple data augmentation, such as flipping and Gaussian blur, on the medical datasets and the KSDD dataset. This approach was adopted primarily to preserve the distribution of normal samples. Additionally, our method has a comprehensive text prompt design which, unlike data augmentation, can more effectively simulate the various states of normal images.
***
>Q#2: It would be interesting to investigate how text prompts affect the anomaly-free customized model. For example, what if a specific category name (e.g., “cable”) is used to replace "object"?
**A#4:** Thanks for your good suggestion. In the experiments, we indeed used the specific category name, e.g., “cable” to replace the variable “object”. This is to be consistent with recent related work using textual prompts. To avoid confusion, we will clarify this in our revised manuscript.
***
>Q#3: How are alpha and beta determined in the final prediction? How about the ratio of the t-step?
**A#5:** Thank you for the detailed question. 1) We set the parameters $\alpha$ and $\beta$ for $A_{\text{score}}$ to 1 and 0.5, respectively; this configuration remains consistent across all datasets. We provide Table 1 in **the uploaded PDF**, demonstrating the robustness of our method to different choices of hyperparameters.
2) The ratio for the t-step is set to 0.3. In our experiments, performance fluctuations remain minimal within the range of 0.2 to 0.5. A ratio of 0.3 not only performs well across the majority of datasets but also offers better computational efficiency due to its smaller value.
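For illustration, a minimal sketch of the weighted score fusion is below. The linear form and the branch names (`s_text`, `s_gen`, `s_recon`) are our assumptions based on the three-branch description in this thread, not the paper's exact formula; only the values $\alpha=1$ and $\beta=0.5$ come from the answer above.

```python
def final_anomaly_score(s_text: float, s_gen: float, s_recon: float,
                        alpha: float = 1.0, beta: float = 0.5) -> float:
    """Hypothetical weighted fusion of the three branch scores:
    s_text  - text-prompt score (S_text)
    s_gen   - generated-normal-sample score (S_N)
    s_recon - personalized reconstruction score
    The linear combination is an illustrative assumption; alpha=1 and
    beta=0.5 match the hyperparameter values reported in this rebuttal."""
    return s_text + alpha * s_gen + beta * s_recon
```

The ablation table in Rebuttal 2 suggests the final result is fairly insensitive to the exact $(\alpha, \beta)$ choice.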
***
>Q#4: It would be favorable to report F1-max, AP and PRO along with AUROC?
**A#6:** Indeed, presenting F1-max, AP, and PRO along with AUROC would be the most favorable and comprehensive approach. However, given the breadth of our data (11 datasets across three domains) and the limited page space, it is not feasible to display all of these metrics. Therefore, like previous methods such as InCTRL and RegAD, we prioritize presenting AUROC.
***
>Q#5: What about the one-shot setting?
**A#7:** Thanks for your constructive suggestions. We have calculated the results of our method and compared them with other methods in a one-shot setting. For your convenience, these results are presented in Table 2 in **the uploaded PDF**.
***
---
Rebuttal 2:
Comment: For your convenience, we provide the experimental results mentioned in **the uploaded PDF** here to facilitate your review. We hope these supporting data address any concerns or questions you may have. If there is any confusion or if any part of our work requires further clarification, please do not hesitate to comment. We are more than willing to provide additional explanations and engage in further discussions.
***
>W#2: Can we use more or fewer generated normal samples instead of 100? An ablation study is required here.
**A#1:** Yes, we attempted to generate samples ranging from 10 to 300 to validate the effectiveness of the proposed method. We have placed the results here in the following table. Increasing the number of generated samples continues to enhance the results, likely because a larger sample set better represents the distribution of normal samples. However, for computational efficiency, we do not recommend generating too many samples (e.g., exceeding 200).
| Datasets | 10 | 30 | 50 | 100 | 150 | 200 | 300 |
|----------|------|------|------|------|------|------|------|
| MVTec-AD | 95.9 | 96.2 | 96.3 | 96.4 | 96.5 | 96.6 | 96.6 |
| RESC | 94.7 | 95.2 | 95.4 | 95.6 | 95.6 | 95.7 | 95.9 |
| CIFAR | 94.2 | 94.9 | 95.2 | 95.5 | 95.6 | 95.6 | 95.7 |
***
>Q#3: How are alpha and beta determined in the final prediction?
**A#2:** We set the parameters $\alpha$ and $\beta$ for $A_{\text{score}}$ to 1 and 0.5, respectively; this configuration remains consistent across all datasets. This choice was informed by our preliminary experiments, which demonstrated satisfactory performance across the majority of datasets under this setting. We provide more results in the table below, demonstrating the robustness of our method to different choices of $\alpha$ and $\beta$.
| $\alpha$ | $\beta$ | MVTec | VisA | KSDD | AFID | ELPV | OCT2017 | BrainMRI | HeadCT | RESC | MNIST | CIFAR-10 | Average|
|:--------:|:-------:|:-----:|:----:|:----:|:----:|:----:|:-------:|:--------:|:------:|:----:|:-----:|:--------:|:--------:|
| 1 | 0.5 | 96.2 | 89.9 | 98.4 | 84.7 | 90.6 | 99.3 | 98.6 | 94.8 | 95.2 | 93.6 | 94.9 | **94.2** |
| 0.5 | 1 | 95.7 | 89.3 | 98.0 | 84.2 | 90.1 | 99.1 | 98.3 | 94.6 | 95.1 | 93.2 | 94.6 | **93.9** |
| 1 | 1 | 95.9 | 89.5 | 98.1 | 84.1 | 90.2 | 99.4 | 98.5 | 95.1 | 95.0 | 93.4 | 94.7 | **94.0** |
| 0.5 | 0.5 | 96.1 | 89.7 | 98.3 | 84.6 | 90.8 | 99.0 | 98.1 | 94.5 | 95.3 | 93.8 | 94.3 | **94.1** |
| 2 | 1 | 96.0 | 89.8 | 98.5 | 84.0 | 90.0 | 99.2 | 98.3 | 94.3 | 94.8 | 93.2 | 94.6 | **93.9** |
| 1 | 2 | 95.7 | 89.1 | 97.8 | 83.5 | 89.7 | 99.2 | 98.7 | 94.5 | 95.0 | 93.4 | 94.7 | **93.8** |
| 2 | 2 | 95.9 | 89.3 | 97.9 | 83.2 | 89.9 | 99.1 | 98.5 | 95.3 | 94.8 | 93.1 | 94.8 | **93.8** |
***
>Q#5: What about the one-shot setting?
**A#3:** In the one-shot setting, we have calculated the performance of our method and compared it with recent methods. The table below presents the AUROC comparison results, showing that our method maintains optimal performance on most datasets.
| | Datasets | WinCLIP | InCTRL | Ours |
|:-----------------:|:---------------:|:-------------------------:|:------------------------:|:--------------------------:|
| **Industrial field** | **MVTec** | 92.5±2.3 | 93.2±1.7 | **94.8±0.7** |
| | **VisA** | 83.6±2.5 | 84.2±2.5 | **87.0±1.7** |
| | **KSDD** | 94.0±0.5 | **96.6±2.8** | 96.5±1.6 |
| | **AFID** | 72.3±4.2 | 76.0±3.2 | **77.6±1.5** |
| | **ELPV** | 72.2±2.5 | 82.8±1.2 | **85.2±0.8** |
| **Medical field** | **OCT2017** | 90.7±2.6 | 93.0±2.3 | **95.8±1.6** |
| | **BrainMRI** | 93.1±1.5 | 96.7±2.4 | **96.9±1.3** |
| | **HeadCT** | 91.7±1.8 | 92.3±2.0 | **93.7±1.2** |
| | **RESC** | 85.7±2.6 | 87.6±2.9 | **92.4±1.2** |
| **Semantic field** | **MNIST** | 76.3±1.7 | 87.7±2.3 | **91.8±0.6** |
| | **CIFAR-10** | 92.3±0.2 | 93.2±0.9 | **93.6±0.5** |
***
---
Rebuttal 3:
Comment: ***
>Q#4: It would be favorable to report F1-max, AP and PRO along with AUROC?
**A#4:** Following your suggestion, we have included AUROC, F1-max, AP, AUPRC, and PRO results on the MVTec dataset in the table below for a more comprehensive comparison; our method consistently outperforms the baselines on all metrics.
| | Methods | AUROC | F1-max | AP | AUPRC | PRO |
|:--------------:|:---------:|:-------:|:--------:|:------:|:-------:|:------:|
| **2-shot** | WinCLIP | 93.1 | 93.3 | 95.9 | 96.5 | 88.2 |
| | InCTRL | 94.0 | - | - | 96.9 | - |
| | VAND | 92.4 | 92.6 | 96.0 | - | 91.3 |
| | Ours | **95.1**| **94.3** |**96.5**|**97.3** |**92.1**|
| **4-shot** | WinCLIP | 94.0 | 93.5 | 96.2 | 96.8 | 88.5 |
| | InCTRL | 94.5 | - | - | 97.2 | - |
| | VAND | 92.8 | 92.8 | 96.3 | - | 91.8 |
| | Ours | **95.6**| **94.8** |**97.0**|**97.8** |**92.6**|
| **8-shot** | WinCLIP | 94.7 | 93.8 | 96.5 | 95.3 | 89.1 |
| | InCTRL | 95.3 | - | - | 97.7 | - |
| | VAND | 93.0 | 93.1 | 96.5 | - | 92.2 |
| | Ours | **96.2**| **95.1** |**97.4**|**98.9** |**93.1**|
***
---
Rebuttal 4:
Comment: I highly appreciate the authors' helpful feedback. I agree with the explanations of the differences between the diffusion-model-based reconstruction and the memory-based one. All other questions are well addressed through the additional experiments, so I would like to increase my rating and recommend borderline acceptance.
---
Rebuttal Comment 4.1:
Comment: We sincerely appreciate you taking the time to review our rebuttal and for your positive feedback. We are very glad that our response has addressed your concerns. | Rebuttal 1:
Rebuttal: We appreciate all reviewers for their careful reviews and constructive suggestions. Individual concerns have been carefully addressed in the response to each reviewer, with an uploaded PDF providing the additional results suggested by the reviewers. In the final version, we will revise the paper following these suggestions.
Pdf: /pdf/dd5bd1e70ea1ecdd6d63c5f377b986238fac88c8.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
DG-SLAM: Robust Dynamic Gaussian Splatting SLAM with Hybrid Pose Optimization | Accept (poster) | Summary: This paper proposed a dynamic RGB-D SLAM system based on 3D Gaussian Splatting (3D-GS) and DROID-SLAM. Semantic segmentation masks and depth warping residuals are used to generate motion masks to remove the dynamic part of the scene. Experiments on three real-world datasets show the proposed approach achieves superior results on both dynamic and static scenes.
Strengths: 1. The paper is well written, technically sound and easy to follow.
2. The idea of using depth warping residuals as a complement to semantic priors for motion mask generation is simple but works effectively well.
3. The proposed method is very well engineered and achieved promising results on real-world dynamic sequences.
4. The proposed method is the first dynamic SLAM method based on 3D Gaussian Splatting (3DGS).
Weaknesses: 1. The technical novelty is a bit limited. The paper focuses on adding new components to previous SLAM methods in order to achieve dynamic SLAM.
2. The hybrid coarse-to-fine camera tracking seems like a trick: The coarse stage directly comes from DROID-SLAM while the fine stage simply adds masks to 3DGS-based tracking.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see Weakness.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1: Technical novelty.
A1: Thank you for your advice. We aim to clarify your concerns from the following perspectives: (1) As dynamic 3D Gaussian-based SLAM remains in its nascent stages of research and diverges markedly from traditional approaches, this paper presents preliminary explorations in this domain. Ours is the first work aimed at addressing the challenge of robust pose estimation for 3D Gaussian-based SLAM in dynamic environments; we hope to design an innovative dynamic SLAM system that better utilizes the explicit 3D Gaussian representation. (2) We have found that directly employing a 3D Gaussian representation for pose optimization in dynamic environments tends to converge to local optima, and the solution process is notably unstable; a robust tracking front end is therefore necessary. (3) We design the motion mask generation method to filter out invalid optical-flow correspondences in DROID-VO and improve the robustness of the original tracking process. Furthermore, we have observed that Gaussian representations, when initialized with stable values, demonstrate improved convergence behavior. (4) Based on these observations and analyses, we propose a hybrid optimization approach incorporating a coarse-to-fine pose optimization strategy. Specifically, we employ a rectified confidence matrix within the DROID-SLAM framework to provide initial pose estimates, which are subsequently refined using a global Gaussian map. This suite of designs significantly enhances the accuracy and robustness of pose estimation for GS-based SLAM in dynamic environments.
### Q2: The meaning and motivation of designing a hybrid coarse-to-fine strategy.
A2: We sincerely appreciate the reviewer's meticulous and detailed review of our work, which helped us improve the quality of this paper. It should be noted that the coarse-to-fine approach is an optimization thought that has proven effective in enhancing the accuracy and robustness of pose estimation for complex tasks. However, how to achieve precise and robust pose optimization during the coarse and fine stages continues to be an open and challenging problem.
We have introduced a hybrid pose optimization strategy specifically tailored for Gaussian-based SLAM systems to effectively tackle pose estimation challenges in dynamic environments, which also represents a central innovative contribution of our paper. Specifically, **in the coarse optimization stage**, we implement a strategy that filters out unreliable and inaccurate optical-flow estimations within the existing confidence matrix component of the original DROID-SLAM system. This strategy removes inaccurate matching correspondences, thereby providing a relatively precise initial pose estimate. **During the fine optimization stage**, in addition to utilizing motion mask generation to filter out invalid sampled points, we design a novel point management technique to maintain the Gaussian map, encompassing adaptive point addition and effective outlier removal strategies. These elements are critical in ensuring that our SLAM system maintains high accuracy and robustness in dynamic environments.
Additionally, the coarse-to-fine pose optimization strategy **forms a compact and integral feedback mechanism within the entire SLAM system**. Improved pose optimization precision facilitates the generation of more accurate motion masks, which in turn enhance the accuracy of the initial pose estimates obtained during the coarse optimized stage and lead to better pose optimization results in the fine optimized stage.
Furthermore, the keyframe selection strategy, which is predicated on optical flow motion during the coarse-optimized stage, further enhances the efficiency and accuracy of pose optimization in the fine-optimized stage. This integrated approach ensures a compact and effective optimization process throughout the whole SLAM system.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer fGfv,
Many thanks for your support. Could you please read this rebuttal and the below? Then give your responses? And any discussions are welcome.
AC | Summary: The paper introduces the first robust dynamic visual SLAM system based on 3D Gaussian Splatting. It provides precise camera pose estimation and high-fidelity reconstructions. Strategies such as motion mask generation, adaptive Gaussian point management, and hybrid camera tracking are proposed to improve the accuracy and robustness of pose estimation.
Strengths: The writing is clear and easy to understand.
The paper proposed the first GS-based SLAM systems for dynamic scenes.
Compared with previous NeRF-based and GS-based SLAM systems, it achieves optimal performance on dynamic scenes.
Weaknesses: 1. The evaluation conducted in the paper primarily compares the proposed method against NeRF or Gaussian-splatting-based SLAM systems, which is not sufficient. It would be beneficial to include traditional SLAM systems as baselines, which present a more robust tracking capability, such as ReFusion [1] and ORB-SLAM3 [2].
2. Lacks the comparison with MonoGS [3] as the baseline of GS-based SLAM. MonoGS demonstrates more advanced map reconstruction ability compared with other GS-based alternatives, and presents robust camera tracking utilizing Jacobian optimization (e.g. compared with trivial pose optimization via photometric loss implemented in SplaTAM). It should be included as a baseline method for more exhausted comparisons.
3. The paper lacks a survey of related work, such as Crowd-SLAM [4] and DDN-SLAM [5], which were also proposed with the same motivation of reconstruction in dynamic environments.
4. The experiments on two datasets have shown the effectiveness (see point 5) of the proposed method in dynamic indoor settings. However, for a SLAM system, it would be preferable to conduct experiments on real-world scenes, especially in outdoor environments.
5. The proposed system largely relies on dynamic object removal in the tracking system; however, it lacks certain optimization for the mapping system. This results in the reconstructed scenes still containing artifacts.
[1] E. Palazzolo, J. Behley, P. Lottes, P. Giguère, and C. Stachniss, “ReFusion: 3D reconstruction in dynamic environments for RGB-D cameras exploiting residuals,” in IROS, 2019.
[2] C. Campos, R. Elvira, J. J. G. Rodríguez, J. M. Montiel, and J. D. Tardós, “ORB-SLAM3: An accurate open-source library for visual, visual-inertial, and multimap SLAM,” IEEE TRO, 2021.
[3] Matsuki, Hidenobu, et al. "Gaussian splatting slam." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
[4] Soares, João Carlos Virgolino, Marcelo Gattass, and Marco Antonio Meggiolaro. "Crowd-SLAM: visual SLAM towards crowded environments using object detection." Journal of Intelligent & Robotic Systems 102.2 (2021): 50.
[5] Li, Mingrui, et al. "Ddn-slam: Real-time dense dynamic neural implicit slam with joint semantic encoding." arXiv preprint arXiv:2401.01545 (2024).
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. According to Table 6, the segmentation mask seems to play a much more significant role than other components, such as the depth warp mask. Please discuss the utility of the other components in detail. A concern here is that a more accurate segmentation mask (e.g. produced by SAM) might facilitate even better dynamic removal without the need for other components.
2. In the implementation detail, the author should elaborate more on how to perform semantic segmentation, the parameter of depth warping, and the keyframe selection strategy they have integrated in the proposed system.
3. The dynamic features are removed in the tracking system; however, a large portion of the removed features might reduce the tracking accuracy. In other words, to what extent can the method proposed in the paper tolerate motion? Is there any failure case/example of tracking, e.g. when dynamic features dominate the current view?
4. The semantic mask is retrieved by utilizing a pretrained segmentation model. There is a concern regarding the generalizability of this semantic mask. How is the dynamic object defined with respect to the semantic mask? For instance, the human might be considered a dynamic object in the BONN dataset, but might be stable in other scenes (e.g. wax figure). In addition, how does this integration of the semantic model affect the real-time processing capability of the SLAM system?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: There is no societal impact of this work, and limitations are discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### W1: More experements results.
A1: Thank you for your suggestion. We have added experiments comparing with ORB-SLAM3 [2] and ReFusion [1]. As shown in Table 4 (one-page global PDF), our method presents a more robust tracking capability.
### W2: Comparing with MonoGS.
A2: Thank you for your suggestion. We did in fact compare our method with MonoGS [3] in the initial draft: MonoGS is the name of the open-source framework, and we used the name of its paper, GS-SLAM, as shown in Tables 2, 3, and 4. We will revise this to prevent misunderstanding.
### W3: Related work.
A3: Thank you for your suggestion. We will discuss and cite Crowd-SLAM[4] and DDN-SLAM[5] in the revised version.
### W4: Real-world scenes experiments.
A4: Extending dynamic 3DGS to large-scale outdoor scenarios has always been a challenging problem. As mentioned in the conclusion, loop closure for handling large-scale scenes is an interesting direction for future research; therefore, we did not provide experimental results for outdoor environments.
However, we did provide experimental results for real-world scenes, as shown in Fig. 2 and Tab. 3 (one-page global PDF). We conduct experiments on three real-world sequences, and all results indicate the superiority of our method.
### W5: Lack certain optimization for the mapping system.
A5: We have designed an adaptive Gaussian point addition and pruning strategy for the mapping system. We will perform geometric consistency verification on map points through depth warp. Consequently, artifacts can be eliminated to a certain extent. Of course, the constructed Gaussian map may still contain a small number of artifacts, and we are considering how to better resolve this issue.
### Q1: The role of the semantic mask. According to Table 6, the segmentation mask is much more important than other components.
A1: In response to your observation that the segmentation mask seems to play a much more significant role than other components, we believe it relates to the dynamic object classes in the validation datasets, such as TUM and BONN. The semantic prior predominantly features dynamic object classes, such as humans, enhancing semantic segmentation performance. For non-rigid objects like humans, the results of depth warp fusion are less accurate compared to semantic segmentation, which is reasonable.
Following your suggestion, we consider that SAM methods can indeed generate more accurate segmentation masks, provided that they receive inputs that are precise and unambiguous, such as points, boxes, or masks.
While SAM methods could serve as an alternative approach to semantic segmentation, they still do not capture the true motion of objects. Therefore, a geometric consistency module is necessary for additional verification. Moreover, the inference speed of SAM methods is slower compared to current semantic segmentation approaches. Your suggestion has opened up new avenues for our research, and we are considering how best to integrate the SAM method into our SLAM system to improve the accuracy of our motion segmentation.
### Q2: More implementation detail.
A2: Thank you for your helpful advice. We will enhance the description of the experimental details in the implementation section. We utilize OneFormer to generate the prior semantic segmentation. For the depth warp mask, we set the window size to 4 and the depth threshold to 0.6. For keyframe selection, we adopt the optical-flow-based keyframe selection strategy from DROID-VO.
### Q3: To what extent can the method proposed in the paper tolerate motion?
A3: As you mentioned, removing a large number of features could compromise tracking accuracy. Therefore, instead of employing a feature-based SLAM method as the front end, we utilize the dense optical-flow tracking from DROID-SLAM for motion estimation. This choice enables our method to better tolerate dynamic environments: it operates robustly and accurately on both low- and high-dynamic sequences of public datasets (TUM and BONN), as well as in real-world settings (our self-collected dataset). However, if occlusions are extensive, covering more than two-thirds of the image, the system is still prone to tracking loss. This issue remains a significant open challenge for current SLAM systems.
### Q4: The definition of the dynamic object with respect to the semantic mask and the running time of the semantic model.
A4: Thank you for your suggestions. It is important to note that semantic segmentation is just one of the methods we use to generate motion masks; we also employ geometric consistency checks with depth warping operation. For different real-world applications, we typically predefine certain dynamic categories and initially obtain motion masks through semantic segmentation. In the vast majority of daily scenarios, humans are generally considered dynamic objects. Of course, the special scenario you mentioned, involving wax figures, may indeed struggle to balance predetermined motion priors with actual movements. However, such scenarios are relatively rare compared to typical applications, and we have not yet considered these cases in our approach. Regarding the real-time aspect of semantic segmentation you mentioned, it is essential to emphasize that our method does not focus on the specific architecture of the semantic segmentation network used, but rather on the design of information fusion methods. With ongoing advancements in these research fields and improvements in computing power, the processing speeds for semantic segmentation are expected to increase, ensuring they do not become bottlenecks for our method.
---
Rebuttal Comment 1.1:
Comment: I appreciate the response from the authors, and my concerns are either reasonably addressed or considered as future works. Hence I raise my score to Borderline Accept.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer MuLC
We appreciate the reviewer's time for reviewing and thanks again for the valuable comments.
Best wishes
Authors | Summary: The paper presents DG-SLAM to address inconsistent geometric and photometric observations in dynamic environments, a current challenge in the SLAM field. Specifically, the authors develop several techniques, such as motion mask generation, adaptive Gaussian point management, and a hybrid camera tracking algorithm, to improve the performance of the SLAM system. Experiments on the TUM RGB-D and BONN RGB-D datasets showcase state-of-the-art performance in both static and dynamic environments.
Strengths: 1. For the task of moving-object segmentation, the authors propose an advanced motion mask generation method that integrates spatio-temporally consistent depth masks with semantic priors, and several rigorous mathematical formulations are provided to demonstrate its effectiveness.
2. A coarse-to-fine camera pose optimization is proposed to optimize the whole SLAM system and improve the consistency between the poses and the reconstructed 3D map.
3. The authors also present an adaptive Gaussian point addition and pruning strategy to manage the generated Gaussian map.
Weaknesses: 1. A motion mask generation strategy is used to mitigate the potential impact of observation noise from a single warp mask; can the authors give more details about this part? Is there any theoretical analysis of it?
2. A hand-crafted rule is adopted in the map-point deletion stage. For example, the authors delete points based on three criteria: the opacity value, the maximum scale, and the ratio between the ellipsoid’s major and minor axes. Can the authors give more explanation of these criteria? Moreover, the authors mention removing points whose number of observations is too low, but do not state the quantitative criterion.
3. The running time on the BONN dataset during the mapping phase seems a bit large in Table 7; can the authors give more details about this result? Is there any connection with the proposed map-point deletion strategy?
Technical Quality: 2
Clarity: 2
Questions for Authors: see weakness part
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The method is not evaluated on large-scale real scenes.
Flag For Ethics Review: ['Ethics review needed: Safety and security']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1: The details of the motion mask generation strategy.
A1: During our experiments, we observed that inaccuracies in pose estimation, coupled with noise in the captured ground-truth depth values, can result in unstable motion segmentation outcomes when relying solely on a single warp mask. However, by integrating observations from depth warps across multiple frames, a more accurate motion mask can be constructed. This approach significantly enhances the reliability and precision of the motion segmentation process, as shown in Fig. 1 (one-page global PDF).
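The multi-frame voting idea described above can be illustrated with a minimal sketch; this is not the authors' implementation, and the depth threshold and vote count are hypothetical parameters.

```python
import numpy as np

def fuse_motion_masks(warped_depths, observed_depth, depth_thresh=0.1, min_votes=2):
    """Fuse per-frame depth-warp comparisons into one motion mask by voting.

    warped_depths: list of (H, W) arrays, reference depths warped into the
    current view; observed_depth: (H, W) depth of the current frame.
    A pixel is flagged dynamic only when enough warped frames disagree with
    the observed depth, suppressing noise from any single warp mask.
    """
    votes = np.zeros_like(observed_depth, dtype=int)
    for d in warped_depths:
        valid = (d > 0) & (observed_depth > 0)        # ignore invalid depth
        residual = np.abs(d - observed_depth)
        votes += (valid & (residual > depth_thresh)).astype(int)
    return votes >= min_votes                          # (H, W) boolean mask
```

A single noisy warp can no longer flip a pixel on its own; at least `min_votes` frames must agree.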
### Q2: The design criteria of the three Gaussian point deletion methods.
A2: Thank you for your suggestions. We aim to clarify your concerns from the following perspectives:
(1) **The opacity value** denotes the importance of each Gaussian point along each ray during the rendering process. Our opacity verification operation is similar in spirit to the original 3D GS. Since we implement an adaptive density control algorithm to regulate the density of the Gaussian map, we contend that deleting non-essential Gaussian points facilitates the incorporation of new Gaussian points and enhances the convergence of pose optimization.
(2) Through the visualization of reconstructed Gaussian maps, we have observed that the Gaussian spheres with **large scale do not effectively represent the local details of objects**; instead, they tend to cause blurring or shadowing during the rendering process. This results in highly unstable pose optimization, which in turn leads to the failure of camera tracking.
(3) The third criterion for evaluation arises because we have found that the rasterization of 3DGS imposes **no constraints on the Gaussians along the viewing ray direction**, even when a depth observation is present. This does not pose a problem when a sufficient number of carefully selected viewpoints are available. However, in continuous SLAM, this lack of constraint leads to numerous artifacts, which complicates tracking. Consequently, it becomes necessary to remove these Gaussian points.
For each added Gaussian point, only a sufficient number of observations can confirm that the point is not an artifact. Based on this design principle, we select half the window size as the empirical threshold for point deletion operations. If a point is observed too infrequently, we consider it not robust enough and, therefore, it needs to be removed promptly.
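Combining the three criteria above with the observation-count rule, a pruning decision could be sketched as follows; the numeric thresholds here are illustrative placeholders, not the paper's values (except the half-window observation count, which the rebuttal states explicitly).

```python
import numpy as np

def prune_mask(opacity, scales, obs_count, window_size,
               min_opacity=0.005, max_scale=0.5, max_axis_ratio=10.0):
    """Return a boolean mask of Gaussians to delete.

    opacity: (N,) per-Gaussian opacity; scales: (N, 3) per-axis scales of the
    ellipsoid; obs_count: (N,) number of frames observing each point.
    """
    too_transparent = opacity < min_opacity                    # criterion 1
    too_large = scales.max(axis=1) > max_scale                 # criterion 2
    ratio = scales.max(axis=1) / np.maximum(scales.min(axis=1), 1e-8)
    too_elongated = ratio > max_axis_ratio                     # criterion 3
    too_few_obs = obs_count < window_size // 2                 # half the window
    return too_transparent | too_large | too_elongated | too_few_obs
```

A point failing any one criterion is removed, which matches the "remove promptly" policy described above.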
### Q3: The details of the mapping processing running time on BONN dataset.
A3: Your understanding is correct. The extended execution time of the mapping process on the Bonn dataset is closely linked to the point management strategy. More specifically, it is intricately associated with the extent of camera movement within the dataset.
Due to the relatively large movements of the camera in the Bonn dataset, an increased number of Gaussian points are inserted during the mapping process to maintain tracking stability. Consequently, this leads to a noticeable augmentation in processing time during the mapping phase, as shown in Tab.2 (one-page global PDF). Even when addressing complex pose estimation issues in dynamic scenes, the runtime of our mapping process is approximately the same as that of other GS-based SLAM methods.
---
Rebuttal Comment 1.1:
Comment: Thanks, the authors have addressed all my concerns. I keep my rating unchanged.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer DPxX,
We appreciate the time you spent reviewing our work and thank you again for the valuable comments.
Best wishes
Authors | Summary: The paper combines existing methods for deep SLAM (DROID-SLAM) and 3D Gaussian splatting for 3D mapping into a system and adds an approach for dynamics filtering by depth warping to make tracking more robust in dynamic scenes. The approach is evaluated on several dynamic SLAM benchmarks and compared with recent deep learning based RGB-D SLAM approaches.
Strengths: - The proposed approach of dynamics filtering for SLAM with 3D Gaussian splatting for map representation seems novel.
- Experiments demonstrate improvements over previous methods for neural RGB-D SLAM which have been designed for static scenes.
Weaknesses: - Unfortunately, this paper has significant shortcomings in writing quality and clarity. Many variables and concepts are not defined properly (for instance, J, R in eq 2., \otimes operator, I_{m\times n}, \Pi in eq 8, S_i in l. 192, the symbols in eq. 13).
- Some phrases in the introduction are not accurate or proper English. What does "finishing pose estimation robustness" mean? What are invalid zones, what does furnishing an initial pose estimate mean?
- Sec. 2, the first paragraph starts with the heading "traditional visual SLAM with dynamic objects filter", but contains recent deep SLAM approaches for static scenes. Please restructure.
- The raycasted color and depth in eq. 3 requires a specific ordering of the gaussian splats, as indicated in the products ranging from j=1 to i-1. Please explain how it is established.
- What does the intersection operation over pixels achieve in eq 6 and 9? What does it mean to apply an indicator function and take a \otimes product with the I_mxn ? What is I_mxn?
- The variable \mu is used for coordinates and means of 3D Gaussians ambiguously, please change notations.
- l. 179, how is the camera pose parametrized and optimized ?
- l. 188, what is the dynamic radius?
- l. 192, what is the scale vector S_i and why should it be computed from the mean distance of three nearest neighbor points? Where do these points come from?
- eq 12, how are the lambda_1-3 chosen?
- l. 235, "unit quaternion algorithm" is not the right name for the method but rather Horn's procrustes method.
- l. 257, what is an "analytical experiment" ?
- Table 2, 3, what are the metrics for the numbers ? ATE, RPE ? how is it computed?
- Tables 2, and 3 should also compare with classical dynamic SLAM methods like Co-Fusion [*1], MID-Fusion [*2], EM-Fusion [*3] to name a few. Also these methods are missing from the discussion of related work.
- what is an "invalid Gaussian point filter" in l. 272 ?
- Table 5 seems to indicate that different settings of the iteration count hyperparameter are needed for the various datasets. This indicates an issue with generalization for the method. Are other hyperparameters chosen consistently across datasets?
- l. 290 what does to "encompass major motion categories" mean ?
- l. 291, what is the definition of the STD metric ?
- Table 7 / l. 302, please include run-time for semantic segmentation as it seems essential according to Table 6.
- How is the semantic segmentation mask obtained ? What is the runtime of this processing step?
- The paper does not discuss limitations and assumptions of the method beyond not performing loop closing.
[*1] Martin Ruenz, Lourdes Agapito. Co-fusion: Real-time segmentation, tracking and fusion of multiple objects. ICRA 2017
[*2] Binbin Xu, Wenbin Li, Dimos Tzoumanikas, Michael Bloesch, Andrew J. Davison, Stefan Leutenegger. MID-Fusion: Octree-based Object-Level Multi-Instance Dynamic SLAM. ICRA 2019
[*3] Michael Strecke, Joerg Stueckler. EM-Fusion: Dynamic Object-Level SLAM With Probabilistic Data Association. ICCV 2019
Technical Quality: 2
Clarity: 1
Questions for Authors: See questions in paper weaknesses.
Confidence: 5
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: The conclusion provides an obvious limitation of the approach due to not considering loop closing. Further limitations and assumptions are not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's meticulous and detailed review of our work, which helped us improve the quality of this paper. It should be noted that some of the issues you raised are typically treated as foundational definitions in the 3D Gaussian Splatting paper and the traditional SLAM literature. To avoid unnecessary repetition, we omitted some of the common expressions and symbol definitions used in previous papers.
A1: (1) Eq. 2 comes from the 3DGS [1] paper. $R$ is the viewing transformation and $J$ is the Jacobian of the affine approximation of the projective transformation. (2) $I_{m\times n}$ represents a matrix of the same size as the image, filled with ones. (3) Eq. 8 is derived from DROID [2], where $p_i$ denotes a grid of pixel coordinates. (4) $S_i$ in l.192 and $R_i$ in l.193 are the scaling vector and rotation matrix, respectively. (5) In Eq. 13, $\alpha_{i}$ denotes the opacity, which is defined in l.125, and $S$ denotes the scaling vector. More details on the other threshold parameters can be found in l.244.
A2: (1)"finishing pose estimation robustness" means achieving robust pose estimation. (2)“invalid zones” means dynamic object regions, illegal depth value regions, and regions with less rendering opacity. (3)"furnishing an initial pose estimate" represents that the coarse stage provides a better initial camera pose for the fine stage.
A3: We will modify it to "visual SLAM with dynamic objects filter".
A4: This has been explained in 3DGS[1]. The raycasting will sort all the Gaussian points based on their depth value.
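The depth ordering and the transmittance product $\prod_{j=1}^{i-1}(1-\alpha_j)$ in Eq. 3 can be illustrated with a minimal per-ray compositing sketch; this is a simplification of 3DGS's tile-based rasterizer, using scalar colors for brevity.

```python
import numpy as np

def composite_ray(colors, alphas, depths):
    """Front-to-back alpha compositing along one ray:
    C = sum_i c_i * alpha_i * prod_{j<i} (1 - alpha_j),
    where the ordering j < i is established by sorting on depth.
    """
    order = np.argsort(depths)                       # near-to-far ordering
    c = np.asarray(colors, dtype=float)[order]
    a = np.asarray(alphas, dtype=float)[order]
    # Transmittance reaching Gaussian i: product of (1 - alpha) of closer ones.
    transmittance = np.concatenate(([1.0], np.cumprod(1.0 - a)[:-1]))
    return float(np.sum(c * a * transmittance))
```

With this ordering, a fully opaque near Gaussian zeroes out the contribution of everything behind it, which is exactly the effect of the $j = 1 \dots i-1$ product the reviewer asked about.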
A5: For the definition of $I_{m\times n}$, please refer to A1. The intersection operation signifies that for each element in the matrix $I_{m\times n}$, we assess whether its warp depth meets a specified threshold and subsequently modify the corresponding value at that position.
A6: The mean of 3D Gaussian points in space means the center coordinates of the Gaussian points. Thus, we use $\mu$ to represent the coordinates and mean. Their meanings are consistent.
A7: We parametrize the camera pose using quaternion and translation. To efficiently execute the pose optimization process, we have computed the Jacobian of the designed loss function to these two optimization variables and have implemented modifications in CUDA.
A8: We employed the dynamic point density management strategy, which determines whether to insert new Gaussian points according to the dynamic radius. The dynamic radius is determined based on color gradient, as explained in l.202 - l.204.
A9: $S_i$ refers to the scale of each Gaussian point, initialized by the mean distance of three nearest neighbor points. This operation originated from the original 3DGS[1] paper.
A10: $\lambda_1$-$\lambda_3$ are the loss weight, determined through grid search. We show the details of these parameters in l.242.
A11: Thank you for your advice. We will update this expression in the revised paper.
A12: We utilized the GT point cloud provided by the BONN dataset to qualitatively and quantitatively evaluate the reconstruction results, where "quantitative analytical experiments" refers to the results in Tab. 1.
A13: The values presented in Tab. 2 and 3 correspond to the Absolute Trajectory Error (ATE) metrics, as mentioned in the Metrics section of the experiments (l.233). ATE quantifies the average absolute trajectory error between the estimated and ground truth poses across all measurement locations. Further details can be found in the Evo evaluation tool.
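A minimal sketch of how the ATE numbers (and the STD reported later) are typically computed is given below; for simplicity it omits the SE(3)/Sim(3) trajectory alignment step that tools such as evo perform first.

```python
import numpy as np

def ate_stats(est_xyz, gt_xyz):
    """RMSE and standard deviation of the per-pose translational error.

    est_xyz, gt_xyz: (T, 3) time-associated camera positions, assumed
    already aligned (evo additionally aligns the trajectories first).
    """
    err = np.linalg.norm(np.asarray(est_xyz) - np.asarray(gt_xyz), axis=1)
    rmse = float(np.sqrt(np.mean(err ** 2)))
    return rmse, float(np.std(err))     # (ATE RMSE, STD of ATE)
```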
A14: Thank you for your suggestion. In the experimental section, we have added and compared some classic dynamic SLAM methods: Co-Fusion[4], MID-Fusion[5], and EM-Fusion[6]. The results are shown in Tab. 1(one-page PDF). Our method has showcased superior pose estimation results in comparison to classic dynamic SLAM methods. We will also discuss these methods in related work.
A15: It means “Map point deleting strategy” mentioned in l.214, Sec 3.4.
A16: In Table 5, we present the number of iterations of the tracking and mapping process for different SLAM methods. The experimental results show that our method can achieve superior results with fewer iterations. What's more, together with Table 7, Table 5 also demonstrates the competitiveness of our method in time consumption.
A17: We divided some default dynamic object classes based on semantic prior, such as humans. However, the semantic mask requires prior knowledge of motion categories and faces generalization challenges. Thus we use the depth warp mask to identify undefined dynamic objects such as balloons and boxes.
"encompass major motion categories" means the dynamic object category contained in Tum is human. Therefore, we opted to perform ablation studies on BONN. It contains moving balloons and boxes, which can show the effectiveness of our depth warp mask.
A18: STD means Standard Deviation of Absolute Trajectory Error. This concept has been mentioned in l.234. Further details can be found in the Evo evaluation tool.
A19: The semantic segmentation process can be considered as a part of data preprocessing. Thus, it has not been included in the system runtime calculations. To address your concern, we have tested the inference time of the Oneformer[7] method we adopted on an A6000 GPU, which is 163ms for every frame. It should be noted that our approach does not focus on the specific semantic segmentation network used, but rather on the fusion method itself.
A20: Thank you for your helpful advice. We will enhance the description of the experimental details in the implementation section. We utilize OneFormer[7] to generate prior semantic segmentation. We reported the run-time in A19.
A21: Extending dynamic 3DGS to large-scale scenarios has always been a challenging problem. As mentioned in GS-SLAM[8], the problem of loop closure for handling large-scale scenes is an interesting direction for future research. Thus, we believe it is still a challenge worth solving in dynamic SLAM.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response to my comments.
Some further comments/questions:
* Even if the methods have been introduced in previous papers, used notation needs to be defined properly and in a self-contained way in a scientific paper to avoid any ambiguities. The paper should be revised accordingly.
* A7, which rotation representation is used for optimization and how are the constraints on SO(3) handled?
* A16, Table 7 results are from a different dataset than Table 5. Run-time results on the TUM datasets would be needed to relate the iterations with run-time. Please also include comparison with the classical methods [4-6]
* A19, the semantic segmentation is essential for the system and therefore the runtime cannot be omitted. Please include the timing in the run-time evaluation. Do the baseline methods in Table 7 also contain semantic segmentation? Is it counted into the run-time?
* A21, please also discuss limitations and assumption wrt. the design choices of the proposed method. For instance, how does the approach depend on the accuracy of semantic segmentation? Discuss if live processing of sensory streams is possible with this approach.
---
Rebuttal 2:
Title: Response to Reviewer UDnM [1/2]
Comment: ### Q1: Notation issue.
A1: We are grateful for the reviewer's valuable comments. We will revise and refine the notations in the revision to eliminate any potential ambiguities.
### Q2: Rotation representation and constraints within the SO(3) group.
A2: We utilize the quaternion as the rotation representation in the optimizer to facilitate pose optimization. Throughout the optimization process for the rotational variables, we compute the derivative of the final loss directly with respect to the current quaternion representation. This chain rule differentiation process is structured into two stages. In the first stage, the derivative from the quaternion to the rotation matrix is calculated using automatic differentiation within Pytorch, which allows for the automatic capture of optimization variables. In the second stage, the derivative of the final loss with respect to the rotation matrix is manually computed using the Jacobian matrix. This calculation is implemented using CUDA code during the Gaussian rendering process. Upon obtaining the optimized quaternion, we normalize it to achieve a unit quaternion. This entire optimization process ensures compliance with the constraints on the SO(3) group.
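The quaternion-to-rotation-matrix mapping and the unit-norm projection mentioned above can be sketched as follows; this is a generic NumPy illustration of the math, not the authors' PyTorch/CUDA implementation.

```python
import numpy as np

def quat_to_rot(q):
    """Map a quaternion (w, x, y, z) to a 3x3 rotation matrix.

    Normalizing first keeps the result on SO(3) even if the optimizer
    has drifted q off the unit sphere between update steps.
    """
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
```

In the two-stage chain rule described above, this mapping is the stage that autodiff handles, while the loss-to-rotation-matrix Jacobian is computed manually in the renderer.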
### Q3: More experiments and consistent comparison.
A3: Thank you for your suggestions. To address the reviewer's concern, we provide the running time of tracking and mapping on the TUM dataset, as shown in Tab. 1. All the experiments were conducted with 20 iterations for tracking and 40 iterations for mapping. We have also added a comparison with the classical methods: Co-Fusion and MID-Fusion, where the running time is cited from the original papers.
**Table1: Run-time comparison on TUM f3_walk_static. * denotes including semantic segmentation time.**
|Method|Tracking[ms]|Mapping[ms]|Avg[ms]|
|:----:|:----:|:----:|:----:|
|Co-Fusion|-|-|**83.3**|
|MID-Fusion *|-|-|400.0|
|NICE-SLAM|3186.2|1705.1|4892.3|
|E-SLAM|2045.9|1641.4|3688.5|
|Point-SLAM|2279.5|1544.4|3823.9|
|Co-SLAM|101.4|**140.1**|241.6|
|Ours|**89.2**|549.3|645.9|
Classical methods utilize an efficient variant of the Iterative Closest Points (ICP) algorithm for camera tracking and avoid a training process for mapping. Moreover, the implementation of classical methods has been optimized through the use of a C++ framework, resulting in faster runtime compared to neural SLAM methods. Currently, both NeRF and GS-based SLAM methods exhibit a certain gap in runtime when compared to traditional approaches. Addressing this discrepancy is a challenge that the research community continues to actively pursue.
---
Rebuttal 3:
Title: Response to Reviewer UDnM [2/2]
Comment: ### Q4: Running time issue.
A4: It should be noted that our approach does not focus on the specific semantic segmentation network used, but rather on the fusion method itself. None of the baseline methods in Table 7 of our main paper include the time for semantic segmentation. Semantic segmentation is typically employed as a preprocessing operation and serves as an input to the system. Most dynamic SLAM methods likewise did not include the time for semantic segmentation, treating it instead as part of the preprocessing step. Consequently, Table 7 captures the total runtime of our SLAM system, and the time taken for semantic segmentation is not included. Considering the reviewer's concerns about the runtime of the semantic segmentation network we utilized, we have tested the inference time of the OneFormer [7] method, which is 163 ms per frame. Correspondingly, the running time including semantic segmentation is 245.57 ms for tracking.
To avoid potential ambiguity, we will provide a detailed description of the running time per frame for the semantic segmentation method employed in the revised paper.
Note that DG-SLAM is not optimized for real-time operation. As the first system to propose ***dynamic*** Gaussian splatting for SLAM, this paper primarily focuses on designing an effective approach to achieve robust tracking and mapping. With ongoing advancements in these research fields and improvements in computing power, the processing speeds for semantic segmentation are expected to increase, ensuring they do not become bottlenecks for our method.
### Q5: Discuss more limitations and assumptions.
A5: Our approach has a certain tolerance for the accuracy of semantic segmentation. When the semantic segmentation model generates incomplete or incorrect segmentation masks for fewer frames, our system can still perform accurate and robust camera tracking. This is attributed to the coarse-to-fine pose optimization strategy where the motion estimation from the dense optical flow can tolerate some segmentation failures. Thus the coarse stage can provide a robust initial pose for the fine stage.
Even when addressing complex pose estimation issues in dynamic scenes, the runtime of our mapping process is approximately the same as that of other GS-based SLAM methods, as shown in Table 2 (one-page global PDF). The system runtime reported in the original paper was evaluated on a 3090ti GPU. With ongoing advancements in GPU computational capabilities, we believe our method is capable of real-time computation, enabling the live processing of sensory data streams.
Hope our response helps the reviewer's final recommendation. Thank you!
---
Rebuttal 4:
Comment: Dear Reviewer UDnM,
We appreciate your time reviewing our work, and thank you again for the valuable comments and suggestions. As the discussion phase is nearing its end, we wondered if the reviewer might still have any concerns that we could address. We believe our responses on consistent comparison, rotation representation, running time, and limitations addressed all the questions/concerns, and we hope that our work’s impact and results are better highlighted with our responses.
It would be great if the reviewer can kindly check our responses and provide feedback with further questions/concerns (if any). We would be more than happy to address them. Thank you!
Best wishes,
Authors | Rebuttal 1:
Rebuttal: We are immensely grateful for the time and effort expended by all reviewers in reviewing our manuscript. The technical evaluations and detailed comments provided have been invaluable and have substantially enhanced the quality of our work. In this response, we have meticulously addressed each question posed by the reviewers on a point-by-point basis. Additionally, we have included the figures and tables from the supplementary experiments in the one-page PDF attachment.
The citation numbers in response to the reviewer UDnM are as follows:
[1] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM TOG, 2023.
[2] Zachary Teed and Jia Deng. DROID-SLAM: Deep Visual SLAM for Monocular, Stereo, and RGB-D Cameras. In NeurIPS, 2021.
[3] Erik Sandström, Yue Li, Luc Van Gool, and Martin R. Oswald. Point-slam: Dense neural point cloud-based slam. In ICCV, 2023.
[4] Martin Ruenz, Lourdes Agapito. Co-fusion: Real-time segmentation, tracking and fusion of multiple objects. In ICRA, 2017.
[5] Binbin Xu, Wenbin Li, Dimos Tzoumanikas, Michael Bloesch, Andrew J. Davison, Stefan Leutenegger. MID-Fusion: Octree-based Object-Level Multi-Instance Dynamic SLAM. In ICRA, 2019.
[6] Michael Strecke, Joerg Stueckler. EM-Fusion: Dynamic Object-Level SLAM With Probabilistic Data Association. In ICCV, 2019.
[7] J. Jain, J. Li, M. Chiu, A. Hassani, N. Orlov, and H. Shi. OneFormer: One Transformer to Rule Universal Image Segmentation. In CVPR, 2023.
[8] Hidenobu Matsuki, Riku Murai, Paul H. J. Kelly, and Andrew J. Davison. Gaussian Splatting SLAM. In CVPR, 2024.
Pdf: /pdf/19b915d25ab62abc9467b004fe5d626240fc9473.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Verified Safe Reinforcement Learning for Neural Network Dynamic Models | Accept (poster) | Summary: The paper proposes methods to learn formally verified neural network control policies for (continuous-space, discrete-time) non-linear dynamical systems, whose dynamics are also represented by a neural network. It builds on existing methods for NN verification, and their application to a k-step composition of NNs for the policy and dynamics. The key novel ideas of the paper are a variant of curriculum learning, built into an incremental policy synthesis approach over increasing time horizons, and a parameterized approach to representing state-dependent policies. Experimental results show improved performance on four benchmarks compared to five comparable techniques.
Strengths: - Clear and very well written paper.
- Clearly presented and motivated novel ideas to improve performance on a challenging problem.
- Comprehensive empirical evaluation with a good number of meaningful benchmarks and baselines, and a further ablation study in the appendix.
- Impressive gains over baseline implementations.
Weaknesses: - The empirical results focus on the degree of safety achieved by different policies, but there seems to be no discussion of performance, e.g. runtime, or more directly the scalability of the various methods.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Relating to the stated weakness above, what is the experimental setup in terms of timeout (if any) used to compare tools, and what is the limiting factor in terms of the techniques' ability to synthesise safe policies?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's thoughtful comments and feedback. Our responses are below.
> **Question 1:** What is the experimental setup in terms of timeout (if any) used to compare tools.
**Response 1:**
We did not use any timeouts in our experiments, as all algorithms completed within reasonable time. Table 2 (in the Appendix) provides some timing details as an ablation study for our approach. We will add complete details in the revision.
> **Question 2:** What is the limiting factor in terms of the techniques' ability to synthesise safe policies?
**Response 2:**
The primary limiting factor is the scalability of the verification tool for more complex domains, particularly when we include moving obstacles. This comes in two forms: 1) actual verification (e.g., after training is completed), and 2) the differentiable verification that training requires. The latter is a more significant limiting factor than the former, and our approach would directly benefit from further advances in this area.
---
Rebuttal Comment 1.1:
Comment: Thank you for these clarifications. | Summary: The authors primarily propose a novel method to learn verified safe control policies for nonlinear neural dynamical systems, aiming to achieve safety in the sense of bounded reachability. By leveraging memorization, forward reachability analysis, and differentiable reachability over-approximation, the authors effectively learn verified safe policies, and further introduce an incremental verification approach to enhance the efficiency of the learning process, enabling the acquisition of multiple verified initial state-dependent controllers.
Strengths: The manuscript proposes a method for learning a k-step verified safe neural network controller, aimed at maximizing the efficiency of systems with neural dynamics. In contrast to traditional approaches relying on forward-invariance proofs for safe control synthesis, the manuscript opts for the more practical bounded reachability verification, enabling the leveraging of state-of-the-art differentiable neural network approximation tools.
The experimental results demonstrate that, across several dynamic system environments considering both static and moving obstacles, the proposed approach significantly outperforms the state-of-the-art safe reinforcement learning baselines. This highlights the advantages of the bounded reachability verification framework over the traditional forward-invariance guarantees, in terms of practical viability and performance when applied to systems with neural dynamics.
Weaknesses: The algorithm proposed in this manuscript lacks rationality analysis
(1) Page 3.: “and required statistical assumptions“->”and requires statistical assumptions”
(2) Page 4.: “which enable learning verified safe controllers over longer horizons K”->”which enables learning verified safe controllers over longer horizons K”
(3) Page 4.: “In this work, we primarily utilize the α,β-CROWN toolbox”-> A brief introduction to the α,β-CROWN toolbox should be given here
Technical Quality: 2
Clarity: 3
Questions for Authors: (1) Can the author explain the soundness of the approach proposed in the manuscript?
(2) The rationality of the experimental comparison baseline, why is it more interesting to compare with these methods? What are the maximum dimensions of the system security verification problem that can be solved by this method?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The manuscript thoroughly discusses the limitations of the proposed methods, and the security guarantees provided are weaker compared to forward-invariance. However, bounded reachability offers a more practical approach to achieving verified safety, effectively realizing the safety of the entire event horizon in practice, and provides an alternative for the development of verified safe reinforcement learning.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful comments and suggestions. Our responses are below.
> **Question 1:** Can the author explain the soundness of the approach proposed in the manuscript?
**Response 1:**
Soundness is a direct consequence of our use of a sound verification tool $(\alpha,\beta)$-CROWN, before declaring any of our initial-state-dependent controllers safe in the sense of $K$-step reachability (Lines 4-5 and 10-11 in Algorithm 2). We will clarify in the revision.
> **Question 2:** The rationality of the experimental comparison baseline, why is it more interesting to compare with these methods?
**Response 2:**
We chose recent baselines that cover the classical and SOTA approaches for a wide range of methods commonly used in Safe RL, including Lagrangian penalty, reward penalty, constrained PPO, and safe RL with reachability estimation. We also added a control-theoretic baseline using CBF in response to Reviewer kLbx (see Response 2 there); the performance of this baseline is comparable to our other baselines.
> **Question 3:** What are the maximum dimensions of the system security verification problem that can be solved by this method?
**Response 3:**
This depends greatly on the horizon $K$ that we wish to verify. For example, for small reachability horizon $K$, we can scale to thousands of dimensions. However, as meaningful reachability horizon entails larger $K$, this limits scalability, with verification tools the primary bottleneck. In general, there will be a tradeoff between state space dimension, verifiable horizon $K$, and fraction of state space that we can prove safety for.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. The authors have addressed my main concern with the paper. As such, I've increased the score to Weak Accept. | Summary: This paper tackles the problem of synthesizing verified control policies for dynamical systems with the use of neural networks. The authors focus on nonlinear discrete-time system dynamics for which the use of neural networks is motivated by the challenge of reaching the goal without colliding with obstacles. To guarantee non-collision and maintain reasonable performance, the authors propose an approach of iteratively using an existing verification toolbox with curriculum-based lookahead. The approach is evaluated on the benchmarks from previous work.
Strengths: The approach of building a curriculum of horizons and re-using verification results from the previous steps is relatively novel and provides an interesting future direction.
It is a significant improvement over the baselines that the proposed algorithm achieves 100% empirical safety. The decrease in performance is evident for several benchmarks and may worsen over longer horizons.
The paper is overall well-structured. However, a motivating example to guide the reader through the algorithm would be helpful.
Weaknesses: As can be seen from the ablation study, the runtime improvement is achieved with the proposed approach when the lookahead is longer than 5 steps. It is not clear if a long lookahead is required for the considered benchmarks.
The authors motivate the choice of verification toolbox (a,b-CROWN) with its suitability for incremental verification. It is not clear how much the proposed technique is constrained by the chosen toolbox or general enough to be adopted for other verification tools under certain assumptions.
The choice of benchmark problems is not motivated in the evaluation section. The description of the benchmarks is not complete and requires looking into original papers. It would be appreciated to have complete system models stated formally in the appendix.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could the authors explain the choice of these particular benchmark problems to evaluate the approach on?
2. Is the state-space continuous or discrete for these benchmarks? (it appears it is continuous but only action-space is explicitly mentioned)
3. Which of the chosen benchmark problems particularly showcases the benefit of a longer-than-five step lookahead reachability?
4. How is the average reward computed (with verification over what horizon size)? Is it dependent on the verification horizon?
5. Is it possible to incorporate the proposed forward K-step reachability into the related reachability-based verification tools which do not yet consider it? What features or assumptions need to be satisfied?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The approach seems to have been specifically designed for verification tools that are suitable for incorporation into gradient-based learning. This can be a significant limitation to the generalizability of the approach, since the authors do not discuss or evaluate what requirements verification tools need to satisfy to adopt the proposed approach. Based on the current presentation, the approach can only work with the a,b-CROWN toolbox. If this is the case, it must be explicitly discussed as a limitation; it is, however, still an improvement in this particular domain.
Open-access to data and code: the answer is "yes" when the data and code are anonymously open-sourced. If they are not yet open-sourced, the answer should be "not yet".
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed comments and feedback. Our responses are below.
> **Question 1:** A motivating example to guide the reader through the algorithm would be helpful.
**Response 1:**
We appreciate the suggestion and will use vehicle avoidance as such an example in the revision.
> **Question 2:** As can be seen from the ablation study, the runtime improvement is achieved with the proposed approach when the lookahead is longer than 5 steps. It is not clear if a long lookahead is required for the considered benchmarks.
**Response 2:**
Below is the result where we only train for 5 steps lookahead.
| | Verified-K | Verified-Max | Emp-K | Emp-500 | Avg Reward |
|-------------------------------|--------------|----------------|--------|-----------|-----------------|
| Lane Following, K = 80 | 98.7 | 7 | 99.9 | 99.9 | 328 $\pm$ 5 |
| Vehicle Avoidance (M), K = 50 | 73.1 | 6 | 88.7 | 88.7 | 304 $\pm$ 10 |
| 2D Quadrotor (F), K = 50 | 0.0 | 5 | 87.5 | 87.5 | 401 $\pm$ 23 |
| 2D Quadrotor (M), K = 50 | 0.0 | 5 | 99.7 | 99.7 | 373 $\pm$ 5 |
| 3D Quadrotor (F), K = 15 | 0.0 | 5 | 90.5 | 89.3 | 129 $\pm$ 10 |
This ablation (in comparison with the results in Table 1 in the paper) shows why considering lookahead with $K>5$ is critical for our benchmarks (especially 2D and 3D Quadrotor).
> **Question 3:** It is not clear how much the proposed technique is constrained by the chosen toolbox or general enough to be adopted for other verification tools under certain assumptions.
**Response 3:**
Our training framework is general and can in principle work with any *differentiable* verification technique; to our knowledge $(\alpha,\beta)$-CROWN is simply the best differentiable neural network verifier [5]. We will clarify in the revision.
> **Question 4:** The choice of benchmark problems is not motivated in the evaluation section. The description of the benchmarks is not complete and requires looking into original papers. It would be appreciated to have complete system models stated formally in the appendix.
**Response 4:**
Our benchmark problems are common benchmarks in the literature. For example, [3,4] use Quad2D, and [4] additionally uses Quad3D. The vehicle avoidance benchmark is a variant of CarGoal in the Safety Gym, used by PPO-PID, MBPPO, and RESPO baselines. We will add the description of the complete system models in the final version of the paper.
> **Question 5:** Is the state-space continuous or discrete for these benchmarks?
**Response 5:**
Both the action and state spaces are continuous for all benchmarks.
>**Question 6:** How is the average reward computed? Is it dependent on the verification horizon?
**Response 6:**
Average reward is computed over the entire episode horizon for each environment, independently of the verification horizon (just as in conventional RL). We will clarify in the revision.
> **Question 7:** Is it possible to incorporate the proposed forward K-step reachability into the related reachability-based verification tools which do not yet consider it? What features or assumptions need to be satisfied?
**Response 7:**
Yes. For training, the reachability-based verification toolbox needs to be differentiable. For verification, any reachability tool that supports NN-based dynamics and controllers would work.
> **Question 8:** Open-access to data and code: the answer is "yes" when the data and code are anonymously open-sourced. If they are not yet open-sourced, the answer should be "not yet".
**Response 8:**
This is a good point; we do intend to make all code and data available on GitHub.
[3] Emam, Yousef, Gennaro Notomista, Paul Glotfelter, Zsolt Kira, and Magnus Egerstedt. "Safe reinforcement learning using robust control barrier functions." IEEE Robotics and Automation Letters, 2022.
[4] Dawson, Charles, Zengyi Qin, Sicun Gao, and Chuchu Fan. "Safe nonlinear control using robust neural lyapunov-barrier functions." In Conference on Robot Learning, 2022.
[5] Brix, Christopher, Stanley Bak, Changliu Liu, and Taylor T. Johnson. "The fourth international verification of neural networks competition (VNN-COMP 2023): Summary and results.", 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response and promise to clarify the points in the revision. I have no further questions. | Summary: This paper studies safe reinforcement/control learning by optimization of long-horizon safety verification and learning multiple initial-state-dependent controllers. The authors propose several novel ideas, including a curriculum learning to increase the verification horizon, incremental verification, and split the initial region for multi-controllers training. A set of control tasks show the effectiveness of the approach, compared to other CMDP-based safe RL methods.
Strengths: 1. The paper is well-written and easy to follow.
2. As far as I can tell, the idea of optimizing the verification horizon (curriculum learning) is novel and the experimental result is quite significant.
3. The approach is sound.
Weaknesses: 1. The reviewer feels that the incremental verification in this paper is quite common in the reachable set computation of neural-network-controlled systems (NNCS), where the reachable set computed at the $k$-th step becomes the initial set for the $(k+1)$-th step's computation. The authors may have to discuss their incremental verification vs. NNCS verification tools. To name a few tools, please refer to POLAR-Express [1], CORA [2], etc.
2. Splitting the state/initial space set into small grids for verification and controller synthesis/training is also common in the NNCS verification community. However, I have to admit that the design of multi-initial-state-dependent controllers is new to me.
3. The experiments only compare to the CMDP-based approach, it is unclear how the proposed approach compares to other control-theoretical methods, for instance, the CBF-based approaches.
[1] Wang, Y., Zhou, W., Fan, J., Wang, Z., Li, J., Chen, X., ... & Zhu, Q. (2023). Polar-express: Efficient and precise formal reachability analysis of neural-network controlled systems. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. https://github.com/ChaoHuang2018/POLAR_Tool
[2] https://github.com/TUMcps/CORA?tab=readme-ov-file
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How to ensure that Algorithm 2 can terminate?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed the limitations in the final part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful comments and suggestions. Our responses are below.
> **Question 1:** The authors may have to discuss their incremental verification vs. NNCS verification tools; POLAR-Express[1], CORA[2], etc.
**Response 1:** Indeed, incremental verification is a well-explored idea in the verification literature, and we took inspiration from this literature. To our knowledge, we are the first to use this idea in *training* provably safe controllers. This entails several innovations:
1) We incrementally verify (and backpropagate the results) several steps ahead in a single training iteration (i.e., not merely from $k$ to $k+1$, but more generally from $k_i$ to $k_{i+1}$, where $k_{i+1} - k_i > 1$). This more general form of incremental verification is crucial for training: it significantly speeds training up and reduces the likelihood of getting stuck in "local optima" where inertia resulting from the policy obtained for $k$ prevents verification from succeeding for $k+1$ (e.g., because we are too close to the unsafe region with velocity directed towards it).
2) Incremental verification is also an important component of our "Curriculum Learning with Memorization" training component, where a significant challenge is the backpropagation of $\mathcal{L}_{\text{bound}}$, which becomes GPU-intensive as the number of forward reachability steps, and consequently the neural network depth, increases. (GPU requirements for $K$-step backpropagation significantly exceed those for $K$-step verification.) Using incremental verification also allows us to efficiently extract useful gradient information for training. Furthermore, as shown in the ablation study in Table 2, the use of incremental verification during training significantly increases computational efficiency.
We will add the discussion regarding how our approach to incremental verification compares to, and differs from NNCS verification (e.g., POLAR-Express[1], CORA[2], etc.) in the revision.
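As a heavily simplified illustration of the incremental idea in point 1 above — reusing the reachable set verified at step $k_i$ as the initial set for steps up to $k_{i+1}$ — here is a sketch using exact interval propagation through a toy linear closed-loop system. The dynamics matrix `A`, the horizon curriculum, and the safe set $|x| \le 1$ are all hypothetical; real NNCS verification (e.g., $(\alpha,\beta)$-CROWN) bounds a neural network rather than a linear map.

```python
import numpy as np

A = np.array([[0.9, 0.1],
              [0.0, 0.9]])  # hypothetical stable closed-loop dynamics x_{k+1} = A x_k

def propagate(lo, hi, steps):
    """Forward interval reachability for x_{k+1} = A x_k.

    Splitting A into positive and negative parts makes the interval image
    exact for a linear map: the new lower bound takes lo where A >= 0 and
    hi where A < 0 (and symmetrically for the upper bound)."""
    Ap, An = np.maximum(A, 0.0), np.minimum(A, 0.0)
    for _ in range(steps):
        lo, hi = Ap @ lo + An @ hi, Ap @ hi + An @ lo
    return lo, hi

# Curriculum of horizons: each stage reuses the reachable box verified at
# the previous horizon instead of recomputing from step 0.
lo, hi = np.array([-0.5, -0.5]), np.array([0.5, 0.5])   # initial state set
verified_k, safe = 0, True
for k in [5, 10, 20]:
    lo, hi = propagate(lo, hi, k - verified_k)           # incremental step
    verified_k = k
    safe = safe and bool(np.all(hi <= 1.0) and np.all(lo >= -1.0))
print(verified_k, safe)   # 20 True
```

In this sketch, each curriculum stage costs only $k_{i+1} - k_i$ propagation steps rather than $k_{i+1}$ from scratch, which is the efficiency source the response describes.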
>**Question 2:** It is unclear how the proposed approach compares to other control-theoretical methods, for instance, the CBF-based approaches.
**Response 2:**
We thank the reviewer for the suggestion. We ran an experiment with a CBF-based safe RL baseline from [3]. The results (where (M) stands for moving obstacles and (F) for fixed obstacles) are below; we will add them to the revision. They are qualitatively similar to those of the other baselines we consider.
| | | Verified-K | Verified-Max | Emp-K | Emp-500 | Avg Reward |
|---|---|---|---|---|---|---|
| Lane Following, K = 80 | CBF-based | 98.7 | 7 | 99.9 | 99.9 | $\mathbf{331 \pm 7}$ |
| | VSRL (ours) | $\mathbf{100.0}$ | $\mathbf{80}$ | $\mathbf{100.0}$ | $\mathbf{100.0}$ | $214 \pm 5$ |
| | | | | | | |
| Vehicle Avoidance (M), K = 50 | CBF-based | 73.0 | 6 | 89.3 | 89.3 | $301 \pm 15$ |
| | VSRL (ours) | $\mathbf{100.0}$ | $\mathbf{50}$ | $\mathbf{100.0}$ | $\mathbf{100.0}$ | $\mathbf{401 \pm 4}$ |
| | | | | | | |
| 2D Quadrotor (F), K = 50 | CBF-based | 0.0 | 5 | 89.9 | 89.7 | $\mathbf{408 \pm 17}$ |
| | VSRL (ours) | $\mathbf{100.0}$ | $\mathbf{50}$ | $\mathbf{100.0}$ | $\mathbf{100.0}$ | ${401 \pm 20}$ |
| | | | | | | |
| 2D Quadrotor (M), K = 50 | CBF-based | 0.0 | 4 | 99.3 | 99.3 | $\mathbf{369 \pm 6}$ |
| | VSRL (ours) | $\mathbf{100.0}$ | $\mathbf{50}$ | $\mathbf{100.0}$ | $\mathbf{100.0}$ | $364\pm 4$ |
| | | | | | | |
| 3D Quadrotor (F), K = 15 | CBF-based | 0.0 | 2 | 82.3 | 79.2 | $\mathbf{140 \pm 10}$ |
| | VSRL (ours) | $\mathbf{100.0}$ | $\mathbf{15}$ | $\mathbf{100.0}$ | $\mathbf{100.0}$ | $122 \pm 14$ |
| | | | | | | |
[3] Emam, Yousef, Gennaro Notomista, Paul Glotfelter, Zsolt Kira, and Magnus Egerstedt. "Safe reinforcement learning using robust control barrier functions." IEEE Robotics and Automation Letters, 2022.
> **Question 3: How to ensure that Algorithm 2 can terminate?**
**Response 3:**
Termination can be ensured simply by setting a limit on the number of training iterations or a timeout, as is typically done in RL, or by imposing an upper bound on the number of controllers the algorithm can return. In our experiments, however, this proved unnecessary: all cases terminated in reasonable time and with few initial-state-dependent controllers.
---
Rebuttal Comment 1.1:
Comment: I greatly appreciate the feedback from the authors, which has sufficiently addressed my concerns and problems. I am willing to increase my score. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Graph Neural Flows for Unveiling Systemic Interactions Among Irregularly Sampled Time Series | Accept (poster) | Summary: The authors propose a novel idea for learning the interactions between time series, which is crucial for making reliable forecasts in interacting systems. The paper proposes a new algorithm, GNeuralFlow: a graph-based continuous-time model for learning systemic interactions. A GNN is used to learn the interactions between time series, where the time series are associated with a directed acyclic graph. GNeuralFlow learns the graph structure for irregularly sampled time series.
The experiments performed on synthetic and real data show that GNeuralFlow outperforms the other models on regression and classification tasks. The latent variable modelling experiments show GNeuralFlow's robustness in learning and utilizing the underlying graph structure for improved predictive performance.
Strengths: Originality: The authors identify an important problem: modeling the interactions between time series with non-uniform time samples. The paper introduces the original idea (as far as I know) of using Graph Neural Flow (GNeuralFlow), which leverages the power of graph-based modelling and continuous-time techniques to capture the relationships between time series. The method has wide applications in climate, finance, and other domains where dynamic systems are crucial.
Quality: The paper's main development, GNeuralFlow, is built on a strong mathematical foundation, and the authors provide empirical and experimental results to support their claims.
Clarity: The paper is well structured and describes the problem clearly and then proceeds to describe GNeuralFlow as a solution to it and advance the field. The text provides a clear explanation of the idea with a complete mathematical framework to support the same.
Significance: The idea of capturing relationships between time series using GNeuralFlow is a significant methodological improvement. Learning the graph as part of the ODE solution gives the flexibility to handle unequally sampled time series. This has several applications in the time series domain, including streaming data.
By using a traditional GCN to learn the interactions between time series in continuous time, GNeuralFlow can handle time series observed at non-uniform time stamps without the need for interpolation or expensive pre-processing.
Weaknesses: Although the paper addresses, with a novel approach, a big problem faced in streaming data applications with unequal time samples, the idea comes with additional complexity. The complex nature of the solution is computationally expensive and does not scale beyond a certain size. This has been identified by the authors as well and documented in the limitations section.
Technical Quality: 4
Clarity: 3
Questions for Authors: - The paper discusses the increase in complexity due to the graphs added to the model. Is there an estimate of how much the computation time increases (is it quadratic)? Does this power increase (from quadratic to cubic) as the number of interacting time series grows in the experiment?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The paper is well structured, but its biggest strength becomes its limitation as well. The addition of the graph and the learning of its parameters allow the measurement of interactions, but at the same time they increase the complexity of the model. This would be the main challenge in using the model in real applications and might require approximate solutions to adapt this version of the approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful comments, we reply to your questions below.
**RE: computational cost**
The computational cost can be cubic in the number of nodes because the evaluation of the DAG constraint and its gradient involves computing the matrix exponential. One possible mitigation is, rather than modeling the graph adjacency matrix as a free variable, to impose structure on the matrix (such as diagonal plus low rank); the matrix exponential is then less expensive to compute. | Summary: This paper proposes GNeuralFlow, a novel graph-based continuous-time model for learning systemic interactions, where the interactions are modeled as a Bayesian network. Experimental results for regression problems show that the proposed GNF achieves state-of-the-art performance on time series classification and forecasting benchmarks.
Strengths: + The paper is well-written and easy to follow.
+ The introduction give a nice overview and motivation for the problem.
+ The results are good and evaluation is reasonable.
Weaknesses: - The proposed method lacks a theoretical analysis.
- The authors can consider using state-of-the-art GNN as backbone for graph encoder.
- Can the authors apply this method to real-world datasets?
- Computational complexity and corresponding comparisons with baselines are missing.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see comments in Weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **RE: theoretical analysis**
Our work mainly focuses on modeling. We offered certain analyses to justify some modeling choices, including guaranteeing the contractive mappings required by neural flows (Theorem 1, Theorem 2, and other inline text within Section 4.3).
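To illustrate why contractivity matters for neural flows (a minimal sketch of the standard invertibility argument, not the paper's actual construction): a residual map $x \mapsto x + g(x)$ with $\mathrm{Lip}(g) < 1$ is invertible, and its inverse can be computed by Banach fixed-point iteration. Here $g(x) = 0.5\tanh(x)$ is a hypothetical contractive residual:

```python
import math

def f(x):
    # invertible residual map: Lip(0.5 * tanh) = 0.5 < 1
    return x + 0.5 * math.tanh(x)

def f_inv(y, iters=60):
    # Banach fixed-point iteration x <- y - 0.5*tanh(x); the error shrinks
    # by at least a factor 0.5 per iteration since the residual is contractive
    x = y
    for _ in range(iters):
        x = y - 0.5 * math.tanh(x)
    return x

x0 = 1.3
assert abs(f_inv(f(x0)) - x0) < 1e-9   # the inverse recovers the input
```

Without the contractivity condition, the map need not be invertible and this iteration need not converge, which is why such conditions appear as requirements in flow-based models.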
**RE: sota GNN**
Thank you for pointing out this possible direction. We did not explore it further, as the network we used was already able to outperform the considered baselines.
**RE: real-world datasets**
Thanks for this request. We note that we already performed experiments on four real-world datasets: Activity, Physionet, Mujoco, and Mimic-IV. See sections 5.2 and 5.3 of our paper, and Appendix F for dataset descriptions.
**RE: computational complexity**
Thanks for this point. We have added a table with the wall-clock runtime. As expected, the graph-neural-flow (GNF) models are more expensive than the neural-flow corresponding ones (because of the additional modeling of the graph). However, the GNF are still cheaper than the Neural ODEs (because we do not need a numerical solver).
| Model | Sink | Triangle | Sawtooth | Square |
| ------------------- | ----- | -------- | -------- | ------ |
| Neural-ODE | 1.529 | 1.527 | 1.742 | 2.206 |
| NF-resnet | 1.022 | 1.013 | 1.021 | 1.020 |
| NF-coupling | 0.136 | 0.137 | 0.136 | 0.133 |
| NF-gru | 0.251 | 0.249 | 0.247 | 0.247 |
| GNF-resnet (ours) | 1.521 | 1.521 | 1.534 | 1.533 |
| GNF-coupling (ours) | 1.215 | 1.214 | 1.212 | 1.213 |
| GNF-gru (ours) | 0.275 | 0.283 | 0.286 | 0.284 |
**Final comment**
Thank you for your review. Please consider raising the score if we have addressed your points appropriately.
---
Rebuttal Comment 1.1:
Title: Thank you.
Comment: I appreciate the author for the detailed response. After carefully reading the rebuttal, I am retaining my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer zrVL,
Thank you for your valuable suggestions to enhance our work. In response to your request for further improvement, we have explored an additional GNN model. Our findings in the table below indicate that significant improvement can be achieved by combining the ResNet model with a Message Passing GNN (Veličković, 2022; Bronstein et al., 2021).
| model | Sink | Triangle | Sawtooth | Square |
| ----------------- | --------------- | --------------- | --------------- | --------------- |
| GNF-resnet (ours) | 3.95(±0.32) | 2.32(±0.11) | 3.84(±0.06) | 8.24(±0.64) |
| GNF-MPGNN (ours) | **3.45(±0.13)** | **2.00(±0.03)** | **2.79(±0.05)** | **4.32(±0.12)** |
We kindly ask you to consider raising the score if this update, or any of our previous revisions, addresses your concerns either fully or partially.
**References**
Veličković 2022. Message passing all the way up. ICLR
Bronstein et al 2021. Geometric deep learning. Arxiv. | Summary: The paper focuses on a graph-based model to capture dependencies in irregularly sampled time series data. The framework employs a causal prior—a directed acyclic graph—where nodes are conditionally independent of non-descendants given their parents, specifying component dynamics dependencies. The proposed model, termed a graph neural flow, learns the solution of an unknown ODE directly from irregularly sampled time series, which contrasts with neural ODEs by avoiding repeated calls to a costly numerical solver. Multiple neural flows, one for each time series, are conditioned on the DAG, with their interactions instantiated using a GNN, such as a graph convolutional network (GCN). The GCN enhances the ODE solution parameterization by aggregating neighboring time series information at each time point, effectively modeling a graph-conditioned ODE that captures system interactions.
Strengths: The proposed model is nicely presented, along with precise and meaningful mathematical formulations for capturing interactions in irregularly sampled time series. GNeuralFlow shows significant performance in both classification and regression problems spanning several synthetic and real-world datasets. Hyperparameters and models’ configurations are presented enabling reproducibility and fair comparisons.
Weaknesses: 1. *W1:* Except for the standard experiments on synthetic data representing dynamical systems, the authors explore different tasks on various datasets, slightly deviating from some setups followed in existing related works. For instance, some studies include additional experiments on interpolation/extrapolation [1] or use additional datasets (e.g., classification on MIMIC-III). Moreover, incorporating additional standard baselines in addition to the presented ones could strengthen the evaluation (recurrent/attention-based ones, e.g., mTAND, GRU-D, and others).
2. *W2:* The proposed GNeuralFlow is primarily based on the concept of Neural Flows for efficient computation of ODE system solutions. The significant methodological extensions in the paper include the incorporation of graph-based representations and causal formulation. However, these extensions are not qualitatively evaluated by analyzing causal relationships or visualizing interactions within the learned graphs. For instance, authors in [2] present studies/experiments to quantify causal discovery in the studied datasets.
3. *W3:* Although the approach of replacing the ODE solver offers a computational advantage over classical Neural-ODE-based methods, a comprehensive computational cost analysis comparing different methods, including simpler (e.g., sequential, non-ODE-based) yet effective models, is crucial.
4. *W4:* The presentation of the related work is quite brief, causing some confusion, particularly regarding closely related graph-based methods like CF-GODE and LG-ODE. Including comparisons and comments on additional relevant papers could be beneficial [3,4] (please explain if not relevant).
[1] Schirmer, M., Eltayeb, M., Lessmann, S., & Rudolph, M. (2022, June). Modeling irregular time series with continuous recurrent units. In International conference on machine learning (pp. 19388-19405). PMLR.
[2] Löwe, S., Madras, D., Zemel, R., & Welling, M. (2022, June). Amortized causal discovery: Learning to infer causal graphs from time-series data. In Conference on Causal Learning and Reasoning (pp. 509-525). PMLR.
[3] Choi, J., Choi, H., Hwang, J., & Park, N. (2022, June). Graph neural controlled differential equations for traffic forecasting. In Proceedings of the AAAI conference on artificial intelligence (Vol. 36, No. 6, pp. 6367-6374).
[4] Jin, M., Zheng, Y., Li, Y. F., Chen, S., Yang, B., & Pan, S. (2022). Multivariate time series forecasting with dynamic graph neural odes. IEEE Transactions on Knowledge and Data Engineering, 35(9), 9168-9180.
Technical Quality: 3
Clarity: 3
Questions for Authors: Based on the *weaknesses* above please focus on the following aspects:
1. **Experiments:** Explain choices of datasets and baselines or extend the experimental evaluation (W1). Conduct qualitative evaluations of graph-based causal representations (W2).
2. **Limitations:** Perform a comprehensive computational cost analysis comparing GNeuralFlow with simpler and more complex methods to highlight efficiency gains (W3).
3. **Contribution:** Please better position the presented contribution among relevant works (W4).
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: It would be more complete to experimentally showcase the computational limitations, demonstrating in practice the scalability of the proposed method for real-world datasets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **RE: additional experiments (Q1)**
Thanks for your comment.
Here we provide additional baselines. Specifically, we provide the comparison with GRU-D (Che et al 2016), NRI (Kipf et al., 2018), and dNRI (Graber and Schwing 2020) on the synthetic datasets.
We observe that our GNF-gru can improve over both Neural-flow with GRU and GRU-D. While GNF-gru is outperformed by NRI and dNRI on most datasets, our GNF-resnet achieves the best performance overall.
| model | Sink | Triangle | Sawtooth | Square |
| ----------------- | --------------- | --------------- | --------------- | --------------- |
| NRI | 5.25(±0.02) | 3.96(±0.16) | 4.99(±0.12) | 9.39 (±0.45) |
| dNRI | 5.40(±0.04) | 3.39(±0.09) | 4.97(±0.21) | 9.78(±0.21) |
| NF-GRU | 10.9(±0.43) | 10.3(±0.45) | 16.1(±0.41) | 17.2(±0.51) |
| GRU-D | 12.31(±0.23) | 11.25(±0.32) | 17.55(±0.53) | 18.73(±0.31) |
| GNF-resnet (ours) | **3.95(±0.32)** | **2.32(±0.11)** | **3.84(±0.06)** | **8.24(±0.64)** |
| GNF-gru (ours) | 6.83(±0.23) | 5.41(±0.23) | 5.11(±0.13) | 9.14(±0.61) |
**RE: computational cost (Q2)**
Thanks for the feedback. We present the training cost as the mean training time for one epoch (in seconds) over synthetic datasets. As expected, our graph neural flow (GNF) models are more expensive than the neural flow corresponding ones (because of the additional modeling of the graph). However, the GNF are still cheaper than the Neural ODEs (because we do not need a numerical solver).
| Model | Sink | Triangle | Sawtooth | Square |
| ------------------- | ----- | -------- | -------- | ------ |
| Neural-ODE | 1.529 | 1.527 | 1.742 | 2.206 |
| NF-resnet | 1.022 | 1.013 | 1.021 | 1.020 |
| NF-coupling | 0.136 | 0.137 | 0.136 | 0.133 |
| NF-gru | 0.251 | 0.249 | 0.247 | 0.247 |
| GNF-resnet (ours) | 1.521 | 1.521 | 1.534 | 1.533 |
| GNF-coupling (ours) | 1.215 | 1.214 | 1.212 | 1.213 |
| GNF-gru (ours) | 0.275 | 0.283 | 0.286 | 0.284 |
**RE: position the presented contribution among relevant works (Q3)**
Thanks for pointing out the related works from Choi et al (2022) and Jin et al (2022). We will add them to the main text of our paper. There are some differences with our method.
First, these papers take the Neural ODE approach, whereas our method follows neural flows. While both aim at solving an unknown ODE, neural flows are computationally more economical than Neural ODEs because they do not require repeated calls to ODESolve. It is unclear whether the referenced papers could straightforwardly swap Neural ODEs for neural flows to gain this benefit without changing other components of their models.
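To make this computational distinction concrete, here is a toy numpy sketch (our illustration with a made-up vector field `phi`, not code from either line of work): a neural flow evaluates a closed-form solution map once, with $F(x_0, 0) = x_0$ by construction, whereas a Neural-ODE-style model must take many solver steps.

```python
import numpy as np

rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(2, 2))     # hypothetical "network" weights

def phi(x, t):
    # toy stand-in for a learned vector field / flow component
    return np.tanh(x @ W + t)

def flow(x0, t):
    # neural-flow style: a single evaluation of the solution map,
    # parameterized so that flow(x0, 0) == x0 (the flow initial condition)
    return x0 + t * phi(x0, t)

def ode_solve(x0, t, steps=100):
    # Neural-ODE style: repeated (Euler) solver steps of dx/dt = phi(x, t)
    x, h = x0, t / steps
    for k in range(steps):
        x = x + h * phi(x, k * h)
    return x

x0 = np.array([0.3, -0.2])
assert np.allclose(flow(x0, 0.0), x0)  # identity at t = 0, no solver needed
```

The flow evaluates `phi` once per query time, while the solver evaluates it `steps` times, which is the cost gap reflected in the wall-clock table above.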
Secondly, these papers consider time-varying graphs. We are not arguing whether a time-varying graph or a constant graph is superior; rather, they come from different modeling beliefs. Time-varying graph modeling typically constructs the graph from data (e.g., an affinity graph of given feature vectors at a time, or a co-occurrence graph of observations within a sliding time window) or parameterizes the graph based on node embeddings, whereas DAG structure learning treats the graph as a free parameter to learn. Our method models a constant graph that is assumed to generate the data over time.
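For readers unfamiliar with treating the adjacency matrix as a free parameter: acyclicity of a weighted adjacency matrix $A$ is commonly enforced (e.g., in NOTEARS, Zheng et al., 2018) via the differentiable penalty $h(A) = \operatorname{tr}(e^{A \circ A}) - d$, which is zero iff the graph is a DAG; this matrix exponential is also what makes the constraint expensive for large $d$, as noted in the computational cost discussion above. A minimal numpy sketch (our illustration, not necessarily the paper's exact implementation):

```python
import numpy as np

def acyclicity(A, terms=25):
    """NOTEARS-style penalty h(A) = tr(exp(A o A)) - d.

    The elementwise square keeps h differentiable in A; the matrix
    exponential is approximated here by its truncated power series."""
    d = A.shape[0]
    M = A * A
    E, term = np.eye(d), np.eye(d)
    for k in range(1, terms):
        term = term @ M / k          # M^k / k!
        E = E + term
    return np.trace(E) - d

dag = np.array([[0.0, 1.0, 1.0],     # strictly upper triangular => a DAG
                [0.0, 0.0, 1.0],
                [0.0, 0.0, 0.0]])
cyc = np.array([[0.0, 1.0],          # 2-cycle 0 -> 1 -> 0
                [1.0, 0.0]])
assert abs(acyclicity(dag)) < 1e-9   # zero for a DAG
assert acyclicity(cyc) > 1.0         # strictly positive with a cycle
```

Each evaluation costs roughly $O(d^3)$ per matrix product, which is consistent with the cubic cost (and the diagonal-plus-low-rank mitigation) mentioned in an earlier response.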
**References:**
Choi et al (2022). Graph neural controlled differential equations for traffic forecasting. AAAI.
Jin et al (2022). Multivariate time series forecasting with dynamic graph neural odes. IEEE Transactions on Knowledge and Data Engineering.
Che et al (2016). Recurrent Neural Networks for Multivariate Time Series with Missing Values.
Kipf et al. (2018). Neural Relational Inference for Interacting Systems, ICML.
Graber and Schwing. (2020). Dynamic Neural Relational Inference, CVPR
**Final comment**
Thank you for the feedback. Please consider raising the score if we have addressed your concerns.
---
Rebuttal Comment 1.1:
Comment: Thank you for the replies, which have partially addressed the issues raised by my side (e.g., baselines and computational costs). Considering other reviewers' comments as well as the impact of the contribution (when it comes to mathematical background, performance improvements and complexity), I would prefer to maintain my rating. | Summary: This paper addresses the problem of multivariate time series prediction with feature interactions. It proposes learning a Directed Acyclic Graph (DAG) to model interactions, encoded by a Graph Neural Network (GNN), and using a neural flow to model the dynamics. The experiments demonstrate improvements over graph-free models and models with given graphs obtained from covariance.
Strengths: The combination of graph learning and neural flow, along with DAG-conditioned optimization, is original. Since flows are popular in generative modeling, the idea of carrying them over to the graph setting seems promising. The paper also learns the graph parameters via the matrix A (learning or relearning the DAG structure), which can be useful in cases where the graph is not known in advance with certainty.
Weaknesses:
Flow matching inherently relies on a linear interpolant (a straight line) to encode optimal transport paths in time series data. In spaces where this does not hold, or where the optimal transport paths lie in a latent dimension, the assumptions are violated. This is why flow matching has been more useful for generative modeling than for time series. The authors do not notice this, and the restrictive experiments they perform are insufficient to test it.
I don't see the need for a DAG here. See the RITINI model [Bhaskar et al, LOG 2023], when there is time series data there can be feedback loops in the variables rather than DAG-style causality. I would appreciate trying to loosen this requirement, which would enable cyclic dependencies.
The set of comparisons featured here is rather limited. Here are several works that could be compared against:
Kipf et al., Neural Relational Inference for Interacting Systems, ICML, 2018.
Graber and Schwing. Dynamic Neural Relational Inference, CVPR, 2020 (dynamic graph inference for discrete-time systems)
Nishikawa-Toomey et al., Bayesian learning of Causal Structure and Mechanisms with GFlowNets and Variational Bayes (jointly learn the DAG and the parameters of a linear-Gaussian causal model)
Deleu et al., Joint Bayesian Inference of Graphical Structure and Parameters with a Single Generative Flow Network, NeurIPS, 2023 (jointly learn the posterior over the structure of a Bayesian Network, and also the parameters of its conditional probability distributions)
Smith and Zhou, Coordinated Multi-Neighborhood Learning on a Directed Acyclic Graph, arXiv:2405.15358 (constraint-based approach that exploits conditional independence relationships between variables to infer the underlying causal model)
Hiremath et al. Hybrid Global Causal Discovery with Local Search, arXiv:2405.14496 (global causal discovery by learning local causal substructures using topological sorting and pruning of spurious edges)
Bhaskar et al. Inferring dynamic regulatory interaction graphs from time series data with perturbations, LOG 2023
Technical Quality: 3
Clarity: 3
Questions for Authors: For the experiments on synthetic datasets (Figure 1), how are the graphs for graph ODEs obtained? Are all models using the ground truth graph? If so, how is the conclusion that “DAG is better than other graph structures at modeling dependencies” (line 252) drawn?
Does the performance gain over existing graph ODE models come from the learnable graph? Is there an ablation study for that?
Is the model learning a constant graph independent of time? How does it perform when there is a significant change in data dependence?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **RE: flow matching (W1)**
Our effort consists of extending Neural Flows (Bilos et al., 2021) to integrate additional interdependency information in the form of a learned graph. Note that Bilos et al. (2021) was proposed in the context of time series, unlike Flow Matching. We are not proposing a flow-matching approach.
**RE: no need for a DAG here (W2)**
Thanks for pointing out this perspective. Our view on the matter is the following.
A DAG provides a clear and interpretable representation of causal relationships between variables. In the context of time series data from multiple sensors, understanding these causal relationships can help identify how changes in one sensor's readings might influence others. This is crucial in medical monitoring, where understanding causality can aid in diagnosing conditions or understanding physiological responses.
In addition, DAG structure learning is a long-standing and challenging problem in probabilistic graphical models (in particular, Bayesian networks). The structure of a Bayesian network takes the form of a DAG and plays a vital role in causal inference (Pearl, 1985). Learning the DAG structure is a combinatorial problem; it is not only theoretically intimidating (NP-hard; see Chickering et al. (2004)) but also practically challenging even when approximate solutions are sought.
**References:**
Bilos et al. (2021). Neural flows: Efficient alternative to neural ODEs. NeurIPS.
Pearl (1985). Bayesian networks: A model of self-activated memory for evidential reasoning. Proceedings of the 7th Conference of the Cognitive Science Society.
Chickering et al. (2004). Large-sample learning of Bayesian networks is NP-hard. JMLR.
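As a concrete aside on the continuous relaxations used in this literature (NOTEARS-style methods are discussed below), the combinatorial acyclicity constraint is typically replaced by a smooth penalty that vanishes exactly on DAGs. A minimal sketch of that penalty — illustrative only, with our own function names, not the paper's implementation:

```python
import numpy as np

def _expm(M, terms=30):
    # Matrix exponential via truncated Taylor series (adequate for small matrices).
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def notears_acyclicity(A):
    # NOTEARS-style measure: h(A) = tr(exp(A * A)) - d, with elementwise square.
    # h(A) == 0 exactly when the weighted adjacency matrix A encodes a DAG.
    d = A.shape[0]
    return float(np.trace(_expm(A * A)) - d)
```

For example, the penalty vanishes on a two-node chain but is positive on a two-cycle, which is what lets gradient-based learners discourage cycles smoothly instead of searching the combinatorial space.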
**RE: related works (W3)**
Thanks for providing these related works, we will place them appropriately in the main text.
Here we add the comparison with two of the requested baselines. Specifically, we provide a comparison with NRI (Kipf et al., 2018) and dNRI (Graber and Schwing 2020) methods on the synthetic datasets.
| model | Sink | Triangle | Sawtooth | Square |
| ---------- | --------------- | --------------- | --------------- | --------------- |
| NRI | 5.25(±0.02) | 3.96(±0.16) | 4.99(±0.12) | 9.39 (±0.45) |
| dNRI | 5.40(±0.04) | 3.39(±0.09) | 4.97(±0.21) | 9.78(±0.21) |
| GNF-resnet | **3.95(±0.32)** | **2.32(±0.11)** | **3.84(±0.06)** | **8.24(±0.64)** |
We notice that three of the requested comparisons were posted after the conference deadline, namely, Nishikawa-Toomey, (2024), Smith and Zhou (2024), and Hiremath et al. (2024).
**References:**
Kipf et al. (2018). Neural Relational Inference for Interacting Systems. ICML.
Graber and Schwing (2020). Dynamic Neural Relational Inference. CVPR.
Hiremath et al. (2024). Hybrid Global Causal Discovery with Local Search. arXiv.
Smith and Zhou (2024). Coordinated Multi-Neighborhood Learning on a Directed Acyclic Graph. arXiv.
Nishikawa-Toomey et al. (2024). Bayesian learning of Causal Structure and Mechanisms with GFlowNets and Variational Bayes. arXiv.
**RE: how are the graphs for graph ODEs obtained, and why DAG is better than other graph structures at modeling dependencies (Q1)**
Thanks for raising this point. The graph ODEs are given the ground-truth graph as input for the synthetic data, while they are given the covariance matrix for the real-world datasets.
Empirically, we find that our DAG method achieves improvements over graph ODEs even when they are given the ground-truth graph.
**RE: performance gain over existing graph ODE (Q2)**
Thanks for raising this point. The performance gain is due to the DAG component. For example, in Figure 2, we show that a NeuralFlow model without graph dependencies performs worse than both GNeuralFlow with a learned graph and with the ground truth graph.
**RE: learning a constant graph independent of time (Q3)**
While we learn a constant graph, and certain datasets may not satisfy such an assumption, our experiments show that the method also succeeds on real-life datasets. Moreover, our framework can support learning a dynamic graph by parameterizing it with time-dependent node embeddings.
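The time-dependent extension mentioned above could, for instance, score directed edges from time-indexed node embeddings. The following is a hypothetical sketch of such a parameterization (our own illustration, not code from the paper); note a separate acyclicity constraint would still be needed:

```python
import numpy as np

def dynamic_adjacency(node_emb_t):
    # Sketch of a time-dependent graph: score every directed pair from the
    # embeddings at time t, `node_emb_t` of shape (n_nodes, dim), and squash
    # scores to (0, 1) with a sigmoid; self-loops are zeroed out.
    scores = node_emb_t @ node_emb_t.T
    A = 1.0 / (1.0 + np.exp(-scores))
    np.fill_diagonal(A, 0.0)
    return A
```

Re-evaluating this function on embeddings that evolve with t would yield a graph that changes as the data dependence changes.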
---
Rebuttal Comment 1.1:
Title: Clarified neural flows
Comment: Thanks for the clarification of the neural flows approach, in other words I see that you directly model the integral curve. I think this is also interesting for a graph. I also acknowledge the effort of adding the baselines. The idea that you would use a DAG because others do is not a great justification, we know medicine is filled with circularly dependent variables. Therefore I will raise my score slightly to 5. | null | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper proposes a continuous-time model to discover the causal structure from irregular multivariate time series data. The idea is to introduce the DAG (directed acyclic graph) to model the interaction of different time series at the same time step in the vector field that generates the multivariate time series(that is, the $n$ trajectories). Instead of learning the vector field, the paper proposes to learn the solution of the ODE system directly with neural flows. The structural interaction is implemented with graph convolution neural networks and three particular parameterization methods are proposed for the solution function $F$ to guarantee its invertibleness. The experiments on both synthetic and real-world irregular time series datasets are conducted to assess the efficacy of the proposed method for both prediction and classification tasks.
Strengths: * The proposed method is well motivated and clearly illustrated, and the presentation is easy to follow. The reviewer appreciates the way to establish the solution framework presented in Section 4.1, which is very well explained.
* It is interesting to introduce the DAG structure learning to irregular time series modeling and the proposed method seems reasonable to the reviewer.
* The proposed method achieves better results than the existing method.
Weaknesses: * In Section 3.2, it states "(i) the vector fields $\mathbf{B}\mathbf{x}^1, \ldots, \mathbf{B}\mathbf{x}^n$ follow the same conditional dependence structure governed by $\mathbf{A}$", but according to Eq. (2), the vector fields seem to be $\mathbf{B}\mathbf{x}^j - \sum_{i=1}^{j-1}a_{ij} \mathbf{B}\mathbf{x}^{i}$ for $j=1,\ldots,n$, so I would suggest changing the description to "the vector fields that generate the $n$ trajectories follow the same conditional dependence structure governed by $\mathbf{A}$" to avoid confusion.
* To learn the DAG matrix, the proposed method employs NOTEARS and DAG-GNN. How do these DAG learning algorithms impact the model performance?
* The proposed method (graph neural flows) is dubbed GNeuralFlow in the main text but is referred to as GNF in the tables of experiments. It is better to unify the abbreviation. Besides, no code is released.
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **RE: ..I suggest changing the description to "the vector fields..**
Thanks for the feedback. We will update the text.
**RE: How do these DAG learning algorithms impact the model performance? (W2)**
Thanks for pointing out this part. We address it as follows.
A limitation of the proposed model is that the number of parameters in the $A$ part grows quadratically with the number of time series (nodes). Such a scalability challenge is a common problem for DAG structure learning. While past research has shown the feasibility of learning a graph with a few hundred nodes (Yu et al., 2019), going beyond that is generally believed to require new computational techniques.
**References**
Yu et al. (2019). DAG-GNN: DAG structure learning with graph neural networks. ICML.
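To make the quadratic growth concrete: if $A$ is parameterized as a strictly triangular matrix under a fixed node ordering (an assumption for illustration; the actual parameterization may differ), the number of free graph parameters is $n(n-1)/2$:

```python
def dag_param_count(n_series: int) -> int:
    # Free entries of a strictly triangular n x n adjacency matrix,
    # i.e. graph parameters growing as O(n^2) with the number of series.
    return n_series * (n_series - 1) // 2
```

For example, 100 series already give 4,950 edge parameters and 1,000 series give 499,500, which illustrates why scaling past a few hundred nodes is challenging.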
**RE: unify the abbreviation GNF**
GNF is commonly an abbreviation for Graph Normalizing Flow. We will change the abbreviation.
**Final comment**
Thank you for the comments. Please consider raising the score if we have addressed them appropriately.
---
Rebuttal Comment 1.1:
Comment: Thanks for further clarification. I will keep my score. | null | null | null | null | null | null |
Identity Decoupling for Multi-Subject Personalization of Text-to-Image Models | Accept (poster) | Summary: MuDI is a novel framework designed for multi-subject personalization in text-to-image diffusion models. It effectively decouples identities of multiple subjects, using segmented subjects from a foundation model for data augmentation and generation initialization. A new metric is introduced to evaluate multi-subject personalization. Experimental results show MuDI produces high-quality personalized images without identity mixing, outperforming existing methods.
Strengths: - This paper is well-written and easy to follow.
- The experimental results are comprehensive and sufficiently support the claims made in the paper.
- MuDI successfully prevents identity mixing when generating multiple subjects, maintaining clear individual identities.
- Introduction of a new metric provides a better assessment of multi-subject personalization performance.
Weaknesses: - Inheriting from DreamBooth, MuDI requires test-time fine-tuning for each set of concepts, which might hinder its practical application. In contrast, some approaches like IP-Adapter can achieve customization in a tuning-free manner.
- There is a spelling error on line 796. Additionally, there is a miscitation on line 690, where FastComposer[51] is incorrectly referred to as a "single-subject personalization" method; it is actually a zero-shot multi-subject customization method.
Technical Quality: 3
Clarity: 4
Questions for Authors: - During training, MuDI sets the background to white, which might reduce text alignment in the generated images. However, in Figure 6, MuDI does not seem to be affected by this. I need the authors to provide a possible explanation for this discrepancy.
- The proposed strategy of modifying the initial noise during inference appears to be very effective. I am curious whether DreamBooth with Region Control, if applied with the same inference strategy, would achieve better performance than MuDI.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: No negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and effort in reviewing our paper. We appreciate your positive comments that
- Well written paper
- Comprehensive experiments
- Success in multi-subject personalization
- Good proposed metric
We initially address your concerns below.
---
**C1. Test-time fine-tuning method**
We agree that test-time fine-tuning methods like DreamBooth may face limitations in practical application. However, existing tuning-free methods have only shown effectiveness within specific domains, such as human subjects [1, 2], and often still require additional fine-tuning to reach levels of subject personalization comparable to test-time fine-tuning methods [3].
Given that our research aims to mitigate identity mixing across a broad range of subjects while maintaining high subject fidelity, adopting a tuning-free approach would not meet our objectives. This is evidenced by Figure R4 in the uploaded PDF, where FastComposer completely fails to personalize Corgi and Chow Chow (animals). This limitation of FastComposer stems from its design, which specifically targets human subjects.
[1] IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models \
[2] FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention \
[3] Domain-Agnostic Tuning-Encoder for Fast Personalization of Text-To-Image Models
---
**C2. White background**
This is an important question regarding overfitting. As you pointed out, our method does not reduce text alignment, as indicated by the highest text fidelity metrics (ImageReward and CLIP score). This success is due to our use of the prompt “A photo of [V1] and [V2], simple background” during training, with segmented subjects composed on a white background. This approach effectively disentangles the background from the identities through the text “simple background”, preventing overfitting.
For additional supporting results, we have included examples demonstrating that MuDI can generate diverse backgrounds and styled images aligned with the texts in Figure R3 of the uploaded PDF.
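As a rough illustration of the composition step described above (a sketch under our own assumptions — the released MuDI code may differ in overlap handling and scaling), segmented subjects could be pasted onto a white canvas like so:

```python
import numpy as np

def seg_mix(subjects, masks, canvas_hw=(512, 512), rng=None):
    # Illustrative Seg-Mix-style augmentation: paste segmented subjects onto
    # a plain white canvas at random offsets, so only identity-relevant
    # pixels appear during fine-tuning. `subjects[i]` is an (h, w, 3) uint8
    # crop and `masks[i]` its boolean (h, w) segmentation mask.
    rng = rng if rng is not None else np.random.default_rng()
    H, W = canvas_hw
    canvas = np.full((H, W, 3), 255, dtype=np.uint8)  # white background
    for img, m in zip(subjects, masks):
        h, w = m.shape
        y = int(rng.integers(0, H - h + 1))
        x = int(rng.integers(0, W - w + 1))
        region = canvas[y:y + h, x:x + w]
        region[m] = img[m]  # copy only the masked (subject) pixels
    return canvas
```

The resulting image would then be paired with the training prompt "A photo of [V1] and [V2], simple background".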
---
**C3. Inference initialization for region control**
Thank you for the interesting question. To answer your question, we applied our inference strategy to DreamBooth+region control. As shown in Figure R5(Left) of the uploaded PDF, this combination improves the success rate for less similar subjects (e.g., monster toy and can) but still results in mixed identities for similar subjects, such as two teddy bears. In all cases, our MuDI method achieves a higher success rate and greater multi-subject fidelity compared to inference initialization with DreamBooth+region control, as demonstrated in Figure R5(Right) of the uploaded PDF.
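For intuition, the paper initializes generation from segmented subjects; a generic, simplified sketch of a layout-aware initialization — standard Gaussian noise plus a small subject-specific offset inside each region mask — might look like the following (the names and the blending weight `gamma` are hypothetical, not MuDI's exact scheme):

```python
import numpy as np

def layout_aware_init(masks, subject_means, noise_shape=(4, 64, 64),
                      gamma=0.3, rng=None):
    # Start from standard Gaussian latent noise and add a small
    # subject-specific offset inside each subject's layout mask,
    # giving the sampler a weak spatial prior per subject.
    # `masks[k]` is a boolean (H, W) mask; `subject_means[k]` is a
    # (C,) vector, e.g. the mean latent of subject k.
    rng = rng if rng is not None else np.random.default_rng()
    z = rng.standard_normal(noise_shape)
    for mask, mu in zip(masks, subject_means):
        z[:, mask] += gamma * mu[:, None]
    return z
```

Such an initialization biases each region toward one subject from the first denoising step, which is the mechanism that region-control baselines lack.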
---
**C4. Typo**
Thank you for pointing this out, we will correct the mistakes in the final revision. | Summary: This work introduces a training and inference pipeline and an evaluation benchmark for multi-concept customization for text-to-image diffusion models. Specifically, the pipeline comprises a Seg-Mix training stage, which can be viewed as a data augmentation trick to prevent the fine-tuned model from learning mixed attributes, and a mean-shifted noise initialization for the very first step of the denoising process, aiming to inject appearance and layout priors. The evaluation benchmark takes into consideration the dysfunctionality of previous methods when evaluating the disentanglement extent between customized subjects. The benchmark utilizes a Detect-and-Compare workflow considering the similarity for the same subject and dissimilarity between two distinct subjects.
Strengths: 1. The paper is well-written and easy to follow. I didn’t encounter many confusing expressions during my first reading.
2. The introduced methods are intuitively effective, and the design of the evaluation benchmark is convincing for evaluating the extent of disentanglement.
3. This work introduces two interesting additional uses: size control and modular customization, which are helpful for improving the application scenarios for customization.
Weaknesses: 1. Though the method is intuitively effective, as I stated in Strength #2, the novelty is limited since some designs have been experimentally verified by previous methods, such as the utilization of descriptive class [1] and a data-augmentation pipeline for multi-concept customization [2].
2. Though the proposed method seems reasonable for combining distinct subjects with different region locations, like a cat and a dog sitting beside each other, it may be challenged in scenarios where the subjects have rich semantic interactions, like a person wearing glasses. This limitation is induced by the region-control designs, where both augmented training and the initialization during inference pose strong regularization for decoupling two instances. This limitation makes this method difficult for general multi-concept customization.
[1] InstantBooth: Personalized Text-to-Image Generation without Test-Time Finetunin
[2] SVDiff: Compact Parameter Space for Diffusion Fine-Tuning
Technical Quality: 2
Clarity: 3
Questions for Authors: Seg-Mix uses a white background and a prompt organized as “…, simple background”; my question is whether this design poses the risk of outputting more images with white backgrounds even when using edited prompts. This seems inevitable, much like how Cut-Mix inherits the stitching artifacts from the training data.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: See Weakness#2
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and effort in reviewing our paper. We appreciate your positive comments that
- Well written paper
- Effective methods
- Convincing evaluation benchmark
- Interesting applications
We initially address your concerns below.
---
**C1. Novelty of using descriptive classes**
We would like to clarify that our method uses descriptive classes for a different purpose. Our approach utilizes these classes to improve the separation of similar subjects, whereas the prior work employs them for the personalization of rare subjects [1]. We are the first to demonstrate that descriptive classes are crucial for distinguishing between identities of highly similar subjects, which allows for the personalization of 11 dogs and cats using a single trained LoRA, as shown in Figure 35 of the Appendix. Additionally, we utilize LLMs to automatically generate descriptive classes, in contrast to the manual selection used in the prior work.
[1] InstructBooth: Instruction-following Personalized Text-to-Image Generation
---
**C2. Novelty of Seg-Mix**
We would like to note that the proposed Seg-Mix is a different augmentation method from Cut-Mix [2]. Cut-Mix generates augmented images by simply stitching two cut images side by side, which inevitably results in unnatural vertical lines and still suffers from identity mixing (Figure 2 of the main paper).
On the other hand, our Seg-Mix generates augmented images by composing the segmented subjects with the identity-irrelevant information removed. This approach effectively reduces unnatural artifacts and prevents identity mixing even for highly similar subjects.
Specifically, Seg-Mix allows the subjects to overlap, resulting in natural interaction between the subjects which we experimentally validate in Section B.2 and Figure 18 of the Appendix. Furthermore, our Seg-Mix enables the control of relative size (Figures 9(a) and 31 of the main paper) by composing the scaled segmented subjects, which cannot be done by Cut-Mix.
We would also like to emphasize that the novelty of our work is not limited as we present a novel initialization method as well as a new metric and dataset in our work.
[2] SVDiff: Compact Parameter Space for Diffusion Fine-Tuning
---
**C3. Rich semantic interactions**
Thank you for great feedback. To address your concerns about rich semantic interactions, we have included additional experiments in the uploaded PDF. Figure R2 demonstrates that our framework can generate subjects with rich semantic interactions by leveraging a prompt-aligned layout for initialization. Initializing with a layout that aligns with the prompt enables the generation of images such as a teddy bear wearing sunglasses or a toy riding a Corgi, without any identity mixing. We also remark that such prompt-aligned layouts can be automatically generated using LLMs (see Appendix B.5 for more details).
---
**C4. White background**
This is an important question regarding overfitting to the training data. Unlike Cut-Mix, our Seg-Mix is not biased toward white backgrounds and is capable of generating diverse backgrounds when using edited prompts. This success is due to our use of the prompt “A photo of [V1] and [V2], simple background” during training, with segmented subjects composed on a white background. This approach effectively disentangles the background from the identities through the text “simple background”, preventing overfitting.
For additional supporting results, we have included examples demonstrating that MuDI can generate diverse backgrounds and styled images aligned with the texts in Figure R3 of the uploaded PDF.
---
Rebuttal 2:
Comment: Thank you for the explanation provided by the authors. I have reviewed the rebuttal materials, and most of my concerns have been addressed. However, I still have reservations regarding the novelty of the descriptive classes, so I have decided to maintain my original rating.
---
Rebuttal Comment 2.1:
Title: Thank you for the response - clarification on novelty
Comment: Thank you for the response, we are happy to hear that most of the concerns are addressed.
We would like to clarify that leveraging descriptive classes is one of the many contributions of our work, and that Reviewers JDvE, amUB, and ViAz have acknowledged that our work is a novel approach for multi-subject personalization.
We have effectively addressed identity mixing for multi-subject personalization by introducing a new data augmentation method Seg-Mix and novel inference initialization method. Moreover, we proposed a new metric for the evaluation of multi-subject fidelity.
We hope the reviewer would kindly consider a fresh evaluation of the novelty of our work.
Best,
Authors | Summary: The paper introduces MuDI, a novel framework designed to improve multi-subject personalization in text-to-image diffusion models. Unlike current methods that often mix identities and attributes from different subjects when generating multiple subjects simultaneously, MuDI effectively decouples these identities. The framework employs segmented subjects generated by a foundation model for segmentation, known as Segment Anything, which is used for both training and inference. This approach serves as data augmentation for training and as initialization for the generation process. Additionally, the authors introduce a new metric to better evaluate the performance of their method in multi-subject personalization.
Strengths: 1. The topic is interesting and it has a good novelty.
2. The presentation is good and the results look promising.
Weaknesses: The dataset is small, and more analysis should be made. Please see the detailed Questions.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The dataset provided in the paper is small and monolithic in style, which does not adequately illustrate the superiority of the method. Could the authors conduct more experiments to clarify this concern?
2. I'm concerned about whether image content, including resolution, style, and other factors impacts performance. Could the authors clarify this point?
3. Relying solely on the proposed metric could not demonstrate the model's superiority. Could the authors include comparisons with other existing metrics to further validate the model's performance?
4. The qualitative comparison in Section 5.2 lacks comprehensiveness. Displaying a few examples is not enough to ensure credibility. It is recommended to include quantitative comparisons, such as the proportion of high-quality generated results and other quantifiable metrics.
5. The paper lacks detailed explanations of the important evaluation metrics for multi-subject fidelity and text fidelity, making it difficult for readers to understand their specific practical significance and thus grasp the model's advantages. It would be helpful if the authors could provide more detailed explanations.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and effort in reviewing our paper. We appreciate your positive comments that
- Interesting topic and novel
- Good presentation
We initially address your concerns below.
---
**C1. Small and monolithic style dataset**
To address your concerns regarding style, we have included the results of personalizing cartoon characters in Figure R1 of the uploaded PDF. As demonstrated, our MuDI model successfully generates distinctive characters, whereas DreamBooth does not perform as well. We will include more experiments on diverse styles in the final revision.
We would also like to note that our dataset consists of similar subject combinations from the benchmark datasets, such as DreamBench [1] and CustomConcept101 [2], covering a wide range of categories from animals to objects and scenes. Additionally, we have demonstrated that our method can effectively personalize animation characters together in Figures 1 and 36 of the main paper.
[1] DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation \
[2] Multi-Concept Customization of Text-to-Image Diffusion
---
**C2. Regarding image content**
Thank you for your pointer. As discussed in Section D of the Appendix, the performance of MuDI is influenced by several factors, including the number and similarity of subjects. For example, we observe a reduced success rate in identity decoupling among highly similar subjects, such as the two teddy bears in Figure 37(a) of the Appendix. From the human evaluation, the average success rate of MuDI across all categories in the dataset is 70%, which decreases to 32% for the two teddy bears.
However, we also remark that these challenges are not unique to our approach but are a common issue in prior research as well. Regarding the other factors mentioned, our extensive experiments with diverse image contents and examples of Sora confirm that these factors do not impact our identity decoupling performance. We will include a detailed discussion of other limitations in the final revision.
---
**C3. Comparison with existing metrics**
Thank you for your suggestion. In the table below, we provide a quantitative comparison with CLIP-I, DreamSim, and DINOv2, where MuDI outperforms the baselines on all metrics.
| Method | CLIP-I | DreamSim | DINOv2 |
|-------------------|-----------|-----------|-----------|
| Textual Inversion | 0.664 | 0.403 | 0.341 |
| DreamBooth (DB) | 0.711 | 0.479 | 0.434 |
| DB + Region | 0.731 | 0.508 | 0.462 |
| Cut-Mix | 0.724 | 0.510 | 0.475 |
| **Ours** | **0.738** | **0.529** | **0.486** |
We would like to note that, unlike existing metrics, which show a low correlation with human evaluations, our metric demonstrates a high correlation, as evidenced in the tables of Figures 4 and 15 of the main paper. Consequently, using our metric provides a more reliable basis for validation.
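For reference, CLIP-I-style image fidelity reduces to an average pairwise cosine similarity between generated and reference image embeddings. A minimal sketch, assuming the CLIP features are precomputed elsewhere (the encoder itself is not shown):

```python
import numpy as np

def clip_i(gen_feats: np.ndarray, ref_feats: np.ndarray) -> float:
    # CLIP-I-style score: mean pairwise cosine similarity between
    # generated-image embeddings (n_gen, d) and reference-image
    # embeddings (n_ref, d), each row L2-normalized first.
    g = gen_feats / np.linalg.norm(gen_feats, axis=1, keepdims=True)
    r = ref_feats / np.linalg.norm(ref_feats, axis=1, keepdims=True)
    return float((g @ r.T).mean())
```

Because this averages over whole images, it cannot tell whether two subjects swapped attributes, which is the gap our D&C metric targets.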
---
**C4. More quantitative comparisons**
We would like to clarify that we provide comprehensive quantitative comparisons of our method against the baselines through human evaluation and analysis:
- Human evaluation in Figure 6 of the main paper was conducted using a total of 2000 generated images. It shows that MuDI achieves significantly high success rates and is preferred over 70% against Cut-Mix.
- Success rates with respect to the number of subjects in Figure 8(b) of the main paper show that the baselines completely fail personalizing three subjects, while ours show over 50% success even for four subjects.
We also visualize uncurated samples in Figure 16 of the main paper to ensure credibility.
---
**C5. Explanations of evaluation metrics.**
Thank you for your suggestion, we agree that further explanation of the metrics would help the readers, and will add more details in the final revision. | Summary: The paper proposes MuDI, a novel method for generating images with multiple personalized subjects. By leveraging segmented subjects from reference images for both training and inference, MuDI effectively addresses the challenge of identity mixing in multi-subject image generation. Key contributions include a new data augmentation technique (Seg-Mix) and a new evaluation metric for multi-subject fidelity.
Strengths: - The paper introduces a novel approach to multi-subject image generation by leveraging segmented subjects for both training and inference, effectively decoupling subject identities. This represents a creative combination of existing techniques in image segmentation and text-to-image generation.
- The paper is well-structured and clearly presented, with a solid experimental evaluation demonstrating the effectiveness of the proposed method.
- The authors effectively communicate the problem, the proposed solution, and the experimental results. The paper is well-organized and easy to follow.
- By addressing the critical challenge of identity mixing in multi-subject image generation, the paper offers a valuable contribution to the field. The proposed method has the potential to significantly impact applications requiring the generation of multiple distinct subjects within a single image.
Weaknesses: - While the paper presents a comparative analysis with existing methods, a more comprehensive evaluation against a wider range of baselines, including recent advancements in image generation and personalization, would strengthen the paper's claims. It would be essential to compare with methods like PortraitBooth (CVPR 2024) and FastComposer.
- Additionally, exploring different evaluation metrics beyond the proposed D&C metric could provide a more holistic assessment of the method's performance.
- The paper lacks sufficient details about the dataset used for training and evaluation. A more in-depth description of the dataset, including its size and diversity, would enhance the reproducibility of the work.
- Although the paper includes some ablation studies, a more comprehensive analysis of the impact of different components of the proposed method (e.g., Seg-Mix, initialization, descriptive class) on the overall performance would provide deeper insights into the method's effectiveness.
- While the paper acknowledges the limitations of existing methods, a more thorough discussion of the potential limitations of the proposed MuDI method, such as its sensitivity to image complexity or its performance on highly similar subjects, would strengthen the paper's overall contribution. Some studies on how the size of the objects composed using SegMix during training affects the model, i.e, does it lead to any size biases in the model?
Technical Quality: 3
Clarity: 3
Questions for Authors: - Could the authors provide more details about the dataset used for training and evaluation, including its size, diversity, and collection process? Additionally, a more in-depth description of the evaluation metrics and experimental setup would enhance reproducibility.
- How does MuDI compare to other state-of-the-art methods that focus on image composition or layout control for multi-subject image generation? Specifically for PotraitBooth and FastComposer?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - Could the authors elaborate on the limitations of MuDI, such as its performance on highly similar subjects or complex scenes?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and effort in reviewing our paper. We appreciate your positive comments that
- Novel approach
- Clear paper with solid experiments
- Easy to follow
- Contribution to the field
We address each of your concerns below.
---
**C1. Comparison with FastComposer**
Following your suggestion, we compared our method with FastComposer in Figure R4 (see our uploaded PDF). FastComposer fails to personalize the Corgi and Chow Chow (animals), whereas our method successfully generates distinct objects. This limitation of FastComposer [1] (and similarly, PortraitBooth [2]) stems from their design, which specifically targets human subjects. For our comparisons, we utilized the open-sourced weights from the official implementation of FastComposer. We did not include PortraitBooth in our analysis as its code and weights are not yet available.
[1] FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention \
[2] PortraitBooth: A Versatile Portrait Model for Fast Identity-preserved Personalization
---
**C2. Comparison with layout control**
We provided a comparison with the layout conditioning methods in Section B.12 of our Appendix. These methods (Cones2 [3], Mix-of-show [4]) often result in missing subjects, identity mixing, and low subject fidelity, as shown in Figure 29 of the Appendix. Furthermore, as shown in Table 3 of the Appendix, our MuDI outperforms the layout conditioning methods on both multi-subject fidelity metrics and text fidelity metrics.
[3] Cones 2: Customizable Image Synthesis with Multiple Subjects \
[4] Mix-of-Show: Decentralized Low-Rank Adaptation for Multi-Concept Customization of Diffusion Models
---
**C3. Different evaluation metrics**
Thank you for your suggestion. In the table below, we provide a quantitative comparison with CLIP-I, DreamSim, and DINOv2, where MuDI outperforms the baselines on all metrics.
| Method | CLIP-I | DreamSim | DINOv2 |
|-------------------|-----------|-----------|-----------|
| Textual Inversion | 0.664 | 0.403 | 0.341 |
| DreamBooth (DB) | 0.711 | 0.479 | 0.434 |
| DB + Region | 0.731 | 0.508 | 0.462 |
| Cut-Mix | 0.724 | 0.510 | 0.475 |
| **Ours** | **0.738** | **0.529** | **0.486** |
We would like to note that, unlike existing metrics, which show a low correlation with human evaluations, our metric demonstrates a high correlation, as evidenced in the tables of Figures 4 and 15 of the main paper. Consequently, using our metric provides a more reliable basis for validation.
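For reference, CLIP-I and DINO-style image-fidelity scores of the kind reported above are commonly computed as cosine similarities between backbone embeddings of generated and reference images. A minimal sketch of that computation (the embedding vectors here are hypothetical placeholders, not tied to any specific backbone or to the authors' evaluation code):

```python
import math

def embedding_similarity(emb_a, emb_b):
    """Cosine similarity between two image embeddings (e.g. from CLIP or DINOv2).

    Higher values indicate more similar images; scores in the table are
    typically averages of such similarities over many image pairs.
    """
    dot = sum(a * b for a, b in zip(emb_a, emb_b))
    norm_a = math.sqrt(sum(a * a for a in emb_a))
    norm_b = math.sqrt(sum(b * b for b in emb_b))
    return dot / (norm_a * norm_b)
```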
---
**C4. Dataset details**
Thank you for your suggestion. For better reproducibility, we will open-source all datasets, training codes, and checkpoints. We provided some details in Appendix A.2 and Figure 11 in the main paper, such as where we collect training images and prompts. We will add more details in the final revision.
---
**C5. More ablation studies**
Thank you for your suggestion. We believe that we provided comprehensive ablation studies on each component in Section 5.3, Figure 7, and Table 1 of the main paper, demonstrating the necessity of Seg-Mix, our inference initialization, and descriptive class for preventing identity mixing. We further conducted an ablation study on the number of subjects in Figure 8 of the main paper, showing that our MuDI can personalize even five subjects while previous methods completely fail. We will try to add ablation studies on all combinations of the components in the final revision.
---
**C6. Potential limitations**
Thank you for your suggestion. For MuDI, we observe a reduced success rate in identity decoupling among highly similar subjects, for example, two teddy bears in Figure 37(a) of the Appendix. From the human evaluation, the average success rate of MuDI for all categories in the dataset is 70% which decreases to 32% for the two teddy bears. We also remark that the existing baseline methods completely fail in such cases, indicating that this challenge is not unique to our approach but is a common issue in prior research as well. We will include a detailed discussion of other limitations in the final revision. | Rebuttal 1:
Rebuttal: Dear Reviewers,
We sincerely thank you for reviewing our paper and for the insightful comments and valuable feedback. We appreciate the positive comments that emphasize the novelty of our work and the advantages of our proposed method:
- Novel approach (JDvE, amUB)
- Well-written and easy to follow (JDvE, amUB, ivq5, ViAz)
- Convincing evaluation benchmark (ivq5, ViAz)
- Solid experiments (JDvE, ViAz) and interesting applications (ivq5)
We uploaded a PDF file that includes 5 figures:
- Figure R1: MuDI with cartoon style character
- Figure R2: Examples of rich semantic interaction between subjects
- Figure R3: Examples of diverse backgrounds and styles
- Figure R4: Results of FastComposer on our dataset
- Figure R5: Experiments on using our initialization with region controlled DreamBooth
Thank you again for your thorough review and thoughtful suggestions. We hope our responses and the clarifications have addressed any remaining questions, and we are willing to address any further inquiries you may have.
Yours sincerely,
Authors
Pdf: /pdf/82872752c2f7b606648033918b76b70da707eb41.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Computation-Aware Gaussian Processes: Model Selection And Linear-Time Inference | Accept (poster) | Summary: The authors extend the work of Wenger et al. (2022) on "computational uncertainty" in Gaussian process regression models, which introduced the IterGP class of approximations. These approximations treat the limited computation in approximate GP methods as a source of uncertainty, resulting in uncertainty predictions that provably do not suffer from overconfidence. However, the original IterGP paper did not provide a straightforward method for model/hyperparameter selection: this paper addresses that by discussing two approaches for model selection, eventually settling on a new evidence lower bound using the variational family implied by the IterGP approximation, with different proposals for the "action matrix." They compare against SVGP and SGPR, two popular baselines for approximate Gaussian process regression.
Strengths: The work presents a novel method for model selection and linear-time inference for the IterGP approximations originally proposed by Wenger et al. (2022). They discuss a number of viable options, and overall settle primarily on a newly derived ELBO with a sparse parameterization of an "action matrix," although they do also provide experiments for conjugate gradient-based "actions."
In my opinion, the quality of the work is largely high, is clearly written, and provides practical steps forward for the GP community. The paper is enjoyable to read and it was great to understand the various avenues that the authors explored to achieve practical model selection. Generally, the authors also seem relatively open about the limitations of the method and don't try to over-claim what they have achieved.
Weaknesses: My main concerns with the paper lie in its evaluation, particularly with regard to the sparse variational methods. I do not recall seeing these problems on this dataset, although I have admittedly found Parkinson's more problematic, but not in the way shown (I usually find that it just fails with a bad Cholesky decomposition instead of showing divergent behavior).
In my experience, this is because while GPyTorch is a state-of-the-art package for conjugate gradient-based methods, it is problematic when it comes to its implementations of sparse variational methods. I believe these instabilities are rather due to GPyTorch's use of Lanczos and conjugate gradient algorithms when computing covariance matrix decompositions and log probabilities, in addition to the use of float32. In my experience, stochastic variational methods are very sensitive to numerical accuracy and so should only be computed using the full Cholesky decomposition and float64. Therefore, since I did not see mention of this (although apologies if I have missed it!) I would recommend changing the GPyTorch defaults to use Cholesky and float64, or using a package that does not rely on approximate matrix decompositions such as GPflow or GPJax.
Moreover, one of the key attributes of SGPR is that because the loss is not stochastic, it can be optimized (for both hyperparameters and inducing locations) using second-order optimizers such as L-BFGS-B, as is commonly done. This is typically much faster than Adam, and essentially allows optimization to be parameter-free (as the usual defaults generally work well). It would have been good to compare to this, as is common in the literature, although I do commend the parameter search that the authors did for Adam optimization.
Finally, it would be interesting to see how the resultant ELBOs compare, as well as comparisons of ELBOs and hyperparameters to exact GPs on smaller datasets where feasible (I think that about 15-20k datapoints should be feasible given the compute the authors had access to). Indeed, a lot of the claims throughout the paper center around comparing IterGP to SVGP/SGPR with respect to the exact GP, and so it seems a bit strange that there aren't really any experimental comparisons with respect to exact GPR!
Technical Quality: 2
Clarity: 3
Questions for Authors: In order of relative importance:
1. Following from my main weaknesses, could you confirm whether you used the default GPyTorch settings in terms of faster computations? Apologies again if I missed this!
2. Could you confirm why SGPR was not used for Road and Power? My understanding is that the memory cost, which would be the main limitation, should be roughly on the same order as for IterGP-Opt, or am I incorrect?
3. In Fig. 1, could you confirm how you treated the inducing point locations? Although it shows your point nicely, I find it a bit misleading, as in my experience SVGP would never really optimize its inducing locations to be in a (relatively) large gap between datapoints, and so I was surprised to see an inducing location at about 0.6 on the x-axis.
4. Out of interest, since the ELBO for IterGP-Opt is not stochastic, could L-BFGS-B be used there to speed up optimization?
5. It's not clear to me how the statement in lines 111-113 would be implied by the equation above it - could you elaborate please?
Below I'll list some minor questions and typos:
- Could you please change the colors used in the experiment plots and tables? I am unfortunately red-green colorblind, and I found it very difficult to disentangle the colors between SVGP and SGPR and again between IterGP-Opt and -CG. One example of colorblind-friendly colors in matplotlib can be `plt.style.use('tableau-colorblind10')`. Thanks!
- Missing space in line 49 in "introduced,a"
- Admittedly really nitpicky, but could you reframe Eq. 3 and so forth as a maximization problem, or be more clear that you're minimizing the negative ELBO? I think it makes more sense to keep the ELBO as a lower bound to the log marginal likelihood, and so maximizing makes more sense
- line 99: "we optimizer" -> "we optimize"
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: In my opinion the authors have fairly assessed the limitations of their work, namely the inability to do minibatching and the method's restriction to conjugate likelihoods.
-----
Post-discussion update
-----
I would like to thank the authors for an engaging discussion. Overall, I was pleased with the quality of the responses and the additional experiments provided. While the work is perhaps not the most "exciting" in terms of its empirical results and it has some limitations (e.g., not being able to be minibatched, as pointed out by another reviewer), I don't really think it's useful to judge a paper along these lines, and even so I think this paper represents a solid contribution and has the potential to pave the way for important developments further down the line. Therefore, I am raising my score accordingly.
I would also like the opportunity to address the last comments left by the authors. As I was not able to respond to these before the author-reviewer discussion period ended, I have not taken these into account for my final score. However, I hope that the authors will take them into consideration when they update their paper.
> As we write in the caption of Table 1 in the original submission, the reported number of epochs is determined as follows: "[...] We report the epoch where each method obtained the lowest NLL, and all performance metrics (NLL, RMSE, and wall-clock time) are measured at this epoch. [...]". The rebuttal PDF did not contain this full caption due to the limited space available, but the final version will, of course. The total number of epochs was determined by resource constraints (1000 epochs for Adam, 100 for LBFGS, except for Power; see also lines 235-238 and our rebuttal response). As Figure R1 (rebuttal pdf) shows this results in roughly similar overall training time between for example SGPR trained with LBFGS and IterGP-Opt trained with Adam. We would recommend interpreting Table (R)1 and Figure (R)1 together to understand the generalization performance.
Thanks for your clarification here. Perhaps I have misunderstood which NLL you are looking at, but I am concerned that this amounts to "cheating" - as I understand it you are looking at test metrics to determine when to stop the optimization process, which of course you wouldn't have access to in a real setting. It would be better to either fix the number of epochs or determine some form of early stopping that only relies on training or validation data.
> We would like to politely disagree with the characterization that the choice of optimizer and its parameters doesn't determine the final result (even in theory). The log-marginal likelihood (also the ELBO) as a function of the kernel hyperparameters is generally non-convex and often has multiple (local) minima (for an example see Figure 5.5 of Rasmussen and Williams (2006)). Which modes an optimizer converges to depends on the initialization, the choice of optimizer and the learning rate (schedule / line search).
Yes, I am in agreement here - I had meant my comment in a highly idealized sense where we could assume that different optimizers would reach the same optimum, but it was sloppy at best and likely unrealistic in many settings.
> The reason we would still recommend using Adam over L-BFGS for IterGP-Opt is the increased resource usage. While the performance is slightly better on both datasets as you correctly point out, the runtime is also larger (e.g. Parkinsons ~=1.3x, Bike 1.8x longer) and the memory usage is higher. For IterGP-Opt one optimizes the kernel hyperparameters and the action entries, which in our case amounts to $d + 2 + n$ parameters. As, for example, the PyTorch documentation on L-BFGS points out, L-BFGS requires param_bytes * (history_size+1) amount of memory, which for a typical history size (e.g. PyTorch's default of 100) adds significant overhead, as compared to a single gradient, which only requires param_bytes.
I'm ok with this logic favoring Adam over L-BFGS for memory reasons, as long as it is clearly stated. But I do think it's a bit hard to say that Adam is necessarily faster for IterGP-Opt - it seems fairly close, and in my experience (admittedly, with SGPR) L-BFGS has a tendency to optimize much quicker initially but then spend a lot of time searching for the optimum. It might be instructive to show the training curves as well for both optimizers, particularly as the early stopping I've mentioned above might confound these determinations.
Finally, I did want to comment on something I didn't pick up on from a previous response you gave but caught during the reviewer discussion phase:
> As per the recommendation in Section 4.1 of Ober et al (2022), in our original experiments we used a (single precision) Cholesky decomposition both for SGPR and SVGP
Where in Ober et al (2022) does it say they used single precision for SGPR and SVGP? I could not find any mention of single vs. double precision, and I believe it's fairly common for SGPR and SVGP to still be done in double precision, despite the default single precision in GPyTorch.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Summary
Thank you for your feedback on our paper! We appreciate your detailed recommendations for improvement. Based on these we've added a new set of experiments where we trained SGPR and an exact (Cholesky)GP with LBFGS in double precision (see the rebuttal PDF).
We believe we have addressed all of your questions fully below, but should anything remain unclear, we are happy to respond during the discussion period.
Should our response have influenced your initial evaluation of our work, we would be highly appreciative if you would consider updating your score.
Thank you!
### Weaknesses
> My main concerns with the paper concern its in its evaluation, particularly with regards to the sparse variational methods. I do not recall seeing these problems on this dataset, although I have admittedly found Parkinson's more problematic, but not in the way shown (I usually find that it just fails with a bad Cholesky decomposition instead of showing divergent behavior). In my experience, this is because while GPyTorch is a state-of-the-art package for conjugate gradient-based methods, it is problematic when it comes to its implementations of sparse variational methods. I believe these instabilities are rather due to GPyTorch's use of Lanczos and conjugate gradient algorithms when computing covariance matrix decompositions and log probabilities, in addition to the use of float32. In my experience, stochastic variational methods are very sensitive to numerical accuracy and so should only be computed using the full Cholesky decomposition and float64. Therefore, since I did not see mention of this (although apologies if I have missed it!) I would recommend changing the GPyTorch defaults to use Cholesky and `float64`, or using a package that does not rely on approximate matrix decompositions such as GPflow or GPJax.
We would like to clarify that we closely followed the recommendations of Ober et al. (2022) and computed any (covariance) matrix decompositions for SGPR and SVGP with Cholesky using an iteratively increasing additive jitter should the decomposition fail. We did *not* use Lanczos or CG.
The iteratively added jitter could explain the observed divergent behavior instead of an outright failure on Parkinson's.
We would like to politely point out that GPyTorch does not use Lanczos or CG for variational methods, but rather a Cholesky decomposition (see the source code for [SGPR](https://github.com/cornellius-gp/gpytorch/blob/283105b4f71b0e97fb2dbbf34c95396c7a88bd25/gpytorch/kernels/inducing_point_kernel.py#L65) and [SVGP](https://github.com/cornellius-gp/gpytorch/blob/283105b4f71b0e97fb2dbbf34c95396c7a88bd25/gpytorch/variational/variational_strategy.py#L203)).
We will expand our description of the experiments with this additional information.
To alleviate concerns about the use of single precision, we conducted new experiments in double precision as described in the answer to the next question.
> Moreover, one of the key attributes of SGPR is that because the loss is not stochastic, it can be optimized (for both hyperparameters and inducing locations) using second-order optimizers such as L-BFGS-B, as is commonly done. This is typically much faster than Adam, and essentially allows optimization to be parameter-free (as the usual defaults generally work well). It would have been good to compare to this, as is common in the literature, although I do commend the parameter search that the authors did for Adam optimization.
Thank you for this suggestion! We've made the following improvements to our main experiment.
We trained SGPR on all datasets in double precision using L-BFGS with a Wolfe line search for 100 epochs, based on the recommendations of Ober et al. (2022). We performed a sweep over the initial learning rate $\text{lr}_{\text{SGPR}} \in \\{10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 1\\}$ and repeated each run for five random seeds. Both in our original experiments (using single precision and Adam) and in the new ones, we computed Cholesky decompositions with adaptively increasing kernel jitter, as per Section 4.1 of Ober et al. (2022).
We find that this indeed increases its performance as the updated Table R1 in the attached rebuttal PDF shows. SGPR now outperforms SVGP on all small to medium datasets as expected, but IterGP-Opt still matches or outperforms SGPR on all datasets, except "Bike", as both Table R1 and Figure R1 in the attached rebuttal PDF show.
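A minimal sketch of the adaptive-jitter Cholesky strategy described above (an illustrative implementation in the spirit of the Ober et al. (2022) recommendation, not the authors' or GPyTorch's actual code):

```python
import numpy as np

def safe_cholesky(K, initial_jitter=1e-8, max_tries=6):
    """Cholesky with adaptively increasing additive jitter.

    Try a plain factorization first; should it fail, add jitter * I to the
    diagonal and retry, growing the jitter by a factor of 10 each attempt.
    """
    n = K.shape[0]
    jitter = 0.0
    for attempt in range(max_tries):
        try:
            return np.linalg.cholesky(K + jitter * np.eye(n))
        except np.linalg.LinAlgError:
            jitter = initial_jitter * 10.0 ** attempt
    raise np.linalg.LinAlgError("Cholesky failed even at maximum jitter")
```

For a numerically rank-deficient kernel matrix this returns a factor of a slightly perturbed matrix $K + \epsilon I$ instead of failing outright, which is consistent with the divergent-rather-than-crashing behavior discussed above.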
> Finally, it would be interesting to see how the resultant ELBOs compare, as well as comparisons of ELBOs and hyperparameters to exact GPs on smaller datasets where feasible (I think that about 15-20k datapoints should be feasible given the compute the authors had access to). Indeed, a lot of the claims throughout the paper center around comparing IterGP to SVGP/SGPR with respect to the exact GP, and so it seems a bit strange that there aren't really any experimental comparisons with respect to exact GPR!
Based on your suggestion we've added results for exact (Cholesky)GPs to our experiments (see Table R1 of the rebuttal PDF). We trained a CholeskyGP on Parkinson's and Bike with LBFGS (using a Wolfe line search) in double precision for 100 epochs. We performed a parameter sweep over the initial learning rate $\mathrm{lr}_{\text{CholeskyGP}} \in \\{10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 1\\}$ and repeated each run five times for different seeds.
Perhaps unsurprisingly, the exact CholeskyGP outperforms all approximations on those small datasets.
We also added the CholeskyGP to the hyperparameter plot in Figure S4 of the supplementary of our paper. The exact GP tends to learn a larger outputscale and smaller observation noise than the approximate methods. Unfortunately, we could not add this to the rebuttal pdf due to the space constraint of one page only.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for your rebuttal and detailed response to my review and others. I can confirm that I have carefully read the rebuttals as well as the discussion (up to this point). In general, I think the updated results represent a significant improvement to the quality of the paper. I would also like to thank the authors for correcting some of the confusion I had regarding their paper as well as GPyTorch. However, I generally like to see how the discussion period plays out, and especially if the other reviewers are satisfied with the authors' responses, before committing to change my score one way or another.
A couple of minor points still come to mind:
- I am somewhat confused by how the number of epochs is chosen for Adam in the updated results. At the end of the day, the results should be the same (theoretically speaking, of course) regardless of how you choose the optimizer, with the only difference really being time taken and memory allocation, but this is not reflected in these updated results. This is especially striking in Parkinsons and Bike for both SGPR and IterGP-Opt, which do not seem to reflect the authors' recommendation for Adam over L-BFGS (as stated in a response to reviewer 7YnY). How did the authors determine when to stop Adam optimization (looking at the paper I couldn't find where this was mentioned, but apologies if I've missed it)?
- I appreciate that the authors have agreed to include an analysis of the hyperparameter values of CholeskyGP/exact GPR in their manuscript, but I still think that explicitly considering the ELBOs/LML estimates could be somewhat instructive, as indicated in my original review. Firstly, because solely considering predictive metrics can be misleading when the model is mis-specified, as it most likely is here on these UCI datasets, but also in that it might shed light on some of the discussions being had about optimization.
Overall though, while I continue to reserve judgement about the final score until later, I do think that in my view this paper has already been significantly improved by the responses the authors have made.
---
Rebuttal 2:
Title: Answers to Questions
Comment: ### Questions
> 1. Following from my main weaknesses, could you confirm whether you used the default GPyTorch settings in terms of faster computations? Apologies again if I missed this!
As per the recommendation in Section 4.1 of Ober et al (2022), in our original experiments we used a (single precision) Cholesky decomposition both for SGPR and SVGP to compute solves with $K(Z, Z)$ with an iteratively increasing jitter should the decomposition fail. In other words, we did not rely on iterative methods for accelerated solves for SGPR and SVGP. We will make this more explicit in the description of the experimental setup in the final version of the paper.
> 2. Could you confirm why SGPR was not used for Road and Power? My understanding is that the memory cost, which would be the main limitation, should be roughly on the same order as for IterGP-Opt, or am I incorrect?
The asymptotic memory complexity $O(nm)$ is indeed the same for $m=i$, but since we are using $m=1024$ inducing points versus $i=512$ for IterGP-Opt the memory cost of SGPR is two times higher, ignoring any constant factors arising from implementation differences.
When training SGPR in double precision (with LBFGS), the memory required increases by another factor of two, giving an overall memory requirement that is at least four times higher. These differences lead to prohibitive memory consumption when we attempt to train SGPR on Road on the GPU we used in our experiments.
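The factor-of-four estimate can be sanity-checked with a back-of-the-envelope calculation over the dominant $n \times m$ cross-covariance storage (the training-set size here is illustrative, and implementation-dependent constants are ignored):

```python
def cross_cov_bytes(n, m, bytes_per_float):
    """Memory for an n-by-m cross-covariance matrix, e.g. K(X, Z) or K(X, S)."""
    return n * m * bytes_per_float

n = 100_000  # illustrative number of training points
sgpr_f64 = cross_cov_bytes(n, 1024, 8)   # m = 1024 inducing points, double precision
itergp_f32 = cross_cov_bytes(n, 512, 4)  # i = 512 actions, single precision
assert sgpr_f64 / itergp_f32 == 4.0  # 2x from m vs. i, 2x from float64 vs. float32
```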
> 3. In Fig. 1, could you confirm how you treated the inducing point locations? Although it shows your point nicely, I find it a bit misleading, as in my experience SVGP would never really optimize its inducing locations to be in a (relatively) large gap between datapoints, and so I was surprised to see an inducing location at about 0.6 on the x-axis.
We optimized the hyperparameters and inducing points of SVGP in Figure 1 using Adam with an initial learning rate of 0.1 for 200 epochs and a linear learning rate decay. Your experience matches ours that in one dimension the depicted scenario does not occur frequently -- it depends on the initialization and choice of optimization hyperparameters. However, we believe *it faithfully reflects SVGP's behavior in higher dimensions*.
To substantiate this claim, we ran an experiment on synthetic data based on a latent function drawn from a GP with increasing dimension of the input space. We find that the average Mahalanobis distance (induced by the lengthscales) across inducing points to the nearest datapoint increases with dimension (see Figure R2 of author response PDF). We compare this to randomly placed inducing points. While SVGP does move inducing points closer to data points on average, they are still becoming increasingly distant from the data measured in lengthscale units. Further, we also provide more evidence for our claim that SVGP's latent variance often is very small (at inducing points) relative to the total variance, meaning almost all deviation from the posterior mean is considered to be observational noise. The right plot in Figure R2 of the attached rebuttal PDF shows this across dimensions. For $d\geq 4$ approximately 95% of the predictive variance is due to observational noise.
> 4. Out of interest, since the ELBO for IterGP-Opt is not stochastic, could L-BFGS-B be used there to speed up optimization?
Thank you for this suggestion! We ran an additional set of experiments on all but the two largest datasets for IterGP-Opt. We trained using L-BFGS in double precision with a Wolfe line search run for a maximum of 100 epochs and averaged across five seeds. As the results in Table R1 of the attached rebuttal PDF show, performance does increase slightly at comparable runtime (only for Bike the runtime increased considerably).
---
Rebuttal 3:
Title: Answers to Questions [continued]
Comment: > 5. It's not clear to me how the statement in lines 111-113 would be implied by the equation above it - could you elaborate please?
We intended to reference the way the approximate posterior is defined (i.e. Equation (5) and line 105) rather than the equation above line 111 in our statement. Equation (5) and line 105 imply a monotonic decrease in the marginal variance as a function of $i$. Observe that the precision matrix approximation $C_i$ (defined in line 105) has rank $i$, assuming the actions, i.e. columns of $S_i$, are linearly independent and non-zero (as assumed in Wenger et al. (2022)). Therefore we can rewrite
$C_i = \sum_{\ell=1}^i \tilde{d}\_\ell \tilde{d}_\ell^\top $ as a sum of rank 1 matrices. Note that this is actually how Wenger et al. (2022) compute $C_i$ in Algorithm 1 in their paper. Therefore the marginal variance at iteration $i$ for $i \leq j$ is given by
$$K_{i}(x, x) = K(x, x) - \sum_{\ell=1}^i K(x, X) \tilde{d}\_\ell \tilde{d}\_\ell^\top K(X, x)=K(x, x) - \sum_{\ell=1}^i (K(x, X) \tilde{d}\_\ell)^2 \geq K(x, x) - \sum_{\ell=1}^j (K(x, X) \tilde{d}\_\ell)^2= K_{j}(x,x)$$
Now for $i=n$, since the actions are assumed to be linearly independent and non-zero (see Wenger et al. (2022)), $S_i$ has rank $n$ and thus $C_i = S_i(S_i^\top \hat{K}S_i)^{-1}S_i^\top = \hat{K}^{-1}$. Therefore we have $K_{i}(x, x) \geq K_{j}(x, x) \geq K_{\star}(x, x)$.
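The argument above can also be checked numerically via the equivalent non-recursive form $C_i = S_i(S_i^\top \hat{K} S_i)^{-1}S_i^\top$. The following small sketch (random actions, RBF kernel, illustrative sizes; not the recursive computation of Wenger et al. (2022)) verifies the monotone decrease of the marginal variance and its lower bound, the exact posterior variance:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
X = np.sort(rng.uniform(-3.0, 3.0, n))
x = np.array([0.5])  # a single test input

def rbf(a, b):
    """RBF kernel with unit lengthscale and outputscale."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)

K_hat = rbf(X, X) + 1e-2 * np.eye(n)  # K(X, X) + sigma^2 I
k_x = rbf(X, x)[:, 0]
prior_var = rbf(x, x)[0, 0]
exact_var = prior_var - k_x @ np.linalg.solve(K_hat, k_x)

S = rng.standard_normal((n, n))  # random actions: a.s. linearly independent
variances = []
for i in range(1, n + 1):
    S_i = S[:, :i]
    C_i = S_i @ np.linalg.solve(S_i.T @ K_hat @ S_i, S_i.T)
    variances.append(prior_var - k_x @ C_i @ k_x)

# Monotone non-increasing in i, bounded below by (and equal at i = n to)
# the exact posterior variance.
assert all(v >= w - 1e-7 for v, w in zip(variances, variances[1:]))
assert variances[-1] >= exact_var - 1e-7
assert abs(variances[-1] - exact_var) < 1e-6
```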
> Could you please change the colors used in the experiment plots and tables? I am unfortunately red-green colorblind, and I found it very difficult to disentangle the colors between SVGP and SGPR and again between IterGP-Opt and -CG. One example of colorblind-friendly colors in matplotlib can be plt.style.use('tableau-colorblind10'). Thanks!
We apologize for not taking this into account. We have swapped out the colormap for [Seaborn's colorblind palette `sns.color_palette("colorblind", 5)`](https://seaborn.pydata.org/generated/seaborn.color_palette.html).
---
Rebuttal 4:
Comment: Thank you for carefully reading our responses and of course for your detailed and actionable feedback!
> I am somewhat confused by how the number of epochs is chosen for Adam in the updated results. [...] How did the authors determine when to stop Adam optimization (looking at the paper I couldn't find where this was mentioned, but apologies if I've missed it)?
As we write in the caption of Table 1 in the original submission, the reported number of epochs is determined as follows: "[...] We report the epoch where each method obtained the lowest NLL, and all performance metrics (NLL, RMSE, and wall-clock time) are measured at this epoch. [...]". The rebuttal PDF did not contain this full caption due to the limited space available, but the final version will, of course.
The *total* number of epochs was determined by resource constraints (1000 epochs for Adam, 100 for LBFGS, except for Power; see also lines 235-238 and our rebuttal response). As Figure R1 (rebuttal pdf) shows this results in roughly similar *overall* training time between for example SGPR trained with LBFGS and IterGP-Opt trained with Adam.
We would recommend interpreting Table (R)1 and Figure (R)1 together to understand the generalization performance.
> At the end of the day, the results should be the same (theoretically speaking, of course) regardless of how you choose the optimizer, with the only difference really being time taken and memory allocation, [...]
We would like to politely disagree with the characterization that the choice of optimizer and its parameters doesn't determine the final result (even in theory). The log-marginal likelihood (and likewise the ELBO) as a function of the kernel hyperparameters is generally non-convex and often has multiple local minima (for an example see Figure 5.5 of [Rasmussen and Williams (2006)](http://gaussianprocess.org/gpml/chapters/RW.pdf)). Which mode an optimizer converges to depends on the initialization, the choice of optimizer and the learning rate (schedule / line search).
> [...] but this is not reflected in these updated results. This is especially striking in Parkinsons and Bike for both SGPR and IterGP-Opt, which do not seem to reflect the authors' recommendation for Adam over L-BFGS (as stated in a response to reviewer 7YnY).
The reason we would still recommend using Adam over L-BFGS for IterGP-Opt is L-BFGS's increased resource usage. While the performance is slightly better on both datasets, as you correctly point out, the runtime is also longer (~1.3x on Parkinsons, ~1.8x on Bike) and the memory usage is higher. For IterGP-Opt one optimizes the kernel hyperparameters and the action entries, which in our case amounts to $d+2+n$ parameters. As, for example, the [PyTorch documentation on L-BFGS](https://pytorch.org/docs/stable/generated/torch.optim.LBFGS.html) points out, L-BFGS requires `param_bytes * (history_size + 1)` bytes of memory, which for a typical history size (e.g. PyTorch's default of 100) adds significant overhead compared to a single gradient, which only requires `param_bytes`.
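To make this overhead concrete, here is a back-of-the-envelope sketch of the memory comparison (the float32 assumption, `n = 100,000` training points, `d = 20` input dimensions, and the Adam accounting are all hypothetical illustrations; the L-BFGS formula follows the PyTorch docs linked above):

```python
# Rough optimizer-state memory comparison (all sizes are hypothetical).

def lbfgs_history_bytes(num_params, history_size=100, bytes_per_param=4):
    # Per the PyTorch L-BFGS docs: param_bytes * (history_size + 1)
    return num_params * bytes_per_param * (history_size + 1)

def adam_state_bytes(num_params, bytes_per_param=4):
    # Adam keeps two moment buffers per parameter in addition to the gradient
    return num_params * bytes_per_param * 3

d, n = 20, 100_000
num_params = d + 2 + n  # kernel hyperparameters + action entries (see text)
print(f"L-BFGS: {lbfgs_history_bytes(num_params) / 1e6:.1f} MB")  # ~40.4 MB
print(f"Adam:   {adam_state_bytes(num_params) / 1e6:.1f} MB")     # ~1.2 MB
```

Because the action entries scale with $n$, the `history_size + 1` factor dominates quickly as datasets grow.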
---
> I appreciate that the authors have agreed to include an analysis of the hyperparameter values of CholeskyGP/exact GPR in their manuscript, but I still think that explicitly considering the ELBOs/LML estimates could be somewhat instructive, as indicated in my original review. Firstly, because solely considering predictive metrics can be misleading when the model is mis-specified, as it most likely is here on these UCI datasets, but also in that it might shed light on some of the discussions being had about optimization.
We apologize for overlooking this suggestion for improvement in your original review! We logged training losses (i.e. ELBOs / (approximate) LMLs) for each epoch in all our runs and will gladly add these to the manuscript (e.g. as an additional column in Table 1, and as an additional row in Figure 1).
---
We hope our additional answers were helpful in forming your final opinion about our work! If there are any outstanding questions, please do not hesitate to voice them. | Summary: The authors cast the IterGP method of Wenger et al, which is guaranteed to give conservative estimates of uncertainty relative to the posterior, as a variational procedure. This allows for hyperparameter selection and selection of parameters controlling the quality of the approximation ($\mathbf{S}$) through maximization of an evidence lower bound. The authors compare this method to existing variational approximations on several benchmark datasets.
Strengths: - The exposition is clear and well motivated.
- Placing IterGP within a variational context both allows for a conceptual connection to SGPR/SVGP and (more importantly) practical progress in terms of selection of parameters.
Weaknesses: - Datasets have not been properly cited. In particular, a general citation to UCI doesn’t give credit to the creators of each dataset. License information was also not stated.
- I found the discussion of information gain not well-motivated. The definition of information gain was assumed to be known, and it wasn’t clear from the text why maximizing this quantity would lead to a good quality approximation. If the only goal is to suggest projection onto the largest $i$-eigenvalues, there are many results in the matrix approximation literature that also motivate this approach. Moreover, there is prior work in the Nyström approximation literature considering the top $i$-eigenvalues as a subspace for performing inference, for example Ferrari-Trecate et al, https://proceedings.neurips.cc/paper/1998/hash/55c567fd4395ecef6d936cf77b8d5b2b-Abstract.html.
- The phrasing in the caption of Figure 4 conflates credible and confidence intervals. The accuracy of the credible interval should be with reference to the (exact) posterior and cannot be shown by frequentist coverage. To be clear, I think checking the empirical coverage of the methods is reasonable, but I don’t think you can draw conclusions about the accuracy of the credible interval from this.
Technical Quality: 3
Clarity: 4
Questions for Authors: - The authors claim SVGP has “no latent uncertainty” where data is absent. Do the authors mean that the posterior variance is actually 0 or extremely close to it away from the data? I don’t see how this can be true since SVGP recovers the posterior distribution in certain limits. If what is meant is that uncertainty is underestimated away from the data, this is quite a bit weaker as a statement and should be made clear.
- “Unlike these other methods, the IterGP posterior updates are defined by a linear combination of kernel functions on training data, which guarantees that posterior variance estimates always over estimate the exact GP uncertainty” — Could you clarify this sentence? First, what is meant by “posterior updates”? Posterior mean updates? Posterior variance updates? Posterior sample path updates? Also, I don’t think this is the property that guarantees that posterior variances are overestimated. It is easy to write down a form of posterior variance that is both a linear combination of kernel function and underestimates the posterior variance, for example 0.
- How does the expression for the approximate predictive standard deviation (section 2) in terms of a worst case quantity over an RKHS ball compare to the analogous statement for low-rank variational methods (given in Theorem 6 of Wild et al 2023, https://arxiv.org/pdf/2106.01121)? I think a brief comparison (possibly in the appendix if space does not allow) would be useful.
- When stating what is meant by computation-aware, should there be a convergence condition? I.e. not only should the precision increase, but it should converge to the posterior precision as the amount of compute increases under some sensible restrictions.
- I don’t understand what is plotted on the right of figure 2, or what my takeaway from it should be. What is meant by plotting the magnitude of the eigenvalues of the matrix for each $x_j$? On the plot on the right, what notion of distance between subspaces is used?
- Should I interpret equation 12 as stating the $\mathbf{S}$ is block diagonal? If so, is this said explicitly somewhere?
- Could you explain in more detail why SGPR diverges in these experiments? Have you checked if this is resolved by double precision?
### Other
- Line 49, missing space after comma before “a”
- Line 61, the authors claim that IterGP has “calibrated” confidence. Is the claim that the confidence intervals are calibrated, or that they are conservative in the sense of overestimating the posterior credible intervals?
- Line 73, h is introduced here as the latent function and never appears again. I’m a bit unclear why it is needed.
- Line 82, before discussing computation in model selection, it should be stated that you assume maximum (marginal) likelihood will be used. There are alternative approaches (e.g. sampling)
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: The authors point out limitations well; most importantly, the restriction to Gaussian likelihoods in the present work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Summary
Thank you for your positive review of our paper and helpful input!
We responded to your questions below. In particular, we've reformulated the motivation behind choosing actions according to an information-theoretic objective and we've added a statement proving the monotonic decrease in marginal variance to the exact posterior marginal variance as a function of $i$.
If anything remains unclear, we are happy to elaborate further during the discussion period!
### Weaknesses
> - Datasets have not been properly cited. In particular, a general citation to UCI doesn’t give credit to the creators of each dataset. License information was also not stated.
We have added citations for each of the datasets we have used and added license information to the supplementary.
> - I found the discussion of information gain not well-motivated. The definition of information gain was assumed to be known, and it wasn’t clear from the text why maximizing this quantity would lead to a good quality approximation. If the only goal is to suggest projection onto the largest $i$-eigenvalues, there are many results in the matrix approximation literature that also motivate this approach. Moreover, there is prior work in the Nyström approximation literature considering the top $i$-eigenvalues as a subspace for performing inference, for example Ferrari-Trecate et al, https://proceedings.neurips.cc/paper/1998/hash/55c567fd4395ecef6d936cf77b8d5b2b-Abstract.html.
Based on your feedback, we've reformulated the information-theoretic motivation for projecting onto the top $i$-eigenvalue subspace as follows. We've replaced Lemma S2 with an arguably more intuitive result, namely that an "optimal" linear combination of the data should maximally reduce our uncertainty about the latent function at the training data or equivalently maximally increase the divergence of our updated belief from the prior. More formally, it holds that the actions maximizing the information gain, defined as the difference in entropy between prior and posterior, i.e.
$$
S_i = \text{argmax}_{S \in \mathbb{R}^{n \times i}} ( H(f(X)) - H(f(X) \mid S^{\top} y) ) = \text{argmax}\_{S \in \mathbb{R}^{n \times i}} \text{KL}(p(f \mid S^\top y)\ || \ p(f))
$$
are given by the top-$i$ eigenvectors of $\hat{K}$, where $y \in \mathbb{R}^n$. This is different from the previous formulation in equation (10) in that here we are directly reasoning about the latent function of interest, $f$.
In Bayesian information-theoretic formulations of active learning, choosing observations that minimize posterior entropy (or a myopic approximation defined by the expected information gain) has a long history (e.g. MacKay (1992), Section 2 of Houlsby et al. (2011)). Here, rather than choosing observations directly, we generalize this idea to choosing *linear combinations of observations*.
We've included this improved result and its motivation in the final version.
We've also added a citation to the work by Ferrari-Trecate to make the connection to prior work explicit.
- MacKay, D. J. C. (1992). Information-Based Objective Functions for Active Data Selection. Neural Computation, 4(4), 590–604. https://doi.org/10.1162/neco.1992.4.4.590
- Houlsby, N., Huszár, F., Ghahramani, Z., & Lengyel, M. (2011). Bayesian Active Learning for Classification and Preference Learning. arXiv. https://doi.org/10.48550/arXiv.1112.5745
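The optimality claim above can also be checked numerically. The following sketch (illustrative, not code from the paper; the toy sizes and noise level are assumptions) uses the Gaussian identity $I(f(X); S^\top y) = H(S^\top y) - H(S^\top y \mid f(X))$ to compare top-eigenvector actions against a random orthonormal basis:

```python
import numpy as np

rng = np.random.default_rng(0)
n, i, sigma2 = 30, 5, 0.1              # toy sizes and noise (assumptions)
A = rng.standard_normal((n, n))
K = A @ A.T                            # stand-in prior covariance of f(X)
K_hat = K + sigma2 * np.eye(n)         # covariance of y = f(X) + eps

def info_gain(S):
    """I(f(X); S^T y) = H(S^T y) - H(S^T y | f(X)) for Gaussian noise."""
    _, ld_z = np.linalg.slogdet(S.T @ K_hat @ S)
    _, ld_noise = np.linalg.slogdet(sigma2 * (S.T @ S))
    return 0.5 * (ld_z - ld_noise)

# Top-i eigenvectors of K_hat vs. a random orthonormal set of actions
_, eigvecs = np.linalg.eigh(K_hat)     # eigh sorts eigenvalues ascending
S_eig = eigvecs[:, -i:]
S_rand, _ = np.linalg.qr(rng.standard_normal((n, i)))

assert info_gain(S_eig) >= info_gain(S_rand)
```

Since mutual information is invariant under invertible transformations of $S^\top y$, restricting the comparison to orthonormal $S$ is without loss of generality.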
> - The phrasing in the caption of Figure 4 conflates credible and confidence intervals. The accuracy of the credible interval should be with reference to the (exact) posterior and cannot be shown by frequentist coverage. To be clear, I think checking the empirical coverage of the methods is reasonable, but don’t think you can draw conclusions about the accuracy of the credible interval from this.
Thank you for pointing this out to us. We have modified the caption to describe that we are measuring the empirical coverage of the credible interval of IterGP-Opt and SVGP and have removed any claim about the accuracy of the credible intervals.
---
Rebuttal Comment 1.1:
Comment: We would like to issue a brief correction here to one equation in our response above. The second equality in the definition of the information-optimal actions should be the entropy of the computation-aware posterior, rather than the KL divergence to the prior:
$$
S_i = \dots = \textrm{argmin}_{S \in \mathbb{R}^{n \times i}} \mathbb{E}\_{p(f(X) \mid S^\top y)}[- \log p(f(X) \mid S^\top y)]
$$
---
Rebuttal 2:
Title: Answers to Questions
Comment: ### Questions
> - The authors claim SVGP has “no latent uncertainty” where data is absent. Do the authors mean that the posterior variance is actually 0 or extremely close to it away from the data? I don’t see how this can be true since SVGP recovers the posterior distribution in certain limits. If what is meant is that uncertainty is underestimated away from the data, this is quite a bit weaker as a statement and should be made clear.
We would like to clarify that we claim this in reference to Figure 1 as an illustration of a more general phenomenon that can occur when using SVGP. Specifically, we claim that "SVGP, [...] is overconfident at the locations of the inducing points if they are not in close proximity to training data, which becomes increasingly likely in higher dimensions. This phenomenon can be seen in a toy example in Figure 1 (middle left), where SVGP has no latent uncertainty where data is absent (lower row, cf. other methods)."
We've modified this claim to "has near zero latent uncertainty at the inducing point away from the data".
Additionally in the caption of Figure 1 we write referring to the same figure, that "SVGP expresses almost no posterior variance in the middle region, and thus almost all deviation from the posterior mean is considered to be observational noise."
To back up these two claims we added a new experiment, which measures the distance of inducing points to the data (measured in lengthscale units) as a function of input dimension and the ratio between posterior and predictive variance at the inducing points (see Figure R2 of rebuttal PDF). We find that inducing points are optimized to lie increasingly farther away from the data as the dimension of the input space increases. Further, we observe that in higher dimensions ($d\geq 4$) the latent variance can be as little as 5% of the predictive variance.
Note that all our statements and observations are made under the assumption of a fixed number of data points $n > m$ and inducing points $m$ and therefore do not conflict with statements about the convergence of SVGP to the exact posterior.
> - “Unlike these other methods, the IterGP posterior updates are defined by a linear combination of kernel functions on training data, which guarantees that posterior variance estimates always over estimate the exact GP uncertainty” — Could you clarify this sentence? First, what is meant by “posterior updates”? Posterior mean updates? Posterior variance updates? Posterior sample path updates? Also, I don’t think this is the property that guarantees that posterior variances are overestimated. It is easy to write down a form of posterior variance that is both a linear combination of kernel function and underestimates the posterior variance, for example 0.
We refer to the posterior variance updates and specifically *the form of the downdate*. We've clarified this in the draft. To see why this guarantees the variances are always overestimated (they monotonically decrease as a function of $i$), consider the following. Observe that the precision matrix approximation $C_i$ (defined in line 105) has rank $i$, assuming the actions, i.e. columns of $S_i$, are linearly independent and non-zero (as assumed in Wenger et al. (2022)). Therefore we can rewrite
$C_i = \sum_{\ell=1}^i \tilde{d}\_\ell \tilde{d}_\ell^\top $ as a sum of rank 1 matrices. Note that this is actually how Wenger et al. (2022) compute $C_i$ in Algorithm 1 in their paper. Therefore the marginal variance at iteration $i$ for $i \leq j$ is given by
$$K_{i}(x, x) = K(x, x) - \sum_{\ell=1}^i K(x, X) \tilde{d}\_\ell \tilde{d}\_\ell^\top K(X, x)=K(x, x) - \sum_{\ell=1}^i (K(x, X) \tilde{d}\_\ell)^2 \geq K(x, x) - \sum_{\ell=1}^j (K(x, X) \tilde{d}\_\ell)^2= K_{j}(x,x)$$
Now for $i=n$, since the actions are assumed to be linearly independent and non-zero, $S_i$ has rank $n$ and thus $C_i = S_i(S_i^\top \hat{K}S_i)^{-1}S_i^\top = \hat{K}^{-1}$. Therefore we have $K_{i}(x, x) \geq K_{j}(x, x) \geq K_{\star}(x, x)$.
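The monotone decrease down to the exact GP marginal variance can be illustrated numerically. This sketch uses random stand-ins for the kernel quantities and orthonormalized random actions (all assumptions for illustration, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
A = rng.standard_normal((n, n))
K = A @ A.T                            # stand-in kernel matrix K(X, X)
K_hat = K + 0.1 * np.eye(n)            # K(X, X) + noise
S, _ = np.linalg.qr(rng.standard_normal((n, n)))  # linearly independent actions

k_xX, k_xx = K[0], K[0, 0]             # evaluate at the first training point

# Marginal variance after i actions: K_i(x,x) = K(x,x) - K(x,X) C_i K(X,x),
# with C_i = S_i (S_i^T K_hat S_i)^{-1} S_i^T
variances = []
for i in range(1, n + 1):
    Si = S[:, :i]
    Ci = Si @ np.linalg.solve(Si.T @ K_hat @ Si, Si.T)
    variances.append(k_xx - k_xX @ Ci @ k_xX)

# Monotone decrease, ending at the exact GP marginal variance K_*(x, x)
exact = k_xx - k_xX @ np.linalg.solve(K_hat, k_xX)
assert all(a >= b - 1e-8 for a, b in zip(variances, variances[1:]))
assert abs(variances[-1] - exact) < 1e-6
```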
---
Rebuttal Comment 2.1:
Title: Answers to Questions [continued]
Comment: > - When stating what is meant by computation-aware, should there be a convergence condition? I.e. not only should the precision increase, but it should converge to the posterior precision as the amount of compute increases under some sensible restrictions.
In an earlier answer, we demonstrated that the marginal variance decreases monotonically as a function of $i$ (albeit not necessarily strictly), given linearly independent, non-zero actions. We immediately obtain that the marginal precision increases monotonically up to the marginal precision of the posterior as the computational budget increases, i.e. $i \to n$.
> - I don’t understand what is plotted on the right of figure 2, or what my takeaway from it should be. What is meant by plotting the magnitude of the eigenvalues of the matrix for each $x_j$? On the plot on the right, what notion of distance between subspaces is used?
One way to understand how different instances of IterGP differ based on different choices of actions, i.e. the columns of the matrix $S_i$, is by interpreting the magnitude of the action entries as how the computational budget is distributed across the $n$ observations (see also Figure 2 of Wenger et al. (2022)). This interpretation arises from the formulation of IterGP conditioning on $S_i^\top y$ (see also lines 122-125). For example, unit vector actions correspond to spending all budget on a single data point, sparse actions target only a subset of data points, and CG and eigenvector actions distribute the budget across all data points with different weights. The top row in Figure 2 corresponds to eigenvector actions, hence we plot the magnitude of the eigen*vector* entries. The rows below illustrate the two variants of IterGP which we consider in our paper. In the plot on the right in Figure 2, we show the Grassmann distance, which is the Euclidean norm of the vector of principal angles between the two subspaces. We've added its definition to the paper.
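As a concrete sketch of this distance (illustrative only; SciPy's `subspace_angles` is one way to obtain the principal angles, and the random matrices stand in for the action matrices):

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(0)
S_a = rng.standard_normal((10, 3))     # e.g. eigenvector actions (illustrative)
S_b = rng.standard_normal((10, 3))     # e.g. CG actions (illustrative)

# Principal angles between the two column spaces
theta = subspace_angles(S_a, S_b)
# Grassmann distance: Euclidean norm of the vector of principal angles
d_grassmann = np.linalg.norm(theta)

# The distance of a subspace to itself is (numerically) zero
assert np.linalg.norm(subspace_angles(S_a, S_a)) < 1e-6
```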
> - Should I interpret equation 12 as stating the $S$ is block diagonal? If so, is this said explicitly somewhere?
Yes, the matrix $S$ is block diagonal: it consists of $i$ blocks of size $k \times 1$. We state this implicitly in lines 183-184, but we've made this more explicit in the final draft.
> - Could you explain in more detail why SGPR diverges in these experiments? Have you checked if this is resolved by double precision?
For SGPR, we computed Cholesky decompositions in single precision, iteratively increasing an additive jitter whenever the decomposition failed, as per Section 4.1 of Ober et al. (2022).
This iteratively added jitter, rather than an outright failure in single precision, could explain the observed divergent behavior.
During the rebuttal, we added results for SGPR trained in double precision with L-BFGS to our experiments; as Figure R1 in the attached rebuttal PDF shows, these runs exhibit no divergence during training. Training in this way improves the performance of SGPR over SVGP, but IterGP-Opt still matches or outperforms SGPR, except on "Bike".
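A minimal sketch of the adaptive-jitter strategy described above (our illustration of the idea from Section 4.1 of Ober et al. (2022), not their code; the specific jitter schedule is an assumption):

```python
import numpy as np

def cholesky_with_jitter(K, max_tries=6, initial_jitter=1e-6):
    """Cholesky with adaptively increasing diagonal jitter (illustrative
    sketch; the tenfold schedule below is an assumption)."""
    jitter = 0.0
    for attempt in range(max_tries):
        try:
            return np.linalg.cholesky(K + jitter * np.eye(len(K))), jitter
        except np.linalg.LinAlgError:
            jitter = initial_jitter * 10 ** attempt  # increase tenfold per retry
    raise np.linalg.LinAlgError("matrix not positive definite even with jitter")

# A rank-deficient PSD matrix fails a plain Cholesky but succeeds with jitter
L, jitter = cholesky_with_jitter(np.ones((3, 3)))
assert jitter > 0
```

Note that any positive jitter changes the model's effective noise level, which is one way such a scheme can alter the optimization trajectory.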
> - Line 61, the authors claim that IterGP has “calibrated” confidence. Is the claim that the confidence intervals are calibrated, or that they are conservative in the sense of overestimating the posterior credible intervals?
The claim is that they are conservative. We've corrected this in the final version. | Summary: This paper presents a substantial extension to an existing class of models called IterGP. By introducing a novel training loss which combines both the hyperparameters and a sparse action matrix, the IterGP-Opt model offers linear-time scalability. Experiments over a range of UCI datasets demonstrate that IterGP-Opt matches or outperforms the benchmark SGPR and SVGP models.
Strengths: The paper is well written and clearly structured.
The proposed IterGP-Opt model displays a high degree of novelty.
The results and methodology are for the most part well presented.
Scalable inference is an area of high impact, so potential improvements over the SVGP model are of high interest.
Weaknesses: I have a couple of concerns regarding the models used in the experiments:
The paper cites Ober et al "Recommendations for Baselines and Benchmarking Approximate Gaussian Processes" as inspiration for including SGPR as a strong benchmark, but does not follow the guidance for SGPR training outlined in that paper, both in terms of the choice of optimiser and the approach of not jointly optimising the inducing points with the kernel hyperparameters. This is concerning as it could be having a significantly detrimental impact on the performance of the SGPR baseline - as is further evidenced by it underperforming SVGP on three of the four tasks. In any case, given the popularity of the datasets used, it ought to be straightforward to verify the SGPR performance is consistent with other publications in the literature.
One of the other methods presented in Table 1 is "IterGP-CG", but its status should be clarified for the reader. The first reference to IterGP-CG in the paper is in the caption to Figure 1, where it is introduced as "IterGP-CG (ours), ...", implying this model is a novelty of the paper. Yet later in the Background section, and also in the footnote on page 7, it is pointed out that the IterGP-CG model is not new. Given that IterGP-CG is not mentioned at all in the Conclusion section, it seems likely the "ours" was just a typo, but this is of course a very important point to clarify! Table 1 itself should also be updated to clarify which method(s) are claimed to be novel to the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: The final sentence of the abstract claims "As a result of this work, Gaussian processes can be trained on large-scale datasets without compromising their ability to quantify uncertainty" - Doesn't this imply we are conducting exact inference on these large-scale datasets? I'd recommend the authors consider a suitable rephrasing such as "significantly compromising".
"SGPR typically requires more gradient iterations during training as it introduces new parameters (the inducing point locations Z)."
I was surprised by this comment, because in my experience SGPR typically requires significantly fewer training iterations than SVGP. In part because training is unbatched, allowing it to exploit standard LBFGS optimisation as opposed to using Adam. And also because it is often not desirable to train the inducing point locations jointly with the hyperparameters.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors include a well written section on the limitations of IterGP-Opt.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Summary
Thank you for your time and effort in reviewing our paper and in particular for suggesting improvements to our benchmark experiments.
We have addressed your main concerns with a set of new experiments where we train SGPR using LBFGS in double precision. These changes close the gap between SGPR and SVGP, but IterGP-Opt still matches or outperforms SGPR on all datasets, except "Bike" (see the rebuttal PDF).
You can find our responses to your questions below. If anything remains unclear, we are happy to respond during the discussion period.
Should the changes we've made to our paper alleviate the concerns you voiced, we would be grateful if you would update your score to reflect this. Thank you!
### Weaknesses
> The paper cites Ober et al [...] as inspiration for including SGPR as a strong benchmark, but does not follow the guidance for SGPR training outlined in that paper. [...] This is concerning as it could be having a significantly detrimental impact on the performance of the SGPR baseline - as is further evidenced by it underperforming SVGP on three of the four tasks. In any case, given the popularity of the datasets used, it ought to be straightforward to verify the SGPR performance is consistent with [...] the literature.
To ensure the SGPR baseline is well-tuned and to alleviate your concerns, we made the following improvements to our main experiment:
- We trained SGPR on all datasets in double precision using L-BFGS with a Wolfe line search for 100 epochs in line with the recommendations of Ober et al. (2022). We performed a sweep over the initial learning rate $\text{lr}_{\text{SGPR}} \in \\{10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 1\\}$ and repeated each run for five random seeds. Both in our original experiments using single precision and Adam and the new ones, we computed Cholesky decompositions with adaptively increasing kernel jitter as per Section 4.1 of Ober et al (2022).
We find that this indeed increases its performance. SGPR now outperforms SVGP on all small to medium datasets as expected, but IterGP-Opt still matches or outperforms SGPR on all datasets, except "Bike", as both Table R1 and Figure R1 in the attached rebuttal PDF show. Thank you for suggesting this improvement to the SGPR baseline!
- Additionally, we checked our newly generated results against the corresponding results reported by Ober et al. (2022) in Tables 2 and 3 of Appendix H and found the newly reported performance of SGPR in our experiments to be largely consistent across the datasets common to both works ("Bike", "Protein", "KEGGu") for $M=1000$ inducing points. However, one should be careful when comparing these numbers, since Ober et al. (2022) use a squared exponential kernel, while we use a Matérn(3/2) kernel.
- Finally, based on the suggestions of reviewer wkY4, we added an exact (Cholesky)GP baseline on the two smallest datasets and also trained IterGP-Opt using L-BFGS. We found that this marginally improves the performance of IterGP-Opt. However, due to the higher memory consumption and slightly longer runtime, we would still recommend using Adam in practice.
> One of the other methods presented in Table 1 is "IterGP-CG", but its status should be clarified for the reader. The first reference to IterGP-CG in the paper is in the caption to Figure 1, where it is introduced as "IterGP-CG (ours), ...", implying this model is a novelty of the paper. Yet later in the Background section, and also in the footnote on page 7, it is pointed out that the IterGP-CG model is not new. Given that IterGP-CG is not mentioned at all in the Conclusion section, it seems likely the "ours" was just a typo, but this is of course a very important point to clarify! Table 1 itself should also be updated to clarify which method(s) are claimed to be novel to the paper.
We apologize for any confusion this typo may have caused.
As we write in the introduction, IterGP as a class of GP *posterior approximation* methods was introduced by Wenger et al. (2022), one special case being IterGP-CG.
We did not intend in any way to claim credit for this method.
The work by Wenger et al. (2022), however, does *not* describe a way to perform *model selection*, which is one of the two main contributions of our work. In particular, we demonstrate in this work how to perform model selection for IterGP-CG. Our second contribution is that we introduce a new variant of IterGP (i.e. IterGP-Opt), which enables posterior approximation in *linear time*. We have modified the draft to more clearly indicate our contributions without inadvertently suggesting we proposed IterGP-CG.
### Questions
> [...] I'd recommend the authors consider a suitable rephrasing such as "significantly compromising" [in the abstract].
Thank you for this feedback. We will adjust the last sentence of the abstract accordingly.
> "SGPR typically requires more gradient iterations during training as it introduces new parameters (the inducing point locations $Z$)." I was surprised by this comment, because in my experience SGPR typically requires significantly fewer training iterations than SVGP. [...]
We would like to politely point out that this is a misunderstanding of what we write. *We are comparing SGPR to exact GPs, not to SVGP* in the quoted sentence. We write -- *after* introducing exact GPs and *before* introducing SVGP -- in the paragraph on SGPR in lines 93-95 that "While these complexities significantly improve upon $O(n^3)$ computation/$O(n^2)$ memory, SGPR typically requires more gradient iterations during training as it introduces new parameters (the inducing point locations $Z$)." We will clarify this in the final version and also add that the inducing point locations can be chosen in a way that does not require optimization, e.g. as outlined in Burt et al. (2020) and recommended by Ober et al. (2022).
---
Rebuttal Comment 1.1:
Comment: Thank you for responding to my concerns, and updating the experimental table accordingly. A few suggestions regarding the table:
- The CholeskyGP model is chosen to be light grey, which when bolded barely stands out. Please adjust accordingly, simply making it black should suffice (or alternatively, don't set it to bold at all and only highlight the best of the approximate methods?).
- There seem to be learning rates given for the new BFGS-optimised methods, is this just a typo?
> In particular, we demonstrate in this work how to perform model selection for IterGP-CG.
- Regarding the presentation of the results for IterGP-CG: in order to make this contribution much clearer, might it be helpful to explicitly compare these results against the previous training procedure for IterGP-CG, as was used in e.g. Figure 4 of arXiv:2205.15449. This would aid the reader to gauge the significance of the contribution.
---
Rebuttal 2:
Comment: > The CholeskyGP model is chosen to be light grey, which when bolded barely stands out. Please adjust accordingly, simply making it black should suffice (or alternatively, don't set it to bold at all and only highlight the best of the approximate methods?).
We are happy to make the CholeskyGP stand out more.
> There seem to be learning rates given for the new BFGS-optimised methods, is this just a typo?
This is not a typo. As we write in the response above, this is the *initial* learning rate, which then gets updated by the line search. See also the [corresponding lines](https://github.com/pytorch/pytorch/blob/26b0a0c2f37a8ad376f261df7bb4fee65ff2f230/torch/optim/lbfgs.py#L419-L423) in the PyTorch implementation of L-BFGS and its [`lr` argument](https://pytorch.org/docs/stable/generated/torch.optim.LBFGS.html).
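For concreteness, a minimal sketch (a hypothetical one-parameter toy objective, not the paper's training code) of how the initial `lr` interacts with PyTorch's strong Wolfe line search:

```python
import torch

# Toy 1-parameter objective (hypothetical; stands in for the negative ELBO/LML)
theta = torch.nn.Parameter(torch.tensor([5.0]))
# `lr` is only the first step length proposed to the strong Wolfe line
# search, which then adapts the step length at every iteration.
opt = torch.optim.LBFGS([theta], lr=1.0, line_search_fn="strong_wolfe",
                        history_size=100, max_iter=20)

def closure():
    opt.zero_grad()
    loss = (theta - 2.0).pow(2).sum()
    loss.backward()
    return loss

opt.step(closure)
```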
> Regarding the presentation of the results for IterGP-CG: in order to make this contribution much clearer, might it be helpful to explicitly compare these results against the previous training procedure for IterGP-CG, as was used in e.g. Figure 4 of arXiv:2205.15449. This would aid the reader to gauge the significance of the contribution.
We believe there might be a misunderstanding here as to what the contribution of Wenger et al. (2022) is. In Figure 4 they show the loss as a function of the solver iterations when computing an *increasingly better approximate posterior with fixed hyperparameters*. We propose a *model selection / hyperparameter optimization procedure* for this approximate posterior computed with a fixed number of solver iterations (i=512). Wenger et al. (2022) *do not consider model selection / propose a training procedure*. It is therefore not possible to compare their experiment to ours, since the two papers consider two different problems.
Does this clarify our contribution? We are happy to give more detail if that should alleviate your concerns.
---
Rebuttal Comment 2.1:
Comment: Yes I understand the significance of Figure 4, that during optimisation the hyperparameters were fixed, but there is still a procedure on how to obtain those fixed hyperparameters as cited in the paper:
> For all datasets, we select hyperparameters using the training procedure of Wenger et al. [29].
The point is, it is important to demonstrate explicitly what improvement is brought to IterGP-CG compared to what was attainable with IterGP-CG before.
---
Rebuttal 3:
Comment: The hyperparameters were chosen with the training procedure for CGGP as proposed in Wenger et al. [29] to favor CGGP. They do the same in the follow-up experiment in Figure 5 where they choose the hyperparameters given by SVGP for their variant IterGP-PI.
Would you be satisfied if we added CGGP training runs and then reported the test performance for the approximate posterior computed by IterGP-CG (as done by Wenger et al 2022)? This would be possible.
---
Rebuttal Comment 3.1:
Comment: Based on your request, we have added the following **experiment comparing CGGP trained via the procedure given in [Wenger et al. (2022b)](https://arxiv.org/abs/2107.00243) and our proposed model selection procedure for IterGP-CG**:
We trained CGGP on all small to medium-sized datasets via L-BFGS for 100 epochs in double precision with a sweep over the initial learning rate. Due to the very limited time until the end of the rebuttal period, we ran this experiment with only one seed; we will add the remaining runs for the camera-ready version.
As the table below shows, *IterGP-CG trained via our proposed objective outperforms or matches CGGP on all datasets (i.e., difference in performance larger than one standard deviation), except in terms of RMSE on Protein.*
The significantly lower runtime on KEGGu is explained by the fact that CGGP ceases to improve after epoch 9.
| Dataset | Method | Optim. | LR | Epoch | Test NLL | Test RMSE | Avg. Runtime |
|------------|-----------|--------|-----|-------|----------|-----------|--------------|
| Parkinsons | CGGP | L-BFGS | 1.0 | 68 | -2.734 | 0.011 | 1min 23s |
| | IterGP-CG | Adam | 1.0 | 250 | -2.936 | 0.009 | 1min 44s |
| Bike | CGGP | L-BFGS | 1.0 | 15 | -2.053 | 0.021 | 2min 14s |
| | IterGP-CG | Adam | 1.0 | 250 | -2.042 | 0.024 | 5min 17s |
| Protein | CGGP | L-BFGS | 0.1 | 28 | 0.852 | 0.511 | 16min 13s |
| | IterGP-CG | Adam | 1.0 | 27 | 0.820 | 0.542 | 1min 26s |
| KEGGu | CGGP | L-BFGS | 1.0 | 9 | -0.510 | 0.127 | 7min 29s |
| | IterGP-CG | Adam | 1.0 | 229 | -0.699 | 0.120 | 39min 5s |
Since CGGP and IterGP-CG share the same posterior mean (see Section 2.1 of [Wenger et al. (2022)](http://arxiv.org/abs/2205.15449)), the Test RMSE for IterGP-CG is guaranteed to be identical when using the same hyperparameters. Empirically, as Wenger et al. (2022) show in Figure 4, the Test NLL is also the same. Therefore, the reported numbers here will be the same when computing the approximate posteriors with IterGP-CG for the hyperparameters given by CGGP, as done in Wenger et al. (2022). (Note that the reported 2x speedup of IterGP-CG vs. CGGP by Wenger et al. (2022) only applies to inference, not to the CGGP training procedure.)
Thank you for your suggestion to improve our paper. We would kindly ask you to take this newly added experiment into account in your final evaluation. | null | null | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for their time and effort in reviewing our work!
We feel that your helpful suggestions enabled us to improve our paper; in particular, we've added the following additional experiments (see the attached PDF and our individual rebuttals below):
- **Generalization experiments** (Table R1, Figure R1)
- SGPR trained with L-BFGS in double precision on all small and medium-sized datasets.
- IterGP-Opt trained with L-BFGS in double precision on all small and medium-sized datasets.
- CholeskyGP baseline trained with L-BFGS in double precision on all datasets where this is possible given memory constraints.
- **Experiment on inducing point placement and posterior vs predictive uncertainty of SVGP in higher dimensions** (Figure R2).
We responded to all your questions in individual answers below. If anything remains unclear please let us know!
Pdf: /pdf/090cb8574817eafff1517b0254df5c48e2978743.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SEL-BALD: Deep Bayesian Active Learning with Selective Labels | Accept (poster) | Summary: This paper explores the setting where a learner has a label budget that can be used to acquire labeled data from a user. However, differently from traditional active learning approaches, the paper assumes that the user may not want/be able to label some examples. That is, for specific (limited) examples, the user returns a label when queried, while for other examples it does not return anything. The key challenge of the paper is how to modify traditional query strategies to account for this new possibility of not getting any label from the user. The paper solves this challenge by modeling the user through a Bayesian network and including two additional terms in the AL score function: the first term scales the potential gain obtained by labeling an instance by the probability that the instance would get labeled when queried, while the second represents the gain obtained for modeling the user (i.e., where the user labels the data).
Strengths: ### Main Strengths
1. The problem setting is realistic, unexplored, and yet relevant for industry use cases: after all, some experts may not want, may not be allowed, or even may not be able to provide labels.
2. The paper is clearly written: claims are well supported, the notation is well detailed, and both the flow and the structure are easy to follow.
3. The proposed approach is reasonable and meaningful: the paper proposes multiple methods (such that one is the generalization of the other) and shows empirically how they perform. The Bayesian Active Learning with Disagreement approach is still quite used - despite being more than 10 years old. Thus, extending BALD to the new setting can have a positive impact on several industrial use cases.
4. The paper is technically sound and highlights whenever a quantity is an estimate and when it is the true value.
Weaknesses: ### Main Weaknesses:
1. The experiments include MNIST, which is a bit outdated and nowadays considered a simple dataset. I'd recommend using more elaborate multiclass datasets (e.g., SVHN, Waterbirds, ...). Using more common datasets would strengthen the work.
2. Some literature study might be missing: learning to defer is an area where a model defers the prediction to a human whenever the human is more likely to provide a correct prediction, whereas the model is preferred in other cases. Thus, this research direction also investigates how to model a human. Also, learning to reject is a similar area, and [1] uses active learning assuming that the user may not provide some labels. Finally, there is a connection with PU Learning (Positive and Unlabeled Learning), where the propensity score named e(x) represents the probability of labeling an instance. Including a brief connection to these areas would improve the quality of the paper.
[1]: https://www.ijcai.org/proceedings/2020/0403.pdf
Technical Quality: 3
Clarity: 4
Questions for Authors: No questions.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: Limitations are clearly discussed in Section 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We're encouraged by the reviewer's positive comments on the novelty of the problem and our method contribution. Below, we address the concerns raised in your review.
- **Datasets:** Thanks for the suggestion. We followed your recommendation and added additional experimental results on Fashion MNIST, CIFAR-10, and two UCI datasets (Adult and Mushroom). The results are included in Figures 2 and 3 in the one-page PDF response. We find similar qualitative conclusions that Joint-BALD variants often produce better results than Naive-BALD and RANDOM across different costs. Please refer to **Response to All** and the attached PDF for details.
- **References:** Thanks for the great references! Indeed, learning to reject and learning to defer (adaptive learning to reject) are closely related to our work. These works consider how to choose between the AI and the human at test time to decide on an instance, while our paper focuses on how to design the active learning strategy under human discretion behaviors. [Perini et al. 2020] studies how to estimate the class prior under a given active learning strategy in the PU learning setup. PU learning also considers a partially observed label setting similar to ours. We will discuss the relationships to these lines of work in detail in the revised manuscript.
We appreciate your insightful suggestions and have addressed them thoroughly in our revised manuscript to enhance the overall quality of the work. Please do let us know if you have any further concerns, or if this adequately addresses all the issues that you raised with the paper.
---
Rebuttal Comment 1.1:
Comment: Dear authors,
Thanks for addressing my points. As stated in my review, I believe the paper is in a good shape and provides good contribution. Although the other reviewers highlighted some weakness, the pros of accepting the paper outnumber the cons, in my view. Thus, I recommend acceptance.
---
Reply to Comment 1.1.1:
Title: Thank You
Comment: Thank you so much for being positive about our work. We will include your valuable feedback in the revised version. | Summary: This paper presents a direct extension of BALD sampling criterion to address the need for active annotation rejection. The method assumes a rejection distribution over the candidate data samples in pool-based active learning and proposes an active sampling strategy that jointly considers the BALD informativeness and rejection cost.
Strengths: 1.The problem studied in this paper can be considered an instance of cost-aware active learning. The challenge of having samples that are difficult or even impossible to label is a very practical issue in active learning. This paper provides a valuable contribution to the field of active learning by explicitly estimating the rejection function, which offers better guidance for active learning and is beneficial for improving data efficiency.
2.The rejection component proposed in this paper is naturally integrated into Bayesian sampling. The final hybrid sampling criterion revolves closely around information gain and uncertainty reduction, which I find to be reasonable and effective. Approaching active sampling from a Bayesian perspective also makes it more interpretable.
3.The paper is written in a concise and fluent manner, with high readability. The core method is introduced clearly and is easy to understand.
Weaknesses: 1.The core of this paper's design is the human discretion function e(x), but there are some unclear aspects regarding this function in my view. First, the approximation of e(x) seems to be achieved through an additional Bayesian network, but the detailed features of this Bayesian network are omitted in the paper. As the authors mention, given the small sample nature of active learning, the estimation of e(x) is likely to have an underfitting problem. However, the two solutions proposed by the authors in line 171 do not seem to address this issue adequately, and the authors should provide more explanation on this. Additionally, since e(x) is estimated by a Bayesian network, it should be involved in the computation in the form of some posterior distribution. However, in the examples given in the experimental results section, the true e(x) is presented as an unnormalized piecewise function. Therefore, using a posterior to approximate an e(x) that is not a valid density function raises questions about its effectiveness and limitations. I believe this is directly related to the deeper reason for label rejection, i.e., the analytical form of e(x). If the authors cannot theoretically prove that the proposed method generally works for all types of rejection reasons, they should at least compare the model's prediction of rejection (instead of the final overall sampling score) with several typical true rejection distributions in the experiments.
2.As a cost-aware active learning method, I find the experimental section of this paper somewhat weak. The real-world dataset used does not record the true rejection behavior of human annotators, which means, to some extent, that the experiments on MNIST are still synthetic. The true reasons for human annotators declining to label might be much more complicated than the prototype e(x) used in this paper. Therefore, I believe the evaluation of this specific problem requires a dataset with actual recorded human annotator rejections.
Technical Quality: 3
Clarity: 3
Questions for Authors:
- Would it be possible to include a comparison of the model's rejection predictions against typical true rejection distributions in your experiments?
- Should the first term in eq. 4 be negative?
- The constant c in Section 4.2 seems to differ from the c defined in Section 3.1. You might want to use a different notation.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are delighted to learn your acknowledgment of our Bayesian approach to a practical problem in active learning. We address your points of concern in detail below.
- **W1 (Estimating $e(x)$):** (i) We now add more implementation details of the Bayesian network $e_\phi(x)$ in the appendix. In the simulation, we use MC dropout as a Bayesian approximation [Gal & Ghahramani 2016] (we set the number of MC samples to 40 in all experiments). The model architecture is a 3-layer fully connected neural network with a LeakyReLU activation function. (ii) Underfitting problem: in L171, we propose to query data points that are informative to the human discretion model (the second point), which aims to address the underfitting issue. More specifically, in Def 4.3-4.5, we include an additional term that quantifies the information gain on the human discretion model to help learn $e(x)$ better. In addition, we propose the UCB and TS variants to guide our label acquisition using the underfitted $e(x)$. As shown in Figure 3, the underfitted $e(x)$ will indeed harm methods like $e$-BALD, but our proposed method can achieve better performance. (iii) We agree with the reviewer that the true $e(x)$ is unnormalized, i.e., $\int e(x) dx \neq 1$. The discretion model $e_\phi(x) = p(a=1 | x, \phi)$ is a function mapping from the domain $\mathcal{X}$ of $x$ to any value in $[0,1]$, which also does not require normalization over $x$. The Bayesian computation yields the posterior $p(\phi | \mathcal{D})$ of the parameters $\phi$ instead of $e(x)$. So our $e_\phi(x)$ can approximate any function $\mathcal{X} \to [0,1]$.
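For illustration, here is a minimal NumPy sketch of such an MC-dropout estimate of $e_\phi(x)$ (the weights, dropout rate, and layer sizes are hypothetical placeholders, not the paper's actual architecture; averaging stochastic forward passes approximates the posterior mean of $p(a=1 \mid x, \phi)$):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_e(x, W1, W2, n_samples=40, p_drop=0.1):
    """Estimate e(x) = p(a=1 | x) by averaging stochastic forward passes.
    Each pass samples a fresh dropout mask, i.e. one approximate posterior
    draw of the parameters phi (Gal & Ghahramani, 2016). Illustrative sketch
    with hypothetical weights, not the paper's implementation."""
    draws = []
    for _ in range(n_samples):
        h = x @ W1
        h = np.where(h > 0, h, 0.01 * h)               # LeakyReLU
        mask = rng.random(h.shape) > p_drop            # dropout mask
        h = h * mask / (1.0 - p_drop)
        draws.append(1.0 / (1.0 + np.exp(-(h @ W2))))  # sigmoid -> [0, 1]
    draws = np.stack(draws)
    return draws.mean(axis=0), draws.std(axis=0)       # e_hat(x), uncertainty

# toy usage with random placeholder weights
x = rng.normal(size=(5, 3))
W1 = rng.normal(size=(3, 16))
W2 = rng.normal(size=(16,))
e_hat, e_std = mc_dropout_e(x, W1, W2)
```

Note that each entry of `e_hat` lies in $[0,1]$ without any normalization over $x$, matching point (iii) above.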
- **Datasets and Human Behavior Models:** Thanks for the suggestion. We added additional experimental results on Fashion MNIST, CIFAR-10, and two UCI datasets (Adult and Mushroom). The results are included in Figures 2 and 3 in the one-page PDF response. We find similar qualitative conclusions that Joint-BALD variants often produce better results than Naive-BALD and RANDOM across different costs. Though selective labeling is prevalent in industry use cases, we hope you can understand that it is challenging for us to find public real-world data with recorded human decision-maker judgments on such applications, so we choose to vary the human behavior model in the paper so that we can get a more detailed comparison and insights between different methods.
Please find the results for additional datasets and human behaviors in **Response to All** and the attached PDF.
- **True/Predicted Human Behavior:** We kindly note that Figure 3 in the main paper demonstrates the comparison of $e(x)$ and the estimated $\hat{e}_\phi(x)$ for the synthetic data. We will address the typos and notations in the revised version.
We appreciate your insightful suggestions and have addressed them thoroughly in our revised manuscript to enhance the overall quality of the work. Please do let us know if you have any further concerns, or if this adequately addresses all the issues that you raised with the paper.
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: Thank you for the detailed rebuttal. The additional implementation details of the Bayesian network and the steps taken to address underfitting provide the necessary clarification regarding the estimation of e(x). The use of MC dropout and the model architecture explanation help in understanding the approach better. Including information gain and introducing UCB and TS variants further address my concerns. Regarding the experimental section, the addition of datasets like Fashion MNIST, CIFAR-10, and two UCI datasets strengthens the evaluation. While real-world datasets with recorded human rejections are challenging to obtain, your varied human behavior models offer valuable insights. Overall, while the paper's novelty and theoretical contributions are modest, the provided clarifications and additional experiments adequately address my concerns. I am inclined to maintain a positive rating as borderline accept.
Thanks!
---
Reply to Comment 1.1.1:
Title: Thank You
Comment: Thank you so much for being positive about our work. We will incorporate all your inputs into the final version. Thank you for your time and effort in helping with the review of our paper. | Summary: The paper introduces the Active Learning with Instance Rejection (ALIR) problem, addressing the issue of selective labeling where human discretion impacts the labeling process. In particular, humans might not always provide a label for a point returned by active learning; this abstention needs to be modeled explicitly. The authors propose new algorithms under the SEL-BALD framework to improve model performance using a human discretion model. The paper demonstrates the effectiveness of these algorithms through experiments on both synthetic and real-world datasets.
Strengths: - The paper's focus on Active Learning with Instance Rejection (ALIR) addresses a real-world problem where human discretion plays a critical role in the labeling process. This assumption seems realistic and well-motivated.
- The introduction of the SEL-BALD (Selective Bayesian Active Learning by Disagreement) framework effectively balances the dual objectives of selecting informative instances and accommodating human labeling behavior.
- The empirical results show the benefits of considering human discretion in active learning. This underscores the practical advantages of the proposed method(s).
- The paper is well-structured and clearly written.
Weaknesses: - The integration of Bayesian neural networks for modeling both machine learning tasks and human judgment significantly increases computational complexity. These Bayesian models are computationally demanding to train.
- While the paper includes experiments with both synthetic and real-world datasets, the scope of real-world applications examined is relatively limited. Critical domains such as medical diagnostics or financial fraud detection present unique challenges. A more comprehensive evaluation across diverse applications would better illustrate the versatility and reliability of the approach.
- The paper's assumption that human discretion can be accurately modeled using predictive tools like Bayesian neural networks may be overly optimistic. Real-world human decision-making is influenced by numerous complex and often unpredictable factors. The efficacy of these models can vary significantly across different contexts. Accurate modeling of human discretion typically requires extensive, high-quality historical data on human decisions, which may not always be available. In cases where data is limited, noisy, or biased, the human discretion model's performance could be significantly compromised. Furthermore, human decision-making processes can change over time due to regulatory updates, new information, or shifts in organizational policies. The study does not address how the human discretion model would adapt to such changes, potentially leading to outdated or inaccurate predictions.
- The legibility of figure legends is consistently poor due to their small size. This aspect requires significant improvement for better readability and comprehension.
- Definitions 4.1 through 4.5 would benefit from more detailed explanations accompanying the equations, providing context and clarification for the mathematical formulations presented.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Can you expand the techniques used to train the human discretion model?
- How can potential biases introduced by assumptions on the human discretion model be addressed?
- Is $c_e$ part of $c_l$?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See weaknesses above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your positive comments on our problem, method, and writing. Nevertheless, we understand there are concerns impacting the evaluation.
To address these:
- **Computational Complexity:** In this paper, we mainly focus on improving the BALD method; therefore, the baseline models are also Bayesian models that share the same computational complexity. In addition, we use MC dropout as a Bayesian approximation [Gal & Ghahramani 2016], which is not computationally demanding compared to a deterministic model with the same architecture. For traditional AL metrics that work with simpler models, such as entropy, it is also possible to extend the proposed method by replacing the information gain term with the uncertainty term. We demonstrate this in Figure 1 of the response PDF, where we implement Joint-Entropy-UCB with an entropy measure on the synthetic data. Joint-Entropy-UCB can also improve the traditional entropy objective in this case. We will add this discussion to the revised manuscript.
- **Modeling Human Behaviors is Challenging:** We agree that human discretion may be hard to predict accurately, which is the main reason we propose the UCB and TS variants to better explore the unknown human behavior. In addition, we can use a non-parametric learner such as a Gaussian Process (GP) or a BNN (by the universal approximation theorem) to learn the human discretion behavior, which guarantees the true model can be learned with enough samples.
Furthermore, recent studies and real-world applications have demonstrated strong predictive performance of human discretion behaviors [1,2] or human mental models [2,3], which indicates the human discretion model can be learned in practice, which we hope can address your concern.
- **Changing Human Discretion Model:** We agree that human decision-making processes can change over time. We would like to kindly highlight that we discuss and evaluate this scenario in Appendix B where the true human behavior model changes at a given time step in the instance collection. Our method can still be applied in this setting and we find the Joint-BALD-UCB successfully selects the samples that are likely to be labeled and are informative to the predictive model and human discretion model, and it can better recover the correct decision boundary than baselines.
- **Real-world application and Data sets:** Thanks for the suggestion. First, we kindly note that one of our experiments is motivated by a financial fraud investigation. Second, we added additional experimental results on Fashion MNIST, CIFAR-10, and two UCI datasets (Adult and Mushroom) in the rebuttal. The results are included in Figures 2 and 3 in the one-page PDF response. We find similar qualitative conclusions that Joint-BALD variants often produce better results than Naive-BALD and RANDOM across different costs. Though selective labeling is prevalent in industry use cases, it is challenging for us to find public real-world data with recorded human decision-maker judgments, so we choose to vary the human behavior model in the paper so that we can get a more detailed comparison across different methods.
Please find the results for additional datasets and human behaviors in **Response to All**.
- **Figures and Writing:** In the revised paper, we ensure all figure legends are clear with proper font sizes (like the figures in our PDF response). We also add intuitive explanations of the objective terms in the Definitions, and we add implementation details of training the discretion model in the Appendix.
- **Assumption Biases:** We impose minimal assumptions on the human discretion model, since $e(x)$ can be any complex function. In practice, we can use a non-parametric learner such as a Gaussian Process (GP) or a BNN (by the universal approximation theorem) to learn the human discretion behavior, which guarantees the true model can be learned with enough samples.
- **$c_e$ and $c_l$:** In the ALIR problem, we assume there is a cost to decide whether to label an instance (examine cost $c_e$) and a cost to actually label the instance (label cost $c_l$) (L130-131 in the main paper). The total cost is the sum of the current examination cost and label cost. We will make this more clear in the revised manuscript.
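A toy sketch of this cost accounting (the numeric cost values are placeholders, not the paper's experimental settings):

```python
def total_cost(n_examined, n_labeled, c_e=1.0, c_l=4.0):
    """Sketch of the ALIR cost structure: every queried instance incurs the
    examination cost c_e; only instances the human agrees to label also
    incur the label cost c_l. Cost values are illustrative placeholders."""
    return n_examined * c_e + n_labeled * c_l

# 10 instances examined, of which the human labels 6:
cost = total_cost(10, 6)   # 10 * 1.0 + 6 * 4.0 = 34.0
```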
We appreciate your insightful suggestions and have addressed them thoroughly in our revised manuscript to enhance the overall quality of the work. Please do let us know if you have any further concerns, or if this adequately addresses all the issues that you raised with the paper.
[1] Sun, Jiankun, et al. "Predicting human discretion to adjust algorithmic prescription: A large-scale field experiment in warehouse operations." Management Science 68.2 (2022): 846-865.
[2] Wang, Xinru, Zhuoran Lu, and Ming Yin. "Will you accept the ai recommendation? predicting human behavior in ai-assisted decision making." Proceedings of the ACM web conference 2022.
[3] Bansal, Gagan, et al. "Beyond accuracy: The role of mental models in human-AI team performance." Proceedings of the AAAI conference on human computation and crowdsourcing. Vol. 7. 2019.
[4] Madras, David, Toni Pitassi, and Richard Zemel. "Predict responsibly: improving fairness and accuracy by learning to defer." Advances in neural information processing systems 31 (2018).
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer fApQ,
We hope this message finds you well. If you have any remaining questions or concerns, we would greatly appreciate your feedback. We believe we have thoroughly addressed the issues you raised and would be sincerely grateful if you could kindly reconsider your scores.
Thank you,
Authors of SEL-BALD | Summary: This paper considers an active learning scenario where the human annotation is done under the restriction that the annotator is biased in deciding the labeling. The Bayesian framework makes the e-BALD and various versions based on a posterior sample of labeling probabilities. The authors consider the three cases in human labeling probability, including the non-labeling of specific cases. Three versions of the mean, quantile (UCB), and samples reveal different aspects in experiments. The trade-off between exploration and exploitation in labeling probability is observed. Using the quantile can have a detailed balance between examination and labeling (examination means checking the attribute and non-labeling).
Strengths: The problem and task are novel. The BALD objective modified by the labeling probability is simple, clear, and well-defined. In the experiments, the proposed algorithm performs better in the labeling rate for the candidates returned by the query.
Weaknesses: There can be many issues.
The first one is the applicability of the various metrics used in active learning. Usually, entropy, margin, and variation ratio are the traditional measures. In particular, BALD combined with deep neural networks can show poor performance. A study of the applicability to other metrics or advanced active learning algorithms such as BADGE is required.
The second one concerns the datasets used in the experiments. Three datasets are not sufficient; more datasets, such as Fashion MNIST or ImageNet-tiny, could be considered, and their statistical significance could be studied more thoroughly.
The last issue is human behavior: the scenario is relatively simple, addressed by simple logistic regression. The behavior of human annotators can be diverse because of their complex properties. This requires human models, which can be complex. When the human annotators' behaviors are complex, what is the performance? Also, models such as logistic regression can be mimicked to generate samples.
Minor:
bayesian neural -> Bayesian neural
e-bald -> e-BALD
Technical Quality: 3
Clarity: 3
Questions for Authors: Q1: Can you provide other metrics, such as the F1 score? In my opinion, the F1 score can be lowered by selective labeling.
Q2: What’s the meaning of Total costs in the legend of Fig. 4?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Sufficiently discussed in the last section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your recognition of the novelty of the problem and the clarity of our approach.
**Other AL Metrics**: Thanks for the suggestion! While we mainly focus on the BALD objective in this paper, our methods can be generalized by replacing the mutual information with other metrics in the objective. For example, we can replace the information gain with other uncertainty measures such as entropy.
As shown in Figure 1 in the response PDF, we implement Joint-Entropy-UCB with an entropy measure on the synthetic data. Joint-Entropy-UCB can also improve the traditional entropy objective in this case. The BADGE method measures uncertainty in the gradient space. One potential way to apply BADGE is to compute the gradients of the last layer of the predictive model $p(y|x,\theta)$ and the discretion model $p(a | x, \phi)$, combine the gradients weighted by $(e(x), 1)$, and then use K-means++; however, this adaptation may be more challenging than for traditional AL metrics, and we believe it will be interesting future work. We will discuss the connection with BADGE in the revised version.
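As a hedged sketch of such a weighted combination (the function name `joint_ucb_score`, the quantile level `q`, and the toy numbers are illustrative assumptions; the exact objectives are given in Def. 4.3-4.5 of the paper):

```python
import numpy as np

def joint_ucb_score(gain_pred, gain_disc, e_samples, q=0.9):
    """Illustrative joint acquisition score: posterior draws of the labeling
    probability e(x) (shape: n_draws x n_candidates) give an optimistic UCB
    estimate that scales the predictive-model gain; the discretion-model
    gain is earned even when labeling is refused. Sketch, not the paper's
    exact objective."""
    e_ucb = np.quantile(e_samples, q, axis=0)
    return e_ucb * gain_pred + gain_disc

# toy example: candidate 0 is informative but unlikely to be labeled
gain_pred = np.array([1.0, 0.6])
gain_disc = np.array([0.05, 0.05])
e_samples = np.array([[0.05, 0.9], [0.15, 0.8]])
scores = joint_ucb_score(gain_pred, gain_disc, e_samples, q=1.0)
best = int(np.argmax(scores))   # candidate 1 wins despite lower raw gain
```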
**Datasets and Human Behavior**: Thanks for the suggestion. We added additional experimental results on Fashion MNIST, CIFAR-10, and two UCI datasets (Adult and Mushroom). The results are included in Figures 2 and 3 in the PDF response. We find similar qualitative conclusions that Joint-BALD variants often produce better results than Naive-BALD and RANDOM across different costs.
Following your suggestion, we also implemented a Logistic Regression based human behavior model in Figure 5 in the PDF response with similar qualitative conclusions as in the main paper.
Please find the results for additional datasets, human behaviors, and more details in **Response to All**.
**F1 Score:** In our experiments, we find that the F1 score often leads to the same qualitative conclusions as accuracy. To illustrate this, we include a comparison between accuracy and the F1 score in Figure 4 in the PDF response for your reference. We will add the discussion of the F1 score to the paper.
**Total Cost**: In the ALIR problem, we assume there is a cost to decide whether to label an instance (examine cost) and a cost to actually label the instance (label cost) (L130-131 in the main paper). The total cost is the sum of the current examination cost and label cost (the cost structure is defined in L212-213 in the paper). We will make this more clear in the revised manuscript.
We appreciate your insightful suggestions and have addressed them thoroughly in our revised manuscript to enhance the overall quality of the work. Please do let us know if you have any further concerns, or if this adequately addresses all the issues that you raised with the paper.
---
Rebuttal Comment 1.1:
Title: Reply to Rebuttal
Comment: Thanks for your reply. The results added during the rebuttal period look better. Also, it is natural that the adaptation to other AL algorithms may be limited to the Bayesian framework. I have one more question about human behavior. The terminology "human behavior" is limited, in my opinion, since rejection can also come from other machines. Could there be cases where such rejections or other actions hurt the proposed AL algorithm's performance?
---
Reply to Comment 1.1.1:
Comment: Thanks for the question. In our motivation, a human insurance agent needs to investigate a potential fraud case or a human doctor needs to perform a biopsy, which is why we describe it as human behavior. It may also be possible that a firm deploys another AI to determine the risk of such a labeling procedure. In this case, since the firm has query access, $e(x)$ is either known or can be estimated accurately with a large number of queries, then based on our theoretical analysis, the company can directly use $e$-BALD as the active learning strategy. The unknown human behavior brings additional challenges to this problem. We will add this discussion to the revised manuscript. | Rebuttal 1:
Rebuttal: # Response to All:
We thank the reviewers for your positive and constructive feedback. Your comments have significantly helped us improve the paper.
## Datasets
In addition to the synthetic data, MNIST, and the loan fraud detection data in the paper, we further evaluate the proposed and the baseline methods on Fashion MNIST, CIFAR-10, and two UCI datasets (Adult and Mushroom). The results are included in Figures 2 and 3 in the one-page PDF response. We find similar qualitative conclusions that Joint-BALD variants often produce better results than Naive-BALD and RANDOM across different costs.
Please refer to the one-page PDF response for details.
## Human Behavior
We agree with the reviewers that a human annotator’s behavior can be complex. We would like to note that the proposed Joint-BALD does not pose functional restrictions on the human discretion model $e_\phi(x)$, which can be an arbitrarily complex function. For example, it can be modeled by a nonparametric learner such as a Gaussian process or a Bayesian neural network.
Though selective labeling is prevalent in industry use cases, it is challenging to find public real-world data with recorded human decision-maker judgments, so we choose to vary the human behavior model in the paper to obtain a more detailed comparison across methods. In the paper, we examined homogeneous (Appendix C) and more complex heterogeneous human behaviors and found advantages of Joint-BALD and its variants under unknown behavior heterogeneity. We also illustrate the advantage of Joint-BALD in adapting to changing human behavior (Appendix B). In addition, following reviewer WcLXs’ suggestion, we add an experiment with a logistic-regression-based human behavior model in Figure 5 in the one-page PDF response. More specifically, we randomly select 3 samples from each of the positive and negative classes and train a logistic regression to mimic the human decision behavior; human labeling decisions are then generated using this logistic regression. We find a similar conclusion to the main paper: Joint-BALD variants often produce better results than Naive-BALD and RANDOM across different costs.
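As an illustration of this setup, a minimal numpy sketch of a logistic-regression-style human behavior model fit on 3 samples per class (a toy stand-in, not the experiment's actual data or model) might be:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy pool: 2-D features; the "true" label is the sign of the first coordinate.
X_pool = rng.normal(size=(500, 2))
y_pool = (X_pool[:, 0] > 0).astype(float)

# Pick 3 samples from each class and fit a logistic regression by gradient descent.
pos = np.flatnonzero(y_pool == 1)[:3]
neg = np.flatnonzero(y_pool == 0)[:3]
Xs = X_pool[np.concatenate([pos, neg])]
ys = y_pool[np.concatenate([pos, neg])]

w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(Xs @ w + b)))      # predicted labeling probabilities
    w -= 0.1 * Xs.T @ (p - ys) / len(ys)         # logistic-loss gradient steps
    b -= 0.1 * (p - ys).mean()

# e(x): probability that the simulated human labels instance x;
# labeling decisions are then Bernoulli draws from e(x).
e_x = 1.0 / (1.0 + np.exp(-(X_pool @ w + b)))
decisions = rng.random(500) < e_x
```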
Pdf: /pdf/4189d2dc20f7102e2be172dffe5f66d77a4a804a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Physics-Informed Variational State-Space Gaussian Processes | Accept (poster) | Summary: The paper introduces a method for training spatio-temporal Gaussian processes which incorporate physics constraints, including satisfying the governing equations at a number of collocation points and satisfying curl-free / divergence-free constraints. Their approach scales linearly in both time and space by leveraging a number of approximations.
Strengths: - While many of the individual components (such as incorporating curl / divergence free constraints and satisfying governing equations at a number of collocation points) have been done before, I believe the combination of ideas here is novel.
- Showing how to achieve linear complexity in the space and time dimension is valuable.
- Showing how your approach recovers some prior works as a special case from a more general perspective is also valuable.
- The bevy of test cases demonstrating the efficacy of your approach on both synthetic and real-world problems convincingly demonstrate the advantages offered by your approach.
Weaknesses: - The problem statement is poorly motivated in my perspective. It would be helpful to be more specific about what you hope to achieve with hybrid modeling. For example, is your goal to improve the predictive accuracy of mechanistic models by incorporating data? What computational efficiency do you hope to improve? i.e. do you want to reduce the amount of data needed to train physics surrogates? Do you argue your approach is more efficient than solving the mechanistic model using traditional methods? etc. While I understand space is limited, being more specific about the problem statement will also help guide more targeted numerical studies in future works.
- I think you need slightly more of a discussion on virtual observations (see questions below). When virtual observations are competing with actual data it would be helpful to discuss in detail how you think about $\epsilon_C$ and choosing the number of collocation points.
- Since you state that one of the goals of your approach is to be useful in uncertainty quantification, the numerical studies would have been strengthened by comparing more than just RMSE. For example, comparing the continuous ranked probability score (CRPS) would have given an indication of how well your approach is estimating uncertainty.
- Minor:
- L94 space cases -> special cases?
Technical Quality: 3
Clarity: 3
Questions for Authors: - As I understand it, you can always choose to satisfy the differential equations at enough collocation points such that you overwhelm the data. For example, say I have very limited data so I choose the number of collocation points to be much greater than the number of data points. Clearly, if I use too many collocation points, my model will ignore the data and just follow the differential equations. How do you think about this perspective in the context of uncertainty quantification?
- As a follow up to this previous point, what are the implications of assuming that $0_n^{(C)} = g(F_n) + \epsilon_C$? Is it correct to think about this as implicitly assuming that the PDE is stochastic?
- Are there situations where you believe the Gaussian process assumption could be limiting?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: I think the authors have done a good job of identifying some potential limitations. I think the paper would greatly benefit from a more in depth discussion on using collocation points to enforce governing equation constraints and the limitations this perspective brings.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We address the points you raised below.
**W1: Problem statement.**
Our work contributes to the growing field of data-driven physics-informed models. These are hybrid models that aim to exploit physical inductive biases and data observations. Recently there have been principled ways to approach this problem through GPs, where the current state of the art includes AUTOIP and HELMHOLTZ-GP. However, these approaches are cubic in the number of data points, limiting them to small-scale studies. In this work we provide a unifying view of such GP approaches and derive state-space algorithms that are linear in the temporal dimension, bringing the computational complexity closer to standard ODE solvers and enabling application to large-scale spatio-temporal problems.
**W2: Virtual observations.**
Please, see Q1 below.
**W3: Uncertainty quantification.**
We agree that providing uncertainty metrics is valuable. In Table 1, we have already provided negative log predictive densities (NLPDs); however, we will report them together with CRPS for all experiments. For the Diffusion-Reaction experiment they are:
| Model | RMSE | NLPD | CRPS |
| -------- | ------- | ------- | ------- |
| PHYSS-EKS | $0.06$ | $-1.26$ | $0.038$ |
| PHYSS-SVGP$_{H}$ | $0.17$ | $1.69$ | $0.077$ |
| AUTOIP | $0.17$ | $-0.29$ | $0.065$ |
and for the Ocean Current experiment we achieve an NLPD of $-0.52$ and CRPS of $0.078$.
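For reference, the CRPS of a Gaussian predictive distribution $\mathcal{N}(\mu, \sigma^2)$ at an observation $y$ has a well-known closed form; a generic sketch (not the authors' implementation, which uses the model's own predictive moments) is:

```python
import math

def crps_gaussian(y, mu, sigma):
    """Closed-form CRPS of the Gaussian predictive N(mu, sigma^2) at observation y:
    sigma * ( z*(2*Phi(z)-1) + 2*phi(z) - 1/sqrt(pi) ), with z = (y - mu)/sigma."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / math.sqrt(math.pi))
```

Unlike RMSE, this score rewards both accuracy and well-calibrated predictive spread, which is why it complements the NLPD.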
**W4: Typo.**
Thank you for pointing out this typo, we have addressed this now in the manuscript.
**Q1: Uncertainty.**
In this paper, we assume that the data observations and mechanistic descriptions align (although we can simply extend to handle missing physics; see review zAxE, Q2). In this setting, we can view the uncertainty as representing our beliefs on 'how well we have solved the system'. Since we can only ever place a finite number of collocation points, there will always be some error in our solution, and uncertainty quantification is vital in representing that. Moving forward, having capable (both computationally and predictively) uncertainty-equipped models will be vital for handling misspecification (in the prior or the likelihood, for example) and for parameter learning.
**Q2: Collocation points?**
Please, see the discussion in reply to reviewer **zAxE** Q3.
**Q3: GP as a limitation.**
Under non-linear differential equations the true resulting posterior (of the generative model in Eqs. 5-6) will be non-Gaussian, and with highly non-linear equations (and/or chaotic systems) approximating the true posterior with a Gaussian could lead to underestimation of the uncertainty (Turner et al., 2011). However, for many problems the Gaussian assumption is effective, as a Gaussian system is much simpler to solve [54, 22].
- Richard Turner and Maneesh Sahani (2011). Two problems with variational expectation maximisation for time-series models. *Bayesian Time Series Models.*
---
Rebuttal Comment 1.1:
Comment: Thank you for your thoughtful response.
Regarding the W1, let me try to rephrase. I agree that your unifying view of GP
approaches for hybrid modeling is itself a valuable contribution. In my understanding
of physics informed learning, it is often useful to understand the downstream tasks
which you hope to achieve using a specific approach as this will guide algorithm design.
For example, some approaches may seek to build predictive models
which can learn from less data because they
explicitly incorporate physics knowledge. Other approaches may seek to improve the
accuracy of flawed mechanistic models using data. Other approaches may seek to
use data to approximately solve differential equations more quickly
with error estimates. Other approaches may seek to build cheap, approximate models
of differential equations so that they can be used in optimization and control.
My point with W1 was that I think it would be very helpful to discuss where you
think your approach fits into the broader context of physics informed ML.
Thank you for providing CRPS scores. I think this will be helpful for those
hoping to extend/compare to your work in the future.
Regarding Q1 (and this relates back to W1), thank you for clarifying. I think making it
clear in the main text that the Gaussian likelihood is just a relaxation of a "delta"
likelihood is helpful in understanding the motivations of your approach.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comment and for clarifying W1.
Our proposed approach fits into building probabilistic models where mechanistic/physics knowledge is incorporated as an inductive bias to improve performance (such as the latent quantity of interest should follow a differential equation, or the vector field we are modelling should be curl-free). We agree that this allows one to use less data because the physics has been explicitly incorporated. However our approach can also be extended to the setting where there is partial physical knowledge, see discussion Q2 in reply to zAxE. We do not see this work as an alternative to classical PDE solvers. We will add this discussion to our introduction and agree that it will help better place our work within the wider field of physics-informed ML. | Summary: The paper introduces a formulation for state-space GP which aims to capture the behaviour of PDE-based systems. The paper suggests multiple techniques to speed up the GP inference, including through decomposed kernel, variational methods on natural parameterisation of distribution, use of inducing points
Strengths: The paper provides good motivation and use cases for physics-informed GPs, and the need for scaling the inference process. The paper also provides adequate results and demonstrations of the GP for prediction and uncertainty quantifications, which well motivates the benefits of GPs over other solver types.
Weaknesses: - The paper may be a bit hard to follow if one is not familiar with GPs already. For example, it may be difficult to immediately see how the PDE or the BV are incorporated into the probability distribution during the inference stage.
- It may also be arguable that some parts of the paper can be seen as just application of standard sparse GP techniques and variational methods into the context of physics-informed GPs.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Is there any intuition as to why the decomposition of the kernel is done between the spatial and the temporal variables? Is it for computational reasons or can it be linked to properties of physical phenomena?
- Does (or can) the method perform in the case that the full form of the PDE is unknown? Or is the paper mostly focused on cases of just generating the PDE solution?
- In the formulation given by Eq 6, the PDE and BC residuals are assumed to arise from a Gaussian distribution. How can they be related to the physical realisation of the PDE, or how can they be interpreted properly as a probability distribution? How realistic is this assumption?
- Are there any additional assumptions in the form of g(f) should take for this method to work?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are adequately mentioned.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and positive view of our work. We address the points you raised below.
**W1: Clarity.**
To aid understanding we will provide an additional notation section and nomenclature table in the appendix as well as the proposed additions mentioned in the reply to reviewer 4ngK. To clarify, the boundary values are incorporated as observations (as defined in Eq. 6). At the inference stage this is incorporated through an additional likelihood. This means for PHYSS-VGP the likelihood term in Eq. (13) can be written as $p(Y_n | \bar{f}) = p(Y^\mathcal{O} | H_\mathcal{O} \, F_n) \, p(O | g(F_n)) \, p(Y^\mathcal{B} | H_\mathcal{B} \, F_n)$ where the differential equation is used to define $g$. We will add this discussion into Sec. 3.
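To illustrate this factorisation, the total log-likelihood is simply the sum of three Gaussian terms: data observations, zero-valued virtual (collocation) observations on the residual, and boundary observations. A schematic sketch with placeholder quantities (not the paper's actual model outputs):

```python
import numpy as np

def gauss_logpdf(x, mean, var):
    """Sum of independent Gaussian log-densities with shared variance."""
    return float(-0.5 * (np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var).sum())

rng = np.random.default_rng(0)
pred_obs = rng.normal(size=10)                  # stand-in for H_O F_n at the data
y_obs = pred_obs + 0.1 * rng.normal(size=10)    # noisy data observations Y^O
residual = 0.01 * rng.normal(size=20)           # stand-in for g(F_n) at 20 collocation points
pred_bc = np.zeros(4)                           # stand-in for H_B F_n at the boundary
y_bc = 0.05 * rng.normal(size=4)                # boundary observations Y^B

log_lik = (gauss_logpdf(y_obs, pred_obs, 0.1 ** 2)            # p(Y^O | H_O F_n)
           + gauss_logpdf(np.zeros(20), residual, 0.05 ** 2)  # p(O | g(F_n))
           + gauss_logpdf(y_bc, pred_bc, 0.05 ** 2))          # p(Y^B | H_B F_n)
```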
**W2: Novelty.**
Please, see the response to reviewer X3gM W1.
**Q1: Decomposition of kernels.**
The primary motivation for the decomposition of the kernel into space and time is computation. This is a standard assumption for GPs and in this paper, we show that we can exploit this separability to derive state-space inference algorithms that are linear in the temporal dimension. One would expect that for linear and autonomous differential equations (time-invariant in this case) this will be a reasonable assumption, however, this is an interesting avenue for future research.
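As a small numerical illustration of this separability (using an RBF kernel in each dimension as a stand-in for the paper's kernels), the Gram matrix of a separable kernel on a space-time grid is exactly the Kronecker product of the temporal and spatial Gram matrices, which is what the state-space derivation exploits:

```python
import numpy as np

def rbf(a, b, ell):
    """1-D squared-exponential kernel matrix between point sets a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

t = np.linspace(0.0, 1.0, 6)          # N_t temporal locations
s = np.linspace(0.0, 1.0, 5)          # N_s spatial locations

Kt = rbf(t, t, 0.3)
Ks = rbf(s, s, 0.2)
K_sep = np.kron(Kt, Ks)               # Gram matrix on the full N_t * N_s grid

# The same matrix built directly from the joint kernel k((t,s),(t',s')) = k_t * k_s.
T, S = np.meshgrid(t, s, indexing="ij")
X = np.stack([T.ravel(), S.ravel()], axis=1)
K_joint = rbf(X[:, 0], X[:, 0], 0.3) * rbf(X[:, 1], X[:, 1], 0.2)
```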
**Q2: Unknown physics.**
Thank you for this excellent question. This paper has focused on modelling solutions to differential equations; however, we can simply extend our framework (as in [35]) to model missing physics through GPs. This is similar to the latent force model of [66], except we now jointly model the latent force and the solution as GPs. To demonstrate this we construct 300 observations from a non-linear pendulum $\frac{d^2 \theta}{dt^2} + \sin(\theta) = 0$. We will consider the $\sin(\theta)$ term as missing, which we would like to learn. We now define $g = \frac{d^2 f_1}{dt^2} + f_2(t)$, where $f_2$ models the missing term, and enforce $g = 0$ at the collocation points. We achieve an RMSE of $0.068$, indicating that we have recovered the latent force/unknown physics well. Going beyond this simple example is an exciting avenue for future work. We will add this discussion to the main paper with this example in the appendix.
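To give intuition for this example, the following sketch simulates the pendulum and recovers the "missing" $\sin(\theta)$ term directly from the trajectory via finite differences. This is a deliberate simplification of the GP-based latent force treatment above (no GP prior, no noise), only meant to show that the missing physics is identifiable from the data:

```python
import numpy as np

# Velocity-Verlet simulation of the non-linear pendulum theta'' = -sin(theta).
dt, n = 1e-3, 5000
theta = np.empty(n)
omega = np.empty(n)
theta[0], omega[0] = 1.0, 0.0
for i in range(n - 1):
    a = -np.sin(theta[i])
    theta[i + 1] = theta[i] + dt * omega[i] + 0.5 * dt ** 2 * a
    omega[i + 1] = omega[i] + 0.5 * dt * (a - np.sin(theta[i + 1]))

# Treat sin(theta) as unknown: recover the latent force f2(t) = -theta''(t)
# from the trajectory by central differences and compare it with sin(theta).
f2 = -(theta[2:] - 2.0 * theta[1:-1] + theta[:-2]) / dt ** 2
err = np.max(np.abs(f2 - np.sin(theta[1:-1])))
```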
**Q3: PDE residual.**
This is a common modelling assumption for collocation based methods. Ideally, we would model the residual as a noise-free observation (arising from a delta likelihood) and enforce it exactly. Treating the residual as arising from a Gaussian likelihood is a relaxation of this. We will add this discussion to the main paper.
**Q4: Assumptions on $g$.**
In general, for the probabilistic model to be well defined we require that $g$ is measurable. To ensure that we are modelling something 'useful' we require that there be a unique well-defined solution to the differential equation. For ODEs, one can appeal to the Picard-Lindelöf theorem (see Thm. 36.4 in [22]), which gives general conditions for unique solutions to exist and requires that the non-linear operator be locally Lipschitz continuous. However, we are unaware of any results for general PDEs. Extending to problems that have multiple solutions would be an interesting avenue for future work. We will add this discussion to the main paper.
Strengths: The paper introduces a novel approach for solving partial differential equations (PDEs) using a Gaussian process prior. It leverages established methods for approximate inference and provides a cohesive framework that unifies existing results in probabilistic differential equation solving.
Weaknesses: The paper’s presentation is challenging to follow. Despite my familiarity with continuous-time filtering, smoothing, Gaussian process (GP) literature, and continuous-time stochastic optimal control (where some of these partial differential equations (PDEs) appear), I struggle to determine the paper’s solidity. Numerous mathematical inaccuracies and inconsistencies hinder comprehension. For instance, the abrupt switch between notation for space variables (e.g., $s$ and $t$) and the combined space-time variable ($x$) lacks clear definition. While I recognize that machine learning sometimes necessitates complex notation, in this case, it impedes understanding.
The main model section exacerbates the issue. Equations 5 and 6 remain elusive; the dimensions of $Q$ and $P$ are unspecified. The definition of $F_n$ confuses me, as it prevents a sensible matrix product computation. Additionally, the collocation point definition remains unclear. Even in Example 3.1, the meaning of $g_k$ remains unknown to me. Consequently, assessing the paper’s solidity proves challenging.
In my view, this paper requires substantial revision before acceptance. Although I appreciate the effort invested, from a reviewer’s standpoint, the current presentation poses significant hurdles.
Technical Quality: 1
Clarity: 1
Questions for Authors: - In the background section please define N and F, and the relations among all the other variables $N_t$, $N_s$, $x_{t,s}$, $y_{t,s}$, $d$, etc. I find this very confusing – is there also a clash in notation?
- In Eq (3) can you elaborate if $\mathbf f$ is multidimensional and how does it relate to $f$. This notation is again used in line 78.
- In line 66 is this definition correct? Somehow I have a hard time coming up with the correct dimensions.
- In line 70 $\bar{\mathbf f}$ is a time-dependent function, but in line 78 it is a space-time-dependent function. Could you elaborate on this?
- In line 87, does the non-linear differential operator depend only on space derivatives, or can there also be higher-order time derivatives?
- Eq (5): $\bar f_q(\mathbf X_n)$ should be a vector of dimensionality D, but in line 98 W is of dimensionality $PD \times QD$. Can you elaborate on this.
- In eq(7) can you elaborate what $g(\mathbf F_n)$ is? Is it an element-wise operation for each element in $\mathbf F_n$?
- In line 265-266 can you please elaborate what $z$ is?
Confidence: 2
Soundness: 1
Presentation: 1
Contribution: 2
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your review and for raising questions that have helped improve the presentation of the paper.
We understand your concern related to the complicated notation. To aid understanding we will use the additional page in the camera-ready stage to include a notation section that will clarify the dimensions of all quantities. Additionally, we will provide a table in the appendix reporting the same. However, we would like to emphasize that throughout the paper we have attempted to use standard notation.
**W1: Definition of Q/P.**
Q/P are scalars denoting the number of latent functions and outputs. Q as the number of latent functions is established notation in multi-output GPs (see, e.g., [67, 37]). Additionally in the App. A.3 we have already provided an expanded form which provides the dimensions of all the relevant quantities. We will explicitly state these dimensions in Sec. 3 of the main paper.
**W2: Abrupt switch in s/t.**
We are unsure what you mean by this. Throughout we have been clear that we are operating in the spatio-temporal setting (explicitly stated on line 58/59) and throughout 't' refers to time and 's' to space.
**W3: Collocation point definition.**
The collocation points are defined in Eq. (6). Intuitively we would want the function that we learn (which we have placed a GP prior over) to coincide exactly with the solution of the differential equation at hand, i.e. that the residual between them is zero. In practice, we can only ever enforce this at a finite set of locations, and these are called the collocation points.
**W4: Definition of $g$.**
The function $g$ computes the residual described in [Collocation point definition] above. It measures the (pointwise) error between the current function and the solution of the differential equation. Equivalently, if $f$ followed the differential equation exactly, then, by definition, its first time derivative would equal $N_\theta(f)$ and $g(f)$ would be zero.
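As a concrete illustration of $g$ (using the heat equation $\partial f/\partial t = \partial^2 f/\partial s^2$ as a stand-in for the paper's operators), the residual vanishes at the collocation points exactly when $f$ solves the equation:

```python
import numpy as np

rng = np.random.default_rng(0)
tc = rng.uniform(0.0, 1.0, 50)        # temporal collocation points
sc = rng.uniform(0.0, np.pi, 50)      # spatial collocation points

# f(t, s) = exp(-t) sin(s) is an exact solution of f_t = f_ss,
# so the residual g(f) = f_t - f_ss is identically zero.
f_t = -np.exp(-tc) * np.sin(sc)       # analytic df/dt
f_ss = -np.exp(-tc) * np.sin(sc)      # analytic d^2f/ds^2
residual = f_t - f_ss

# For f(t, s) = sin(s), which does not solve the equation, g is non-zero.
bad_residual = np.zeros(50) - (-np.sin(sc))
```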
**Q1: Definitions.**
All quantities $N_s, N_t, x_{t,s}, y_{t, s}$ are defined on line 59. Again this is in accordance with the state-space GP literature (see [18]).
**Q2: Multi-output.**
As explained on line 79, Eq. (3) is a multi-output prior over a latent function $f$ and its space/time derivatives. Here we use established notation (see [47]) where a sample from $\bar{f}$ is of dimension $N \cdot D$ and corresponds to the multiple outputs being stacked. We will clarify this further in the main paper.
**Q3: Dimensions.**
Yes, the equation is correct. As described on lines 65-66, this is a $d$-dimensional vector (where $d$ is the number of time derivatives) and $\bar{f}$ is the corresponding state vector. See for example [48].
**Q4: Notation.**
Throughout we use $\bar{f}$ to denote $f$ together with its spatial and/or temporal derivatives. On lines 66-71 we highlight that standard state-space GPs construct a state over $f$ and its time derivatives, and hence use the notation $\bar{f}$. Specifically, on line 66 we discuss time-series models, and hence the state only depends on time. On lines 70-71 we discuss how spatio-temporal GPs can be represented as a state-space model, where the state now models the temporal dynamics at each spatial point, and hence the state is constructed over the spatial points (see [48, Sec. 12.5]). On line 78 we construct a GP prior over $\bar{f}$ at all the input locations $X$, as defined in Eq. (3). We will further clarify this in Sec. 2 of the main paper.
**Q5: Higher-order derivatives.**
The non-linear operator can be dependent on any order of spatial or temporal derivatives. The only requirement is that the latent multi-output GP is defined over those derivatives as well (which will require sufficiently smooth temporal and spatial kernels). We will add this discussion to Sec. 3 to clarify.
**Q6: Dimensionality.**
As discussed in **W1** above, this follows standard multi-output GP notation. Here the $f_q$ are (stacked) multi-output GPs. At a single input location a sample will have dimension $D$ (as defined on line 73). $F_n$ is defined by stacking all outputs from the $f_q$ GPs, which for a single input location will have dimension $Q \times D$. This is then linearly mixed to create $P \times D$ outputs. To make this clearer we will write the stacking explicitly, so that $F_n (X_n) = W \, \text{vec}[f_1(X_n), \cdots, f_Q(X_n)]^\top$, and state all these dimensions in Sec. 3 of the main paper.
**Q7: What is $g$ doing?**
The function $g: \mathbb{R}^{PD} \to \mathbb{R}$ computes the residual between the latent process and the differential equation (see W4. in **4ngK**). This follows the notation set out in [22, 42, 35].
**Q8: What is z?**
The full magnetic field is defined in a 3-dimensional space on which we place our spatio-temporal model PHYSS-GP. Here 'z' is simply a label to denote the third dimension and can be thought of as 'depth'. We will be more explicit about this.
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal and thank the authors for their answers. I do not have any further questions. In view of the other reviews and the promised improvements in clarity, I will increase my score by one point, though I am still of the opinion that the work is hard to follow.
---
Reply to Comment 1.1.1:
Comment: We thank you for your comment and your score increase. | Summary: In this paper, the authors present a physics-informed Gaussian process based approach to learn the solution of ODE and PDE systems. In particular, they address the challenge of the cubic computational complexity with respect to the number of spatial observations. It is shown that multiple state-of-the-art approaches can be recovered as special cases of the presented approach, i.e., it is a unifying framework. With additional approximation techniques, the cubic spatial computational cost is reduced to linear. In multiple simulations, the proposed method achieves similar or better results compared to SOTA methods but with a significant reduced computational time.
Strengths: - The proposed method demonstrates a significant improvement of the computational complexity with respect to the number of spatial observations compared to SOTA methods.
- The beauty and originality of the proposed model is its unifying property, including existing approaches such as HelmholzGP and AUTOIP.
- The techniques and ideas in the paper are clearly presented. I really enjoyed reading the paper.
Weaknesses: - There are some typos such as line 103 “develop sate space algorithms for”, line 167 “complexity of O(N (N_s” (missing t), and eq (17) Nt instead of N_t. Furthermore, I think that line 257 “RMSE of all models increases as the number of collocation points increases” makes no sense as the RMSE is decreasing. Please check the paper carefully.
- In the experimental section, some phenomena are rarely discussed. For instance, the RMSE of PHYSS-GP is increasing with 1000 collocation points.
Technical Quality: 3
Clarity: 3
Questions for Authors: - I assume that there exist PINN based approaches with some form of uncertainty quantification. It would be interesting to compare the GP-based method to NN-based approaches (not necessarily for this paper but in general). Do you have any thoughts on that?
- Is it possible to use the same approximations in AUTOIP to reduce the cubic computational cost? If so, is there still a benefit of the proposed method?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and positive view of our work. We address the points you raised below.
**W1: Typos.**
Thank you for highlighting these typos, we have addressed them all. However, line 167 is *not a typo*; the computational complexity is $O(N (N_s \, d_s \, d)^3)$ because the expected log-likelihood decomposes across all spatio-temporal datapoints (as shown by the first term in Eq. (17)), and hence is linear in $N$. This motivates the spatial-minibatching approximation, where this is reduced to $O(N_t \, (N_s \, d_s \,d)^3)$. We will add this explanation to line 167.
**W2: Increasing RMSE.**
This is an artefact of the stochastic nature of the optimisation algorithms used for inference. We have rerun our Non-linear Damped Pendulum experiment across 5 datasets constructed with different random seeds. We report the results below.
| Model | C | Time |RMSE | NLPD |
| -------- | ------- | ------- | ------- | ------- |
| PHYSS-GP | $10$ | $59.27 \pm 38.524$ | $0.20 \pm 0.007$ | $-0.28 \pm 0.102$ |
| | $100$ | $112.82 \pm 0.400$ | $0.05 \pm 0.001$ | $-0.44 \pm 0.134$ |
| | $500$ | $139.86 \pm 0.541$ | $0.05 \pm 0.003$ | $-0.79 \pm 0.414$ |
| | $1000$ | $122.94 \pm 40.184$ | $0.05 \pm 0.003$ | $-0.87 \pm 0.327$ |
|AUTOIP | $10$ | $117.16 \pm 27.378$ | $0.16 \pm 0.001$ | $-0.41 \pm 0.070$ |
|| $100$ |$154.10 \pm 31.590$ | $0.05 \pm 0.001$ | $-0.51 \pm 0.129$ |
|| $500$ | $1058.08 \pm 17.497$ | $0.05 \pm 0.001$ | $-0.88 \pm 0.087$ |
|| $1000$ | $5600.90 \pm 36.231$ | $0.05 \pm 0.001$ | $-1.39 \pm 0.091$ |
**Q1: Uncertainty in PINNs.**
PINNs have become a popular method for solving differential equations and amount to a highly complicated optimisation problem that requires specialised training regimes (see Wang et al., 2024). Current approaches to quantifying uncertainty (UQ) in PINNs are based on dropout (see Zhang et al., 2018) and conformal prediction (see Podina et al., 2024). In recent years UQ for deep learning has received much attention; however, it is limited by its computational cost (see Papamarkou et al., 2024). An exciting avenue of work could be to explore combinations of state-space algorithms and PINNs, to achieve linear time complexities but with the flexibility of PINNs. We will add this discussion to the related work section of the main paper.
- Podina et al. (2024). Conformalized physics-informed neural networks. In *ICLR 2024 Workshop on AI4Differential Equations In Science*.
- Papamarkou et al. (2024). Position: Bayesian deep learning is needed in the age of large-scale AI. arXiv preprint.
- Zhang et al. (2018). Quantifying total uncertainty in physics-informed neural networks for solving forward and inverse stochastic problems. *Journal of Computational Physics*.
- Wang et al. (2024). Respecting causality for training physics-informed neural networks. *Computer Methods in Applied Mechanics and Engineering*.
**Q2: Application to AUTOIP.**
The derivation of the state-space algorithms hinges on the approximate posterior being represented as a posterior with likelihoods/sites that decompose across time, which is guaranteed within the natural gradient framework (as shown in Eq. 16). AUTOIP has no such guarantees since it optimises the natural parameters in Euclidean space with optimizers like Adam, and uses a whitened representation. However, if one drops the requirements of deriving a state-space algorithm, all of the approaches for reducing the spatial computational complexity in Sec. 4 can equally be applied to AUTOIP. Indeed, this is simple to do within our codebase (that will be released on publication). For example, on the Non-linear damped pendulum, we run this extension of AUTOIP with whitening and 50 inducing points for $C=1000$ and achieve an RMSE of $0.06 \pm 0.001$ and running time of $158.16 \pm 0.34$, clearly improving the running time against AUTOIP. This is still slower than our methods as it is cubic in the number of inducing derivatives/points. We will add this example to the appendix and this discussion into the main paper. The benefit of our proposed approach is that we remain linear in the temporal dimension which is vital for applications that are highly structured in time.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification. My recommendation is to accept the paper.

---

Rebuttal 1:
Rebuttal: We thank all five reviewers for their time and constructive reviews. This work *introduces a novel approach for solving partial differential equations (PDEs) using a Gaussian process prior* (**4ngK**) that *fits into recent literature on dynamical systems, Gaussian processes, and physics-informed machine learning* (**X3gM**), whose *beauty and originality of the proposed model is its unifying property* (**z1FQ**). We showcase our methods under a *bevy of test cases* (**BaEt**) that *convincingly demonstrate the advantages offered* (**BaEt**).
Based on the reviews, we have:
* Conducted additional experiments to showcase extensions of our proposed framework (unknown physics (**zAxE**, Q2) and extensions to AUTOIP (**z1FQ**, Q2)) and to provide additional uncertainty metrics (**BaEt**, W3).
* Improved the presentation by clarifying our problem statement (**BaEt**, W1) and our notation (see **4ngK**).
* Fixed the typos pointed out by the reviewers.

[dataset source: NeurIPS_2024_submissions_huggingface, conference year: 2024]

---

Summary: The submission discusses conditioning spatiotemporal Gaussian processes using differential equation constraints and observational data.
Specifically, the temporal component is handled by a Markovian prior (to achieve linear complexity), and the spatial component is dealt with by variational methods.
As such, the proposed algorithm extends the work by Hamelijnck et al. [18] to differential-equation constraints (and to handle spatial mini-batching); or, from the opposite perspective, it implements a version of a probabilistic numerical solver for spatiotemporal PDEs with variational methods (in space).
The resulting algorithm is evaluated on a damped pendulum ODE, a curl-free magnetic strength field, a one-dimensional reaction-diffusion PDE, and an ocean-current problem.
Strengths: All components are technically sound, and the submission fits into recent literature on dynamical systems, Gaussian processes, and physics-informed machine learning.
Overall, I think this is a nice paper, and despite a few minor weaknesses (see "Weaknesses"), I recommend acceptance.
Weaknesses: I identify two weaknesses:
1. The proposed algorithm's scientific novelty as a combination of known techniques is limited. Hamelijnck et al. essentially cover spatiotemporal variational inference with a Markovian prior in the temporal dimension. Conditioning Gaussian processes on differential equation constraints via collocation has become standard practice in recent years. Berlinghieri et al. developed the curl- and divergence-free spatial priors. The combination of these techniques is novel, but the increment to the existing literature is relatively small.
2. The clarity of the submission's relation to existing approaches would improve by discussing more articles related to probabilistic numerical methods. Currently, the manuscript cites the PMM by Cockayne et al., the ODE (initial value problem) solvers by Schober et al., Krämer et al., and Tronarp et al., and the GP-PDE paper by Pförtner et al. (all references are in the submission). Beyond those papers, Krämer et al. (below) describe solving time-dependent, nonlinear PDEs with collocation and Markovian priors; Schmidt et al. (below) combine nonlinear collocation constraints with observational data (for ODEs); and Krämer and Hennig (below) solve boundary value problems with collocation. There are other related papers, but I believe the former articles should be discussed in the manuscript. Further, the literature on statistical finite element methods is related and should be acknowledged: see Duffin et al. (below).
The first weakness is more significant than the second one. I expect that the second weakness will be relatively straightforward to resolve.
In any case, I believe the strengths outweigh the weaknesses and lean towards recommending acceptance.
**References:**
Schmidt, Jonathan, Nicholas Krämer, and Philipp Hennig. "A probabilistic state space model for joint inference from differential equations and data." Advances in Neural Information Processing Systems 34 (2021): 12374-12385.
Krämer, Nicholas, Jonathan Schmidt, and Philipp Hennig. "Probabilistic numerical method of lines for time-dependent partial differential equations." International Conference on Artificial Intelligence and Statistics. PMLR, 2022.
Krämer, Nicholas, and Philipp Hennig. "Linear-time probabilistic solution of boundary value problems." Advances in Neural Information Processing Systems 34 (2021): 11160-11171.
Duffin, Connor, et al. "Statistical finite elements for misspecified models." Proceedings of the National Academy of Sciences 118.2 (2021): e2015006118.
Technical Quality: 3
Clarity: 2
Questions for Authors: None
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes

---

Rebuttal 1:
Rebuttal: Thank you for your positive review. We address your two comments below.
**W1: Novelty of the proposed approach.**
Whilst we agree that some special cases of our framework are known, the 'beauty and originality of the proposed framework is its unifying property' (reviewer **z1FQ**). Not only do we provide a variational state-space framework unifying AUTOIP, HELMHOLTZ-GP, PMM, and EKS (Thm. 1), but we additionally extend this framework in three ways:
- Spatial-temporal derivative inducing points which are necessary for data sets that do not align to a grid and for reducing the cubic cost associated with the filtering state
- A structured VI approximation in which the state-space prior is only defined over the space-time points and temporal derivatives, which significantly reduces the size of the state
- Spatial minibatching which is necessary for large spatial data sets
Each of these points is novel, and when used in conjunction alleviates many of the limitations of previous methods. For example, on our ocean currents experiment, running any competitor (AUTOIP, HELMHOLTZ, PMM) is infeasible due to the sheer size of the data set ($N=42243$).
**W2: Relation to existing works.**
Thank you for the additional literature; these are indeed related works, and we will include them in our literature review. We briefly reiterate their connections here:
- Both Krämer and Hennig (2021) and Schmidt et al. (2021) are interesting works that are limited to the ODE setting.
- Krämer et al. (2022): This work derives a spatio-temporal PDE solver that can be encapsulated in our framework by seeing it as an extension of the EKF prior described in Ex. 3.1. As such, we do not see it as an alternative to our work but as a method that can be combined with ours. This could lead to interesting directions where finite differences are used in space in conjunction with our variational approximations, enabling application to large spatio-temporal datasets. We will add this discussion to our related work and future work sections.
- Duffin et al. (2021) build on the StatFEM work of Girolami et al. (2021) and Conrad et al. (2017) by deriving an extended Kalman filter where space is discretised through finite-elements. Like Krämer et al., this would be an interesting approach that could be used in conjunction with our work.
---
Rebuttal Comment 1.1:
Comment: Thank you for the reply!
For W1, I agree that the combination of the tools is new, even though each component may be known. My review assessed that the novelty of combining these tools may be somewhat limited, not that it doesn't exist. However, this perspective is subjective, which is why I recommend acceptance regardless.
For W2, thank you for the clarification. I agree with your perspective on those four papers.
Again, thank you for iterating. Since my assessed strengths and weaknesses remain, I will keep my already positive score.
---

Gorilla: Large Language Model Connected with Massive APIs — Accept (poster)

Summary: This paper proposes Gorilla, a fine-tuned LLaMA model that outperforms GPT-4 in crafting API calls and adapts well to documentation changes when paired with a document retriever, reducing hallucination issues. It also provides APIBench, a new evaluation dataset that includes APIs from HuggingFace, TorchHub, and TensorHub. Gorilla's integration with a retrieval system enhances tool-usage accuracy and tracking of documentation updates, promising more reliable LLM outputs.
Strengths: 1. The paper trains a system which connects massive APIs and takes text instructions to produce the corresponding API calls, along with a step-by-step explanation of the pipeline. It focuses on the details within an API call, which significantly mitigates the complexity involved in developing machine learning software.
2. The paper also constructs APIBench by scraping a large corpus of ML APIs and developing an evaluation framework to check functional correctness. It uses AST subtree-matching metrics which helps measure hallucination and further contributes to the evaluation of ML API mastering techniques.
3. The paper is well-written, presenting a detailed comparison with other related works, a clear structure for the methods, and comprehensive experiments and analysis.
Weaknesses: 1. Although the paper overall does not present significant issues, I would appreciate seeing a performance comparison with more specific methods, such as other works that also utilize LLMs for tool calling to build software pipelines, which may further demonstrate the contribution of the techniques presented by Gorilla.
2. The paper needs to better illustrate the contributions of this work and more clearly outline its potential impact on the community.
3. Some writing details need to be improved in the paper; for example, "the" should be omitted in "The model trained with the our method" in Line 31, and "AST" should be expanded to "Abstract Syntax Tree" the first time it appears in Line 36.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. In Figure 2, why is GPT-3.5 much better than GPT-4 in all settings?
2. As most of the instructions are generated by GPT-4 from your hand-generated instruction seeds, how do you ensure your seeds reflect real-world scenarios and capture their real difficulties?
3. As many APIs have overlapping functions, how do you decide which API is better than another similar one? How do you keep API calling stable so that the same request yields consistent results?
4. Which Llama version is used as your foundation model? Llama2 or Llama3?
5. In Table 1, why does Gorilla always produce a large number of errors from selecting wrong APIs? Do the results support your view?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The paper should provide a more comprehensive limitations section to point out potentially overlooked issues, such as the influence of synthetic training data and insufficient comparative experiments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes

---

Rebuttal 1:
Rebuttal: We thank the reviewer for their time and thoughtful comments. We are encouraged you enjoyed the exposition of our pipeline and find the APIBench, and the AST subtree-matching metric to contribute to mastering techniques to evaluate APIs.
**1. Highlighting the contributions and potential impact on community**
Gorilla is centered around connecting LLMs with APIs, and to this end, we:
1. Develop and manually curate a high-quality dataset that is valuable to the community.
2. We introduce Retriever-Aware Training (RAT), which teaches the LLM to either use or ignore the retrieved context. This is a novel approach that helps Gorilla achieve superior performance.
3. Propose evaluation metrics, including the first technique to define and measure hallucination for the domain of (functions and tools) APIs. We also perform human evaluation to validate the competency of the metric.
4. Study realistic API calls with constraints (e.g., “I would like to detect pedestrians and cars on the street. I want a model to run on a Raspberry PI with accuracy at least 65% on ImageNet”. Here, Gorilla needs to reason about the constraints - that the model needs to fit the memory budget while still meeting accuracy budgets.)
5. Measure hallucination for popular models for APIs.
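To make the AST sub-tree matching idea above concrete, here is a minimal sketch (our own simplification for illustration, not the paper's evaluation code; `api_call_matches` and its signature normalisation are assumptions): a generated snippet counts as matching when the reference call's dotted function name and keyword-argument names appear among the calls in the generation's abstract syntax tree.

```python
import ast

def api_call_matches(generated: str, reference: str) -> bool:
    """Check whether the reference API call appears (up to argument
    values) among the calls in the generated code's AST."""
    def signature(node):
        # Reduce a Call node to (dotted function name, sorted keyword names).
        def dotted(n):
            if isinstance(n, ast.Name):
                return n.id
            if isinstance(n, ast.Attribute):
                return f"{dotted(n.value)}.{n.attr}"
            return "?"
        return (dotted(node.func),
                tuple(sorted(k.arg for k in node.keywords if k.arg)))

    ref_calls = {signature(n) for n in ast.walk(ast.parse(reference))
                 if isinstance(n, ast.Call)}
    gen_calls = {signature(n) for n in ast.walk(ast.parse(generated))
                 if isinstance(n, ast.Call)}
    return ref_calls <= gen_calls
```

A call that swaps `torch.hub.load` for `torch.load`, or drops a required keyword, fails the subtree check and can be flagged as a hallucination or error.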
**2. Comparison with other LLMs for tool calling**
Beyond the frontier labs whose models we have compared against, the strongest baseline (API-focused model) in the academic literature at the time of submission is DocPrompting (Zhou et al., ICLR 2023), which looked at choosing the right subset of code, including APIs, along with a retriever. We demonstrate that when comparing Gorilla vs. DocPrompting, Gorilla improves accuracy while lowering hallucination. We are happy to include any new baselines you might suggest.
| Metric | DocPrompting | **Gorilla** |
|---|---|---|
| Accuracy ↑ | 61.72 | **71.68** |
| Hallucination ↓ | 17.36 | **10.95** |
We also include a comparison with **prompting techniques {0-shot} and {3-shot}** in Table 6 of Appendix, and demonstrate Gorilla improves performance across both techniques for GPT-3.5 and GPT-4.
**3. Writing Details**
Thank you for flagging these. We agree that the writing mechanics of the paper can be improved and have already made editorial passes over the paper to improve its exposition. We will also include a more comprehensive limitations section to point out potential issues, such as the influence of synthetic training data and comparative experiments, and have already reflected the suggested changes.
**4. Comparing GPT-3.5 vs GPT-4**
We would first like to highlight that, with frontier models, we can’t know for sure; perhaps this is orthogonal to this work. However, this intrigued us as well. One potential explanation we got when we reached out to OpenAI is that GPT-4 is “lightly” RLHF’d when compared to GPT-3.5, making 3.5 a stronger instruction follower. So we observe that, as we move from 0-shot to oracle-retriever, 3.5 shines, since it is better at “following the instruction” of using the API to answer the user question, while GPT-4, on the other hand, tends to hallucinate even when provided with an API - oracle or not.
**5. Diverse real-world scenarios for instruction**
For the self-instruct generation, we provide three in-context examples, along with reference API documentation, and task the model with generating real-world use cases that call upon the API. We specifically instruct the model to refrain from using any API names or hints when creating instructions. We constructed 6 examples (instruction-API pairs) for each of the 3 model hubs. After this, we manually verify and audit each generation, resulting in a high-quality dataset and making it an important contribution to the community.
Further, as a yardstick, Gorilla models have been downloaded 10,000+ times.
Here are some examples randomly chosen to demonstrate the diversity and versatility:
```
Example 1 (q 638) “I am an illustrator, I want to create an appealing image based on a text description for commercial purposes.”
Example 2 (q 704) “Assist me in finding the accurate information in a table related to the NYSE stock market”
Example 3 (q 867) “We want to communicate product information to online customers in France. 'Introducing the new eco-friendly water bottle made of high-quality stainless steel with double-wall insulation to keep your drinks cool for 24 hours or hot for 12 hours'.”
```
**6. How to decide which API is better than the other similar one**
Given a question, depending on the constraints, we narrow down the set of right answers. For example, if the user requests an object detection model, any and all object detection APIs are right answers. However, if a user specifies a constraint, either explicitly or implicitly (e.g., they want the model to have a top-1 accuracy of xx on ImageNet), then the set of APIs narrows down and a subset of them are “better” than other similar ones. Since we create the dataset, we can control for this. If two APIs are right answers, then if either of them is called, we mark the response as accurate.
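As a toy sketch of this constraint-narrowing step (the candidate pool, field names, and numbers below are illustrative assumptions, not the actual APIBench schema):

```python
# Toy candidate pool; field names and numbers are illustrative only.
CANDIDATES = [
    {"api": "torchvision.models.detection.ssdlite320_mobilenet_v3_large",
     "task": "object-detection", "imagenet_top1": 67.0, "size_mb": 13},
    {"api": "torchvision.models.detection.fasterrcnn_resnet50_fpn",
     "task": "object-detection", "imagenet_top1": 76.1, "size_mb": 160},
    {"api": "torchvision.models.segmentation.deeplabv3_resnet50",
     "task": "segmentation", "imagenet_top1": 76.1, "size_mb": 160},
]

def acceptable(task, min_top1=None, max_size_mb=None):
    """Return every API satisfying the task and the user's explicit
    constraints; with no constraints, all task-matching APIs are correct."""
    out = []
    for c in CANDIDATES:
        if c["task"] != task:
            continue
        if min_top1 is not None and c["imagenet_top1"] < min_top1:
            continue
        if max_size_mb is not None and c["size_mb"] > max_size_mb:
            continue
        out.append(c["api"])
    return out
```

With no constraints, both detection APIs count as correct answers; adding the 65% accuracy floor plus a Raspberry-Pi-style memory budget narrows the answer set to the lightweight MobileNet variant.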
**7. In Table 1, why (does) Gorilla always produce a large error in selecting wrong APIs? Do the results support your view?**
Yes, this is quite intuitive. By teaching the model domain knowledge (with our novel Retriever-Aware Training), the model tends to hallucinate less (not zero, but less than baselines). The error results from the model being unable to identify the user intent and generating an API call for the wrong task.
**8. Which Llama version is used as your foundation model?**
The Llama-2 model. We evaluated performance across three diverse base models: MPT-7B (0.70), Falcon-7B (0.66), and LLAMA-7B (0.71). The results are included in the paper (Appendix Figure 13) and demonstrate that, all else kept constant, our innovative Retriever-Aware Training (RAT) fine-tuning recipe is robust to the underlying base model.
---
Rebuttal Comment 1.1:
Title: Response to the author
Comment: Thank you for your reply. Your explanation has mostly addressed my concerns.
I hope you can incorporate some of the above content into the revised paper, such as the contributions and impact to the community, comparisons with other methods based on LLMs for tool calling, and the grammar fixes. However, since there are still some experimental phenomena whose causes have not been fully explained, I suggest the authors conduct a more in-depth analysis and investigation. Therefore, I have decided to keep my scores.

---

Summary: The authors present a new fine-tuned language model that is trained to map from user instructions to code snippets that invoke the appropriate APIs. The authors also introduce a new dataset which includes the information for roughly 1600 APIs from a variety of online sources, which is used to train the model. They conclude that their approach outperforms existing language models on the task of API call generation, as measured by both error rate and a novel “hallucination” metric.
Strengths: I feel that the problem statement is well-motivated and of general interest. Prior work has investigated allowing LLMs to make use of various tools, and improvements in that capability are likely to be well received. I appreciate that the authors have chosen a (relatively speaking) lightweight and open-source model, which increases the usability of their approach. I also appreciate the construction of a novel dataset, which could presumably be of interest to the larger community even in the absence of a specific model.
Weaknesses: First, I wonder about baselines. The authors have done a good job of comparing their method against a variety of open-source and closed-source LLMs, but these systems are all “generalist” compared to the fine-tuned Gorilla model. It would have been useful to see a comparison to a fine-tuned version of another model (even one with less fine-tuning than is possible on the small Llama-7b) just to have a sense of how much improvement can be leveraged from the newly introduced dataset. I also wonder if it would have been possible to compare to specialized tool-based LLM models like Toolformer. If such a comparison is not appropriate or possible, I feel the authors should mention why.
My second question concerns statistical significance. The authors indicate the high cost of LLM experiments as the reason for omitting such analysis. I sympathize with this explanation, but feel that I would be remiss not to stress the importance of statistical testing as part of justifying claims that Gorilla’s “performance surpasses prompting the state-of-the-art LLMs in three massive datasets.” Without a measure of a variance, we cannot meaningfully conclude that apparent improvements achieved by Gorilla (which are often on the order of 1-3% in overall score) are the result of anything other than noise. To be sure, a 7B parameter model that achieves even comparable results to a massive LLM is a notable result (though see my earlier point about fine-tuned baselines), but I would temper claims about improvement in the absence of statistical justification.
Lastly, I feel that the paper would benefit greatly from a more robust error analysis. In the best-performing realistic setting (i.e. without access to an oracle), Gorilla achieves an overall accuracy of roughly 65%. This means that more than one third of the time it is returning an incorrect result (including errors, the authors point out, that could be silent to the end user). Obviously this is far from being ready for direct deployment without external validation. Given the prevalence of errors, it would be useful to better understand what kinds of errors are common: are they more likely to occur with particular kinds of APIs? Does the wording and / or complexity of the input prompt have a large effect? This is the kind of analysis that, in combination with the novel dataset, could help spur future work and improvement. In addition, given the potential for damaging errors, a more robust broader impact statement would be beneficial.
Technical Quality: 2
Clarity: 2
Questions for Authors: How do models which are fine-tuned compare to Gorilla? (If such a comparison is not appropriate, why?)
What is the statistical significance of the results?
What are the most common kinds of errors?
Do the kinds of errors differ between the different categories of APIs?
As a final minor question: do the authors have an explanation for why GPT-3.5 appears to outperform GPT-4 across a few different problem settings? This seems like a surprising and somewhat notable result!
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: I feel that the authors should have a more robust discussion of the broader impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes

---

Rebuttal 1:
Rebuttal: We thank the reviewer for their time and their thoughtful comments. We are encouraged that you found our choice of model and dataset useful to the larger community. We will clarify some of the questions below:
**1. Fine-tuned version of another model and comparison**
We agree that GPT and Claude models are generalist models. We evaluated the performance across three diverse base models: MPT-7B (0.70), Falcon-7B (0.66), and LLAMA-7B (0.71) on the HF dataset. The results are included in the paper (Appendix Figure 13) and demonstrate that, for the same train-eval dataset, our innovative Retriever-Aware Training (RAT) fine-tuning recipe is robust to the underlying base model.
Toolformer, by its construction, requires the LLM to reason over the output, which extends well to general-purpose chat-style queries such as question answering, Wikipedia search, etc. Note that Gorilla supports any API call and doesn’t rely on the output of the API to train the model - which is often not available. Further, Toolformer requires a complete re-training (fine-tuning) when the API specification changes. However, since Gorilla is trained with RAT, it is robust to changes in API documentation.
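A schematic of how retriever-aware training examples might be assembled (a sketch under our own assumptions: the actual prompt template and distractor ratio are not specified here). The retrieved documentation is sometimes the gold document and sometimes a distractor, so the model learns both when to use the context and when to ignore it:

```python
import random

def make_rat_examples(pairs, docs, distractor_rate=0.5, seed=0):
    """Build instruction-tuning examples whose retrieved documentation is
    sometimes relevant and sometimes a distractor; the target completion
    is always the gold API call."""
    rng = random.Random(seed)
    examples = []
    for instruction, gold_call, gold_doc in pairs:
        if rng.random() < distractor_rate:
            # Distractor: any other document from the pool.
            doc = rng.choice([d for d in docs if d != gold_doc])
        else:
            doc = gold_doc
        prompt = (f"{instruction}\n"
                  f"Use this API documentation for reference: {doc}")
        examples.append({"prompt": prompt, "completion": gold_call})
    return examples
```

Because the completion never changes with the appended document, the model is rewarded for grounding its answer in the documentation only when that documentation actually matches the instruction.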
The strongest API-focused model at the time of writing the paper is DocPrompting (Zhou et al., ICLR 2023), which looked at choosing the right subset of code, including APIs, along with a retriever. We demonstrate that when comparing Gorilla vs. DocPrompting - which is a fine-tuned model - Gorilla improves accuracy while lowering hallucination. We are happy to include any new baselines the reviewer suggests.
| Metric | DocPrompting | **Gorilla** |
|---|---|---|
| Accuracy ↑ | 61.72 | **71.68** |
| Hallucination ↓ | 17.36 | **10.95** |
**2. What is the statistical significance of the results?**
Thank you for highlighting this. To demonstrate that Gorilla’s novel Retriever Aware Training (RAT) helps the model generalize very well to out of domain, for the rebuttal we evaluated RAT trained Gorilla on 2000 APIs from a totally diverse set including hyperscalers (GCP, Azure, AWS), Postman, RapidApi etc, and find that Gorilla (acc: 0.89) outperforms gpt-3.5-turbo-0125 (acc: 0.75). Some examples in this new set are: `stripe.Charge.create(amount={amount}, currency={currency}, source={source}, description={description})` to create a charge in Stripe, and `yfinance.Ticker({ticker}).dividends` to get dividend information of a stock from Yahoo Finance, which are completely out-of-domain to ML APIs demonstrating the ability to generalize well.
**3. Understanding the errors**
Thank you for your suggestion on including an analysis of errors. Given we have all the generations, this is an easy addition to the paper. In the interest of space, we present one example here, which also highlights errors that are possible even with an oracle retriever. TorchHub APIs usually require pre-processing that is often not templated like in HuggingFace. This causes quite a bit of confusion for the model. For example, to load Tacotron 2 from Nvidia, the right API call is `tacotron2_model = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_tacotron2', model_math='fp16')`. However, the model confuses this with the pre-processing step, which looks very similar: `utils = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_tts_utils')`, returning the pre-processing API instead of the model call.
In terms of categories, common types of errors include model-name hallucination (`resnet-201` instead of `resnet-101`), hallucination in the API call format (`torch.hub.load` becoming `torch.load`), and changes in directory structure (e.g. `facebookresearch/pytorch_GAN_zoo:hub` becoming `facebook/pytorch_GAN_zoo:hub`). The error categories vary in how often they occur as we go from 0-shot to BM-25 to oracle-retriever, but they remain steady as we vary datasets. We will include a collection of examples for more qualitative analysis in the appendix of the paper, which would be a rich playground for future work!
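Per-category error counts can be tabulated mechanically once each failure is labelled; a minimal sketch, where the `classify` heuristics are our own crude illustration rather than the paper's evaluation code:

```python
from collections import Counter

def classify(generated: str, reference: str) -> str:
    """Crude labelling heuristics for illustration only: compare the
    function name before '(' to separate call-format errors from
    argument-level (e.g. model-name) errors."""
    if generated == reference:
        return "correct"
    gen_fn = generated.split("(")[0]
    ref_fn = reference.split("(")[0]
    if gen_fn != ref_fn:
        return "wrong-call-format"
    return "wrong-argument"

# Toy generation/reference pairs illustrating the categories above.
counts = Counter(classify(g, r) for g, r in [
    ("torch.load('x')", "torch.hub.load('r', 'm')"),
    ("torch.hub.load('r', 'resnet-201')", "torch.hub.load('r', 'resnet-101')"),
    ("torch.hub.load('r', 'm')", "torch.hub.load('r', 'm')"),
])
```

A table of such counts per dataset and retriever setting would be a direct way to present the supplemental error analysis.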
**4. Why GPT-3.5 appears to outperform GPT-4 across a few different problem settings**
This is a great point of interest even for us. One potential explanation we got when we reached out to OpenAI is that GPT-4 is “lightly” RLHF’d when compared to GPT-3.5, making 3.5 a stronger instruction follower. So we observe that, as we move from 0-shot to oracle-retriever, 3.5 shines, since it is better at “following the instruction” of using the API to answer the user question, while GPT-4, on the other hand, tends to hallucinate even when provided with an API - oracle or not.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I appreciate the clarification about why Toolformer would not be an appropriate direct comparison. A few questions and concerns remain, however.
First, I'm not sure I was totally clear about question (2) -- I am asking for some kind of a t-test to compare the performance of different models. Are the improvements from Gorilla statistically significant (i.e. P < 0.05)? This is not a question of out-of-domain generalization, but of the robustness of the apparent improvement.
For question (3) -- while I understand that constraints on space make including additional examples difficult, I would recommend at least a supplemental table in which the counts of different error types (e.g. "model name hallucination" or "formatting hallucination" as you present) are presented.

---

Summary: This paper introduces Gorilla, a fine-tuned LLaMA model designed to improve large language models' ability to use APIs accurately. The authors created APIBench, a comprehensive dataset of ML APIs from HuggingFace, TorchHub, and TensorFlow Hub, and used self-instruct to generate instruction-API pairs for training. They fine-tuned LLaMA-7B with retrieval-aware training, incorporating techniques such as AST sub-tree matching for evaluating API call accuracy, retriever-aware training to adapt to API changes, and handling of constrained API calls. Results show that Gorilla outperforms existing LLMs (including GPT-4) on API call accuracy across multiple datasets, demonstrates the ability to adapt to test-time changes in API documentation, and handles constrained API calls effectively. The paper presents a novel approach to improving LLMs' API usage capabilities, with promising results that outperform existing state-of-the-art models in this specific domain.
Strengths: * Gorilla outperforms existing state-of-the-art language models, including GPT-4 and Claude, in API call accuracy across multiple datasets (TorchHub, HuggingFace, and TensorFlow Hub).
* Gorilla significantly reduces API argument hallucination errors compared to other models, improving the reliability of API calls.
* The retriever-aware training enables Gorilla to adapt to test-time changes in API documentation, allowing it to remain up-to-date with evolving APIs without requiring retraining.
* Gorilla demonstrates the ability to understand and respect constraints (e.g., accuracy requirements) when selecting appropriate APIs, outperforming other models in constraint-aware API invocations.
Weaknesses: A significant weakness of the Gorilla approach is that it relies heavily on knowledge augmentation, which is a common technique already used in various domains to improve language model performance on specific tasks. The use of retrieval-augmented generation to enhance API calling capabilities doesn't represent a novel improvement or implementation compared to similar approaches in other domains.
Essentially, Gorilla applies existing knowledge augmentation techniques to the specific task of API invocation, rather than introducing a fundamentally new method for improving language model capabilities. While the results show improvements in API calling accuracy, the core approach of combining retrieval with language model fine-tuning is not innovative in itself. This limits the broader impact and generalizability of the work beyond the specific domain of API invocation.
The lack of significant methodological innovation suggests that similar performance improvements could potentially be achieved by applying existing retrieval-augmented generation techniques to the API calling task, without necessarily requiring the specific Gorilla architecture. This raises questions about the uniqueness and broader applicability of the Gorilla approach beyond the narrow domain explored in the paper.

* While Gorilla performs well on the specific API datasets it was trained on, it's unclear how well it would generalize to entirely new APIs or domains not covered in the training data.
Other Notes:
* Although Gorilla shows some ability to handle constraints, its performance in this area is not significantly better than other models like GPT-3.5, suggesting room for improvement.
* The paper doesn't compare Gorilla's performance with specialized API documentation tools or code completion systems, which might be more tailored for this specific task.
* The paper doesn't provide an in-depth analysis of the cases where Gorilla fails, which could provide insights into its limitations and areas for improvement.
Technical Quality: 2
Clarity: 3
Questions for Authors: * How does Gorilla's performance compare to other specialized code generation or API-focused models, not just general-purpose LLMs?
* What is the performance impact of varying the size of the training dataset or the base model size (e.g. using LLaMA-13B instead of 7B)?
* How does Gorilla perform on more complex multi-step API workflows rather than just single API calls?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 1
Limitations: NA - Appropriately discussed limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes

---

Rebuttal 1:
Rebuttal: We thank the reviewer for their time and thoughtful comments. We are motivated that you found that Gorilla significantly reduces API argument hallucination errors compared to other models; that retriever-aware training is a novel contribution that allows Gorilla to adapt to test-time changes in API documentation, remaining up-to-date with evolving APIs without requiring retraining; and that Gorilla demonstrates the ability to understand and respect constraints (e.g., accuracy requirements) when selecting appropriate APIs, outperforming other models in constraint-aware API invocations.
We clarify some of the questions below:
We agree with the reviewer that prior work, including Vicuna, Orca [Mukherjee et al.], and Textbooks Are All You Need [Gunasekar et al.], has demonstrated knowledge augmentation techniques. However, our study centers on connecting LLMs with APIs. We:
1. Develop and manually curate a high-quality dataset that is valuable to the community.
2. Introduce Retriever-Aware Training (RAT) which teaches the LLM to either use or ignore the retrieved context. This is a novel approach that helps Gorilla achieve superior performance.
3. Propose evaluation metrics, including the first technique to define and measure hallucination for the domain of APIs (functions and tools) using Abstract Syntax Trees (ASTs). We also perform a human evaluation to validate the competency of the metric.
4. Study realistic API calls with constraints (e.g., “I would like to detect pedestrians and cars on the street. I want a model to run on a Raspberry PI with accuracy at least 65% on ImageNet”. Here, Gorilla needs to reason about the constraints - that the model needs to fit the memory budget while still meeting accuracy budgets.)
5. Measure hallucination for popular models for APIs.
**1. Generalization to out of domain**
Gorilla’s novel Retriever-Aware Training (RAT) helps the model generalize very well out of domain. For example, for the rebuttal we evaluated RAT-trained Gorilla on 2000 APIs from a totally diverse set including hyperscalers (GCP, Azure, AWS), Postman, RapidAPI, etc., and find that Gorilla (acc: 0.89) outperforms gpt-3.5-turbo-0125 (acc: 0.75). Some examples in this new set are: `stripe.Charge.create(amount={amount}, currency={currency}, source={source}, description={description})` to create a charge in Stripe, and `yfinance.Ticker({ticker}).dividends` to get dividend information for a stock from Yahoo Finance, which are completely out of domain relative to ML APIs, demonstrating the ability to generalize well.
**2. Although Gorilla shows some ability to handle constraints, its performance in this area is not significantly better than other models like GPT-3.5, suggesting room for improvement.**
We agree with the reviewer, and highlight that GPT-3.5 is a closed source model about which little is known. With Gorilla, we propose techniques to fine-tune a 7B parameter model that can match if not beat performance of GPT-3.5.
**3. How does Gorilla's performance compare to other specialized code generation or API-focused models, not just general-purpose LLMs?**
The strongest API-focused model at the time of writing the paper is DocPrompting (Zhou et al., ICLR 2023), which looked at choosing the right subset of code, including APIs, along with a retriever. Comparing Gorilla to DocPrompting, we demonstrate that Gorilla improves accuracy while lowering hallucination. We are happy to include any new baselines the reviewer suggests.
| | Accuracy ↑ | Hallucination ↓ |
|---|---|---|
| DocPrompting | 61.72 | 17.36 |
| **Gorilla** | **71.68** | **10.95** |
**4. In-depth analysis of failure cases**
Thank you for your suggestion on providing an in-depth analysis of the cases where Gorilla fails, which could provide insights into its limitations and areas for improvement. We will include this in the paper. In the interest of space, we provide an example here:
TorchHub APIs usually require pre-processing that is often not templated like in HuggingFace. This causes quite a bit of confusion for the model. For example, to load Tacotron 2 from Nvidia, the right API call is `tacotron2_model = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_tacotron2', model_math='fp16')`. However, the model confuses this with the pre-processing step which looks very similar `utils = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_tts_utils')`, returning the API for pre-processing instead of the model call.
**5. What is the performance impact of varying model**
We have evaluated the performance of the model across three diverse base models: MPT-7B (0.70), Falcon-7B (0.66), and LLaMA-7B (0.71) on the HF dataset. The results are included in the paper (Appendix Figure 13) and demonstrate that, all else kept constant, our Retriever-Aware Training (RAT) fine-tuning recipe is robust to the underlying base model.
**6. How does Gorilla perform on more complex multi-step API workflows rather than just single API calls?**
Gorilla is focused on improving the LLM's ability to invoke single-round APIs. Multi-step API workflows are exciting future work and outside the scope of this paper.
---
Rebuttal Comment 1.1:
Comment: To keep the authors updated:
Thank you for providing an in-depth response to my queries. I appreciate your clear rebuttals to my concerns, and tangible commitments to revise the work based on the feedback.
I will read the rebuttal again, and respond with follow-up queries (if applicable). I intend to change my scores based on my improved understanding. | Summary: This paper addresses a pipeline to call adequate APIs among massive pools to accomplish users’ instructions. For that, the authors construct and release the APIBench dataset that contains more than 1645 APIs, and propose the Gorilla model, which is a retrieval-aware finetuned Llama-7B model for API calls.
Strengths: - Constructing APIBench with the AST tree matching evaluation metric, as well as open-sourcing the trained Gorilla model, will largely benefit the community.
- The problem in this paper is timely in terms of LLM applications and eco-systems — i.e., automatizing API function calls.
- The fine-tuned model, Gorilla, surpasses the performance of current SOTA models.
- The paper is well written and soundly provides experiment results and analysis.
Weaknesses: - Lack of details about the training data for Gorilla, such as data stats and construction methods. It’s unclear how different the Gorilla training data and evaluation sets of APIBench are, which could lead to doubts about data contamination.
- Experiment results analysis
- How robust are models to paraphrased user instructions corresponding to the same target model API?
- Providing performance for each domain could make it possible to analyze which domains are easy and hard.
- Clarifications are needed:
- In the experimental result section, the authors report two metrics along with overall accuracy: **the error by hallucination and by selecting the wrong API call.** The metric equations or explanations for calculating the values need to be clarified.
- In section AST as a Hallucination Metric (line 251), how was experimented to attain the human evaluation of 78% accuracy?
- It is required to mention the license of crawled APIs for usage.
- A limitation section is absent, though the authors describe them in the checklist. Broader impact and limitation could include implicit risks such as usage of unreliable APIs, license infringement, or unexpected results by incorrect function calls.
Technical Quality: 3
Clarity: 3
Questions for Authors: - In section 4.1, “Note that for TorchHub and TensorHub, we evaluate all the models using AST tree accuracy score. *However, for HuggingFace, since the dataset is not exhaustive, for all the models except Gorilla, we only check if they can provide the correct domain names.*” is vague. The numbers of data for HF, TFH, and TH are 925, 801, and 95, respectively. Please clarify this.
- Table 1 shows that TorchHub is hard to even with oracle API documents compared to HuggingFace and TensorFlowHub. Can you draw the reason?
- (Typos) line 225. `his suggests`
- (suggestions) Please elaborate more on Table 2 — e.g., the caption and highlights. Moreover, it’s hard to connect between the written explanation and the table, due to different numbers.
- (suggestions) Absence of mentioning the full name of AST (Abstract Syntax Tree)
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: - Limitation section is absent, though the authors describe them in the check list. Broader impact and limitation could include implicit risks such as usage of unreliable APIs, license infringement, or unexpected results by incorrect function calls.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and thoughtful comments. We are encouraged that you find our contributions of APIBench, and AST tree matching evaluation metric, along with open-sourcing of the LLM models to be valuable contribution to the community.
We clarify the questions below:
**1. Details on Gorilla Training: Data and construction and Clarification on Sec 4.1**
The HuggingFace platform hosts and serves about 203,681 models. After filtering for poor documentation, lack of dependencies, models that have no information in their model card, etc., we pick the top 20 models from each domain. Post filtering, we arrive at a total of 925 models from HuggingFace. Note that some of the Model APIs are actually a family of APIs. For example, https://huggingface.co/docs/transformers/model_doc/resnet has `TFResNetModel` and `FlaxResNetModel` among others. In those scenarios, we decouple them into independent APIs. Similarly, TensorFlow Hub is versioned into v1 and v2. The latest version (v2) has 801 models in total. After filtering, we are left with 626 models. Similarly, we extract all 95 models (exhaustive) from Torch Hub.
Post extraction, we manually check every data-point for quality and correctness. For each API, we verify the dataset to ensure it is executable. Since our evaluation metric checks against the ground truth, having a correct answer using our metric is guaranteed to have high quality code. (Please also see our human eval, highlighting the relationship between the evaluation metric with the final execution accuracy.)
Then, guided by the self-instruct paradigm (Wang et al., 2022a), we employ gpt-4-0613 to generate synthetic instruction data. We provide three in-context examples along with reference API documentation, and task the model with generating real-world use cases that call upon the API. We specifically instruct the model to refrain from using any API names or hints when creating instructions. We constructed 6 examples (instruction-API pairs) for each of the 3 model hubs. These 18 examples were the only hand-generated data. For each of our 1,645 APIs, we generate 10 instruction-API pairs by sampling 3 of the 6 corresponding instruction examples for each pair (illustrated in Figure 3). We then divide the data into train and test splits uniformly at random.
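The generation loop described above (10 pairs per API, each prompted with 3 of the 6 hand-written seed examples) could be sketched roughly as follows. This is an illustrative reconstruction, not the paper's actual code; `call_llm` is a hypothetical stand-in for the gpt-4-0613 call, and the data shapes are assumptions.

```python
import random

def make_pairs(api_doc, seed_examples, call_llm, pairs_per_api=10, shots=3):
    """Self-instruct-style generation: for each API, sample `shots` of the
    hand-written seed examples as in-context demos and ask the LLM for a
    fresh real-world instruction that resolves to the API (without naming it)."""
    pairs = []
    for _ in range(pairs_per_api):
        demos = random.sample(seed_examples, shots)  # 3 of the 6 seed examples
        instruction = call_llm(api_doc, demos)
        pairs.append({"instruction": instruction, "api": api_doc["name"]})
    return pairs

# Stub LLM purely for illustration; the paper uses gpt-4-0613 here.
def fake_llm(api_doc, demos):
    return f"A use case that should resolve to {api_doc['name']}"

random.seed(0)
pairs = make_pairs({"name": "torch.hub.load", "doc": "..."},
                   seed_examples=list(range(6)), call_llm=fake_llm)
```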
We train Gorilla for 5 epochs with a 2e-5 learning rate with cosine decay, batch size 64, warm-up ratio 0.03, and a 2048 max sequence length.
**2. How robust are models to paraphrased user instructions corresponding to the same target model API?**
The trained models can respond to a diverse set of user instructions, given the diversity in our training dataset. As a yardstick, Gorilla models have been downloaded 10,000+ times; here are some randomly chosen examples:
```
Example 1 (q 638) “I am an illustrator, I want to create an appealing image based on a text description for commercial purposes.”
Example 2 (q 704) “Assist me in finding the accurate information in a table related to the NYSE stock market”
Example 3 (q 867) “We want to communicate product information to online customers in France. 'Introducing the new eco-friendly water bottle made of high-quality stainless steel with double-wall insulation to keep your drinks cool for 24 hours or hot for 12 hours'.”
```
**3. Providing performance for each domain could make it possible to analyze which domains are easy and hard.**
This is very valuable feedback, and we have already included this in the paper. We find the variance to be impacted by the number of models in the category. For example, in HuggingFace, `Computer Vision Object Detection` has an accuracy of 0.77 compared to `NLP Text2Text Generation`, which has an accuracy of 0.68.
**4. The metric equations or explanations**
Thank you for raising this. We will clarify this in the paper. Here is a brief explanation: we first parse all the API calls in the API pool into Abstract Syntax Trees (ASTs). We then parse the LLM's output into an AST and check whether it matches any AST in the pool (subtree match). Note that we check not only the function name but also the non-optional arguments. For example, in HuggingFace, one example could be image classification: `pipeline('image-classification', model='fxmarty/resnet-tiny-beans')`. We check the AST node for the function `pipeline` and the value arguments `image-classification` and `fxmarty/resnet-tiny-beans`. If they all match the model's output, we claim the model's output matches this specific API. We then check the function description of the model's output against that of the matched API in our pool; if they are equivalent, we claim the output correct. Evaluation metrics: Accuracy is #Correct / #Total. To be correct, the model's output must not only match one API in the API pool, but the matched API's function description must also be the same as the ground truth's. Hallucination is #Not Matched / #Total: the model's output does not match any API call, so we cannot find the corresponding API in our dataset.
Accuracy + Hallucination + Error (syntactic, etc) = 1
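A minimal sketch of the subtree-matching check described above, using Python's built-in `ast` module. The helper names and the exact matching rule are our simplification; the actual evaluation additionally compares function descriptions, as noted.

```python
import ast

def parse_call(code):
    """Parse a single API call into (function name, positional args, keyword args)."""
    node = ast.parse(code.strip(), mode="eval").body
    if not isinstance(node, ast.Call):
        return None
    func = ast.unparse(node.func)
    pos = [ast.literal_eval(a) for a in node.args if isinstance(a, ast.Constant)]
    kw = {k.arg: ast.literal_eval(k.value)
          for k in node.keywords if isinstance(k.value, ast.Constant)}
    return func, pos, kw

def match_api(model_output, api_pool):
    """Subtree match: the reference call's function name and non-optional
    arguments must all appear in the model's output. Returns the matched
    API, or None (a non-match is counted as hallucination)."""
    out = parse_call(model_output)
    if out is None:
        return None
    func, pos, kw = out
    for api in api_pool:
        ref = parse_call(api)
        if (ref and ref[0] == func
                and all(a in pos for a in ref[1])
                and all(kw.get(k) == v for k, v in ref[2].items())):
            return api
    return None

pool = ["pipeline('image-classification', model='fxmarty/resnet-tiny-beans')"]
# Extra optional args in the model output do not break the subtree match.
match_api("pipeline('image-classification', model='fxmarty/resnet-tiny-beans', device=0)", pool)
```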
**5. License of crawled APIs**
The licenses of crawled APIs are permissible under Apache 2.0. TensorFlow is licensed under the Creative Commons Attribution 4.0, and code samples are licensed under the Apache 2.0. Pytorch Hub follows the Linux Foundation Policies. The huggingface hubdocs are all on Apache 2.0.
**6. TorchHub is hard to even with oracle API documents**
TorchHub APIs usually require pre-processing that is often not templated like in HuggingFace. This causes quite a bit of confusion for the model. For example, to load Tacotron 2 from Nvidia, the right API call is `tacotron2_model = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_tacotron2', model_math='fp16')`. However, the model confuses this with the pre-processing step, which looks very similar, and returns `utils = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_tts_utils')` instead.
---
Rebuttal 2:
Title: How was the experiment conducted to attain the human evaluation of 78% accuracy?
Comment: **7. human evaluation of 78% accuracy?**
This is from a human evaluation performed by directly executing the code. We manually evaluated 100 LLM generations (randomly chosen from our eval set). The accuracy using AST subtree matching is 78%, which is consistent with the human evaluation that revealed 78% accuracy in calling the right API. All the generations that AST flagged as incorrect were the same ones that were manually flagged as incorrect. Additionally, Gorilla also generates supporting code to call the API, which includes installing dependencies (e.g., `pip install transformers[sentencepiece]`), environment variables, etc. When we manually attempted to execute these codes, 72% of all generated codes executed successfully. It is worth noting that the 6% discrepancy is NOT due to semantic errors, but to errors that arose from factors external to the API in the supporting code; we have included an example to illustrate this further. Considering the significant time and effort required for manual validation of each generation, our data further reinforces our belief in the efficiency of using AST matching as a robust offline metric.
Here is a representative example, where we are able to load the correct model API. However, in the supporting code, after we have the output from the API, the `zip()` function tries to combine sentiments and scores together. However, since scores is a `float`, it's not iterable. `zip()` expects both its arguments to be iterable, resulting in an `'float' object is not iterable` error.
```python
from transformers import pipeline

def load_model():
    classifier = pipeline('sentiment-analysis', model='nlptown/bert-base-multilingual-uncased-sentiment')
    return classifier

def process_data(comments, classifier):
    response = classifier(comments)
    sentiments = response[0]['label'].split()
    scores = response[0]['score']  # a single float, not a list
    # zip() expects both arguments to be iterable; since `scores` is a float,
    # the next line raises "TypeError: 'float' object is not iterable"
    result = [{'sentiment': sentiment, 'score': score} for sentiment, score in zip(sentiments, scores)]
    return result

comments = "These comments are about our news website."
# Load the model
classifier = load_model()
# Process the data
response = process_data(comments, classifier)
print(response)
```
---
Rebuttal Comment 2.1:
Title: Reviewer please respond
Comment: Dear reviewer,
Thank you for your efforts in reviewing this paper. Now that the authors have provided their response, do you have any further comments?
Thank you,
AC | Rebuttal 1:
Rebuttal: We are encouraged by the insightful reviews and appreciate the recognition of the key strengths of our work. The reviewers highlighted:
1. The timely relevance of our problem statement in the realm of Large Language Models (LLMs) and API ecosystems, emphasizing our novel approach in automatizing API function calls **[MqVN, jdAv]**.
2. Our model, Gorilla, has been noted for its exceptional performance, surpassing current state-of-the-art (SOTA) models like GPT-4 and Claude in API call accuracy, and significantly reducing API argument hallucination errors **[Umrp, MqVN]**.
3. The novel use of AST tree matching evaluation metrics for measuring LLM hallucination in calling APIs, and the construction of APIBench, as well as the open-sourcing of Gorilla, are particularly noted for their potential to benefit the community **[MqVN, N9sF]**.
4. The paper’s clear, well-structured presentation, detailed comparative analysis, and comprehensive experimental results were also commended **[N9sF, MqVN]**.
5. We acknowledge the positive feedback on the model’s ability to adapt to evolving APIs and its effectiveness in constraint-aware API invocations **[Umrp]**.
We address individual concerns below, and all suggested revisions have been incorporated into the manuscript. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Optimal Algorithms for Augmented Testing of Discrete Distributions | Accept (poster) | Summary: This paper considers the problem of hypothesis testing for discrete distributions, including identity testing, closeness testing, and uniformity testing. The authors investigate these testing problems in a setting where a predicted data distribution is available. Utilizing this additional information, the authors propose an "augmented tester", which can reduce the sample complexity when the given predicted distribution is accurate (when it is inaccurate, the proposed algorithm is still robust). Lower bounds are also provided to justify the optimality of the proposed algorithm.
Strengths: The idea of augmenting hypothesis testing using additional information is interesting. The proposed algorithm can reduce the sample complexity when the given prediction is accurate enough, which is satisfactory. Lower bounds are also provided, making this paper comprehensive.
Weaknesses: A possible weakness is that the structure of this paper is a little odd. You first give your theorems in the introduction, before your proposed algorithm has been mentioned. Then you provide the proofs in Section 2 and state the upper bound again in Section 3. The algorithm is not presented until the end.
Technical Quality: 3
Clarity: 2
Questions for Authors: I am curious about the intuition behind your algorithm, i.e., how to utilize the additional information. It will be good if the authors could provide more explanation of the proposed algorithm.
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: no limitations are stated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **presentation**
Thank you for your feedback. The algorithm is given on page 8 of the main body. We give a technical overview in Section 2, before the algorithm, to paint a high-level picture of how our upper bounds and lower bounds (in the appendix) fit together to give a complete characterization of the augmented distribution testing problems we study.
**Intuition behind our algorithm**
For the closeness testing problem, we use the hint distribution $\hat{p}$ to reduce the $\ell_2$ norm of $p$, the unknown distribution. Intuitively, we can do this by looking at the ‘heavy hitters’ of $\hat{p}$: the domain elements to which $\hat{p}$ assigns a high probability mass. We then split the mass of each heavy element equally among many synthetically created domain elements. If we see such an element $i$ in our sample from $p$, we perform a thought experiment where we instead sample uniformly from the synthetically created domain elements corresponding to $i$. This does not change any of the TV distances, but significantly reduces the $\ell_2$ norm of $p$ (formalized in Lemma 3.1). A smaller $\ell_2$ norm makes the testing problem significantly easier, as in prior works. For intuition about our other results and further details, please see Sections 2 and 3 in the main body.
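A toy numeric sketch of this splitting idea (our illustration of the intuition, not the paper's Lemma 3.1; the threshold rule is an assumption): each element that $\hat{p}$ deems heavy is split into equal-mass synthetic copies, which preserves total mass (and TV distances under the re-routing thought experiment) while shrinking the $\ell_2$ norm.

```python
import math

def flatten_with_prediction(p, p_hat, threshold):
    """Split element i into k_i = ceil(p_hat[i] / threshold) synthetic copies;
    a sample i from p is re-routed uniformly among i's copies, so each copy
    carries mass p[i] / k_i on the enlarged domain."""
    out = []
    for pi, hi in zip(p, p_hat):
        k = max(1, math.ceil(hi / threshold))
        out.extend([pi / k] * k)
    return out

def l2(v):
    return math.sqrt(sum(x * x for x in v))

p = [0.5, 0.3, 0.1, 0.1]
p_hat = [0.5, 0.3, 0.1, 0.1]          # pretend the prediction is accurate
q = flatten_with_prediction(p, p_hat, threshold=0.1)
# total mass is preserved, while the l2 norm drops from 0.6 to about 0.316
```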
---
Rebuttal Comment 1.1:
Comment: Thanks for your explanation. I will raise my rating to 6. | Summary: The paper studies the problem of property testing (specifically uniformity, identity, and closeness testing) in the context of the learning-augmented algorithms framework. They show how a prediction about the underlying distribution could be harnessed to provably reduce the number of samples required when the prediction is of high quality. They give efficient algorithms and provide experimental evaluation with code.
Strengths: The problem is well-motivated and the paper is well written.
Weaknesses: I did not check the proofs in the appendix in detail but the intuition given for the approaches make sense.
Technical Quality: 3
Clarity: 3
Questions for Authors: - In the paragraph starting on Line 177: given "perfect predictors", why does one even need additional samples from the distribution? From my limited understanding, that paper talks about having additional access to other kinds of queries on the underlying distribution (apart from just drawing IID samples) and "samples" may refer to those other kinds of queries (for instance, one can query the probability mass of an element and get its true mass in $p$). I'm not sure how that directly compares with your setting where "samples" mean IID samples only.
- How do you obtain an **upper bound** expression for the final big-O containment in the equation block between Lines 378-379? To be precise, we have $s_f = \min\{\frac{n^{2/3} \alpha^{1/3}}{\varepsilon^{4/3}}, n\} \leq \frac{n^{2/3} \alpha^{1/3}}{\varepsilon^{4/3}}$, so $\frac{n}{\varepsilon^2} \cdot \sqrt{\frac{\alpha}{s_f}} \geq \frac{n^{2/3} \alpha^{1/3}}{\varepsilon^{4/3}}$, which is a **lower bound**, right? Where did I mess up?
Possible typos:
- Line 266: "...strategy is put the..." should be "...strategy is **to** put the..."?
- First line of the equation between Lines 328-329: Should the last equality be $\leq$?
- Line 375: Should it be $\leq 100 s_f$ instead?
- Line 376: Should it be $\leq 102 n$ instead?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Nil
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comparisons to the previous work**
[29] is an example of studying distribution testing when both samples and a prediction distribution $\hat{p} = p$ are available. However, their algorithm does not receive $p$ directly and can only query it. Since they do not see the whole of $p$, they require samples from it as well. They consider two types of queries: querying the probability of a certain domain element $p(i)$, or querying the probability of the first $i$ elements for a given $i$. In comparison to our model, their assumption on the accuracy of the prediction is very strong, as they assume $p = \hat{p}$. They also consider an alternative model with multiplicative noise, which is still a stronger assumption than ours. However, their model is weaker in another sense, as they do not have access to $\hat{p}$ for free. The total cost of their algorithm involves the number of queries they make to $\hat{p}$ and the number of samples (which they describe as queries to the sampling oracle). For testing uniformity and identity, they have shown that $\Theta(1/\epsilon)$ queries are necessary and sufficient. Given their upper bounds and our lower bounds, it can be concluded that solving the problem in our model is more difficult, meaning it requires many more samples. Please see Section 1.3 for a more detailed comparison to prior related works.
**Sample complexity**
For $s_f$, we have two possibilities: either it is equal to $n^{2/3} \alpha^{1/3} / \epsilon^{4/3}$, or it is equal to $n$. In the first case, it is not too hard to see that the expression will be $O(s_f) = O(n^{2/3} \alpha^{1/3} / \epsilon^{4/3})$. Now, assume $s_f$ is $n$. This case implies that:
$$n \leq n^{2/3} \alpha^{1/3} / \epsilon^{4/3} \leq n^{2/3}/\epsilon^{4/3}.$$
Hence, there is an interdependence between the parameters: $n \leq 1/\epsilon^4$. Therefore, one can conclude $n \leq \sqrt{n}/\epsilon^2$, and both expressions in the sample complexity equate to $O(\sqrt{n}/\epsilon^2)$.
Maybe it is helpful to note that in this case: $\sqrt{n}/\epsilon^2 \geq n^{2/3}/\epsilon^{4/3} \geq n^{2/3} \alpha^{1/3}/\epsilon^{4/3}$. Hence, our lower bound is not violated.
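As a quick numeric sanity check of this case analysis (not a proof; the parameter grid below is arbitrary), one can verify that whenever $n \leq n^{2/3}\alpha^{1/3}/\epsilon^{4/3}$ holds with $\alpha \leq 1$, we indeed have $n \leq \sqrt{n}/\epsilon^2$:

```python
# If n <= n**(2/3) * alpha**(1/3) / eps**(4/3) and alpha <= 1, then
# n <= alpha / eps**4 <= 1 / eps**4, hence sqrt(n) <= 1 / eps**2 and
# n <= sqrt(n) / eps**2 -- checked here over an arbitrary parameter grid.
for n in [10, 100, 10_000, 10**6, 10**9]:
    for alpha in [0.001, 0.01, 0.1, 1.0]:
        for eps in [0.005, 0.01, 0.05, 0.2]:
            if n <= n ** (2 / 3) * alpha ** (1 / 3) / eps ** (4 / 3):
                assert n <= n ** 0.5 / eps ** 2
print("case analysis consistent on the grid")
```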
[In case we did not answer this question adequately, please leave us a comment, and we will try to clarify further.]
---
Rebuttal Comment 1.1:
Comment: Thank you very much for clearing up my confusion about the sample complexity!
I did read Section 1.3, and I understand that [29] considers a different query model where one can perform stronger queries, such as asking for element probabilities $p(i)$. However, from what I understand and what you mentioned, [29] does not get any additional predictor as part of the input. As such, I think it is highly confusing to say that [29] uses "perfect predictors", and I do not think they are directly comparable. In the learning-augmented setup, one can always make a decision by just looking at the given predictor without doing any additional work: if the predictor is arbitrarily bad, the answer will just be wrong. Thus, I find it strange to say that an algorithm given a perfect predictor even needs to take any samples.
This is still a good piece of work overall, and I stick to my positive review. However, as per my response above, I would recommend that the authors reconsider rephrasing the comparison to [29] in their revision.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comment. In our future version, we will add the necessary clarification to provide a more comprehensive comparison with [29]. | Summary: The paper looks at the problem of hypothesis testing for discrete distributions where a predicted data distribution is available. The paper gives algorithms (Algorithm 1; closeness testing, Algorithm 3; identity and uniform testing) which either reduces the number of samples required for testing, or do no worse than standard approaches. Lower bounds on samples are given, and experiments are done to validate the performance on real data.
Strengths: The paper is extremely thorough and detailed, giving an overview of the important results (including experiments), as well as a roadmap of the intuition required behind the proofs for the upper bound and lower bounds.
The appendix also includes motivation and explanation for the numerous proofs involved.
The main contributions of needing significantly fewer sample sizes for hypothesis testing compared to CRS'15 or [41] is also significant.
Weaknesses: I am not sure whether this is an appropriate venue, mostly due to the heavy technical details in the paper, with a lot of key details in the appendix; but to be fair, the conference format makes it difficult to present detailed work like this.
Unfortunately, this is not mostly in my area of speciality, so I am unable to give meaningful comments about potential weaknesses (or strengths).
Technical Quality: 3
Clarity: 4
Questions for Authors: These questions may be straightforward to those familiar with the relevant literature, but these are the questions I have:
- Are there certain types of data where making weak assumptions about the predictor (compared to related works) might give bad results (or similar results to existing methods)?
- Is there a reason why experiments for identity testing and uniformity testing are not given?
Confidence: 1
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Suitability for the conference**
We would like to point out that many related works on distribution testing have appeared in recent ML conferences such as NeurIPS/ICML/ICLR. These works are relevant to the learning theory, privacy, algorithmic statistics, and sublinear algorithms communities within ML. Please see below for a small sample of such related works that appeared at recent NeurIPS conferences and that we cite in our work. The references in our submission list additional related works in other top ML venues.
[4] J. Acharya, Z. Sun, and H. Zhang. Differentially private testing of identity and closeness of discrete distributions. NeurIPS 2018.
[5] Jayadev Acharya, Constantinos Daskalakis, and Gautam Kamath. Optimal testing for properties of distributions. NeurIPS 2015.
[8] Maryam Aliakbarpour, Mark Bun, and Adam Smith. Hypothesis selection with memory constraints. NeurIPS 2023.
[9] Maryam Aliakbarpour, Ilias Diakonikolas, Daniel Kane, and Ronitt Rubinfeld. Private testing of distributions via sample permutations. NeurIPS 2019.
[24] Clement L. Canonne, Ilias Diakonikolas, Daniel Kane, and Sihan Liu. Nearly-tight bounds for testing histogram distributions. NeurIPS 2022.
[27] Clement L. Canonne, Gautam Kamath, Audra McMillan, Jonathan Ullman, and Lydia Zakynthinou. Private identity testing for high-dimensional distributions. NeurIPS 2020.
[45] Ilias Diakonikolas, Daniel M. Kane, and Alistair Stewart. Sharp bounds for generalized uniformity testing. NeurIPS 2018.
**Weaker assumptions about prediction**
Thank you for suggesting an interesting open direction! It is not clear to us if our predictor assumption can be relaxed while retaining similar guarantees. We believe our model is quite natural, but there could certainly be other weaker assumptions which are powerful enough to give improved sample complexity. However, we note that our algorithms are robust, so even if the predictor assumptions do not hold, we never lose asymptotically on the sample complexity compared to classical algorithms without predictions.
**Experiments on identity testing and uniformity testing**
We focused on closeness testing since it is the hardest statistical task out of the three distribution testing problems we studied: it generalizes identity and uniformity testing, and a larger sample complexity is required, both in the classical and augmented settings. We suspect that one can see similar qualitative gains from using our augmented algorithms, provided that appropriate predictions are available.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed rebuttal. My score remains unchanged. | Summary: This paper studies the task of identity/closeness testing when the tester is augmented with predictions of the unknown distribution involved apriori. In particular, in identity testing, in addition to sample access to the unknown distribution $p$, the algorithm is also given some $\hat p$ that may or may not satisfy the guarantee $ \text{TV}(p, \hat p) \leq \alpha$. If the guarantee does not hold, the algorithm is allowed to answer **inaccurate**. Otherwise, the algorithm is asked to perform the standard testing task: return YES if $p$ equals to some known distribution $q$, and NO if $p$ is at least $\epsilon$-far from $q$ in total variation distance. For closeness testing, though both $p$ and $q$ are unknown, the authors assume that such prior prediction $\hat p$ is only available for one of the distributions.
For identity testing, the sample complexity goes through a phase transition depending on the relative size of the prediction error tolerance parameter $\alpha$ and $d = \text{TV}(q, \hat p)$, the TV distance between the prediction distribution and the known distribution. When $d < \alpha$, the sample complexity is the same as for standard identity testing. When $d > \alpha$, the sample complexity is $\min\left( \frac{1}{ (d - \alpha)^2 }, \frac{\sqrt{n}}{\epsilon^2} \right)$. Intuitively, this says that the prediction could potentially help a lot if it predicts that the test is in the soundness case (as $\hat p$ is indeed far from $q$) but not so much if it predicts the test is in the completeness case. This intuition is clear from their upper bound approach. They leverage the Scheffé set $S$ (the set that realizes the maximum discrepancy between $\hat p$ and $q$), and test for discrepancy between $p(S)$, $q(S)$ and $\hat p(S)$. This will either invalidate the prediction or lead to rejection. Furthermore, since this is simply a 1-dimensional bias estimation problem, they can avoid any dependency on the domain size (when $d$ is sufficiently separated from $\alpha$). The above strategy is not so helpful when there is no significant discrepancy between $p(S)$, $q(S)$ and $\hat p(S)$, and therefore their algorithm falls back to the standard identity tester when the prediction says that the test is in the completeness case.
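The Scheffé-set strategy described above can be sketched in a few lines. This is an illustrative simplification, not the paper's algorithm: the decision thresholds below are placeholder constants, and distributions are modeled as plain dictionaries over a finite domain.

```python
def scheffe_set(p_hat, q):
    """Scheffé set of p_hat vs. q: the event realizing TV(p_hat, q)."""
    support = set(p_hat).union(q)
    return {x for x in support if p_hat.get(x, 0.0) > q.get(x, 0.0)}

def mass(dist, S):
    return sum(dist.get(x, 0.0) for x in S)

def augmented_identity_test(samples, p_hat, q, alpha, eps):
    """Estimate p(S) from samples and compare it to p_hat(S) and q(S).

    A large gap from p_hat(S) invalidates the prediction's TV guarantee;
    a large gap from q(S) certifies that p is far from q. Otherwise we
    fall back to a standard identity tester. Thresholds are illustrative.
    """
    S = scheffe_set(p_hat, q)
    p_S = sum(1 for s in samples if s in S) / len(samples)  # 1-d bias estimation
    if abs(p_S - mass(p_hat, S)) > alpha + eps / 4:
        return "inaccurate"
    if abs(p_S - mass(q, S)) > eps / 2:
        return "NO"
    return "fallback"
```

Note how the test only needs to estimate a single probability $p(S)$, which is why no dependence on the domain size arises in this branch.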
For closeness testing, they show that the sample complexity is given by $\sqrt{n} \alpha^{1/3} / \epsilon^{4/3} + \sqrt{n} / \epsilon^2$. Here the Scheffé set strategy no longer applies, as $q$ is also unknown to us. However, the authors show that the prior $\hat p$ is still very useful in an important closeness testing sub-routine (commonly referred to as **flattening**). At a high level, when the unknown distributions have large $\ell_2$ norm, standard collision-based test statistics may have large variance. Flattening techniques can then be used to transform the distributions to reduce their $\ell_2$ norms, and hence also the variance of the test statistics. The authors show that such a routine can be implemented in a more sample-efficient way if the algorithm is equipped with a prediction $\hat p$.
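A toy illustration of prediction-based flattening (the bucket rule below is a simplification assumed for exposition, not the paper's exact sub-routine): splitting element $i$ into roughly $k \cdot \hat p(i)$ sub-buckets makes any distribution close to $\hat p$ nearly uniform over the buckets, shrinking its $\ell_2$ norm and hence the variance of collision statistics.

```python
import math

def prediction_buckets(p_hat, k):
    """Split element i into roughly k * p_hat(i) sub-buckets (at least 1)."""
    return {i: max(1, math.ceil(k * pi)) for i, pi in p_hat.items()}

def flattened_l2_sq(p, buckets):
    """Squared l2 norm of p after spreading each element's mass uniformly
    over its buckets: each of b_i buckets carries p_i / b_i, so element i
    contributes b_i * (p_i / b_i)^2 = p_i^2 / b_i."""
    return sum(pi ** 2 / buckets[i] for i, pi in p.items())
```

For example, on $p = \hat p = (1/2, 1/4, 1/4)$ with $k = 4$, the squared $\ell_2$ norm drops from $0.375$ to $0.25$, and the reduction grows with the skew of the distribution.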
Strengths: Identity and closeness testing are fundamental problems in many different areas. In the standard setting, the sample complexity scales polynomially with the domain size, making the tests prohibitively expensive for distributions with large supports. The authors demonstrated that the sample complexity may be significantly improved if the tester is augmented with a prediction of the unknown distribution. The surprising part is that the prediction need not be accurate. While the tester draws significantly fewer samples if the prediction is accurate, even if it is not, the tester is guaranteed not to be misguided. In particular, it can identify inaccuracies in the prediction and simply fall back to the standard testing approach in that case. Lastly, the bounds are optimal up to constant factors and technically solid.
Weaknesses: The authors mention that the potential application is when the unknown distribution evolves over time. In that case, the "unknown" distribution is not completely unknown to us, as we may extract information from past data. It would be more compelling if the authors could provide a more detailed mathematical setup for this. For example, it would be interesting to see an analysis of the behavior of augmented testers when we face a sequence of tests where the unknown distribution may go through random distribution shifts from one task to another.
Technical Quality: 3
Clarity: 3
Questions for Authors: Have the authors considered the situation where we have prior knowledge about both of the unknown distributions in closeness testing? Then it seems like the Scheffé set may again become useful, and could lead to huge sample complexity improvements.
Comment: The setup bears some similarities to the notion of testable learning, where the algorithm is given some prior distribution assumption that may or may not hold. In particular, the algorithm is also given the ability to output ``reject'' when this prior knowledge is false.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, they have.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Clarification on evolving distributions**
For our results, the algorithm requires access to a prediction distribution $\hat{p}$, which is intended to be close to the unknown distribution $p$ (in addition to samples from $p$). Our algorithms are applicable in settings where the extra information available about $p$ can be translated into a distribution $\hat{p}$. In our introduction, we strive to depict various scenarios where such predictions can be available, including slowly evolving distributions such as network traffic data or search engine queries. Mathematically, we consider a series of slowly changing unknown distributions: $p_1, p_2, \ldots, p_t$. In this setting, the empirical distribution of the samples from $p_1, p_2, \ldots, p_{t-1}$ can serve as a prediction for $p_t$. For example, the distributions of search queries change slightly every hour or so. However, the overall empirical distribution of search queries from the past year could be a good prediction for the distribution at a particular hour. We will clarify this in our introduction. Continual testing while the distributions undergo small perturbations at every step is a very interesting open question. Although our results do not directly address this problem, we believe that some techniques from augmented flattening could potentially be helpful in solving this problem as well.
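For concreteness, the evolving-distribution scenario described above might look like the following sketch: the pooled history of past samples serves as the prediction $\hat p$, and its TV distance to the current distribution plays the role of the accuracy parameter $\alpha$. (The helper names are illustrative.)

```python
from collections import Counter

def empirical_prediction(history):
    """p_hat: the empirical distribution of pooled past samples drawn
    from p_1, ..., p_{t-1} (e.g., last year's search queries)."""
    counts = Counter(history)
    total = sum(counts.values())
    return {x: c / total for x, c in counts.items()}

def tv_distance(p, q):
    """Total variation distance between two finitely supported distributions."""
    support = set(p).union(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)
```

If the per-step drift is small, `tv_distance(empirical_prediction(history), p_t)` stays small, so the history-based $\hat p$ satisfies the prediction guarantee needed by the augmented testers.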
**Studying two predictions**
Thank you for suggesting an interesting open direction. We did not consider this case in our paper. First, we would like to mention that our upper bound still holds when two predictions are provided. It is not too difficult to adapt our augmented flattening to incorporate both $\hat{p}$ and $\hat{q}$. The sample complexity in this setting is proportional to the minimum of $\alpha_p^{1/3}$ and $\alpha_q^{1/3}$. However, the lower bounds do not hold. As you pointed out, one can hope to achieve better sample complexity if the distance between the two predictions is more than $\alpha_p + \alpha_q + \epsilon$. In particular, estimating the probability of the Scheffé set gives us either a 'reject' or 'inaccurate information' result for one of the distributions. On the other hand, when $\hat{p}$ and $\hat{q}$ are identical, our lower bound indicates that one cannot hope for better sample complexity. It remains an open question to study the case where the distance between the two predictions is smaller than $\alpha_p + \alpha_q + \epsilon$, but larger than zero.
**Connection to testable learning**
Yes, in both cases, the algorithm is given the option of declining to solve the problem when the underlying assumption does not hold, although there is no notion of "augmentation" in that line of work.
Rebuttal: We thank all the reviewers for their feedback. We will integrate all your editorial comments regarding the presentation of our paper in the future version. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Interpolating Item and User Fairness in Multi-Sided Recommendations | Accept (poster) | Summary: The paper addresses the challenge of balancing multiple stakeholder interests in online recommendation systems, which include platforms, items (sellers), and users (customers). To tackle this challenge, the authors introduce a novel fair recommendation framework, FAIR, formulated as a constrained optimization problem to balance these competing interests. They further explore this problem in a dynamic online setting, introducing FORM, a low-regret algorithm that simultaneously learns user preferences and enforces fairness in real-time recommendations. Through theoretical analysis and a real-world case study, the paper demonstrates that FORM can maintain platform revenue while achieving fairness for both items and users, contributing to more sustainable and inclusive digital platforms.
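A toy instance of the constrained-optimization viewpoint (the constraint form, the numbers, and the parameter `delta_I` are illustrative assumptions, not the paper's exact Problem (FAIR)): maximize platform revenue over recommendation probabilities while guaranteeing each item at least a `delta_I` fraction of a fair-share benchmark.

```python
from scipy.optimize import linprog

r = [5.0, 3.0, 1.0]        # platform revenue per recommended item (toy numbers)
fair_share = [1 / 3] * 3   # e.g., equal visibility as the fairness benchmark
delta_I = 0.6              # item-fairness parameter in [0, 1]

res = linprog(
    c=[-ri for ri in r],                        # linprog minimizes, so negate revenue
    A_ub=[[-1, 0, 0], [0, -1, 0], [0, 0, -1]],  # encodes x_i >= delta_I * fair_share_i
    b_ub=[-delta_I * s for s in fair_share],
    A_eq=[[1, 1, 1]],                           # recommendation probabilities sum to 1
    b_eq=[1],
    bounds=[(0, 1)] * 3,
)
x = res.x  # optimal fair recommendation distribution
```

With these numbers, each item is guaranteed mass 0.2 and the residual 0.4 goes to the highest-revenue item, so x = (0.6, 0.2, 0.2); raising `delta_I` trades revenue for fairness, which is the "price of fairness" trade-off discussed in the reviews below.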
Strengths: The paper tackles the practically relevant and challenging problem of balancing platform revenue, user fairness, and item fairness
The paper proposes an online algorithm to learn parameters of the problem while ensuring some given fairness constraints, not merely assuming that the parameters are given as done in most existing papers
The paper provides sound and thorough regret analysis of the proposed algorithm
The empirical results on real-world data shows that the proposed method effectively and flexibly control user-item fairness while maximizing the platform revenue
Weaknesses: This is not a weakness specific to this paper, but choosing the right fairness notion and the values of $\delta$ is non-trivial
Even though the paper performs offline experiments on public real-world datasets, it fails to perform any online production A/B test, so it might be risky to be overly optimistic about the empirical results of the paper until the proposed approach is verified in an online environment and provides some tangible business benefits
Technical Quality: 3
Clarity: 4
Questions for Authors: How can we reasonably choose the right fairness notion and the values of $\delta$ in practice?
Do the authors have any plan to deploy and evaluate the proposed algorithms in some real production environment?
Confidence: 2
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback! We'd like to address your questions below.
**Regarding choosing the right fairness notion,**
- Thank you for raising this insightful point! Indeed, choosing the right fairness notion is not trivial and very much depends on the context, such as the type of online platform and the stakeholder outcomes that matter. Here are some of the most important considerations:
- **Understanding stakeholders’ needs/outcomes.** One of the most crucial steps here is understanding the desired outcomes for items and users. Our framework, Problem (FAIR), is designed to handle various outcome functions, such as revenue, marketshare, and visibility, which can differ across platforms. For example, a video streaming platform might aim to ensure fair *visibility* for both independent content creators and popular studios, while an e-commerce platform might focus more on fair *marketshare/revenue* for small sellers versus large brands.
- **Assessing how each fairness notion aligns with the stakeholders' needs and the platform’s objective.** Each fairness notion has a different implication, and platforms need to evaluate which best suits their goals and stakeholder needs.
For example, consider the following fairness notions from our Table 1:
- Maxmin fairness maximizes the outcome received by the most disadvantaged stakeholder. This can be ideal for video streaming platforms like Netflix or YouTube that wish to ensure independent and lesser-known content creators receive fair visibility alongside popular creators.
- K-S fairness ensures that each individual receives a fair share of his/her maximum attainable outcome. This can be suitable for platforms like LinkedIn or Indeed that wish to ensure fair opportunities for their job seekers relative to their qualifications and experience.
Platforms may also need to experiment to understand how different fairness notions impact their "price of fairness" (i.e., platform's revenue loss in enforcing fairness) before determining the right fairness notion. For instance, in our Amazon review case study, maxmin fairness appears to have a slightly higher price of fairness compared to K-S fairness (see Fig. 1b in Sec. 4 and Fig. 3b in Sec. F.2).
- **Regulatory considerations.** As we discussed in Sec. 1, one important motivation for imposing fairness in online recommendation decisions is the increasing regulatory action, such as the Digital Markets Act proposed by EU. Therefore, the choice of fairness notions can also depend on legislative or regulatory requirements. For example, since the Digital Markets Act calls for a fair and open digital marketplace, maxmin fairness can be a suitable option as it ensures even the most disadvantaged item/content creator receive a fair level of exposure.
- Our fairness framework is designed for a wide range of fairness notions and outcome functions, allowing it to be tailored to each platform's specific needs. We appreciate the reviewer’s question and will include the above discussion in the camera-ready version of our paper.
**Regarding how to choose the right fairness parameter $\delta_I, \delta_U$,**
- Please see (2) in our global response for all reviewers.
**Regarding online A/B test and deployment,**
- Thank you for your question! For now, the paper has a modeling and methodological focus. We believe that the offline experiments in our case study provide a strong initial validation for the efficacy of our approaches. The Amazon review dataset was also used for evaluation in prior works on multi-sided fairness (e.g. TFROM [63]).
- In our paper, we have also discussed how a platform can use its “price of fairness” along with online A/B testing to select the right fairness parameters, which can potentially inform the design of real-world A/B tests (see Sec C.1). Please also see (2) in our global response for all reviewers for a complete discussion on this topic.
At a high level, we propose that (i) a platform can leverage the concept of the "price of fairness” to identify a small range of fairness parameters to experiment with, ensuring that their fairness considerations do not negatively impact its key business objectives like revenue and marketshare. (ii) Upon determining a small subset of parameters for experimentation based on the desired fairness levels and acceptable price of fairness, a platform can then conduct A/B testing on segmented traffic to determine the best fairness parameters.
- To facilitate potential real-world testing and deployments, we have presented this work to several online platforms and industry leaders and received positive feedback. While it is not known to us how these platforms intend to use our methodologies in their respective setups, we are open to collaborations with industry partners who have the necessary infrastructure to facilitate online A/B testing and deployments. We will mention this as an exciting future direction in our conclusion section.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their useful clarification. I will maintain my positive assessment. | Summary: This paper focuses on an important and interesting research direction: achieving multi-sided fairness for recommendation. Specifically, the authors aim to answer two research questions: (1) What constitutes a fair recommendation within a multi-sided platform? and (2) How would a platform implement a fair recommendation in a practical online setting? The contributions of this work include a novel multi-sided perspective that achieves within-group and across-group fairness among users/items, and also an online recommendation algorithm for multi-sided fairness. Empirical and theoretical results show the advantage of the authors' contributions over prior baselines, especially in online settings.
Strengths: 1. The motivation of this paper is clear, and it is well-written. The theoretical proof appears solid.
2. Prior works on fairness-aware recommendation generally focus on either a single side or an offline setting. This work combines both aspects and proposes a novel framework and algorithm, which is useful in real-world settings.
3. The authors compared their method with some well-studied offline and online fairness-aware recommendation methods, demonstrating that their proposed algorithm achieves a better tradeoff between fairness and revenue.
Weaknesses: 1. After doing a quick search on "multisided fairness + recommendation", it seems some related works [1, 2, 3] are missing. Even though the setting might be different, the authors should mention and discuss them somewhere in the paper since the topics seem related.
2. An efficiency comparison on the algorithm running time is missing.
Ref:
[1] Naghiaei et al. CPFair: Personalized Consumer and Producer Fairness Re-ranking for Recommender Systems, SIGIR, 2022.
[2] Wu et al. Joint Multisided Exposure Fairness for Recommendation. SIGIR, 2022.
[3] Wu et al. A multi-objective optimization framework for multi-stakeholder fairness-aware recommendation. ACM Transactions on Information Systems, 2022.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can the proposed method be applied to the scenario in which each user/item has multiple sensitive attributes? For instance, there may be multiple facets in describing a user, in terms of race, gender, occupation, age, income level, ... The number of facets here may be large sometimes, and it is hard to treat a specific combination of multiple facets as a single type. How does this algorithm deal with such "intersectional fairness" (e.g., to achieve fairness not only to "black"/"women"/"nurse"/"elder" separately, but also to the intersectional demographic groups)?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1. In the checklist, the authors claim that "We have clearly stated the assumptions needed for our theoretical results and the setups adopted for our numerical experiments" and "There is no negative societal impact of the work performed". However, these claims are too strong, and I do not think the limitations and negative societal impact of this work are clearly or comprehensively discussed. For instance, one of the important challenges in recommendation is the "cold-start" problem; the authors did not discuss whether their algorithm is applicable in this scenario, or whether this is a limitation and future work. Also, one potential negative societal impact to me is the leakage of user/item sensitive information, as the algorithm needs to use the "type" information. This is also not discussed or mentioned.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback! We'd like to address your questions below.
**Regarding related works,** thank you for pointing us to these works! We'll add them and the following discussion to our related works section.
- **A short summary of [1,2,3]**. [1] focuses on achieving producer and customer fairness in re-ranking decisions, where fairness is promoted w.r.t. item exposure and user nDCG. In contrast, our framework is not confined to a single outcome or fairness notion. [2] proposed and compared a family of joint multi-sided fairness metrics that tackle systematic biases in content exposure, while our main focus is on proposing a new fairness framework/algorithm; [3] proposed a multi-objective optimization framework which jointly optimized accuracy and fairness for consumers and producers. We proposed a different framework that optimizes the platform's revenue while incorporating fairness constraints for items/users.
- **Key differences.** As acknowledged by the reviewer, our setting differs from [1,2,3] in several ways: (i) Our work proposes a constrained optimization framework that not only imposes fairness for multi-stakeholders (items/users), but also takes the platform’s revenue into consideration; (ii) Our framework is not confined to a single fairness/outcome notion; (iii) Most importantly, unlike works that solely focused on multi-sided fairness, we recognize the challenge of jointly handling fairness and learning (Sec. 3.2), and propose an algorithm with theoretical guarantees (Sec. 3.4). The tradeoff between learning and fair recommendation, to the best of our knowledge, has not been studied in prior works.
**Regarding efficiency comparison on algorithm runtime,**
- Please see (1) in our global response for a complete discussion on our algorithm’s complexity & scalability considerations.
- Overall, the complexity for our algorithm will be $O((MN)^{2+1/18})$ at each round when we solve (FAIR-RELAX) and $O(N)$ at each round when we simply recommend and update user preference. As stated in our global response, Problem (FAIR-RELAX) need only be solved for $O(\log(T))$ rounds to maintain the same theoretical guarantee. In this way, our runtime will be greatly improved.
- In comparison to the two most relevant baselines: the per-round runtime of FairRec is $O(MN)$, and the per-round runtime of TFROM is $O(NM^2 \log M)$ in the offline setting and $O(N)$ in the online setting. While our runtime can be higher than these baselines during the rounds when (FAIR-RELAX) is solved, this is justified by the additional capabilities of our algorithm. Unlike the baselines, our method (i) maintains the platform’s revenue in addition to achieving item/user-fairness, and (ii) balances the processes of learning and making fair decisions, attaining strong theoretical guarantees. These enhancements lead to better performance (as validated by our case study in Sec. 4), all while maintaining a polynomial runtime under common fairness notions/outcomes.
**Regarding users having multiple sensitive attributes,**
- We’d like to first clarify that in our context, we expect user types to be primarily determined by user preferences rather than sensitive attributes such as gender, ethnicity, etc. Unlike scenarios like loan applications, where fairness involves ensuring equal opportunities across demographic groups, user-fairness in our personalized recommendation context focuses on ensuring users see items they prefer the most, rather than purely revenue-maximizing items (see our definition of user-fairness in Sec. 2.2).
- With this in mind, when it comes to determining user types, we can cluster users based on their preferences, rather than directly grouping them based on genders or ethnicity. In real-world recommendation systems, platforms often have access to insensitive features such as the type of devices (Mac versus PC), zip code, as well as users’ past purchase history that can be used for clustering and determining user types. (See our response to reviewer kac1 for a complete discussion on how user type can be determined.) This is also how we determined user types in our case study (see Sec. 4). In this way, we can also incorporate multiple facets by clustering users based on comprehensive preference profiles rather than specific combinations of sensitive attributes.
**Regarding the cold start problem,**
- This can be readily addressed by design of our algorithm. As detailed in Sec. 3.1, FORM does not require any initial knowledge of user preferences or arrival rates. Instead, it employs an exploration mechanism to gather sufficient user data (see Sec. 3.3 for details on our “randomized exploration”). In practice, when new items or users join the platform mid-way through the time horizon, we can similarly enforce an initial exploration for them by having the algorithm recommend the new item with an additional small probability $\epsilon$. The amount of exploration can then diminish as we accumulate more data for these new items or users.
**Regarding leakage of user/item information,**
- Thank you for raising this important issue! As discussed previously, our user types are primarily determined by user preferences rather than sensitive attributes. To alleviate concerns on sensitive information, user clustering can be performed based on insensitive features such as types of devices, zip codes, past purchase histories, etc. As for items, we can categorize them based on publicly available attributes such as keywords, types, prices without the use of sensitive attributes, and enforce fairness w.r.t. items within the same category (see (1) of our global response for a discussion of applying our framework to the last stage of recommendation pipeline). If there is concern about using specific attributes or features, our algorithm can also be configured to exclude such information during the user clustering process. We'll make sure to include a discussion on this issue in the conclusion section.
---
Rebuttal Comment 1.1:
Comment: Thanks the authors for the responses during the rebuttal. I keep my positive score. | Summary: This paper presents a novel fairness recommendation framework called FAIR, and an online algorithm called FORM for solving multi-stakeholder fairness problems. The main contributions include 1. The FAIR framework is proposed to balance the platform revenue and the fairness of multi-stakeholders (items and users) through the form of constrained optimization problems. The framework can flexibly define fair solutions for items/users and adjust the trade-off between platform goals and stakeholder fairness.2. A FORM algorithm is designed for simultaneous learning and fair recommendation in online settings. The algorithm balances learning and fairness by relaxing the fairness constraints and introducing random exploration.3. Theoretical analyses demonstrate that the FORM algorithm achieves sub-linear regret in terms of both gain and fairness.4. A case study on Amazon review data verifies the effectiveness of the proposed method. The paper solves the fairness problem in multi-party recommender systems, and the proposed method can flexibly weigh the interests of multiple parties while maintaining the platform's revenue and demonstrates good performance in both theory and experiment. This is an important research direction, which is significant for the fairness of practical recommender systems.
Strengths: 1. The needs of multiple stakeholders (users, items) are considered simultaneously, and fairness parameters can be adjusted as needed.
2. Theoretical bounds on algorithmic performance are effectively proven.
3. It has a degree of practicality, with case studies demonstrating application to real data.
Weaknesses: 1. The computational complexity of the algorithm, especially its scalability in large-scale systems, is not discussed in detail in the paper.
2. The choice of δ_I and δ_U may require a lot of experiments to find suitable values.
3. The model assumes that the user type is known, which may not always hold in practical applications.
4. The paper mainly focuses on short-term fairness and does not delve into the impact of long-term fairness.
5. The paper mainly uses gain and fairness regret as evaluation metrics; more metrics may be needed to fully evaluate system performance.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. The paper assumes that user types are fixed, but in practice, user preferences may change over time.
2. The universality of the fairness definition: is the definition of fairness proposed in the paper applicable to all types of recommender systems?
3. The case study uses only one category of data from Amazon; is this sufficient to demonstrate the universality of the approach?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: please refer to "weakness"
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful feedback! We’d like to address your questions below.
**W1. Regarding complexity and scalability,**
- Please see (1) in our global response.
**W2. Regarding choosing fairness parameters,**
- Please see (2) in our global response for a detailed answer to this question.
- In our global response, we discussed how a platform can select fairness parameters based on its “price of fairness” (PoF) (see Sec. C in our paper for more details). We also explained how a platform can identify a small range of promising parameters for A/B testing based on desired fairness levels and acceptable PoF, thus avoiding extensive experimentation.
**W3. Regarding unknown user types,**
- In real-world recommendation systems, platforms often have access to features that can be used to determine user types, even when sensitive features (e.g., gender, race) are restricted due to regulations. For example, Orbitz.com [https://tinyurl.com/yatek26w] categorizes users based on the type of device (Mac versus PC), while other platforms may use zip codes for user categorization ["LARS: A Location-Aware Recommender System"]. Historical data, such as purchase history, can also be used to cluster users, ensuring that users within each cluster have similar preferences, while users across different clusters exhibit distinct tastes. We'll include this discussion in our setup section.
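As an illustration of the clustering step described above, here is a plain k-means sketch over hypothetical insensitive feature vectors; the feature choice, the number of types `k`, and the initialization are modeling decisions left to the platform, not part of the paper's method.

```python
import random

def kmeans_user_types(users, k, iters=20, seed=0):
    """Cluster users into k 'types' from insensitive feature vectors
    (e.g., normalized purchase-history counts), so users within a
    cluster share similar preferences. Plain Lloyd's algorithm,
    for illustration only."""
    rng = random.Random(seed)
    centers = rng.sample(users, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for u in users:
            # assign each user to the nearest center (squared distance)
            nearest = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(u, centers[c])),
            )
            groups[nearest].append(u)
        # recompute centers as coordinate-wise means of each group
        centers = [
            tuple(sum(dim) / len(g) for dim in zip(*g)) if g else centers[j]
            for j, g in enumerate(groups)
        ]
    return centers, groups
```

The resulting groups would then serve as the user types over which the fairness constraints are imposed.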
**W4. Regarding long-term fairness,**
- This is a great point! Our method aims to calibrate the recommendation policy within a shorter time period (e.g., a month) when user preferences are relatively fixed. By imposing item/user-fairness over this shorter time period, we expect that in the long term, our approach can contribute in the following:
- **Long-term fairness.** By introducing more diversity in our recommendations, we can help mitigate the “winner-takes-all” phenomenon often seen in online platforms [https://tinyurl.com/4efxc54k].
- **Long-term growth.** The platform can attract more items (while retaining the existing ones) and more users (including those with niche tastes), due to its efforts to maintain fairness.
- We leave formally quantifying such long-term effects as an exciting future direction and believe that our work lays a good foundation for such a study. We’ll discuss this in our conclusion section.
**W5. Regarding alternative metrics,**
- In our framework, we have considered various evaluation metrics:
- For items/users: Our framework accommodates different outcome metrics. E.g., items can consider visibility, marketshare, or revenue as their outcomes. In our case study, we verified that our fairness constraints are consistently met, regardless of the chosen metric (see Sec. F.2).
- For the platform, we not only evaluated the convergence of revenue regret (Fig 1a) and its revenue gain (Fig. 1b) but also examined its price of fairness under different fairness parameters (see Fig. 2).
- We appreciate the reviewer’s suggestion and acknowledge that our evaluation can be further strengthened by incorporating additional metrics beyond the main objectives of our stakeholders, such as user satisfaction, user retention, and diversity of recommendations. These metrics can provide a more holistic view of our framework’s short/long-run impact on the system and are often best obtained through online experiments with real traffic. We’ll mention this in our future directions section.
**Q1. Regarding changing user types/preference,**
- As mentioned in our response on long-term fairness, our method attempts to calibrate the recommendation policy within the short period when user preferences are considered fixed. Over the long term, as user and item attributes evolve, adaptive fairness notions might need to be developed, which is something we mentioned in our future directions section (Sec. 5). We believe that our framework for multi-sided fairness under fixed user preference serves as a good starting point for such future studies.
- One possible extension of our framework/method to accommodate changing user types/preferences can be inspired by existing adaptive partitioning methods (e.g.,the zooming method in “Contextual Bandits with Similarity Information”). This approach would involve using a meta-algorithm like the zooming method to update user types periodically at a slower pace, while our algorithm FORM can be applied as an inner algorithm that performs fair recommendation under the current user types/preferences. Exploring such ideas could be another interesting future direction.
**Q2. Regarding applicability of fairness notions,**
- Our proposed framework is meant to encapsulate a wide range of recommender systems, including e-commerce sites, social media, video streaming sites, etc. As remarked in Sec 2.4, we can even accommodate platforms with additional stakeholder groups (e.g., Doordash drivers, Airbnb hosts) by similarly incorporating additional fairness constraints. The generality of our framework allows these platforms to choose any fairness notions/outcomes that best suit their short-term and long-term goals (see Table 1 for example fairness notions; see Sec 2.3 for example outcome functions).
- Determining the right fairness notion largely depends on the type of online platform and the desired outcomes for its stakeholders. See our response to reviewer pMvw for a more detailed discussion.
**Q3. Regarding universality of our approach,**
- Here, we worked with the clothing category of the Amazon review data because it was also used for evaluation in TFROM [63], which was one of the baselines we compared with. In addition to Amazon review data, we also performed experiments with the MovieLens data in a movie recommendation setting and obtained similar results. These results were previously omitted due to space constraints but are now included in our global response to demonstrate the broad applicability of our method. Please see the attached PDF in our global response.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the detailed response. I keep my positive score. | Summary: The paper introduces a fair recommendation framework, FAIR, for balancing the interests of multiple stakeholders in recommendation systems, namely the platform, sellers, and users. The framework is formulated as a constrained optimization problem that addresses fairness concerns for both items and users in a dynamic online setting. The paper proposes a no-regret algorithm, FORM, which ensures real-time learning and fair recommendations. The effectiveness of the framework and the algorithm is demonstrated through theoretical analysis and a case study on real-world data.
Strengths: The paper addresses the often-overlooked complexity of balancing the interests of multiple stakeholders in optimizing recommendation systems. The proposed FAIR framework offers a solution by ensuring fairness for both items and users while maintaining platform revenue. The extension to a dynamic online setting where data uncertainty is present is reasonable. The FORM algorithm's ability to learn and adapt in real-time is a merit for practical applications in environments where user behavior and preferences are nonstationary. This work also provides a robust theoretical foundation for the proposed framework and algorithm, including proofs of sub-linear regret bounds.
Weaknesses: While the paper presents promising results on a dataset with 30 items and 5 user types, it does not thoroughly address how the framework and algorithm would scale to much larger datasets with a large set of items and more diverse user profiles. My main concern is the proposed solution's scalability, a critical factor for real-world deployment: the method involves solving a constrained optimization problem at every time step, which could be complex and computationally intensive. This complexity might limit the feasibility of implementation for platforms with limited computational resources. I urge the authors to include some discussion about the computational complexity of the proposed solution and also the limitations regarding scalability.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. do you think the regret upper bound given in Theorem 3.1 is optimal? If it is not, any idea how can we improve upon this result?
2. If the platform wants to trade off between user fairness and item fairness, how does it need to set the parameters delta^{I} and delta^{U}? The current fairness regret simply cares about the maximum of R^I and R^U; what if we care about both, e.g., R^I + \lambda R^U? Is the proposed method still applicable in this situation?
3. I'm wondering if there could be any intrinsic tradeoff between the revenue guarantee and fairness guarantee.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback! We’d like to address each of your questions below.
**W1. Regarding computational complexity and scalability,**
- Please see (1) in our global response.
**Q1. Regarding our sublinear regret upper bound**, we’d like to highlight the following:
- **Challenges of designing no-regret algorithms in our setting.** While our algorithm essentially solves a constrained optimization problem at each round, it is important to note that one cannot directly apply other off-the-shelf algorithms for online constrained optimization with bandit feedback, including
- (i) Bandits with knapsack algorithms [37,15, 56]. In these works, their constraint is a budget constraint that can be directly evaluated, which is critical in helping attain a known optimal per-round regret of $O(T^{-1/2})$ in their setting.
- (ii) Gradient descent without a gradient (Flaxman et al. 2004, arXiv:cs/0408007), which assumes that the optimizer can analytically evaluate whether all constraints are satisfied and attains $O(T^{-1/4})$ per-round regret.
However, our setting is different from and more challenging than these off-the-shelf methods, as we can neither determine the amount of constraint violation, nor analytically evaluate the feasibility of our fairness constraints (see discussion in Sec. 3.2). Despite the extra challenge, our $O(T^{-1/3})$ regret upper bound is already superior to the $O(T^{-1/4})$ from the latter work.
- **The optimal attainable regret remains an open question.** To the best of our knowledge, the optimal attainable regret for an online optimization problem with uncertain constraints like ours, where at each iteration we are only allowed a one-time bandit feedback (i.e., the purchase decision), remains an open problem. We appreciate the reviewer’s question and will mention this as an exciting future direction.
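As a quick sanity check on the rate comparison in the first point above, $O(T^{-1/3})$ per-round regret indeed decays faster than $O(T^{-1/4})$; the horizon $T = 10^6$ below is an arbitrary illustrative choice (constants ignored):

```python
# Illustrative comparison of the two per-round regret rates (constants ignored).
T = 10**6
ours = T ** (-1 / 3)      # our bound: 0.01 at T = 1e6
flaxman = T ** (-1 / 4)   # Flaxman et al.'s bound: ~0.0316 at T = 1e6
```

So at a horizon of one million rounds, the per-round regret bound is roughly 3x tighter.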
**Q2.1 Regarding the tradeoff between item and user fairness,**
- In our framework, there is no inherent tradeoff between item and user fairness. Since both are enforced as fairness constraints, increasing the level of item fairness (i.e., $\delta_I$) does not reduce the level of user fairness (i.e., $\delta_U$). The only thing to note is that if both $\delta_I, \delta_U$ approach 1, Problem (FAIR) might become infeasible, in which case the platform would need to decide whether to reduce the level of item fairness or user fairness. However, infeasibility occurs rarely: in our case study, Problem (FAIR) is only infeasible as $\delta_U \to 1$, representing a strictly user-fair solution (see Fig. 2 in Sec C.1, where we evaluate Problem (FAIR) under different parameters).
**Q2.2 Regarding choosing parameters $\delta_I, \delta_U$,**
- Please see (2) in our global response.
**Q2.3 Regarding "What if we care about $R^I+\lambda R^U$",**
- **Clarification on our metric.** We believe that there might be some misunderstanding regarding our framework's main objective and would like to first clarify the following. In contrast to a multi-objective optimization framework that **maximizes outcomes** for items and users, our framework Problem (FAIR) focuses on **minimizing regrets**. In Def. 3.1, our fairness regret is defined as $R_F(T) = \max (R^I_F(T),R^U_F(T))$ because our goal is to find a regret upper bound that can simultaneously apply to **all items/users**. Here, the fairness regret $R_F(T)$ measures the maximum gap between the fair outcome and the time-averaged outcome for any item/user. Therefore, the $O(T^{-1/3})$ regret upper bound we established for $R_F(T)$ is an upper bound for the fairness regret of any item $i$ or user $j$. As a byproduct, for any linear combination of item/user fairness regrets ${regret}_i+\lambda{regret}_j$, the same regret upper bound always holds. Our sublinear regret bound means that for any item/user, our algorithm guarantees that their long-run average outcome will reach the desired proportion of their fair outcome.
- **Our framework’s advantage over multi-objective optimization problems.** We also remark that our Problem (FAIR) is a constrained optimization problem, not a multi-objective optimization problem that jointly maximizes outcomes for platform, items, users. This distinction is critical as it impacts how fairness is achieved—via constraints rather than via direct optimization of outcomes.
Such a framework enjoys several advantages, as stated in Remark 2.1. At a high level, our framework uses interpretable parameters $\delta_I,\delta_U$ that directly relate to the level of fairness for items/users, rather than maximizing an aggregate function like $rev + \sum\lambda_iO_i+\sum\lambda_jO_j$ where the choice of Lagrangian multipliers $\lambda_i,\lambda_j$ are less straightforward.
In our framework, if one wishes to impose fairness w.r.t. some linear combination of item/user outcomes, they can simply include a similar fairness constraint: $AO_i(x)+BO_j(x)\geq\delta\cdot(AO_i(f^I)+BO_j(f^U))$, where $f^I,f^U$ is the item-fair/user-fair solution and $x$ is the recommendation policy. All our methods/results continue to hold.
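To make the constrained formulation concrete, below is a minimal toy sketch of a revenue-maximizing policy under item-fairness constraints of the form $O_i(x)\geq\delta_I\cdot O_i(f^I)$, using visibility as the outcome and a uniform item-fair solution. The three-item instance, the specific numbers, and the use of SciPy's `linprog` (in place of the PuLP/CBC setup mentioned in our global response) are all illustrative assumptions, not the paper's actual (FAIR) instance:

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: 3 items, x[i] = probability of recommending item i.
revenue = np.array([5.0, 3.0, 1.0])    # per-item expected revenue (assumed)
fair_item = np.full(3, 1.0 / 3.0)      # item-fair solution f^I: uniform visibility
delta_I = 0.6                          # share of the fair outcome to guarantee

# Maximize revenue @ x  <=>  minimize -revenue @ x  (linprog minimizes),
# subject to: sum(x) = 1 and x[i] >= delta_I * fair_item[i] for each item.
res = linprog(
    c=-revenue,
    A_eq=np.ones((1, 3)), b_eq=[1.0],
    bounds=[(delta_I * f, 1.0) for f in fair_item],
)
x = res.x  # -> [0.6, 0.2, 0.2]: slack visibility goes to the top-revenue item
```

Here every item is guaranteed 60% of its fair (uniform) visibility, and the remaining probability mass goes to the highest-revenue item; raising `delta_I` toward 1 shrinks that slack, which is exactly the price-of-fairness tradeoff discussed under Q3.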
**Q3. Regarding tradeoff between revenue and fairness,**
- Our sublinear theoretical guarantees for **revenue regret** and **fairness regret** in Theorem 3.1 are always attainable under our framework/algorithm.
- There indeed exists a tradeoff between platform’s revenue and fairness. As we highlighted in Remark 2.2 and discussed in detail in Sec. C, this tradeoff can be quantified by platform’s price of fairness (PoF), i.e., its loss in revenue due to maintaining fairness. See Fig. 2 for an illustration of PoF in our Amazon case study. As we showed in Theorem C.1, the PoF can be quantified by (i) the magnitude of fairness parameters $\delta_I, \delta_U$ and (ii) the amount of misalignment in platform’s and its stakeholders’ goals. As discussed in (2) of our global response, a platform can use PoF as an indicator for picking the right fairness parameters.
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal
Comment: I thank the authors for their detailed response, which addresses most of my concerns. I maintain my score and weakly lean towards acceptance. | Rebuttal 1:
Rebuttal: We’d like to express our sincere gratitude to all reviewers for your insightful feedback! Below we’ve addressed several common questions from the reviewers. We’ll incorporate all discussions in our response into the revised version of the paper.
(1) **Regarding computational complexity and scalability,**
- **Complexity analysis.** For most commonly used item-fairness notions (e.g., maxmin fairness, K-S fairness, demographic parity; see Table 1) and item outcome functions (visibility, marketshare, revenue; see Sec 2.2), solving Problem (FAIR-RELAX) involves solving two linear programs with MxN variables, leading to a polynomial runtime of $\tilde{O}((MN)^{2+1/18})$. Consequently, each iteration of FORM has a worst-case complexity of $\tilde{O}((MN)^{2+1/18})$. However, practical implementations often achieve much better performance than this worst-case scenario using fast LP solvers (e.g., we use the CBC solver in PuLP for our case study).
- **Real-world scalability considerations.** Our algorithm can be further adapted with scalability in consideration.
- **No need to solve Problem (FAIR-RELAXED) at every round.** In real-world deployment, there is no need to solve the constrained optimization problem at every user arrival. Theoretically, solving the LP in only $\log(T)$ rounds already establishes the same theoretical guarantee, as shown in other works (e.g., “Optimal learning for structured bandits”). Practically, platforms can re-solve the problem after every X user arrivals or at X-minute intervals, while updating user data in real time, which removes most of the per-arrival computational overhead. In our MovieLens experiments (see attached PDF), we also solved (FAIR-RELAXED) only every 1k user arrivals, and the algorithm remains effective.
- **Applying our framework to the last stage of recommendation.** To apply our framework in real-world recommendation systems, it's not always necessary to solve a large-scale optimization problem. The recommendation process typically proceeds in stages, initially using a lightweight model to narrow down items based on keywords and filtering criteria. This allows us to focus on smaller subsets within the same category, defined by keywords, types, prices, etc., (e.g., "a silicone cake pan in the price range 10-20 usd") and maintain fairness among these smaller subsets, thus reducing the computational burden. Our fairness framework is also particularly impactful at this final stage, where items with similar qualities or features now compete for visibility/revenue, and a purely revenue-maximizing solution would thus be extremely unfair.
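The batched re-solving idea above can be sketched as a loop that refreshes the policy only every X arrivals while still updating empirical user data at every arrival. The stub solver, the Bernoulli click model, and X = 100 below are illustrative assumptions, not the actual FORM algorithm:

```python
import random

def solve_fair_relaxed(clicks, shown):
    """Stub standing in for solving Problem (FAIR-RELAXED) over the
    current empirical estimates; a real solver would return a full
    recommendation policy rather than a single item."""
    rates = {i: clicks[i] / max(shown[i], 1) for i in clicks}
    return max(rates, key=rates.get)

random.seed(0)
X, T = 100, 1000                 # re-solve cadence and horizon (assumed)
items = [0, 1, 2]
clicks = {i: 0 for i in items}
shown = {i: 0 for i in items}
policy, solver_calls = items[0], 0

for t in range(T):
    if t % X == 0:               # solve the optimization only every X arrivals
        policy = solve_fair_relaxed(clicks, shown)
        solver_calls += 1
    shown[policy] += 1           # user data still updates at every arrival
    clicks[policy] += random.random() < 0.3
```

Here the solver runs only T // X = 10 times over 1,000 arrivals instead of 1,000 times, which is the computational saving described above.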
(2) **Regarding how to determine fairness parameters $\delta_I$ and $\delta_U$,**
There are two main factors in determining the right fairness parameters: (i) the extent of fairness needed for items/users, and (ii) the "price of fairness" the platform is willing to pay.
- **$\delta_I, \delta_U$ are interpretable, tunable handles.** As discussed in Sec. 2.4, in our framework, $\delta_I, \delta_U$ are tunable handles that determine how much share of the fair outcome a platform would like to ensure for items/users respectively. For instance, an e-commerce platform that values high user retention might choose to impose higher user fairness to ensure user satisfaction.
- **“Price of fairness” measures the tradeoff between a platform’s revenue and fairness.** As highlighted in Remark 2.2 and discussed in detail in Sec. C, a platform’s “price of fairness” (PoF) measures a platform’s revenue loss in maintaining fairness for its stakeholders. Understanding its PoF under different parameters $(\delta_I, \delta_U)$ is crucial as a platform needs to also understand the cost of implementing fairness constraints. We show in Theorem C.1 that the upper bound of the PoF grows roughly linearly with (i) the amount of misalignment in the platform’s and its stakeholders’ objectives, (ii) the fairness parameters $\delta_I, \delta_U$.
Having these in mind, in Sec C.1 we’ve also illustrated how to determine the right fairness parameters using our case study on Amazon review data.
- **Insights from the case study.** In Fig. 2, we explored the PoF under various $\delta_I, \delta_U$ on Amazon review data. We found that $\delta_U$ has little impact on PoF as the user-fair constraints are not binding, while $\delta_I$ impacts the PoF in a roughly piecewise linear manner. This suggests that under Amazon review data, the platform can achieve high user-fairness at little cost, while it can gauge the amount of tradeoff between its revenue and item fairness (the slope between PoF and $\delta_I$) by adjusting $\delta_I$ incrementally and performing experimentation.
- **Guidelines to set $\delta_I, \delta_U$.** Similar methods can be applied to different online platforms in their respective contexts. A platform should first identify the binding constraints by determining which stakeholders experience the most unfairness under the current recommendation policy. Then, using the piecewise linear relationship, it can gauge the amount of tradeoff between binding fairness constraint and its PoF. Based on the desired fairness levels and acceptable PoF, the platform can narrow down the range of fairness parameters to experiment with. Once a small subset of promising fairness parameters is identified, a platform can conduct online A/B tests by splitting its traffic (e.g., 10-20%) to experiment with these parameters in parallel. This allows the platform to pick the most effective fairness parameters without extensive experimentation on all possible sets of parameters.
(3) **Additional Experiments on MovieLens Data.** In response to reviewer kac1’s question, in the attached PDF we’ve included experiments on the MovieLens data to validate the efficacy of our framework in an alternative setting (movie recommendation). The results are consistent with our Amazon case study, showing our method's effectiveness in balancing platform revenue and stakeholder fairness.
Pdf: /pdf/6b4f30d3bef2602c5939705719ccbb41862e0b7b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SELMA: Learning and Merging Skill-Specific Text-to-Image Experts with Auto-Generated Data | Accept (poster) | Summary: The paper proposes a workflow in which synthetic text-to-image data is generated in order to improve the faithfulness of T2I models. Specifically, they generate prompts with LLMs, then images with T2I models, and finally fine-tune pre-trained T2I models with LoRA fine-tuning. They fine-tune multiple LoRA experts and then merge them by merging parameters. On a limited set of evals, the proposed model achieves good results.
Strengths: The paper addresses the faithfulness of text-to-image models. With the increasing application of T2I models across many domains, faithfulness becomes an increasingly important research topic in both industry and the academic community.
I appreciate the discussions of related works and how the paper at hand differs from prior art.
While the paper motivates its contributions with faithfulness, the proposed workflow is actually independent of faithfulness itself. Faithfulness is only a side effect of the proposed method, whose main focus is multi-task fine-tuning for T2I models. As such, the impact could be even bigger if the method were applied to other challenges in the field of T2I.
Weaknesses: The strength mentioned above is also a weakness of the paper. The paper is motivated by faithfulness. However, the proposed method tackles faithfulness at best as a side effect. While this means that the method might be more general, it also negatively impacts the clarity of the paper and ease of understanding. It might be better to just focus on the multi-task fine-tuning task instead of the impact of individual tasks or side-effects.
Technical Quality: 3
Clarity: 3
Questions for Authors: Given the complex setup of the proposed workflow, there is a large number of design decisions to make. Do you have guidance, experiments which part of the workflow is the most sensitive with respect to downstream results, e.g., LLM choice, in-context example selection, LLM prompt choice, T2I choice, LORA parameters, etc.?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The limitations are only discussed in the appendix. I think it would be important to have a comprehensive discussion of limitations in the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable feedback. Below we address your questions with further clarifications and experiments.
> **W1. The strength mentioned above is also a weakness of the paper. The paper is motivated by faithfulness. However, the proposed method tackles faithfulness at best as a side effect. While this means that the method might be more general, it also negatively impacts the clarity of the paper and ease of understanding. It might be better to just focus on the multi-task fine-tuning task instead of the impact of individual tasks or side-effects.**
We would like to clarify that our framework is not only motivated by text faithfulness but also designed to directly tackle and improve multiple aspects (i.e., skills) of faithfulness. To improve multiple different aspects of faithfulness, our framework (1) automatically generates datasets for teaching faithfulness in different aspects and (2) obtains a T2I model with better text faithfulness, via skill-specific expert learning and expert merging.
> **Q1. Given the complex setup of the proposed workflow, there is a large number of design decisions to make. Do you have guidance, experiments which part of the workflow is the most sensitive with respect to downstream results, e.g., LLM choice, in-context example selection, LLM prompt choice, T2I choice, LORA parameters, etc.?**
Thanks for the question. Below we provide ablation results of two design choices: Skill-specific LoRA merging and prompt generator LLM choices. Some of the analysis are from paper Table 2 and Table 9.
(1) First, we show that skill-specific LoRA merging is crucial for mitigating knowledge conflict and improving text faithfulness of T2I models. Merging skill-specific experts during inference outperforms single-LoRA by 3.3% on the DSG benchmark. Besides, merging skill-specific experts also works better than fine-tuning a single LoRA with more parameters (i.e., larger rank), demonstrating the effectiveness of learning separate LoRA to avoid knowledge conflict.
| Approach | LoRA Rank | DSG |
|--------------|-----------|------|
| SDv2 | | 70.3 |
| Single LoRA | 128 | 74.4 |
| Single LoRA | 256 | 74.9 |
| Single LoRA | 640 | 71.5 |
| LoRA Merging (ours) | 128 | **77.7** |
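For intuition, the merging referenced above can be sketched as folding each skill-specific low-rank update $B_k A_k$ into the frozen base weight. Uniform averaging of the deltas and the toy dimensions are illustrative assumptions here and may differ from SELMA's exact merging rule:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank, K = 8, 6, 2, 3        # toy sizes (the paper uses rank 128)

W0 = rng.normal(size=(d_out, d_in))      # frozen base T2I weight matrix
# K skill-specific LoRA experts, each a low-rank update delta_k = B_k @ A_k
experts = [(rng.normal(size=(d_out, rank)),   # B_k
            rng.normal(size=(rank, d_in)))    # A_k
           for _ in range(K)]

def merge_loras(W0, experts):
    """Merge skill-specific experts by uniformly averaging their deltas
    (an illustrative merge rule, not necessarily SELMA's exact one)."""
    delta = sum(B @ A for B, A in experts) / len(experts)
    return W0 + delta

W_merged = merge_loras(W0, experts)      # single weight matrix used at inference
```

Because the merge happens once in weight space, inference then runs at the same cost as the base model, with no router or per-step expert selection needed at generation time.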
(2) Second, we show the effectiveness of stronger prompt generator LLMs. We collect 5K skill-specific prompts with Llama-3 and GPT-3.5, and fine-tune SDXL with self-generated images. The table below shows that learning from prompts generated with GPT-3.5 outperforms learning from prompts generated with Llama-3 (80.2 vs. 78.6 on DSG benchmark), demonstrating a stronger LLM can help improve the performance. However, note that both SDXL models fine-tuned with GPT-3.5 and Llama-3 prompts outperform the original SDXL baseline by a large margin, demonstrating the effectiveness of our pipeline even with weaker LLMs.
| Approach | Prompt Generator | DSG |
|----------|------------------|------|
| SDXL | - | 73.3 |
| SDXL | Llama-3 | 78.6 |
| SDXL | GPT-3.5 | **80.2** |
---
Rebuttal Comment 1.1:
Title: Final review
Comment: I would like to thank the authors for their response to both my questions and the questions by the fellow reviewers. After going over the other reviews and considering all the answers, I still believe that the contributions and novelty of the paper are sufficient to slightly pass the bar for acceptance. | Summary: This article goes through four stages: (1) collecting skill-specific prompts using in-context learning of LLMs, (2) self-generating image-text samples for diverse skills without the need for human annotation or feedback from reward models, (3) fine-tuning the expert T2I models on these datasets separately, and (4) obtaining the final model by merging experts from each dataset for efficient adaptation to different skills and mitigation of knowledge conflict in joint training. It is found that the model can optimize itself to an excellent level relying solely on the prompts from the LLM and the model's own generative capabilities.
Strengths: 1. This article demonstrates through experiments an interesting conclusion: the model, relying solely on the images generated by prompts, can still be trained on certain domains and achieve superior results without any additional annotation information.
2. This article shows that T2I models with a single LoRA struggle to accommodate distinct skills and writing styles from different datasets, and proposes that a training-free multi-LoRA fusion method, applied to LoRAs trained on different tasks, can effectively alleviate this issue.
3. This article demonstrates that even when using data generated by a weaker T2I model, it is still possible to enhance the performance of a stronger model.
Weaknesses: 1. The article does not provide a detailed explanation of the comparison in Table 2 regarding single LoRA. In the comparison between multi-LoRA and single LoRA, are the parameter counts of multi-LoRA and single LoRA the same, or is each LoRA within multi-LoRA equivalent in parameter count to single LoRA? Additionally, is the total training step count for multiple LoRAs in multi-LoRA equal to the training step count for single LoRA, or is the training step count for single LoRA consistent with that of each LoRA within multi-LoRA?
2. The fusion mechanism among multiple LoRAs does not seem to have been thoroughly explored through sufficient ablation experiments. If, after training each LoRA separately in multi-LoRA, a router is introduced to perform gating operations on multi-LoRA, similar to MoE-LoRA, would the effectiveness improve?
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable feedback. Below we address your questions with further clarifications and experiments.
> **W1-1. In the comparison between multi-LoRA and single LoRA, are the parameter counts of multi-LoRA and single LoRA the same, or is each LoRA within multi-LoRA equivalent in parameter count to single LoRA?**
The single LoRA that learns multiple skills has the same number of parameters for each skill-specific LoRA. Furthermore, we experiment with LoRA with more parameters by using higher ranks (256 / 640) and compare with our default LoRA merging (rank=128).
| Approach | LoRA Rank | DSG |
|--------------|-----------|------|
| SDv2 | | 70.3 |
| Single LoRA | 128 | 74.4 |
| Single LoRA | 256 | 74.9 |
| Single LoRA | 640 | 71.5 |
| LoRA Merging (ours) | 128 | **77.7** |
The table shows that increasing the rank of LoRA from 128 to 256 slightly improves the performance (i.e., 74.9 vs. 74.4), but further scaling the rank of the LoRA to 640 significantly drops the performance (i.e., 71.5 vs. 74.4). The performance drop when using LoRA with higher ranks (i.e., rank=640) is similar to the observation in Figure 3 in [A]. This result indicates the effectiveness of our skill-specific learning and merging of LoRA experts. We will add this additional result in the final version.
> **W1-2. Additionally, is the total training step count for multiple LoRAs in multi-LoRA equal to the training step count for single LoRA, or is the training step count for single LoRA consistent with that of each LoRA within multi-LoRA?**
We let the T2I model see each example the same number of times (i.e., the same number of epochs) in both settings.
For (1) “single LoRA for multiple skill-specific image-text pairs”, we train the LoRA for 25K training steps on 5K image-text pairs with batch size 64.
For (2) “learning skill-specific LoRAs for each skill-specific image-text pair and merging them” (ours), we train each LoRA for 5K training steps on each set of 1K image-text pairs with batch size 64.
> **W2. If, after training each LoRA separately in multi-LoRA, a router is introduced to perform gating operations on multi-LoRA, similar to MoE-LoRA, would the effectiveness improve?**
As described in L741-744, we freeze the learned LoRA weights and only fine-tune the gating function (i.e., router), which is the setting you mentioned. Additionally, we experiment with learning LoRA weights along with the router. As shown in the below table, learning LoRA weights along with the router achieves worse performance than when LoRAs were frozen. Our default expert merging method – LoRA merging – performs the best. We will add this additional result in the final version.
| Approach | DSG |
|------------------|------|
| SDv2 | 70.3 |
| MoE-LoRA (learning router and LoRAs from scratch) | 75.9 |
| MoE-LoRA (learning router only; LoRAs are frozen) | 77.2 |
| LoRA Merging (default) | **77.7** |
[A] He et al. (2024), "Sparse Matrix in Large Language Model Fine-tuning"
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: The authors solve my issues and I maintain the rating to accept it. | Summary: The paper analyzes the merging of skill-specific LoRA-finetuned models, trained on generated data and compares this approach to other training approaches (i.e. finetuning, PPO). Moreover, the paper compares the use of GT images and prompts to the use of generated data. The results suggest that this approach performs better than the baseline approaches. Finally, the paper shows a weak-to-strong generalization from weaker models.
Strengths: The presented approach shows clear advantages over other methods for alignment - both on text faithfulness and human preference. The paper is easy to read and to understand. Both the effectiveness of auto-generated data (text and images) as well as the merging of experts approach is well studied and ablated.
Weaknesses: _Weak-to-Strong Generalization_
The claim in this section doesn't mention the fact that the text here comes from an additional strong model. While the generative model is weaker, the use of an LLM here makes this experiment less convincing. One possible way to improve this section is by ablating here separately again the text and the images.
_Ablation of LoRA parameters_
Will a larger $r$ value for the LoRA remove the need for the merging? The paper can benefit from ablation of the size of the bottleneck in the LoRA, together with an experiment that shows that the merging is better than one larger LoRA, trained on all the generated data together.
_Optimization time comparisons_
An additional concern for the merged LoRAs approach is its optimization time. Can you provide the comparisons for the training time and iteration number for the presented approach and baselines?
Technical Quality: 4
Clarity: 4
Questions for Authors: See weaknesses.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Limitations are provided.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable feedback. Below we address your questions with further clarifications and experiments.
> **W1. Weak-to-Strong Generalization**
Regarding experiments with a weaker LM, we would like to bring your attention to Table 9, where we experiment with LLaMA 3 (8B), a publicly available LM known to be weaker than GPT-3.5. We find that fine-tuning SDXL with data generated with LLaMA 3 achieves 78.6% on average on DSG, improving the baseline by 5.3% and closing the gap to the GPT-3.5-based results. This demonstrates that SELMA is flexible and compatible even with weaker (but publicly available) prompt generator LMs. Besides, we further experiment with fine-tuning SDXL with data generated with both a weaker image generator (SDv2) and a weaker prompt generator (LLaMA 3). Our results in the table below show that this model achieves similar performance to the model fine-tuned with images generated with SDXL, demonstrating that weak-to-strong generalization holds with weaker data generators.
| Base Model | Prompt Generator | Image Generator | DSG |
|------------|------------------|-----------------|------|
| SDXL | - | - | 73.3 |
| SDXL | Llama3 | SDv2 | 78.0 |
| SDXL | Llama3 | SDXL | 78.6 |
| SDXL | GPT3.5 | SDv2 | 81.3 |
| SDXL | GPT3.5 | SDXL | 80.2 |
> **W2. Ablation of LoRA parameters**
Following your suggestion, we experiment with LoRA with different ranks (128 / 256 / 640) and compare with our default LoRA merging (rank=128).
| Approach | LoRA Rank | DSG |
|--------------|-----------|------|
| SDv2 | | 70.3 |
| Single LoRA | 128 | 74.4 |
| Single LoRA | 256 | 74.9 |
| Single LoRA | 640 | 71.5 |
| LoRA Merging (ours) | 128 | **77.7** |
The table shows that increasing the rank of LoRA from 128 to 256 slightly improves the performance (i.e., 74.9 vs. 74.4), but further scaling the rank of the LoRA to 640 significantly drops the performance (i.e., 71.5 vs. 74.4). The performance drop when using LoRA with higher ranks (i.e., rank=640) is similar to the observation in Figure 3 in [A]. This result indicates the effectiveness of our skill-specific learning and merging of LoRA experts. We will add this additional result in the final version.
> **W3. Optimization time comparisons**
In short, there is no meaningful time difference between training (1) a single LoRA on multiple skill-specific image-text pairs vs. (2) learning skill-specific LoRAs for each set of skill-specific image-text pairs and merging them, since we let the T2I model see each example the same number of times (i.e., the same number of epochs) in both settings.
For (1) “single LoRA for multiple skill-specific image-text pairs”, we train the LoRA for 25K training steps on 5K image-text pairs, which takes around 30h on a single L40 GPU.
For (2) “learning skill-specific LoRAs for each set of skill-specific image-text pairs and merging them” (ours), we train each LoRA for 5K training steps on each set of 1K image-text pairs. As shown in Appendix L721-722, this takes around 6h x 5 = 30h on a single L40 GPU (if run in parallel across 5 processes, this takes 1/5 of the time of (1)). The LoRA merging takes 26s and is performed only once when loading the model before inference, which is negligible compared to the training time.
[A] He et al. (2024), "Sparse Matrix in Large Language Model Fine-tuning"
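As a loose illustration of why merging is so cheap (a hedged sketch only: the uniform-averaging rule, function name, and factor shapes below are assumptions for illustration, not the exact merge implementation), merging skill-specific LoRA adapters amounts to a few small matrix products per layer:

```python
import numpy as np

def merge_loras(adapters):
    """Merge skill-specific LoRA adapters into one set of weight deltas.

    adapters: list of dicts mapping layer name -> (A, B) low-rank factors,
    where a layer's LoRA weight delta is B @ A. Here we merge by uniformly
    averaging the per-layer deltas across adapters.
    """
    merged = {}
    for name in adapters[0]:
        deltas = [B @ A for (A, B) in (ad[name] for ad in adapters)]
        merged[name] = sum(deltas) / len(deltas)
    return merged
```

Each layer's merge is a handful of small matrix multiplications and one average, which is consistent with merging taking seconds rather than hours.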
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. I decided to keep my score. | Summary: The goal of this paper is to improve the faithfulness of text-to-image generation models. New datasets and fine-tuning frameworks are introduced to address this limitation. Specifically, this paper first adopts LLMs to generate multiple datasets of text prompts that can teach different skills and then generates images with a T2I image based on the text prompts. In the second stage, the LoRA finetuning is performed on the generated datasets to get different experts. Finally, different LoRA experts are merged into one better T2I model. The experiments show better performance on public benchmark datasets.
Strengths: 1. The experimental results show consistent improvement over multiple base models.
2. The qualitative results look convincing. The proposed model seems to be more faithful than the baselines.
Weaknesses: 1. The technical novelty is a bit weak. LoRA fine-tuning and merging experts are not original. The prompt generation seems to be straightforward too.
2. Clarity of this paper should be improved. What is the definition of "Skill"? Is it referred to different prompt styles? Why do the self-generated images improve the T2I model, especially when the T2I model fails to generate faithful images?
Technical Quality: 3
Clarity: 3
Questions for Authors: See the weakness.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are discussed in the appendix!
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable feedback. Below we address your questions with further clarifications.
> **W1: The technical novelty is a bit weak. LoRA fine-tuning and merging experts are not original. The prompt generation seems to be straightforward too.**
We would like to first clarify that our main contribution is **the introduction of a framework that improves T2I models’ text faithfulness via automatically generated skill-specific data and the merging of skill-specific experts** (L40-47). Although LoRA fine-tuning, expert merging, and prompt generation methods were initially proposed by previous works, they were not used for improving the text faithfulness of T2I models across different skills. Moreover, we would like to elaborate on our technical contributions (in relation to previous works):
- **1. We propose a novel method to automatically create skill-specific image-text pairs based on an LLM and the target T2I model itself.** Previous works collect image-text pairs via human annotation (L90-98), and our method significantly reduces the cost of data collection that used to require human annotation. Compared to concurrent work relying on heavy image filtering (DreamSync [69]), our data generation method is significantly more efficient, using only 2% of the image-text pairs (L98-104).
- **2. We introduce skill-specific T2I expert learning and merging.** Previous work using LoRA-based T2I model fine-tuning (DreamSync [69]) does not use skill-specific expert learning and merging. ZipLoRA [65] merges two LoRAs for a T2I model, but their two LoRAs are limited to a specific subject (e.g., a dog) and a style (e.g., watercolor painting), while we are the first to show the effectiveness of LoRA merging on multiple diverse skills (from 5 datasets) in T2I models (L164-167).
- **3. We provide comprehensive experiments**, including improvements in two text faithfulness benchmarks (TIFA/DSG), three human preference metrics (Pick-a-Pic/ImageReward/HPS) across three T2I backbones (SD v1.4/v2/XL), human evaluation, ablation studies on design choices, and weak-to-strong generalization (L68-79).
> **W2-1: What is the definition of "Skill"? Is it referred to different prompt styles?**
As we give examples in Fig. 1, L121-L122, and L177-178, we use “skills” to refer to different aspects of text prompts that require different writing styles or reasoning capabilities. This includes understanding common objects (e.g., a puppy in a backyard), handling long prompts (e.g., “an elegant room with floor-to-ceiling bookshelves, filled with an impressive collection of books of all genres. The cozy reading nook by the window invites anyone to curl up with a good book.”), and depicting commonsense-defying scenes (e.g., a cat flying in the sky). Following your suggestion, we will add the definition of skill in the introduction section.
> **W2-2: Why do the self-generated images improve the T2I model, especially when the T2I model fails to generate faithful images?**
As we described in L136-142, we conjecture that the T2I model has seen many prompts that require different skills during pre-training, but it has no incentive to demonstrate such skills, as they are not important for optimizing the loss during the pre-training stage. Our fine-tuning stage aims to efficiently extract this knowledge, which we believe is already inside the T2I model, with automatically generated skill-specific image-text pairs.
---
Rebuttal Comment 1.1:
Comment: The rebuttal addressed my concerns as well as other reviewers' concerns. Therefore, I would like to increase the rating to be a Weak Accept! | Rebuttal 1:
Rebuttal: We thank the reviewers for their valuable feedback. We also appreciate that they acknowledge SELMA's strengths:
- Clear advantages over other methods for text-image alignment (Reviewer dhqF, xcmQ, WQ94)
- Our proposed automatic data generation pipeline and LoRA merging approach are well studied and ablated (Reviewer xcmQ, c1fe)
- Interesting findings supported with experiments (e.g., weak-to-strong generalization, learning from self-generated images) (Reviewer c1fe).
We have addressed all the questions in our rebuttal, and will incorporate the feedback in the final version. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Stochastic contextual bandits with graph feedback: from independence number to MAS number | Accept (poster) | Summary: The authors consider the problem of contextual bandits with finitely many contexts, stochastic rewards and a directed feedback graph (assumed to contain all self loops) across actions. They study the setting of "complete cross-learning" where the reward feedback of the chosen action is observed across all contexts. In this setting, the authors establish a lower bound of $\Omega(\sqrt{\beta_M(G) T})$ where $\beta_M(G)$ is a graph-dependent quantity which lies between the independence number and the maximum acyclic subgraph of the feedback graph, and prove that this lower bound is tight for a specific class of context sequences by designing an efficient algorithm with a matching regret bound. Furthermore, the authors provide an upper bound for general context sequences with a graph dependence that improves upon the maximum acyclic subgraph, but in general may not be tight.
Strengths: * The authors establish regret upper and lower bounds that constitute a considerable step towards characterizing the minimax regret in contextual bandits with feedback graphs in the complete cross-learning framework.
* The authors provide an overview of the hard instance construction which helps the reader understand the difficulty of minimizing regret in this setting.
* The idea of incorporating a sequential graph-theoretic zero sum game in a bandit algorithm seems novel and very interesting.
* Even though the authors close the gap completely only for specific types of context sequences, they also provide an upper bound for general sequences which improves upon the known upper bounds in the literature.
Weaknesses: * My main issue with the presentation of the main results is with the comparison with the previous related work of [1] (lines 147-148). While the authors do mention that this work considers the setting of feedback graphs over the contexts as well as over the arms, it should be mentioned that it is not how the problem is presented in [1]. Rather, they consider a tabular RL problem in which the states correspond to the contexts, with the crucial difference that the transition between states is governed by some stochastic process. Since in the authors' work there is no assumption regarding the transition between contexts (that is, it could be adversarial), it seems not quite fair to compare their results to those of [1], specifically the regret lower bound. Indeed, the lower bound described by the authors is easily seen to be inapplicable in the setting of [1] as they vary the contexts (or states) in a very controlled adversarial manner. Therefore, I think the authors should emphasize that their lower bound does not provide a strengthening of the lower bound given in [1] for the RL setup.
References:
[1] Christoph Dann, Yishay Mansour, Mehryar Mohri, Ayush Sekhari, and Karthik Sridharan. Reinforcement learning with feedback graphs.
Technical Quality: 3
Clarity: 2
Questions for Authors: I would appreciate it if the authors could address my main concern under "Weaknesses" and provide a more careful comparison with the work of [1].
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for highlighting the difference between [1] and our work. Indeed, [1] primarily studies the RL setting, so their upper bound is more general than ours; however, our lower bound instance can be embedded into the tabular RL setting of [1]. Consider an episodic tabular RL problem with $H = 1$, so that the initial state of every episode is the context. Since the initial state can be adversarially chosen in [1], our lower bound construction is legitimate in this setting and could lead to a better dependence on the graph structure. The case of general $H$ can also be handled using an absorbing state. We will provide a more detailed comparison in the final version.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I indeed missed the fact that [1] considered a setting where the initial state can be adversarial, and thus includes the setting studied in this paper if $H=1$.
I currently have no further questions, and will maintain my score. | Summary: In this paper, the authors consider the problem of contextual bandits with a feedback graph, for finite context space. In the presented setting, taking an action reveals the rewards for all neighboring actions in the feedback graph for all contexts. The authors propose $\beta_M(G)$, a theoretical quantity in which $M$ is the number of contexts and $G$ is the feedback graph, with the goal of characterizing the hardness of learning for this class of problems. The authors prove a lower bound of $\Omega(\sqrt{\beta_M(G)T})$ where $T$ is the number of rounds. The authors also present a near-optimal upper bound of $\widetilde{O}(\sqrt{\min (\bar{\beta}(G),m(G))T})$.
Strengths: 1. The insights in the paper are interesting, it characterizes the difference between MAB with graph feedback and Contextual MAB with graph feedback, for small and finite context space.
2. The authors present upper and lower bounds for the problem.
3. The algorithmic approach taken to produce both results is elegant, especially the use of the arm elimination technique.
Weaknesses: 1. In the contextual MAB literature, the main difficulty is that the context space is large, and in each episode the feedback is revealed only for the current context. In the discussed setting, the feedback is revealed for all contexts, which significantly reduces the inherent difficulty introduced by the contexts. Hence,
(a) Can you please explain how having a context made the learning harder w.r.t the non-contextual MAB with feedback graph in this setting, conceptually?
(b) Have you considered the standard Contextual setting in which only the feedback of the current context is released? Can you adjust your results to this setting as well?
2. It would benefit the reader if an additional explanation of equation (4) were given; the same goes for the result stated in Lemma 3.1.
3. Typos - A comment that was left in the text: line 20: "to name a few".
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. See weakness 1.
2. Can you please explain the conclusions of Corollary 1.2? Specifically, the dependency of $I_c$ on the context is unclear to me, as is why it implies that $\beta_M(G) = m(G)$.
3. Can you please provide some more intuition regarding the behavior of $\beta_M(G)$, and an example for a calculation of it for simple graphs?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the review and the insightful questions.
1.(a). In general, the oracle we compare to in contextual bandits is able to take different optimal actions under different contexts, so it is a stronger benchmark. Also, our work assumes adversarial contexts, so intuitively, a larger number of contexts increases the adversary’s ability to break the learner’s exploration plans and to craft a “harder” problem instance. The latter corresponds to increasing $M$ in the lower bound quantity $\beta_M$ and thereby including extra independent subsets in the hard instance.
1.(b). Yes, it is straightforward to extend to the case with no information sharing across contexts (by running $M$ instances of the single-context algorithm), which essentially becomes $M$ separate subproblems and leads to an optimal regret of $\widetilde{O}(\sqrt{M\alpha T})$.
1.(c). It is however an interesting question when the feedback across contexts is not complete. Section 4.1 discusses the product graph case for weakly observable action graphs. We define product graphs as $(a_1, c_1)\rightarrow (a_2,c_2)$ if $a_1\rightarrow a_2$ and $c_1\rightarrow c_2$ in their respective graphs (we'll make this definition clearer in text). When the action graph is strongly observable, we may extend our Theorem 1.1 and 1.3 with the following graph quantity (defined by both the action graph and the context graph) that shares the same spirit:
$$
\beta' = \max \lbrace\sum_{c=1}^{M}|I_c| : I_c\subseteq V_c \textnormal{ independent and } I_j\not\rightarrow I_k \textnormal{ if } j<k \rbrace
$$
and Theorem 1.4 with
$$
\bar\beta' = \max\lbrace \sum_{c=1}^{M}|I_c| : I_c\subseteq V_c \textnormal{ independent} \rbrace
$$
where $V_c=\lbrace (a,c): a\in[K] \rbrace$.
2. One can think of the definition (2) of $\beta_M$ as sequentially taking out one independent subset (and its out-neighbors) as the hard instance for each context. Note those $I_c$’s are allowed to be empty. By the constraint $I_j \not\rightarrow I_k$ for $j<k$, they form a DAG at the set level, so their total size cannot exceed the MAS number $\mathsf{m}(G)$. Then, when $M\ge \mathsf{m}(G)$, one can simply take the maximum acyclic subgraph and put one of its nodes in each $I_c$, leaving $(M-\mathsf{m}(G))$ of them empty. This shows $\beta_M=\mathsf{m}(G)$ when $M$ is sufficiently large, hence Corollary 1.2.
3. As an example, consider a “monotone” graph on the actions: for actions in $[K]$, there is an edge $i\rightarrow j$ iff $i<j$. With cross-learning among $M$ contexts, the quantity $\beta_M=\min\lbrace M, \mathsf{m}(G)\rbrace$ increases from $\alpha=1$ to $\mathsf{m}(G)=K$ as $M$ grows. This example shows up in bidding in auctions (see Han et al. 2020 in our references), with actions modeling the bidding values and contexts the values of the presented items.
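For concreteness, the definition of $\beta_M$ can be evaluated by brute force on tiny graphs (an illustrative sketch with our own helper names, not part of the paper): enumerate all sequences of independent sets $I_1,\dots,I_M$ and keep those with no edge from $I_j$ into $I_k$ for $j<k$.

```python
from itertools import product

def independent_sets(K, edges):
    # all subsets of {0, ..., K-1} with no edge between two distinct members
    sets = []
    for mask in range(1 << K):
        nodes = [v for v in range(K) if mask >> v & 1]
        if all((u, v) not in edges for u in nodes for v in nodes if u != v):
            sets.append(frozenset(nodes))
    return sets

def beta(M, K, edges):
    # max total size of independent sets I_1, ..., I_M such that
    # no edge goes from I_j into I_k for j < k
    best = 0
    for seq in product(independent_sets(K, edges), repeat=M):
        if all((u, v) not in edges
               for j in range(M) for k in range(j + 1, M)
               for u in seq[j] for v in seq[k]):
            best = max(best, sum(len(I) for I in seq))
    return best

# monotone graph on K = 3 actions, with all self-loops
K = 3
E = {(i, j) for i in range(K) for j in range(K) if i < j} | {(v, v) for v in range(K)}
```

On this monotone graph the brute force returns 1, 2, 3, 3 for $M = 1, 2, 3, 4$: each $I_c$ is at most a singleton, and self-loops forbid repeats, so $\beta_M$ grows from $\alpha = 1$ until it saturates at $\mathsf{m}(G) = K$.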
---
Rebuttal 2:
Comment: I thank the authors for their response and have no further questions. | Summary: This paper studies contextual online learning when the feedback received by the learner is regulated by a feedback graph. The setting is as follows: the actions constitute the nodes of a directed graph $G$, and playing action $a$ at time $t$ when the context is $x_t$ reveals not only the loss incurred by that action at that time, for that context but the losses of all the neighboring actions, for all possible $m$ contexts. While the contexts are generated adversarially, the losses are i.i.d..
The non-contextual problem is well understood, with tight minimax regret guarantees holding for both the adversarial and stochastic settings. These rates depend on both the time horizon $T$ and some graph parameters. In particular, for strongly observable graphs (as the ones studied in this paper), the rate is known to be $\sqrt{T \alpha}$, where $\alpha$ is the independence number of the feedback graph.
Previous results achieve a regret bound of $O(\sqrt{T m})$ for the contextual problem, where $m(G)$ is the maximum acyclic subgraph number; this is complemented by the abovementioned $\Omega(\sqrt{T \alpha})$ lower bound. This paper investigates the gap in the graph-dependent parameter in the minimax rate. Note that $\alpha = m$ for undirected graphs.
The contributions of the paper are as follows:
- when the number of contexts is large ($\ge m$), the $\sqrt{T m}$ rate is tight;
- in general, a lower bound of $\Omega(\sqrt{\beta_m T})$ is proved, where $\beta_m$ is a new graph parameter which crucially depends on the number $m$ of contexts and gracefully interpolates between $\alpha$ and $m$;
- improved upper bounds are then proved for special context sequences and for the general problem.
Strengths: 1. Online learning with feedback graphs is a relevant problem with a long literature in NeurIPS and ICML
2. Studying this problem with contexts is fairly natural, and has already been studied
3. the paper presents a consistent set of results and manages to present them nicely in the intro. Due to space constraints, the technical parts are, however, only roughly sketched.
Weaknesses: - the problem is not closed: there is still a significant gap in the right graph theoretic parameter
- the result only holds in the stochastic setting. What can be said in the adversarial setting?
- the lower-bound construction is fairly natural (at a high level)
- the graph theoretic parameter introduced is artificial and way less natural than the ones present in the non-contextual settings.
Minor comments
- please update the references: e.g., Schneider and Zimmert and Zhang et al have been published
- please consider adding some further references to the learning with feedback graph literature. (see also questions)
Technical Quality: 3
Clarity: 3
Questions for Authors: What is the relationship between your graph parameter and the ones in Eldowa et al ”On the Minimax Regret for Online Learning with Feedback Graphs”. NeurIPS 2023?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No potential negative societal impact
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the review and the insightful questions.
1. Our work assumes adversarial context. When the reward is also adversarial, as shown in (Balseiro et al. 2019), cross-learning is not helpful and the optimal minimax regret is proved to be $\sqrt{M\alpha T}$: this essentially corresponds to dividing the horizon into $M$ different subproblems each with duration $T/M$. Then the minimax regret for each (single-context) subproblem is known to be $\sqrt{\alpha T/M}$.
2. (Eldowa et al. 2023) studies the non-contextual (single-context) setting and arrives at the independence number $\alpha$. Our proposed quantity $\beta_M$ can be seen as an extension of $\alpha$ under the assumption of adversarial contexts, as it sequentially takes the max independent subset and its neighbors. The information-flow constraint in Eq (2) then naturally arises from this adversarial assumption too.
3. We agree that the idea behind the lower bound construction is natural, but we’d also like to make two remarks. First, this natural construction proves the tightness of the MAS number when the context space is large, an observation overlooked in the literature even after the upper bound using the MAS number was obtained. Second, the proof technique requires achieving a careful balance between exploration and exploitation, and the two-inequality approach (below Line 182) could be of general interest when proving interactive lower bounds.
4. Although our graph-theoretic quantity $\beta_M$ is not as natural as the independence number and the MAS number, our lower bound shows that this is the right quantity for contextual bandits (at least under self-avoiding contexts). In addition, this quantity exhibits the desired interpolation between the independence number and MAS number.
5. We appreciate the reviewer for mentioning these references and shall include them and further discussion in the paper. | Summary: In this work, the authors consider the problem of contextual bandits with feedback graphs and aim to achieve a tighter dependency on graph-dependent quantities.
Figuring out the correct dependency on graph-dependent quantities is a notably challenging problem in the standard MAB framework, as aspects such as whether the feedback graph is directed, whether it has self-loops, or whether the time horizon is large compared to the size of the graphs are crucial components to figure out how much information can be extracted from the feedback graph.\\
In this work, the authors consider a contextual MAB problem where there is a feedback graph across actions and cross-learning between contexts. The authors propose a minimax lower bound for this context, which depends on a quantity $\beta_M(G)$, where $M$ is the number of contexts.
The authors then show that this lower bound is tight for certain classes of problems such as self-avoiding contexts, which is a problem setting where the environment regularly switches from one context to the next but doesn't ever come back to contexts that have already been seen in the past.
The authors also derive an upper bound in the general setting using a different algorithm.
Strengths: The authors propose a detailed characterization of the challenges of contextual MAB with feedback graphs by focusing in deriving a lower bound and gaining a good sense of how challenging the problem is. They then provide both an algorithm in a setting where they can achieve a tight bound as well as a general algorithm.
The proofs are well detailed, and the authors properly study and discuss the gap between upper and lower bounds as well as possible extensions.
Weaknesses: While the gap between the upper and lower bounds may be tight in terms of the graph-dependent quantities in settings with self-avoiding contexts, the upper bounds contain some superfluous logarithmic dependencies (in particular the $\log^2 K$ term in Theorem 3.3). Do you think that this could be avoided?
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
You propose a specific algorithm for the setting with self-avoiding contexts, but do you think it is a realistic assumption to make ahead of time?
Do you think that it would be possible to get some sort of best-of-both-worlds guarantee, where the same algorithm achieves tight bounds in the self-avoiding setting while still ensuring worst-case guarantees in the general case?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Theoretical work, NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the review and the insightful questions.
1. While we agree it is possible to improve the logarithmic $\log(MKT)$ term with a more careful concentration argument, it is less obvious how to avoid the $\log^2 K$ term when we use a practical algorithm: if we are allowed access to the dominating subset of the active set of arms, as mentioned in Lemma 3.1, we are only left with a $\log K$ term. This term comes from Lemma A.1 and the fact that we include both dominating subsets and independent subsets in the subroutine of Algorithm 1 in order to relate our exploration sets to independent sets. We are not sure this can be removed in our approach.
2. While we include it more as a mathematically interesting case, we wish to point out that our bounds are also tight when the graph is undirected or transitively closed (Theorem 1.4), which is often a more realistic assumption. This is achieved by the more straightforward Algorithm 2, which greedily picks the active arm with the most outgoing edges in the subroutine. As an example, in bidding in auctions where the winning bid is revealed, the feedback graph is transitively closed across the learner’s actions (e.g., Han et al. 2020 in our references).
3. This is a great question. In our Example 1, while the quantity $\beta_{dom}$ used in bounding Algorithm 2 (Lemma 3.2) is loose, Algorithm 2 itself actually achieves the optimal regret under an appropriate tie-breaking strategy. It remains open whether we can arrive at a tighter bound for Algorithm 2 directly or with modifications, such as exploring an extra small independent subset as we did in Algorithm 1 (details in Appendix C.1).
---
Rebuttal Comment 1.1:
Comment: Thank you for the extra clarifications. I currently don't have further questions. | Rebuttal 1:
Rebuttal: We appreciate the insightful reviews and questions from the reviewers and would like to highlight the points that have drawn the most attention here.
1. In addition to self-avoiding contexts (Theorem 1.3), Theorem 1.4 shows that our results are also tight for arbitrary contexts when the feedback graphs are either undirected or transitively closed (i.e., extra assumptions on the information structure between actions rather than contexts). The latter appears to find more realistic applications, including bidding in auctions and inventory control.
2. While we assume self-loops in the current work, as pointed out by one reviewer, our results actually apply to all strongly observable graphs. The only difference is to add an extra constraint that $I_1,...,I_M$ are disjoint in definition (2) of $\beta_M$.
3. It is an interesting question to look beyond complete cross-learning over contexts. We partially discuss this in Section 4.1 on context-action product graphs when the action graph is weakly observable, where product graphs are defined as $(a_1,c_1)\rightarrow (a_2,c_2)$ if $a_1\rightarrow a_2$ and $c_1\rightarrow c_2$ in their respective graphs. It is straightforward to extend our results to the case of product graphs when the action graph is strongly observable too. | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper investigates the problem of stochastic contextual bandits with graph feedback, in which a graph over actions models the feedback structure. The learner selects an action after observing the current context, and then receives the losses of the actions that are neighbors of the selected one in the feedback graph.
This work proposes a novel graph-theoretic quantity $\beta_M(G)$ to characterize the statistical complexity of learning in this problem setting.
The authors establish a minimax regret lower bound $\Omega(\sqrt{\beta_M(G)T})$, where $\beta_M(G)$ interpolates between the independence number $\alpha(G)$ and the maximum acyclic subgraph (MAS) number $m(G)$.
Specifically, $\beta_M(G) = \max \{\sum_{c=1}^{M} |I_c| : I_1, \dots, I_M \text{ are independent sets in } G,\ I_i \nrightarrow I_j \text{ for } i < j\}$.
This result implies that, while $\alpha(G)$ dictates the complexity in multi-armed bandits (i.e., $M=1$), $m(G)$ becomes the relevant parameter as the number of contexts increases.
The paper further provides algorithms that achieve near-optimal regret bounds: $\tilde{O}(\sqrt{\beta_M(G)T})$ for self-avoiding context sequences and $\tilde{O}(\sqrt{\min\{m(G),\bar\beta_M(G)\} T})$ for general context sequences (where $\bar\beta_M(G)$ is a larger, similarly defined graph parameter), leveraging carefully designed arm elimination techniques.
These algorithms are polynomial-time and achieve tight regret bounds for special families of context sequences and feedback graphs, namely undirected or transitively closed graphs.
Strengths: The most interesting contribution of this work is probably the connection between the learnability of the problem and the novel (at least to the best of my knowledge) graph-theoretic parameter $\beta_M(G)$.
Thanks to this, the authors are able to provide further insights into the contextual bandit problem with feedback graphs (with complete cross-learning), showing that the number of contexts can influence the dependence of the regret on the structure of the feedback graph $G$, interpolating between the independence number $\alpha(G)$ (when $M=1$) and $m(G)$ (e.g., when $M \ge m(G)$).
The way the regret analysis shows the dependence on such a parameter, via the computation of the value of the sequential game described in Section 3, is also interesting and nontrivial.
Weaknesses: What the authors consider in this work is not the entire family of strongly observable feedback graphs (the one known to correspond with minimax regret of order $\\sqrt{T}$ in the non-contextual case), but only a subset of those graphs, i.e., that contain all self-loops. This excludes relevant feedback graphs such as the loopless clique and the apple tasting one. I think a discussion about these missing graphs, e.g., in Section 4.1 would give a clearer picture of the contributions of this work and how they compare with previous relevant work.
More importantly, the only nearly tight bounds for the regret are provided for self-avoiding contexts.
Moreover, the setting of complete cross-learning studied in this work seems quite restrictive as it imposes the assumption of observing the reward of the chosen action under any context.
This feels like a somewhat limiting assumption, as in real-world scenarios such a reward is often observed for the current context only, and the same context could reappear in non-contiguous time steps.
The applicability of the results is nevertheless sufficient, especially given some applications of interest and the extension of their results for the more general setting, albeit lacking a nearly optimal characterization for general context sequences and any strongly observable feedback graphs.
A further limitation of the applicability of the results is the fact that the feedback graph is assumed to be fixed.
This might not be the case generally speaking, as feedback graphs could be time-varying (as assumed in most of the recent literature on bandits with feedback graphs).
Time-varying feedback graphs could also be found in applications such as repeated first-price auctions (e.g., see “The Role of Transparency in Repeated First-Price Auctions with Unknown Valuations” by Cesa-Bianchi, Cesari, Colomboni, Fusco, and Leonardi, STOC 2024), which is one of the applications mentioned in the related work within this paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Please, address any relevant doubt that might have arisen from what is written above.
- Can the results be extended to the other strongly observable feedback graphs (i.e., the ones not containing all self-loops)? What could be the technical limitations, if you see any?
- Do you think it could be possible to adapt the algorithmic techniques in this work for time-varying graphs?
Minor comments/typos:
- Throughout the paper, use “domination number” instead of “dominating number”
- Line 84: “In what follows” instead of “In the sequel”
- Some references mention the arXiv, while they might already be published in some conference or journal
- I think the introduction would benefit from a more thorough comparison with the literature on bandits with feedback graphs. For instance, detailed studies on the minimax regret for bandits with feedback graphs have been pursued in:
- Chen, Huang, Li, and Zhang. “Understanding bandits with graph feedback”, NeurIPS 2021
- Eldowa, Esposito, Cesari, and Cesa-Bianchi. “On the minimax regret for online learning with feedback graphs”, NeurIPS 2023
- Chen, He, and Zhang. “On interpolating experts and multi-armed bandits”, ICML 2024
- Lines 157-159: the following paper also fits with that description:
- Zhang, Zhang, Luo, and Mineiro. “Efficient contextual bandits with uninformed feedback graphs”, ICML 2024
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thorough review and insightful questions.
1. Extending to the class of all strongly observable graphs is indeed a great question, and it turns out that the extension is straightforward.
Upper bounds: A key implication of strong observability is that any subgraph of size > 1 has a dominating subgraph. In Section 3 we may safely assume the subgraphs have size greater than 1, because a singleton active subset in arm elimination means that arm is optimal with high probability. In addition, Lemmas A.1 and A.2 still hold for strongly observable graphs. Therefore, the minimax quantities in Section 3 continue to hold.
Lower bound: The same lower bound still holds after adding the additional requirement that $I_1, …, I_M$ be disjoint in the definition of $\beta_M(G)$ in (2). In fact, strongly observable nodes with no self-loop cannot appear in $I_2, …, I_M$, and if one appears in $I_1$ we must have $|I_1| = 1$. Therefore, our lower bound analysis still goes through.
2. We agree that the assumption of self-avoiding contexts is often restrictive in applications, and we include it as a mathematically interesting case. However, we wish to point out that our bound is also tight under the assumption of undirected or transitively closed graphs (Theorem 1.4), with the easier-to-implement Algorithm 2. We believe this assumption covers a wider range of applications, such as bidding in auctions, where the feedback graphs are transitively closed (e.g., Han et al. 2020) and also possess complete cross-learning.
3. We agree that incomplete cross-learning is an interesting open question, and we discuss it partially in Section 4.1 with weakly observable context-action product graphs. We define product graphs by $(a_1, c_1)\rightarrow (a_2,c_2)$ if $a_1\rightarrow a_2$ and $c_1\rightarrow c_2$ in their respective graphs (we will make this definition clearer in the text). When the action graph is strongly observable, we may extend our Theorems 1.1 and 1.3 with the following graph quantity (defined by both the action graph and the context graph) that shares the same spirit:
$$
\beta' = \max \lbrace\sum_{c=1}^{M}|I_c| : I_c\subseteq V_c \textnormal{ independent and } I_j\not\rightarrow I_k \textnormal{ if } j<k \rbrace
$$
and Theorem 1.4 with
$$
\bar\beta' = \max\lbrace \sum_{c=1}^{M}|I_c| : I_c\subseteq V_c \textnormal{ independent} \rbrace
$$
where $V_c=\lbrace (a,c): a\in[K] \rbrace$.
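As a small illustrative sketch of this product-graph construction (our own code, with hypothetical names, not from the paper): an edge $(a_1, c_1)\rightarrow(a_2, c_2)$ exists exactly when both component edges exist.

```python
def product_graph_edges(action_edges, context_edges):
    """Edges of the context-action product graph:
    (a1, c1) -> (a2, c2) iff a1 -> a2 in the action graph
    and c1 -> c2 in the context graph."""
    return {((a1, c1), (a2, c2))
            for (a1, a2) in action_edges
            for (c1, c2) in context_edges}

# Two actions with self-loops and the edge 1 -> 2; one context with a self-loop.
edges = product_graph_edges({(1, 1), (1, 2), (2, 2)}, {("c", "c")})
print(sorted(edges))
```

Note that self-loops must be listed explicitly in both component graphs for the corresponding product-graph self-loops to appear.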
4. We agree, and we appreciate the reviewer pointing out the inapplicability of our approach to time-varying graphs. The high-level reason is that although graph feedback is helpful for multi-armed bandits even in the adversarial setting, it is typically not helpful for adversarial contextual bandits. For example, it was shown in (Balseiro et al. 2019) that when the rewards are adversarial, even with cross-learning it is optimal to handle each context separately, with the optimal regret scaling with the number of contexts. Therefore, while previous literature handles time-varying graphs in an adversarial framework using EXP3-type algorithms, we do not know of a counterpart for arm-elimination-based algorithms.
5. We will fix the typos and update the references pointed out by the reviewer.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for addressing the points raised in my review. I am currently keeping my score and will make a final decision after further discussing with the other reviewers and the AC. | null | null | null | null | null | null |
Mitigating Quantization Errors Due to Activation Spikes in GLU-Based LLMs | Reject | Summary: This paper focuses on extremely large activation outliers in LLMs and investigates the reasons behind these "activation spikes." Based on this analysis, the authors propose two methods to enhance the performance of quantized models.
Strengths: 1. The analysis of activation spikes is thorough and comprehensive.
2. The exploration of the relationship between activation spikes and Gated Linear Unit (GLU) variants is both interesting and insightful.
Weaknesses: 1. The proposed QFeM method is not hardware-friendly, as it maintains some modules at high precision and cannot directly utilize low-bit INT General Matrix Multiply (GEMM) for activations and weights.
2. The proposed QFeP method bears a strong resemblance to a previously researched method, IntactKV[1], yet lacks a detailed comparative discussion.
3. The experimental settings are limited to W8A8 configurations, which previous research, such as SmoothQuant[2], has shown can nearly achieve lossless quantization for W8A8 models.
4. The authors have not included comparisons with state-of-the-art baselines, such as OmniQuant[3], AffineQuant[4], QLLM[5], and QuaRot[6].
[1]. Liu, Ruikang, et al. "IntactKV: Improving Large Language Model Quantization by Keeping Pivot Tokens Intact." arXiv preprint arXiv:2403.01241 (2024).
[2]. Xiao, Guangxuan, et al. "Smoothquant: Accurate and efficient post-training quantization for large language models." International Conference on Machine Learning. PMLR, 2023.
[3]. Shao, Wenqi, et al. "Omniquant: Omnidirectionally calibrated quantization for large language models." arXiv preprint arXiv:2308.13137 (2023).
[4]. Ma, Yuexiao, et al. "Affinequant: Affine transformation quantization for large language models." arXiv preprint arXiv:2403.12544 (2024).
[5]. Liu, Jing, et al. "Qllm: Accurate and efficient low-bitwidth quantization for large language models." arXiv preprint arXiv:2310.08041 (2023).
[6]. Ashkboos, Saleh, et al. "Quarot: Outlier-free 4-bit inference in rotated llms." arXiv preprint arXiv:2404.00456 (2024).
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Could you provide a detailed analysis highlighting the differences between the QFeP method and IntactKV?
2. Could you expand the experimental results to include different quantization settings such as W4A4 and W4A8?
3. Could you offer a more detailed comparison with state-of-the-art (SOTA) baselines or conduct the ablation tests as outlined in Table 4?
If the authors can provide more comprehensive results, I am prepared to raise my evaluation scores.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are deeply grateful for your thorough feedback on our work!
We hope our response sufficiently addresses your questions.
**Q1.** The proposed QFeM method is not hardware-friendly, as it maintains some modules at high precision and cannot directly utilize low-bit INT General Matrix Multiply (GEMM) for activations and weights.
**A1.** Thank you for your careful comment regarding the hardware side. As you commented, we designed our QFeM to leave an activation tensor in high precision (i.e., FP16), which causes incompatibility with low-bit operations that are supported by hardware level. In such scenario, we encourage applying fine-grained quantization or more advanced quantization (e.g., fine-grained group quantization in Atom [1]) to the target modules ($|M_{unq}|$) of QFeM, as an alternative to leaving them unquantized. This is motivated by QFeM's approach of searching for quantization-sensitive modules using the max-median ratio, as shown in Section 4.1.
[1] Zhao, Yilong, et al. "Atom: Low-bit quantization for efficient and accurate llm serving." Proceedings of Machine Learning and Systems 6 (2024): 196-209.
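To make the selection criterion concrete, the following is a minimal sketch (our own illustration, not the paper's code; the function names, module names, and toy threshold are assumptions) of ranking linear modules by the max-median ratio of their calibration activations and leaving the high-ratio ones unquantized:

```python
import numpy as np

def max_median_ratio(activations: np.ndarray) -> float:
    """Ratio of the largest to the median absolute activation value."""
    mags = np.abs(activations).ravel()
    return float(mags.max() / np.median(mags))

def select_unquantized_modules(calib_acts: dict, alpha: float) -> list:
    """Keep in high precision every module whose calibration
    activations exceed the max-median-ratio threshold alpha."""
    return [name for name, acts in calib_acts.items()
            if max_median_ratio(acts) > alpha]

# Toy calibration data: one module hit by an activation spike, one without.
rng = np.random.default_rng(0)
calib = {
    "layer2.ffn.down_proj": np.append(rng.normal(0, 1, 999), 500.0),  # spike
    "layer5.attn.o_proj": rng.normal(0, 1, 1000),                     # no spike
}
print(select_unquantized_modules(calib, alpha=100.0))  # prints: ['layer2.ffn.down_proj']
```

In the alternative discussed above, the modules this criterion flags would receive fine-grained group quantization instead of being left in FP16.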
---
**Q2.** Could you provide a detailed analysis highlighting the differences between the QFeP method and IntactKV?
**A2.**
We appreciate your recommendation of the valuable related work of IntactKV [2].
We carefully read the paper and agree with your comment that our QFeP method strongly resembles IntactKV.
However, there are major differences between QFeP and IntactKV:
1. **We identify the activation spikes.**
Our comprehensive analysis of activation spikes reveals that their large activation scale is responsible for significant degradation of quantization performance. Furthermore, the activation spikes occur dynamically depending on the current input sequence.
2. **QFeP addresses all activation spikes.**
Figure 2 illustrates the activation spikes given a token sequence. Our QFeP searches for dynamic activation spikes using calibration and stores the searched token in the prefix, which prevents recurrence of activation spikes in the subsequent tokens. However, IntactKV includes only the [BOS] token.
3. **Prefix ablation study confirmed the efficacy of QFeP.**
In Section 5.4, we conduct a prefix ablation study for QFeP. Compared to the prefix with only the [BOS] token (which can be viewed as IntactKV), QFeP consistently shows significant improvement across various LLMs.
We acknowledge your recommendation. We intend to include discussions of IntactKV in the final version of our paper.
[2] Liu, Ruikang, et al. "IntactKV: Improving Large Language Model Quantization by Keeping Pivot Tokens Intact." arXiv preprint arXiv:2403.01241 (2024).
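A minimal sketch of the calibration-based spike search described in point 2 (a simplification we wrote for illustration; the actual QFeP procedure also caches the FP16 key-value states of the found prefix tokens): scan per-token activation magnitudes from a calibration pass and pick the token position that triggers the spike.

```python
import numpy as np

def find_spike_token(tokens, ffn_acts):
    """Return the token at the position with the largest
    activation magnitude (per-token max over hidden dims)."""
    per_token_max = np.abs(ffn_acts).max(axis=1)
    return tokens[int(per_token_max.argmax())]

# Toy calibration run: 5 token positions, 8 hidden dims; position 2 spikes.
rng = np.random.default_rng(0)
acts = rng.normal(0.0, 1.0, (5, 8))
acts[2, 3] = 300.0
tokens = ["[BOS]", "The", "all", "cats", "."]
print(find_spike_token(tokens, acts))  # prints: all
```

In the full method, the found token would be placed in the prefix so that its spike is absorbed once, in high precision, rather than recurring on subsequent tokens.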
---
**Q3.** The experimental settings are limited to W8A8 configurations, which previous research, such as SmoothQuant, has shown can nearly achieve lossless quantization for W8A8 models.
**A3.** We expand our experiments for low-bit quantization. Please refer to Q2 in the general response. To further our discussion about SmoothQuant, we recommend referring to Q1 in the general response.
---
**Q4.** Could you offer a more detailed comparison with state-of-the-art (SOTA) baselines or conduct the ablation tests as outlined in Table 4?
**A4.** Thank you for the paper list. Please refer to Q3 in the general response.
---
Rebuttal 2:
Title: Minor questions about rebuttal
Comment: I appreciate the authors' discussion and the additional evaluation results provided. However, I have a few minor points that require further clarification. Firstly, IntactKV pre-saves all system prompts as pivot tokens for Vicuna models, which includes all activation spikes, similar to QFeP. Secondly, I am curious as to why QFeP does not improve the performance of the 4-bit LLaMA2-13B model in Appendix 3. Could this be due to the influence of an additional prefix affecting the model's performance?
---
Rebuttal 3:
Comment: We are sincerely grateful for your effort and time in reviewing our rebuttal and for your valuable questions.
---
**Q5.** IntactKV pre-saves all system prompts as pivot tokens for Vicuna models, which includes all activation spikes, similar to QFeP.
**A5.**
The discussion continues from our previous response to your Q2, which highlighted the differences between QFeP and IntactKV [1].
Notably, IntactKV selects different pivot tokens depending on the type of LLMs.
- For pre-trained LLMs (e.g., LLaMA), IntactKV selects only the [BOS] token as a pivot token.
- For supervised fine-tuned LLMs (e.g., Vicuna), IntactKV stores all system prompts as pivot tokens.
We denote these methods as IntactKV[B] and IntactKV[P], respectively, according to [1].
In our previous rebuttal, we compared QFeP with IntactKV[B] due to the absence of system prompts for pre-trained LLMs.
When quantizing activations for pre-trained LLMs, QFeP's ability to address activation spikes more effectively than IntactKV becomes evident, as discussed in the previous response.
Nevertheless, further investigation into activation spikes for supervised fine-tuned (or instruction fine-tuned) LLMs would be valuable.
Previous works have found that pivot tokens persist after instruction fine-tuning [1, 2].
This finding implies that activation spikes are transferable and that our methods are also effective.
Indeed, we observed equivalent activation spikes in both LLaMA-2 and its fine-tuned models, such as Vicuna-v1.5 and LLaMA-2-Chat.
While IntactKV[P] may store long system prompts for pivot tokens, QFeP saves a more compact prefix with a length of 3.
Furthermore, QFeP is explicit and prompt-agnostic, providing flexibility for system prompts.
Finally, **QFeP can be applicable to more generalized and effective solutions for addressing activation spikes when applying activation quantization, regardless of whether the LLM is pre-trained or fine-tuned.**
[1] Liu, Ruikang, et al. "IntactKV: Improving Large Language Model Quantization by Keeping Pivot Tokens Intact." arXiv preprint arXiv:2403.01241 (2024).
[2] Sun, Mingjie, et al. "Massive activations in large language models." arXiv preprint arXiv:2402.17762 (2024).
---
**Q6.** I am curious as to why QFeP does not improve the performance of the 4-bit LLaMA2-13B model in Appendix 3. Could this be due to the influence of an additional prefix affecting the model's performance?
**A6.**
During the preparation of the rebuttal to reviewer geCr, we confirmed that introducing only the additional prefix of QFeP slightly degrades the performance of the FP16 model.
Based on this observation, we hypothesize the following regarding QFeP's contribution to the 4-bit LLaMA-2-13B:
- Case 1 (Atom [3] + QFeP): Given that quantization errors have already been minimized via the fine-grained group quantization scheme of Atom, the prepended prefix of QFeP may slightly degrade performance. Note that fine-grained group quantization utilizes a finer granularity than per-token quantization.
- Case 2 (OmniQuant [4] + QFeP): At present, we are unable to determine the specific factor that degrades the perplexity for WikiText-2, although QFeP achieved a performance gain for C4.
To provide further evaluation results, we assessed four zero-shot tasks, as shown in the table below.
The results indicate that QFeP improves the OmniQuant baseline for three tasks, specifically the WinoGrande task.
This is similar to the results of W4A6 LLaMA-2-13B in Table 2 of the attached PDF.
While there are numerous potential factors that could explain the degradation (e.g., trainable components of OmniQuant or softmax quantization in low-bit settings), we are making an effort to include an ablation study, similar to Figure 6, to clarify the influence of the additional prefix. Thanks again for your questions.
| Model | #Bits | Method | PIQA($\uparrow$) | LAMBADA($\uparrow$) | HellaSwag($\uparrow$) | WinoGrande($\uparrow$) | Avg($\uparrow$) |
|---|---|---|:---:|:---:|:---:|:---:|:---:|
|LLaMA-2-13B|FP16| - | 79.49% | 76.54% | 60.20% | 72.38% | 72.15% |
|LLaMA-2-13B|W4A4|Atom|78.02%|75.80%|58.48%|70.56%|70.72%|
|LLaMA-2-13B|W4A4|$\quad$+QFeM|78.07%|75.43%|**58.49%**|70.40%|70.60%|
|LLaMA-2-13B|W4A4|$\quad$+QFeP|76.99%|**76.19%**|58.22%|70.80%|70.55%|
|LLaMA-2-13B|W4A4|$\quad$+QFeM+QFeP|**78.24%**|75.61%|58.34%|**71.11%**|**70.83%**|
|LLaMA-2-13B|W4A4|OmniQuant|67.25%|41.63%|44.95%|52.57%|51.60%|
|LLaMA-2-13B|W4A4|$\quad$+QFeM|**71.11%**|**48.01%**|46.86%|57.14%|**55.78%**|
|LLaMA-2-13B|W4A4|$\quad$+QFeP|68.93%|38.99%|45.95%|**58.48%**|53.09%|
|LLaMA-2-13B|W4A4|$\quad$+QFeM+QFeP|70.02%|44.89%|**46.93%**|57.46%|54.82%|
[3] Zhao, Yilong, et al. "Atom: Low-bit quantization for efficient and accurate llm serving." Proceedings of Machine Learning and Systems 6 (2024): 196-209.
[4] Shao, Wenqi, et al. "Omniquant: Omnidirectionally calibrated quantization for large language models." arXiv preprint arXiv:2308.13137 (2023).
---
Rebuttal Comment 3.1:
Comment: Thanks for your detailed response and additional experiments. Some of my concerns are addressed and I have decided to raise my score.
---
Reply to Comment 3.1.1:
Comment: Thank you for your insightful reviews of our work and rebuttal during the review period. Your valuable feedback is greatly appreciated and will be incorporated into the final version. | Summary: This paper identifies some of the underlying causes for why activation quantization (PTQ) could lead to low performance and suggests some methods to address these issues.
Strengths: Please see the “Questions” section.
Weaknesses: Please see the “Questions” section.
Technical Quality: 2
Clarity: 3
Questions for Authors: My review is as follows:
1) The results of Table 2 seem to suggest that SmoothQuant leads to an unacceptably high performance degradation. Table 6 of the SmoothQuant paper however shows that for Llamav1, the degradation is very small. I wonder if a difference in implementation is causing this discrepancy in results. I’m sorry if this is mentioned somewhere in the text and I missed it. Could you please clarify?
2) What is the latency for FP16 in Figure 7?
3) There are multiple recent works that suggest quantizing the weights to 4 bits can be done without a big impact to the accuracy. Since memory access (for weights) is typically the bottleneck when LLMs are deployed, it is possible that one may prefer W4A16 over W8A8. In my opinion, studying 4-bit weight quantization (in addition to the already studied W8 quantization) could make the paper more interesting.
4) QFeM method is essentially a mixed precision approach. A lot of quantization papers actually employ some form of mixed precision (even though it is sometimes only mentioned in the footnote). For instance, SmoothQuant uses FP16 for LayerNorm. Has this been taken into account when comparing against other methods? Also, have similar assumptions been made in the implementation of the proposed method?
Minor:
5) There is a typo on line 90: “in the remain(der) of this paper”
6) Typo in line 147 “dominate(s)”
Readability and presentation: The paper is mostly easy to understand. One thing I could say is that the idea of QFeM is easier to follow than QFeP; the introduction of the QFeP method may need to be improved and made more concise.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for taking the time to review our work!
We reviewed our paper, fixed some typos, and incorporated your suggestions to improve the readability and presentation.
---
**Q1.** The results of Table 2 seem to suggest that SmoothQuant leads to an unacceptably high performance degradation. Table 6 of the SmoothQuant paper however shows that for Llamav1, the degradation is very small. I wonder if a difference in implementation is causing this discrepancy in results. I’m sorry if this is mentioned somewhere in the text and I missed it. Could you please clarify?
**A1.**
Thank you for your careful consideration.
Please refer to Q1 in the general response.
---
**Q2.** What is the latency for FP16 in Figure 7?
**A2.**
In Figure 7, we omit the latency of FP16 because LLaMA-2-13B and LLaMA-2-70B are too large to deploy on their respective target GPUs (RTX 4090 and A100).
Instead, we provide the latency for multi-GPU setups.
The latency for LLaMA-2-13B is approximately 553.06 ms using two RTX 4090 GPUs, while for LLaMA-2-70B it is around 1673.63 ms using two A100 GPUs.
---
**Q3.** There are multiple recent works that suggest quantizing the weights to 4 bits can be done without a big impact to the accuracy. Since memory access (for weights) is typically the bottleneck when LLMs are deployed, it is possible that one may prefer W4A16 over W8A8. In my opinion, studying 4-bit weight quantization (in addition to the already studied W8 quantization) could make the paper more interesting.
**A3.**
Please refer to Q2 in the general response. We expanded our experiment on low-bit quantization.
---
**Q4.** QFeM method is essentially a mixed precision approach. A lot of quantization papers actually employ some form of mixed precision (even though it is sometimes only mentioned in the footnote). For instance, SmoothQuant uses FP16 for LayerNorm. Has this been taken into account when comparing against other methods? Also, have similar assumptions been made in the implementation of the proposed method?
**A4.**
In our experiments, we quantize only the linear modules, in order to utilize efficient INT8 matrix multiplication operations.
In this case, the other modules (e.g., LayerNorm, EmbeddingLayer) remain in FP16 precision.
We implement baseline methods (SmoothQuant and OSP) following the same setting.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed responses! It would be great the findings reported in the rebuttal could be explicitly incorporated in the revision. I'll increase my rating from 4 to 5 since some of my concerns regarding the numerical results are already addressed.
---
Reply to Comment 1.1.1:
Comment: We greatly appreciate your constructive response to our rebuttal! We are highly encouraged by your thoughtful feedback. Thank you. | Summary: This paper addresses the precision challenges posed by large language model (LLM) quantization during inference, specifically focusing on the quantization errors in GLU-based feedforward networks. The authors identify that GLU variants in LLMs cause significant local quantization errors due to excessive activation magnitudes, referred to as activation spikes. They observe that GLU-implemented models have larger spikes than non-GLU-implemented models. They propose two methods, Quantization-free Module (QFeM) and Quantization-free Prefix (QFeP), to isolate and mitigate these spikes during quantization. QFeM leaves some linear layers unquantized (usually those in the first several layers that cause large activation spikes), and QFeP introduces an additional prefix before the inference process. Their extensive experiments show that these methods improve quantization performance and are compatible with existing techniques.
Strengths: 1. The identification of activation spikes in GLU-based LLMs is novel.
2. The paper is well-structured and clear.
3. The QFeP method is novel, and is somewhat similar to the finding of the "sink token" in StreamingLLM [1].
[1] Xiao, Guangxuan, et al. "Efficient streaming language models with attention sinks." arXiv preprint arXiv:2309.17453 (2023).
Weaknesses: 1. My major concern is about the baseline of SmoothQuant reported in Table 4. For example, In Table 7 of SmoothQuant's original paper, they report that W8A8 SQ's PPL of Llama-7B on WikiText-2 dataset is 5.515, while the authors report a PPL of 9.907 on the same dataset. Is there a specific reason about this large gap?
2. In Table 3, the improvement brought by the QFeP method does not seem significant, especially when combining with the QFeM method.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In the method of Quantization-free Prefix (QFeP), will the additional prefix decrease or increase the accuracy? That means you use the QFeP method but do not apply quantization, only introduce the additional prefix.
2. Where do the spikes of the GLU-based model come from in the element-wise multiplication? Do they mostly comes from the gate projection part, or the up projection part?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate the invaluable feedback provided by the reviewer.
---
**Q1.** My major concern is about the baseline of SmoothQuant reported in Table 4. For example, In Table 7 of SmoothQuant's original paper, they report that W8A8 SQ's PPL of Llama-7B on WikiText-2 dataset is 5.515, while the authors report a PPL of 9.907 on the same dataset. Is there a specific reason about this large gap?
**A1.**
Please refer to Q1 in the general response.
---
**Q2.** In Table 3, the improvement brought by the QFeP method does not seem significant, especially when combining with the QFeM method.
**A2.**
Both of our proposed methods improve the quantization performance by significant margins.
In the case of the LLaMA-2-7B model with the W8A8 quantization setting, applying QFeM achieves an average zero-shot evaluation accuracy of 69.14%, while applying QFeP achieves 68.69%. When we apply the two methods at the same time, the accuracy reaches 69.47%.
The combination of QFeM and QFeP achieves a notable performance boost, showing up to a 29.2% increase when quantizing LLaMA-2-13B to W4A6, as shown in Table 2 (row W4A6+QFeM+QFeP) of the attached PDF.
---
**Q3.** In the method of Quantization-free Prefix (QFeP), will the additional prefix decrease or increase the accuracy? That means you use the QFeP method but do not apply quantization, only introduce the additional prefix.
**A3.** Thank you for your insightful question! We provide the extra experiment results as below:
| Model | Method | WikiText-2($\downarrow$) | PIQA($\uparrow$) | LAMBADA($\uparrow$) | HellaSwag($\uparrow$) | WinoGrande($\uparrow$) | Avg($\uparrow$) |
|:-----------:|:------------------:|:------------------------:|:----------------:|:-------------------:|:---------------------:|:----------------------:|:---------------:|
| LLaMA-2-7B | FP16 | 5.268 | 78.18% | 73.67% | 57.13% | 69.46% | 69.61% |
| | FP16 (+add Prefix) | 5.281 | 77.53% | 74.42% | 56.46% | 69.85% | 69.57% |
| LLaMA-2-13B | FP16 | 4.789 | 79.49% | 76.54% | 60.20% | 72.38% | 72.15% |
| | FP16 (+add Prefix) | 4.800 | 78.84% | 76.48% | 60.00% | 72.30% | 71.91% |
The results indicate that the additional prefix of QFeP slightly degrades the performance of the FP16 model. However, our proposed methods improve performance by significant margins when quantizing the LLMs.
---
**Q4.** Where do the spikes of the GLU-based model come from in the element-wise multiplication? Do they mostly comes from the gate projection part, or the up projection part?
**A4.**
At the layer where the activation spikes occur (e.g., Layer 2 FFN), the down projection faces large-scale activations derived from the element-wise multiplication. In our analysis, we observe that both input tensors to the multiplication (the outputs of the gate projection and the up projection) contain spikes in the same dimensions.
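As a toy illustration (synthetic values of our own, not the paper's measurements) of why such co-located spikes are harmful: when the gate and up projections spike in the same hidden dimension, the element-wise product spikes quadratically, which inflates a per-tensor quantization scale and swamps the remaining values.

```python
import numpy as np

def fake_quant_int8(x):
    """Symmetric per-tensor INT8 quantize-dequantize."""
    scale = np.abs(x).max() / 127.0
    return np.round(x / scale).clip(-127, 127) * scale

rng = np.random.default_rng(0)
gate = rng.normal(0, 1, 4096)
up = rng.normal(0, 1, 4096)
gate[7] = 100.0   # spikes share the same hidden dimension...
up[7] = 100.0
prod = gate * up  # ...so the element-wise product spikes quadratically (~1e4)

err = np.mean((prod - fake_quant_int8(prod)) ** 2)
no_spike = np.delete(prod, 7)
err_no_spike = np.mean((no_spike - fake_quant_int8(no_spike)) ** 2)
print(err / err_no_spike)  # mean squared error inflated by orders of magnitude
```

This is why isolating the spike (via an unquantized module or a cached prefix) restores a small quantization scale for the remaining activations.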
---
Rebuttal Comment 1.1:
Comment: The results are very interesting. I increase my score accordingly.
---
Reply to Comment 1.1.1:
Comment: We appreciate your review of our rebuttal. We're pleased that our results addressed your concerns and improved your assessment of our work. | Summary: This paper introduces activation quantization methods for GLU-based LLMs, which often face challenges due to activation spikes. To effectively manage these spikes and enable activation quantization using a PTQ-based approach, the paper proposes a Quantization-free Module (QFeM) and a Quantization-free Prefix (QFeP). Specifically, QFeM aims to partially bypass quantization for linear layers where large quantization errors occur. QFeP identifies the prefix that triggers activation spikes and preserves its context as a key-value (KV) cache, preventing the recurrence of activation spikes in subsequent tokens. The paper presents extensive experimental results to compare the accuracy of the quantized models.
Strengths: 1) This paper is well organized and easy to understand.
2) The proposed QFeM and QFeP effectively mitigate the impact of activation spikes on activation quantization, preserving the accuracy of LLMs even when activation quantization is applied.
3) The ablation study thoroughly examines the effects of QFeM and QFeP, providing valuable insights.
Weaknesses: 1) The perplexity/accuracy results of the baseline methods deviate from the results reported in previous papers.
2) The paper does not compare its method with the state-of-the-art LLM quantization method [1], which enables W4A4 quantization (partially using 8-bit operations) with a PTQ approach.
[1] Zhao, Yilong, et al. "Atom: Low-bit quantization for efficient and accurate llm serving." Proceedings of Machine Learning and Systems 6 (2024): 196-209.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1) The perplexity and accuracy results of the baseline methods (SQ [1] and OSP [2]) in Table 4 are worse than the figures reported in the original SQ and OSP papers. For instance, the SQ paper reported successful preservation of LLM perplexity after W8A8 quantization (Table 7 of [1]), but Table 4 of this paper shows poor perplexity and accuracy results for SQ. Additionally, the OSP paper claimed successful preservation of LLM perplexity even after INT6 quantization and reported better perplexity results compared to SQ (Table 2 of [2]). Although OSP only evaluated LLaMA-1 and there is no LLaMA-2 data, we can reasonably expect similar trends for LLaMA-2 given that both use GLU-based activation functions. However, Table 4 of this paper shows poor perplexity and accuracy results for OSP, particularly for LLaMA-2-7B. Why are the evaluation results for the previous methods so different?
2) Since the prefix token is retrieved from the calibration set and the threshold alpha is determined from it, will these parameters remain consistent when inferring on a new dataset using the same model?
3) What are the advantages of the proposed method compared to Atom [3] in terms of perplexity/accuracy, latency, or other aspects?
[1] Xiao, Guangxuan, et al. "Smoothquant: Accurate and efficient post-training quantization for large language models." International Conference on Machine Learning. 2023.
[2] Wei, Xiuying, et al. "Outlier suppression+: Accurate quantization of large language models by equivalent and optimal shifting and scaling." arXiv preprint arXiv:2304.09145 (2023).
[3] Zhao, Yilong, et al. "Atom: Low-bit quantization for efficient and accurate llm serving." Proceedings of Machine Learning and Systems 6 (2024): 196-209.
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The proposed method is limited to GLU-based LLMs.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's insightful comments and the references provided.
---
**Q1.** The perplexity/accuracy results of the baseline methods deviate from the results reported in previous papers. Why are the evaluation results for the previous methods so different?
**A1.**
Please refer to Q1 in the general response.
---
**Q2.** The paper does not compare its method with the state-of-the-art LLM quantization method [1], which enables W4A4 quantization (partially using 8-bit operations) with a PTQ approach. What are the advantages of the proposed method compared to Atom [3] in terms of perplexity/accuracy, latency, or other aspects?
**A2.**
Please refer to Q3 in the general response.
---
**Q3.** Since the prefix token is retrieved from the calibration set and the threshold alpha is determined from it, will these parameters remain consistent when inferring on a new dataset using the same model?
**A3.**
We conducted extra experiments on the LLaMA-2-7B and LLaMA-2-13B models by incorporating various calibration datasets such as WikiText-2, Pile, and PTB to determine the threshold alpha, the target layers of QFeM, and the prefix of QFeP.
The table below illustrates that for the LLaMA-2-7B model, the alpha ($\alpha$) and the number of excluded layers ($M_{unq}$) remain consistent across datasets, while the prefix tokens are identical in all cases.
For the LLaMA-2-13B model, the number of excluded layers and most alpha values also exhibit similarity. When evaluating the QFeM and QFeP performance, we found that the results are nearly identical, except for the PTB dataset with QFeM.
| Model | **Calibration Dataset** | **$\alpha$** | **$M_{unq}$** | **WikiText-2 (QFeM, ppl$\downarrow$)** | **Prefix** | **WikiText-2 (QFeP, ppl$\downarrow$)** |
|:-----------:|:-------------------:|:--------|:---------:|:----------------------------:|:---------------|:----------------------------:|
| LLaMA-2-7B | C4 | 6.68 | 17 | 5.758 | [BOS] all . | 5.758 |
| LLaMA-2-7B | WikiText-2 | 6.68 | 17 | 5.798 | [BOS] all . | 5.758 |
| LLaMA-2-7B | Pile | 6.79 | 15 | 5.768 | [BOS] all . | 5.758 |
| LLaMA-2-7B | PTB | 7.38 | 11 | 5.831 | [BOS] all . | 5.758 |
| LLaMA-2-13B | C4 | 12.91 | 6 | 5.241 | [BOS] then , | 6.000 |
| LLaMA-2-13B | WikiText-2 | 37.75 | 4 | 5.291 | [BOS] years the | 6.009 |
| LLaMA-2-13B | Pile | 36.56 | 4 | 5.291 | [BOS] A the | 6.000 |
| LLaMA-2-13B | PTB | 105.88 | 2 | 5.394 | [BOS] years the | 6.004 |
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and the additional experiments. You have addressed many of the questions I had, so I have decided to increase my score.
---
Reply to Comment 1.1.1:
Comment: We greatly appreciate your time and effort in reviewing our rebuttal. Your constructive questions have been instrumental in enhancing the quality of our work. | Rebuttal 1:
Rebuttal: # Response to all reviewers
We sincerely appreciate all reviewers' thoughtful feedback and constructive suggestions on our paper!
Thanks to the reviewers' valuable comments, our work achieved some breakthroughs and broad contributions:
- Our methods demonstrate effectiveness beyond the W8A8 setting, showing promising results in further low-bit quantization scenarios such as W4A8 and W4A6.
- The high compatibility of our methods enables state-of-the-art (SOTA) LLM quantization methods to achieve performance improvements.
We will gratefully incorporate these improvements into the final version of our paper. Finally, we look forward to further discussion with reviewers!
---
**Q1.** In Table 4, the previous quantization methods (SQ and OSP) show high degradation in evaluation results compared to those reported in their original papers. Why are the evaluation results for the previous methods so different? (Reviewers BgAS, geCr, TX3p)
**A1.** The performance gap is due to the granularity of the activation quantization. Because we identify the activation spikes in token units (Section 3.2), the experiments are mainly on coarse-grained quantization (i.e., **per-tensor quantization**) to examine the impact of activation spikes on a whole tensor (Section 3.3). This may lead to confusing results for the readers. As reviewers commented, baseline methods (SQ and OSP) have reported almost lossless quantization performance for the LLaMA family, as reported in Table 6 of SQ and Table 3 of OSP. As stated in their table captions, they utilize fine-grained **per-token quantization** [1], which we pointed out in line 284 in our paper. To clarify, we expand our Table 4 for both quantization granularities. Please see Table 1 in the attached PDF. The results provide the following insights:
- Although per-token quantization serves nearly lossless quantization for GLU-based LLMs given the W8A8 round-to-nearest (RTN) method, the performance degrades significantly when coarse-grained quantization is applied. Our proposed methods mitigate these quantization errors by addressing the activation spikes, which are responsible for a significant quantization bottleneck.
- With per-token quantization, our methods are still compatible with previous quantization methods (SQ and OSP) and improve quantization performance, especially in the low-bit setting (e.g., W4A4).
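The granularity effect described above can be illustrated with a toy sketch (not the paper's kernels; `quantize_per_tensor` and `quantize_per_token` are our illustrative names): a single spiked token inflates the single per-tensor scale and degrades every other token, while per-token scales isolate the damage.

```python
import numpy as np

def quantize_per_tensor(x, n_bits=8):
    # One scale for the whole activation tensor: a single spike
    # inflates the scale and crushes resolution for normal tokens.
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale  # dequantized (simulated) result

def quantize_per_token(x, n_bits=8):
    # One scale per token (row): a spike in one token does not
    # affect the quantization grid of the others.
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(x).max(axis=-1, keepdims=True) / qmax
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale

# Toy activations: 4 tokens, one with a large "activation spike".
np.random.seed(0)
x = np.random.randn(4, 16)
x[0] *= 100.0  # spiked token
err_tensor = np.abs(quantize_per_tensor(x) - x).mean()
err_token = np.abs(quantize_per_token(x) - x).mean()
assert err_token < err_tensor  # per-token is far more robust to spikes
```

This is why per-tensor W8A8 degrades sharply under activation spikes while per-token quantization appears nearly lossless.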
---
**Q2.** Extended experimental results regarding low-bit quantization settings (e.g., W4A8, W4A4). (Reviewers TX3p, PpZ5)
**A2.** Thanks for the reviewers' constructive suggestions! We follow the reviewers’ advice and extend our experimental results to include various low-bit quantization scenarios, such as W4A8, W4A6, and W4A4. Please refer to Table 2 in the attached PDF. Our proposed methods consistently improve quantization performance in 4-bit weight quantization, even with 6-bit activation quantization, achieving up to a 29.2% increase in average zero-shot accuracy. However, in W4A4 quantization, the limited bitwidth for activations ruins the functionality of LLM with coarse-grained quantization. From this observation, we conclude that fine-grained activation quantization is necessary for extremely low-bit cases, such as 4-bit.
---
**Q3.** Additional comparisons with state-of-the-art (SOTA) LLM quantization methods. (Reviewers BgAS, PpZ5)
**A3.** We truly appreciate the references that reviewers recommended for the SOTA LLM quantization methods. Because our methods are simple to integrate and orthogonal to LLM quantization techniques (e.g., custom matrix multiplication), one can directly plug in our QFeM or QFeP, or even both. Among the recommended SOTA methods, we tested Atom [2] and OmniQuant [3] with our methods by evaluating the perplexity of WikiText-2 and C4 datasets.
| Model | #Bits | Method | WikiText-2 | C4 |
|-------------|-------|---------------|:----------:|:------:|
| LLaMA-2-7B | FP16 | - | 5.268 | 7.013 |
| LLaMA-2-7B | W4A4 | Atom | 5.710 | 7.601 |
| LLaMA-2-7B | W4A4 | $\quad$+QFeM | 5.634 | 7.538 |
| LLaMA-2-7B | W4A4 | $\quad$+QFeP | 5.685 | 7.493 |
| LLaMA-2-7B | W4A4 | $\quad$+QFeM+QFeP | **5.607** | **7.442** |
| LLaMA-2-7B | W4A4 | OmniQuant | 14.208 | 19.005 |
| LLaMA-2-7B | W4A4 | $\quad$+QFeM | 9.483 | 13.569 |
| LLaMA-2-7B | W4A4 | $\quad$+QFeP | 11.926 | 15.796 |
| LLaMA-2-7B | W4A4 | $\quad$+QFeM+QFeP | **8.818** | **12.244** |
| LLaMA-2-13B | FP16 | - | 4.789 | 6.518 |
| LLaMA-2-13B | W4A4 | Atom | 5.081 | 6.878 |
| LLaMA-2-13B | W4A4 | $\quad$+QFeM | **5.071** | **6.854** |
| LLaMA-2-13B | W4A4 | $\quad$+QFeP | 5.089 | 6.872 |
| LLaMA-2-13B | W4A4 | $\quad$+QFeM+QFeP | 5.073 | 6.855 |
| LLaMA-2-13B | W4A4 | OmniQuant | 10.416 | 14.103 |
| LLaMA-2-13B | W4A4 | $\quad$+QFeM | **9.499** | 12.348 |
| LLaMA-2-13B | W4A4 | $\quad$+QFeP | 10.808 | 13.584 |
| LLaMA-2-13B | W4A4 | $\quad$+QFeM+QFeP | 9.901 | **12.294** |
Note that we follow their original implementation and parameter settings (e.g., fine-grained group quantization in Atom) during evaluation. As highlighted in bold in the table above, our methods are compatible with SOTA quantization methods (Atom and OmniQuant) and enhance their quantization performance, especially for OmniQuant (and W4A4 OSP in Q1).
---
**references:**
[1] Yao, Zhewei, et al. "Zeroquant: Efficient and affordable post-training quantization for large-scale transformers." Advances in Neural Information Processing Systems 35 (2022): 27168-27183.
[2] Zhao, Yilong, et al. "Atom: Low-bit quantization for efficient and accurate llm serving." Proceedings of Machine Learning and Systems 6 (2024): 196-209.
[3] Shao, Wenqi, et al. "Omniquant: Omnidirectionally calibrated quantization for large language models." arXiv preprint arXiv:2308.13137 (2023).
Pdf: /pdf/5e6e14dce7a811abea832142c5aa6d607a1ef11a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models | Accept (poster) | Summary: This work proposes self-evolution decoding, a method to improve LM's factuality without using external knowledge or fine-tuning data. Specifically, the differences between each layer’s logits and the final layer’s logits are utilized to approximate the gradient, which are further used to estimate the inner knowledge of LMs. Finally, the estimated "inner" distribution is utilized to adjust the model's final outputs. Experiments on various tasks show the effectiveness of the proposed method.
Strengths: - The paper is generally easy to understand.
- The proposed method is shown to provide good results on a variety of tasks.
Weaknesses: - I think some of the approximations might be questionable. First, in Section 2.2, I'm not sure whether it is reasonable to use the logit differences to estimate the gradient: logits are unconstrained, while the scale of the gradients are constrained within 1. Moreover, in Section 2.3, I'm not sure why the estimations are broken-down for each item in the vocabulary and why the weights for different layers are directly aggregated and normalized with cosine similarities. More ablation studies should be provided to verify some of these choices.
Technical Quality: 2
Clarity: 3
Questions for Authors: - I'm wondering if there can be good methods to evaluate LM's factuality in more realistic open-ended generation tasks, which are the main scenarios where we hope to improve LM's factuality and consistency.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Limitations have been discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer
Thank you so much for taking the time to provide your feedback. Your comments and suggestions are invaluable to us. We appreciate the opportunity to address your concerns regarding the approximation used in our approach, and we are also including additional results to support our methodology.
> **"Q: logits are unconstrained, while the scale of the gradients are constrained within 1"**
Thank you for your insightful comment regarding the unconstrained nature of the logits $\mathcal{L}^{(n)} - \mathcal{L}^{(N)}$ and the constrained nature of the gradients $(p_1 - t_1, p_2 - t_2, ..., p_i - t_i, ..., p_d - t_d)$. We appreciate the opportunity to address your concern.
In fact, this is an important consideration in our methodology. Our approach does not rely on the magnitudes of $\mathcal{L}^{(n)} - \mathcal{L}^{(N)}$; rather, it utilizes this difference to approximate the direction of the gradient vector. Specifically, we do not employ the expression:
$$\mathcal{L}^{(n)} - \mathcal{L}^{(N)} \approx (p_1 - t_1, p_2 - t_2, ..., p_i - t_i, ..., p_d - t_d)$$
to directly estimate $(t_1, t_2, ..., t_i, ..., t_d)$.
Instead, our method involves increasing the cosine similarity between $\mathcal{L}^{(n)} - \mathcal{L}^{(N)}$ and the vector $(p_1 - t_1, p_2 - t_2, ..., p_i - t_i, ..., p_d - t_d)$:
$$CosineSimilarity[\mathcal{L}^{(n)} - \mathcal{L}^{(N)}, (p_1 - t_1, p_2 - t_2, ..., p_i - t_i, ..., p_d - t_d)]$$
to estimate $(t_1, t_2, ..., t_i, ..., t_d)$.
This approach aligns the directions of the gradients rather than their magnitudes, thereby addressing the issue of their different scales.
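A minimal toy sketch of this direction-only use of the logit difference (the step size `eta`, the clipping, and the renormalization below are our illustrative choices, not the paper's exact procedure):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def estimate_inner(logits_final, logits_early, eta=1.0):
    # Use only the *direction* of the logit difference L^(n) - L^(N)
    # as an approximate gradient; its raw magnitude is unconstrained,
    # so we normalize it to unit norm first.
    p = softmax(logits_final)
    diff = logits_early - logits_final
    direction = diff / (np.linalg.norm(diff) + 1e-12)
    # One descent-style step p -> p - eta * direction, then project
    # back onto the simplex so the result is a distribution.
    t = np.clip(p - eta * direction, 0.0, None)
    return t / t.sum()

rng = np.random.default_rng(0)
logits_final = rng.normal(size=32)
logits_early = rng.normal(size=32)
t_hat = estimate_inner(logits_final, logits_early)
assert np.isclose(t_hat.sum(), 1.0) and (t_hat >= 0).all()
```

The point of the sketch is only that the update depends on the direction of the logit difference, never on its scale.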
> **"Q: why the estimations are broken-down for each item in the vocabulary"**
Thank you for your inquiry. We are happy to clarify your concerns. Given that the vocabulary of a typical large language model (LLM) often exceeds 10,000 items (d > 10k), the $(t_1, t_2, ..., t_i, ..., t_d)$ we need to estimate constitute a high-dimensional vector. Estimating this vector in its entirety for each vocabulary item simultaneously would involve significant computational overhead.
By breaking down the estimation process, we can focus only on the possible words in the vocabulary (that have the top-k highest probability in the original output) and then estimate their corresponding $t_i$ one by one. This selective approach reduces computational costs, as well as avoids the noise introduced by less significant words.
Therefore, this breakdown is strategically designed to minimize computational load. For the sake of conciseness of the presentation, we move the discussion of computational considerations to Section 2.4 and still use d instead of k here.
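A small sketch of the top-k selection that motivates this breakdown (the value of `k` and the helper name are illustrative):

```python
import numpy as np

def topk_candidates(logits, k=10):
    # Restrict the per-token estimation to the k most probable
    # words instead of the full (>10k) vocabulary, cutting cost
    # and avoiding noise from negligible tokens.
    idx = np.argpartition(logits, -k)[-k:]          # top-k, unsorted
    return idx[np.argsort(logits[idx])[::-1]]       # sorted descending

rng = np.random.default_rng(0)
logits = rng.normal(size=32000)  # vocabulary-sized logit vector
cand = topk_candidates(logits, k=10)
assert len(cand) == 10
assert logits[cand[0]] == logits.max()
```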
>**”Q: why aggregated and normalized”**
**Normalization** is applied individually to each layer $n$ as described in line 115. This is necessary because the vector components:
$(\bar{t}^{(n)}_1, \bar{t}^{(n)}_2, ..., \bar{t}^{(n)}_i, ..., \bar{t}^{(n)}_d)$ do not inherently sum to one. Therefore, we should normalize these components to achieve an estimation, $\bar{t}^{(n)}$, of the inner distribution for each layer n.
**Aggregations** are conducted across the $\{\bar{t}^{(n)}\}$ for all the layers.
Unlike direct aggregation, combining the normalized $\bar{t}^{(n)}$ obtained for each layer is not straightforward. The weights $w^{(n)} = \sum_i^d (\bar{t}^{(n)}_i)$ in line 116 indicate the degree to which the logits $\mathcal{L}^{(n)} - \mathcal{L}^{(N)}$ and the normalized vectors $(p_1 - \bar{t}^{(n)}_1, p_2 - \bar{t}^{(n)}_2, ..., p_i - \bar{t}^{(n)}_i, ..., p_d - \bar{t}^{(n)}_d)$ are well-aligned in terms of cosine similarity. A larger weight suggests that the estimations from $\bar{t}^{(n)}$ are more reliable, and thus, we assign such layers greater weight in the aggregation process. Conversely, layers with smaller weights are assigned lesser importance in the aggregation.
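Assuming the per-layer raw estimates $\bar{t}^{(n)}$ have already been computed, the normalization and weighted aggregation described above can be sketched as follows (a simplified illustration, not the released implementation):

```python
import numpy as np

def aggregate_layers(t_bars):
    # t_bars: list of per-layer raw estimates (non-negative vectors
    # whose components do not inherently sum to one).
    weights, normalized = [], []
    for t_bar in t_bars:
        w = t_bar.sum()                          # reliability weight w^(n)
        normalized.append(t_bar / (w + 1e-12))   # per-layer normalization
        weights.append(w)
    weights = np.array(weights)
    weights = weights / weights.sum()            # normalize across layers
    # Layers whose estimates align better (larger w^(n)) count more.
    return sum(w * t for w, t in zip(weights, normalized))

rng = np.random.default_rng(0)
t_bars = [rng.uniform(size=16) for _ in range(4)]
t_agg = aggregate_layers(t_bars)
assert np.isclose(t_agg.sum(), 1.0)  # the aggregate is a distribution
```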
We include a new ablation study to support our proposed methods following your suggestion. In one such study, we deviated from our established processes by crudely scaling the $\mathcal{L}^{(n)} - \mathcal{L}^{(N)}$, and simply averaging these scaled differences across different layers, a method we denote as "ablation1." It bypasses the steps of breakdown, normalization, and aggregation that we have discussed above. The results show that our method achieves better results.
| | Factor | TruthfulQA(MC1) | TruthfulQA(MC2) | TruthfulQA(MC3) |
|-----------------------------|-----------|---------------|---------------|---------------|
| llama2-7B-chat + ablation1 | 62.73 | 33.66 | 39.83 | 31.47 |
| + SED | **65.16** | **37.08** | **63.86** | **32.90** |
| llama2-13B-chat + ablation1 | 66.29 | 37.33 | 45.0 | 31.98 |
| + SED | **67.06** | **37.09** | **63.75** | **32.60** |
>**"Q: more realistic open-ended generation tasks"**
Thank you for this valuable suggestion. We have conducted additional experiments on more realistic open-ended generation datasets: HotpotQA, Natural Questions (NQ), and TriviaQA. We adopt additional evaluation metrics, Exact Match (EM) and F1.
| Model | HotpotQA EM | HotpotQA F1 | NQ EM | NQ F1 | Trivia EM | Trivia F1 |
|-----------------|:-----------:|:-----------:|:-----:|:-----:|:---------:|:---------:|
| Llama 2 7B chat | 19.6 | 20.1 | 21.8 | 20.4 | 44.4 | 44.3 |
| + DoLa | 20.4 | 21.3 | 23.5 | 21.5 | 45.2 | 45.3 |
| + SED (ours) | **20.9** | **21.5** | **24.4** | **22.2** | **47.6** | **46.3** |
| Llama 2 13B chat| 23.8 | 21.7 | 33.1 | 28.9 | 63.0 | 60.9 |
| + DoLa | 24.5 | 23.2 | 33.1 | 28.9 | 63.2 | 61.5 |
| + SED (ours) | **25.0** | **24.5** | **34.6** | **31.6** | **63.3** | **62.2** |
The results show that our method improves performance on more realistic open-ended generation tasks.
Sincerely,
Authors
---
Rebuttal 2:
Title: Reviewer, please take a look at author response.
Comment: Hello Reviewer 1uAz,
Please take a moment to read and acknowledge the author's response to your review.
Thanks, Area Chair
---
Rebuttal 3:
Title: Dear Area Chair and Reviewer
Comment: Dear Area Chair,
**Thank you so much for your time and efforts in facilitating the review and discussion process. We really appreciate it.**
Dear Reviewer,
We sincerely thank you for reviewing and discussing our paper. Your suggestions have been incredibly helpful in enhancing our work. In our rebuttal, we have provided detailed explanations addressing your concerns, including more details on methodology, additional ablation studies, and further results on more realistic open-ended generation scenarios. **We will definitely incorporate all the discussions and new results in our revision. Thank you so much for your valuable suggestions! Should you have any questions, whether about the methodology or if you need further explanations or additional results, do not hesitate to raise them. We are committed to resolving any issues to your satisfaction. We understand that you are very busy, so we deeply appreciate your time and effort. Thank you so much!**
(We have noticed that sometimes the formulas in our rebuttal may not display correctly on OpenReview. A refresh of the browser may resolve them. However, if you continue to face difficulties or need more detailed explanations of these formulas, please let us know. We are prepared to provide all necessary support to make our research clear and understandable.)
Sincerely,
Authors
---
Rebuttal Comment 3.1:
Comment: I thank the authors for the responses and the extra results. Some of my concerns have been resolved and I have adjusted my scores correspondingly. Nevertheless, I still keep the borderline decision, especially after reading other reviewers’ comments, and I think it seems that many concerns still remain.
---
Rebuttal 4:
Title: Dear Reviewer
Comment: Dear Reviewer,
**Firstly, we sincerely thank you for acknowledging our responses and the additional results we provided. Your adjusted score and your valuable suggestions are greatly appreciated and motivate us to improve our manuscript further. Thank you so much!**
**Respectfully**, we wish to further alleviate your concerns by reiterating that we have addressed all concerns raised by the other reviewers. Unfortunately, we have not yet received feedback from some reviewers on our rebuttal, but this absence does not imply that we have not addressed the concerns. For instance:
1. As highlighted by Reviewer 12rF, we have successfully addressed the concerns regarding additional computational costs and further theoretical analysis. Considering that Reviewer Mgbd also raised similar issues, we have resolved Reviewer Mgbd's main concerns as well.
2. Although error bars and statistical significance were not discussed in the most relevant literature [1,2,3,4], following Reviewer Mgbd’s advice, we still provided those discussions to show our method’s superiority.
3. We have included more detailed explanations of our methodology to clarify aspects that were previously questioned.
**Respectfully,** we disagree with Reviewer Xg4n’s perspective. While we understand the concerns, it is standard practice for most papers to include detailed explanations and new results during the rebuttal phase. **Considering that we have managed to complete the primary rebuttal within the 6000-character limit set by NeurIPS, we believe our revisions are not so significant as to warrant the rejection of our paper.** This is consistent with common academic standards and does not deviate from what is typically expected in similar submissions, as evidenced by past proceedings of the conference.
**We understand that you are very busy, and we deeply appreciate your time and effort. Thank you so much! Respectfully, we hope not to leave you with the impression that we have not addressed other reviewers' concerns, considering that we have indeed provided detailed explanations and additional results, and received positive feedback from Reviewer 12rF.**
Thank you once again for your valuable suggestions and guidance! We really appreciate it!
Sincerely,
Authors
[1] Yung-Sung Chuang et al., "Dola: Decoding by contrasting layers improves factuality in large language models," 2024.
[2] Kenneth Li et al., "Inference-time intervention: Eliciting truthful answers from a language model," 2023.
[3] Shiqi Chen et al., "In-context sharpness as alerts: An inner representation perspective for hallucination mitigation," 2024.
[4] Yue Zhang et al., "Alleviating hallucinations of large language models through induced hallucinations," 2023. | Summary: In this work, the authors present a decoding method called Self-Evolution Decoding (SED) to enhance factuality. During the decoding process, SED first estimates the “inner knowledge distribution,” representing the knowledge the model “knows,” by analyzing the difference between the top-layer logits and intermediate layers. This estimated inner knowledge distribution is then used to adjust the LLM output logits, steering the model’s behavior towards greater factuality. Experiments across several datasets and LLMs demonstrate that SED significantly improves the LLM’s factuality.
Strengths: Originality: The approach of narrowing the gap between model generation and internal knowledge is innovative.
Significance: Enhancing the factuality of LLMs is a crucial research problem, and the positive results demonstrate the effectiveness of the proposed method.
Weaknesses: 1. The motivation for this method is somewhat problematic. In section 2.2, the authors claim that the initial layers predominantly encode “lower-level” information while the later layers capture more “semantic” information. However, this does not imply that their difference is a good approximation of the gradient of KL divergence. If you believe this approximation is accurate, why not directly apply this approximated gradient to Equation 3?
2. The experiments primarily focus on task performance but lack an investigation into how this method works. For example, in lines 145-147, the authors claim that the approximated P_inner cannot be directly used as it is not perfect. I believe presenting experimental results would be more convincing than a verbal analysis.
3. The paper is not well-structured, and the concepts are confusing. Please refer to the questions below for more details.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. The notion of logits \mathcal{L} is confusing, which is normally used to represent loss function.
2. Notation is not consistent. In line 69, P refers to the probability distribution, where in line 71 P_L denotes the logits distribution. Is this a typo?
3. Equation 2 is confusing, I initially thought it represented L^n - L^N introduced in Line 90, but there is no description of this equation. I spent quite a while understanding that it is just a derivation of the gradient of the KL function and is not related to line 90.
4. In Line 107, "In this context" and "In this formulation" are duplicated.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The authors acknowledge that the approximated P_inner is not perfect, but they do not provide sufficient experimental results to support this claim. Including more detailed experiments that explore the accuracy and limitations of this approximation would strengthen their argument.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you so much for taking the time to provide your feedback. Your comments and suggestions are invaluable to us, especially regarding our methodologies and presentations. We appreciate this opportunity to address your concerns.
> **"However, this does not imply that their difference is a good approximation of the gradient of KL divergence. If you believe this approximation is accurate, why not directly apply this approximated gradient to Equation 3?"**
**First Issue: Suitability of the Approximation**
We are grateful for your insightful comments and are eager to further explain this aspect. Regarding why we interpret the difference $\mathcal{L}^{(n)} - \mathcal{L}^{(N)}$ as a suitable approximation for the gradient of the KL divergence, the core reason lies in our claim that the KL divergence from the real distribution to $\mathcal{P}_{\mathcal{L}}^{(N)}$ is smaller than that to $\mathcal{P}_{\mathcal{L}}^{(n)}$. Thus, $\mathcal{L}^{(n)} - \mathcal{L}^{(N)}$ serves as an approximate gradient.
We can verify this claim directly because:
$$
KL(P_{real} \parallel P_{L}) = CE(P_{real}, P_{L}) - H(P_{real})
$$
Here, CE represents cross-entropy and H denotes entropy. By comparing the cross-entropy across different layers, as illustrated in Figure 3 (which was unfortunately not referenced in our original manuscript), we find that the final layer exhibits a smaller loss value compared to earlier layers. This observation makes sense because, as discussed, the final layer directly engages with real-world labels through cross-entropy during training, making it more accurate. Additionally, as you mentioned, the final layer contains more “semantic” information, making it closer to the real-world labels.
Thus, $\mathcal{L}^{(n)} - \mathcal{L}^{(N)}$ is a good approximation of the gradient of $KL(P_{real} \parallel P_{L})$. When estimating the gradient of $KL(P_{inner} \parallel P_{L})$, as per Equation 3, this gradient should be close to that of $KL(P_{real} \parallel P_{L})$ to benefit the decoding. Hence, we utilize $\mathcal{L}^{(n)} - \mathcal{L}^{(N)}$ as the source for approximation.
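As a quick numerical sanity check of the identity $KL(p \parallel q) = CE(p, q) - H(p)$, which implies that, with $H(P_{real})$ fixed, ordering layers by cross-entropy is the same as ordering them by KL divergence (the distributions below are random stand-ins, not model outputs):

```python
import numpy as np

def entropy(p):
    return -np.sum(p * np.log(p))

def cross_entropy(p, q):
    return -np.sum(p * np.log(q))

def kl(p, q):
    return np.sum(p * np.log(p / q))

rng = np.random.default_rng(0)
p_real = rng.dirichlet(np.ones(8))    # stand-in for the real distribution
q_final = rng.dirichlet(np.ones(8))   # stand-in for the final layer's P_L
q_early = rng.dirichlet(np.ones(8))   # stand-in for an earlier layer's P_L

# KL(p || q) = CE(p, q) - H(p): since H(p_real) is the same constant for
# every layer, a smaller cross-entropy directly implies a smaller KL.
for q in (q_final, q_early):
    assert np.isclose(kl(p_real, q), cross_entropy(p_real, q) - entropy(p_real))
```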
**Second Issue: Application to Equation 3**
Regarding why we do not directly apply $\mathcal{L}^{(n)} - \mathcal{L}^{(N)}$ in Equation 3, it is important to consider that while $\mathcal{L}^{(n)} - \mathcal{L}^{(N)}$ is unconstrained, the gradients estimated in Equation 2 (e.g., $p_1 - t_1, p_2 - t_2, ..., p_i - t_i, ..., p_d - t_d)$ are constrained within 1. Thus, direct substitution could lead to a mismatch in magnitudes. Proper normalization and subsequent aggregation of estimations from different layers are precisely what our method addresses in Section 2.3. Our approach does not naively scale to normalize; it provides a more interpretable and computationally efficient method, aligning the directions of the gradients rather than their magnitudes to address their different scales.
To further address your concerns, we include a new ablation study by directly scaling the $\mathcal{L}^{(n)} - \mathcal{L}^{(N)}$, and simply averaging these scaled differences across different layers, a method we denote as "ablation1" in the following table
> **"The authors claim that the approximated P_inner cannot be directly used as it is not perfect. I believe presenting experimental results would be more convincing than a verbal analysis.**
Thank you so much for your suggestion. We include the corresponding ablation study following your suggestion and denote it as "ablation2". In this study, we directly use $P_{inner}$
| | Factor | TruthfulQA(MC1) | TruthfulQA(MC2) | TruthfulQA(MC3) |
|-----------------------------|-----------|---------------|---------------|---------------|
| llama2-7B-chat + ablation1 | 62.73 | 33.66 | 39.83 | 31.47 |
| + ablation2 | 63.59 | 25.21 | 51.09 | 26.25 |
| + SED | **65.16** | **37.08** | **63.86** | **32.90** |
| llama2-13B-chat + ablation1 | 66.29 | 37.33 | 45.0 | 31.98 |
| + ablation2 | 66.70 | 27.05 | 52.72 | 28.46 |
| + SED | **67.06** | **37.09** | **63.75** | **32.60** |
> **"The notion of logits $\mathcal{L}$"**
We appreciate this suggestion. To avoid confusion, we will replace it in our revision.
> **"In line 69, Is this a typo?"**
We apologize for this inconsistency. Yes, it was a typo. In our revision, we will clarify that $P_L$ in line 71 indeed refers to the probability distribution derived from the logits, maintaining consistency throughout the document.
> **"Equation 2 is confusing"**
We apologize for any confusion caused by Equation 2. We have removed the subscript $L$ from $\mathcal{P}_{\mathcal{L}}$ to clarify that it represents a general probability distribution, not specifically linked to 'logits' as previously implied. Additional explanations will be added to elucidate that this equation is a derivation of the gradient of the KL divergence function and is unrelated to the discussions around line 90.
> **"In Line 107, 'In this context' and 'In this formulation' duplicate."**
Thank you for pointing out the redundancy. We will revise this part to enhance clarity and avoid duplication.
Lastly, we would like to express our heartfelt gratitude for the time and effort you have dedicated to reviewing our paper. We deeply appreciate your guidance and assure that all new results and findings will be included during our revision. Thank you!
Sincerely,
Authors
---
Rebuttal 2:
Comment: Dear Reviewer,
Thank you so much for your time and engagement in our discussion. We noticed an issue with the display of mathematical symbols in the "First Issue: Suitability of the Approximation" section of our rebuttal. Therefore, we would like to re-explain with the correct display to ensure it is easy to read.
> **"However, this does not imply that their difference is a good approximation of the gradient of KL divergence. If you believe this approximation is accurate, why not directly apply it to Equation 3?"**
**First Issue: Suitability of the Approximation**
Regarding why we interpret the difference $\mathcal{L}^{(n)} - \mathcal{L}^{(N)}$ as a suitable approximation for the gradient of the KL divergence, the core reason lies in our claim that the “KL divergence from the real distribution to the final layer's logits distribution” is smaller than the “KL divergence from the real distribution to earlier layers' logits distributions”, which means
$$KL(P_{real} \parallel P_{\mathcal{L}^{(N)}}) < KL(P_{real} \parallel P_{\mathcal{L}^{(n)}}).$$
Thus, based on our experience with gradient descent algorithms, we adopt $\mathcal{L}^{(n)} - \mathcal{L}^{(N)}$ to serve as an approximation for the gradient direction.
We think the above claim makes sense because we notice that the cross-entropy satisfies $CE(P_{real},P_{\mathcal{L}^{(N)}}) < CE(P_{real}, P_{\mathcal{L}^{(n)}})$. Then, based on the following relationship between the KL divergence and the cross-entropy ($H$ denotes entropy):
$$
KL(P_{real} \parallel P_{\mathcal{L}}) = CE(P_{real}, P_{\mathcal{L}}) - H(P_{real}),
$$
we can derive $KL(P_{real} \parallel P_{\mathcal{L}^{(N)}}) < KL(P_{real} \parallel P_{\mathcal{L}^{(n)}})$.
As for the reason why $CE(P_{real},P_{\mathcal{L}^{(N)}}) < CE(P_{real},P_{\mathcal{L}^{(n)}})$, first we verify this by empirically comparing the cross-entropy across different layers. As illustrated in Figure 3 (we will add the missing references in our revised manuscript), we find that the final layer exhibits a smaller CE loss value compared to earlier layers. This observation makes sense because, as discussed, the final layer **directly** engages with real-world labels through cross-entropy during training, making it more accurate. Additionally, as you mentioned, the final layer contains more “semantic” information, making it closer to the real-world labels.
Based on the above discussion, we think $\mathcal{L}^{(n)} - \mathcal{L}^{(N)}$ is a good approximation of the gradient of $KL(P_{real} \parallel P_{\mathcal{L}^{(N)}})$. When estimating the gradient of $KL(P_{inner} \parallel P_{\mathcal{L}^{(N)}}) $ in Equation 3, this gradient should be close to the gradient of the $KL(P_{real} \parallel P_{\mathcal{L}^{(N)}}) $ to benefit the decoding because the inner knowledge $P_{inner}$ should be close to real-world knowledge to avoid making errors. Hence, we utilize $\mathcal{L}^{(n)} - \mathcal{L}^{(N)}$ as the source for approximation.
**Second Issue: Application to Equation 3**
Regarding why we do not directly apply $\mathcal{L}^{(n)} - \mathcal{L}^{(N)}$ in Equation 3, it is important to consider that while $\mathcal{L}^{(n)} - \mathcal{L}^{(N)}$ is unconstrained, the gradients of KL divergence (e.g., $p_1 - t_1, p_2 - t_2, ..., p_i - t_i, ..., p_d - t_d$ in Equation 2 ) are constrained within 1. Thus, direct substitution could lead to a mismatch in magnitudes. Proper normalization and subsequent aggregation of estimations from different layers are exactly what our method addresses in Section 2.3. Our approach does not naively scale to normalize; it provides a more interpretable and computationally efficient method, aligning the directions of the gradients rather than their magnitudes to address their different scales.
To further address your concerns, we include a new ablation study by directly scaling the $\mathcal{L}^{(n)} - \mathcal{L}^{(N)}$, and simply averaging these scaled differences across different layers, a method we denote as "ablation1" in the following table. The result shows the direct application of $\mathcal{L}^{(n)} - \mathcal{L}^{(N)}$ in Eq 3 is not as effective as our method.
| | Factor | TruthfulQA (MC1) | TruthfulQA (MC2) | TruthfulQA (MC3) |
|-----|-----|------|------|-----|
| llama2-7B-chat + ablation1 | 62.73 | 33.66 | 39.83 | 31.47|
| + SED | **65.16** | **37.08** | **63.86** | **32.90**|
| llama2-13B-chat + ablation1 | 66.29 | **37.33** | 45.0 | 31.98|
| + SED | **67.06** | 37.09 | **63.75** | **32.60** |
**Lastly, we sincerely thank you for reviewing and discussing our paper. Your valuable suggestions have greatly enhanced our methodology presentation, particularly the ablation studies, which make our paper more comprehensive. We deeply appreciate your guidance and will incorporate all the above discussion during our revision. Should you have any further comments or questions, feel free to raise them. We are committed to addressing any concerns.**
Sincerely,
Authors
Title: Supplementary explanation with the display of easy-to-read mathematical symbols
---
Rebuttal Comment 2.1:
Comment: Thank you for the detailed response, which addresses my concerns. As a result, I have slightly increased my assessment. However, I still believe this paper requires significant editing to include the necessary discussions and further polishing.
---
Rebuttal 3:
Title: Reviewer, please take a look at author response.
Comment: Hello Reviewer Xg4n,
Please take a moment to read and acknowledge the author's response to your review.
Thanks, Area Chair
---
Rebuttal 4:
Title: Dear Area Chair and Reviewer
Comment: Dear Area Chair,
**Thank you so much for your time and efforts in facilitating the review and discussion process. We really appreciate it.**
Dear Reviewer,
We sincerely thank you for reviewing and discussing our paper. Your suggestions have been incredibly constructive in enhancing our work. In our rebuttal, we have provided detailed explanations addressing your concerns, including further clarification on the motivation and presentation of our methodology, and additional ablation studies to support our methods, following your valuable suggestions. **We will definitely incorporate all the discussions and new results in our revision. Thank you so much for your valuable suggestions! Should you have any questions, whether about further explanations, additional results, or any other aspects that you find unclear, do not hesitate to raise them. We are committed to resolving any issues to your satisfaction. We understand that you are very busy, so we deeply appreciate your time and effort.**
(We have noticed that sometimes the formulas in our rebuttal may not display correctly on OpenReview. A refresh of your browser may resolve them. However, if you continue to face difficulties or need more detailed explanations of these formulas, please let us know. We are prepared to provide all necessary support to make our research clear and understandable.)
Sincerely,
Authors
---
Rebuttal 5:
Title: Dear Reviewer
Comment: Dear Reviewer,
**Firstly, we are very grateful for your time and timely response.** We are pleased to have addressed most of your concerns. **Respectfully**, we wish we could further address your concerns regarding the extent of modifications required.
1. **Methodology:** We will integrate key formulas and discussions seamlessly into the current content, ensuring that these crucial elements are highlighted effectively.
2. **New Results and Ablation Studies:** We will prioritize the enhancements you suggested by emphasizing these new results and moving less critical details to the appendix.
3. **Feedback from Other Reviewers:** We will ensure that the most important results are included in the main text. If space limitations necessitate placing some results in the appendix, we will ensure they are clearly cited in the main text, providing explicit references to their exact locations in the appendix.
**Respectfully**, considering that we can complete the primary rebuttal within the 6000-character limit set by NeurIPS and that most papers require presenting more detailed discussions and results in their rebuttals, **we hope to assure you that the extent of the modifications will be manageable and not as significant as perceived.**
**Unfortunately, we are unable to update the file with the latest edits during the rebuttal and discussion period. Should you have any further concerns, we can attempt to show the edited sections, especially the methodology part, directly in the "Official Comments" here to further resolve your concerns. We really appreciate your understanding.**
Thank you once again for your time and efforts. We truly appreciate it and look forward to resolving your concerns further.
Sincerely,
Authors | Summary: This paper introduces Self-Evolution Decoding (SED), a novel decoding strategy aimed at enhancing the factual accuracy of large language models (LLMs) without the need for external knowledge bases or additional fine-tuning. SED optimizes the outputs of LLMs by refining the logits from the final layer through the inherent self-evolution of hidden states, effectively reducing hallucinations and refocusing the probability mass on factual responses. Evaluations on several benchmarks, including TruthfulQA and FACTOR, demonstrate that SED outperforms existing methods like DoLa, achieving up to a 10% improvement in factual accuracy. The method is also compatible with other factuality-enhancing techniques, further boosting their effectiveness. While the empirical results are promising, the paper notes some computational overhead and calls for further theoretical analysis to better understand the mechanisms behind SED's success.
Strengths: - __Novelty__: The paper presents a novel decoding strategy, SED, which improves the factual accuracy of large language models without requiring additional fine-tuning or external knowledge bases. This approach fills a crucial gap in the existing methodologies for improving the reliability and truthfulness of LLM outputs.
- __Comprehensive evaluation__: The effectiveness of SED is validated across multiple benchmarks such as TruthfulQA, FACTOR, StrategyQA, and GSM8K. The results show that SED outperforms existing methods like DoLa and other baseline strategies, demonstrating significant improvements in factual accuracy and overall performance.
- __Compatibility__: A key strength of SED is its compatibility with other factuality-enhancing methods. The paper demonstrates how SED can be integrated with methods like Inference-Time Intervention and Activation Decoding.
Weaknesses: - __Lack of theoretical analysis__: While the empirical results are robust, the paper lacks a rigorous theoretical analysis to explain why SED improves the factual accuracy of LLMs. A better understanding of the underlying mechanics and theoretical justification for the approach would strengthen the contribution.
Experimental Reproducibility and Statistical Significance:
- __Computation efficiency__: Although SED improves factual accuracy, this comes at the cost of increased computational complexity compared to methods like DoLa. The paper mentions that SED operates slightly slower, which could be a drawback for applications requiring real-time performance. Further benchmarking on computational costs and scalability would be useful for assessing the practical applicability of SED.
Technical Quality: 3
Clarity: 3
Questions for Authors: Could you provide more insights or theoretical justifications for the functioning and effectiveness of SED? Are there any specific properties of the inner knowledge distribution (P_{inner}) that you believe contribute to the success of SED?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Computational inefficiency is the biggest limitation of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Title: Rebuttal by Authors
Comment: **Dear Reviewer,**
Thank you very much for your time and supportive comments. We appreciate your suggestions and are committed to improving our paper to meet your expectations.
> **"A better understanding of the underlying mechanics and theoretical justification for the approach would strengthen the contribution."**
We are grateful for your advice and plan to incorporate the following analyses to enhance the understanding of our approach:
The principal insight is that pre-trained LLMs exhibit variations in token distributions across different layers, particularly when comparing the output layer with the earlier layers. We have discovered that contrasting the early layers with the final layer can yield a more factual distribution over specific tokens.
We provide a demonstration in the attached PDF to further reveal how the SED mechanism benefits from this approach.
1. By contrasting the final layer with each of the early layers, we estimate an inner distribution: $$ \bar{t}^{(n)} = \frac{1}{w^{(n)}} (\bar{t}^{(n)}_1, \bar{t}^{(n)}_2, ..., \bar{t}^{(n)}_i, ..., \bar{t}^{(n)}_d) $$ This is more precise and can be demonstrated by comparing Figure 1(a) and Figure 2(a) in the attached PDF. Our analysis in Section 2.4, Question 1, also delves deeper into this aspect. For most early layers, the estimated inner distribution tends to assign a higher probability to the correct tokens. However, DoLa's estimates are imprecise, leading to a heavy reliance on the selection of candidate layers. As shown in Figures 2(a) and 2(b), DoLa's choice to contrast the final layer with the zeroth layer results in both incorrect and correct tokens having the same probability.
2. For different layer estimates, when we ensemble different layers, we do not simply average them. Instead, our SED method determines weights by calculating the cosine similarity, identifying layers with imprecise estimates. Thus, their influence is diminished, as illustrated in Figures 1(a) and 1(b), where the weights are reduced for layers that misestimate the inner distribution.
3. Our approach integrates the inner distribution with the original distribution, as opposed to DoLa's method, which simply replaces the original distribution with the inner one. This integration is discussed in Section 2.4, Question 2.
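The cosine-similarity weighting in point 2 can be illustrated with a toy example (the per-layer distributions below are invented numbers, and the reference vector used for similarity is a stand-in; the actual method operates on the model's layer-wise estimates as described in the paper):

```python
import numpy as np

# Hypothetical per-layer estimates of the inner distribution over 5 candidate
# tokens; the "correct" token is at index 1.
layer_estimates = np.array([
    [0.10, 0.60, 0.10, 0.10, 0.10],   # layer A: peaks on the correct token
    [0.15, 0.55, 0.10, 0.10, 0.10],   # layer B: similar, slightly noisier
    [0.40, 0.10, 0.40, 0.05, 0.05],   # layer C: misestimates the inner distribution
])

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Weight each layer by cosine similarity to a reference estimate (here, the
# mean across layers), so outlier layers like C are down-weighted relative to
# a naive average.
reference = layer_estimates.mean(axis=0)
weights = np.array([cos(est, reference) for est in layer_estimates])
weights = weights / weights.sum()

ensembled = weights @ layer_estimates
naive = layer_estimates.mean(axis=0)

# The similarity-weighted ensemble assigns more mass to the correct token
# than the naive average does.
print(ensembled[1], naive[1])
```

The imprecise layer receives the smallest weight, so its influence on the ensemble is diminished, mirroring the behavior shown in Figures 1(a) and 1(b).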
> **"Further benchmarking on computational costs and scalability would be useful for assessing the practical applicability of SED. "**
Thank you so much for your suggestions. We have benchmarked our method against baseline models and DoLa; our approach does not significantly increase computational time (less than a 10% increase).
| Model\&Methods | DoLa | SED (topk=5) | SED (topk=20) | SED (topk=50) |
|----------------|--------|--------------|---------------|---------------|
| LLaMA-2-7B | 29.93 | 30.41 | 31.15 | 32.70 |
| LLaMA-2-13B | 39.57 | 39.61 | 41.14 | 43.30 |
| LLaMA-2-70B | 136.42 | 138.33 | 140.24 | 143.12 |
This efficiency is largely due to several factors:
- **Optimized Operations:** Most operations, including the calculation of gradients and cosine similarity, are accelerated using PyTorch. This optimization is crucial for maintaining low computational overhead.
- **Vectorized Operations:** By utilizing vector operations extensively, we avoid excessive reliance on for-loops, which enhances the computation speed.
Despite the increase in computational load with a larger top-k, as demonstrated in our parameter analysis in Figure 5, a large top-k is unnecessary. In fact, increasing top-k can introduce more noise, reducing the effectiveness of our model. Therefore, our approach maintains a balanced computational overhead, making it feasible for practical applications.
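For completeness, a simple way to measure ms/token as in the table above is shown below. The `generate_token` function is a stand-in for one decoding step of an LLM (the real measurement would call the model's forward pass plus the SED logit refinement):

```python
import time

def generate_token(state=None):
    # Stand-in for one decoding step; here just deterministic busy work.
    return sum(i * i for i in range(1000))

def latency_ms_per_token(n_tokens=100):
    """Average wall-clock latency per token, in milliseconds."""
    start = time.perf_counter()
    for _ in range(n_tokens):
        generate_token()
    elapsed = time.perf_counter() - start
    return 1000.0 * elapsed / n_tokens

print(f"{latency_ms_per_token():.3f} ms/token")
```

Averaging over many tokens, as done here, smooths out per-step timing noise and is how per-token latencies of the kind reported above are typically obtained.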
We really appreciate your suggestions which are very important to our work. We are committed to incorporating these expanded discussions and findings in our revised manuscript to provide a more comprehensive understanding of our method. Thank you so much for your time and efforts.
Sincerely,
Authors
---
Rebuttal 2:
Title: Reviewer, please take a look at author response.
Comment: Hello Reviewer 12rF,
Please take a moment to read and acknowledge the author's response to your review.
Thanks, Area Chair
---
Rebuttal 3:
Title: Reviewer's response
Comment: Thank you for the response and additional results. The authors have addressed my concern. I believe my ratings are still fair and decide not to change my scores.
---
Rebuttal Comment 3.1:
Title: Dear Area Chair and Reviewer
Comment: Dear Area Chair,
**Thank you so much for your time and efforts in facilitating the review and discussion process. We really appreciate it.**
Dear Reviewer,
**We sincerely thank you for your time and efforts in reviewing and discussing our paper. Your comments and your suggestions are incredibly constructive in enhancing our work. We will definitely incorporate all the discussions and new results in our revision following your suggestions. Thank you so much!**
Sincerely,
Authors | Summary: It introduces a novel decoding strategy named Self-Evolution Decoding (SED) aimed at enhancing the reliability and truthfulness of Large Language Models (LLMs). Unlike methods that depend on external knowledge bases or additional fine-tuning, SED is an intrinsic optimization technique that capitalizes on the self-evolution of LLMs' hidden states. The method refines the output during inference, akin to continued training, which improves accuracy and interpretability without sacrificing natural language fluency.
Strengths: The SED method is an original contribution that tackles the issue of factuality in LLMs by introducing a new decoding approach. This is a novel way to enhance outputs without relying on external data or model retraining.
The concept of optimizing an implicit objective function using the self-evolution of LLMs is creative and presents a new angle for improving model outputs during inference.
Weaknesses: While the empirical results are positive, there is not enough theoretical analysis provided to support the method's effectiveness.
The paper does not report error bars or measures of statistical significance, which are important for understanding the variability of the results.
SED may introduce additional computational overhead during inference, which could be a concern for real-time applications.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see weaknesses.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: While SED has been tested on multiple datasets, there may be a need for further evaluation on an even broader range of datasets to ensure the method's generalizability.
The paper does not provide a rigorous optimization analysis of SED, which could help understand why it leads to more factual outputs.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Dear Reviewer,**
Thank you so much for taking the time to provide your feedback. Your comments and suggestions are invaluable to us, especially regarding our methodologies. We appreciate this opportunity to address your concerns.
> **"SED may introduce additional computational overhead during inference, which could be a concern for real-time applications."**
Thank you for raising this concern. We have benchmarked our method against standard decoding and DoLa by measuring the latency (ms/token) across different configurations and model sizes. Our findings indicate that our approach increases computational time by less than 10%.
| Model | DoLa | SED (topk=5) | SED (topk=20) | SED (topk=50) |
|----------------|--------|--------------|---------------|---------------|
| LLaMA-2-7B | 29.93 | 30.41 | 31.15 | 32.70 |
| LLaMA-2-13B | 39.57 | 39.61 | 41.14 | 43.30 |
| LLaMA-2-70B | 136.42 | 138.33 | 140.24 | 143.12 |
This minimal increase is due to:
- **Optimized Operations:** Most operations, including the calculation of cosine similarity, are accelerated using PyTorch, which is crucial for maintaining low computational overhead.
- **Vectorized Operations:** By extensively utilizing vector operations, we reduce reliance on for-loops, thereby enhancing computation speed.
Although there is an increase in computational cost with a larger top-k, our parameter analysis in Figure 5 shows that a large top-k, such as 50, is not necessary. Thus, our approach maintains an acceptable computational overhead, making it feasible for real-time applications.
> **"While the empirical results are positive, there is not enough theoretical analysis provided to support the method's effectiveness."**
We are grateful for your advice and plan to enhance our analysis to better understand why SED improves the factual accuracy of a Large Language Model (LLM). The principal insight is that pre-trained LLMs exhibit variations in token distributions across different layers, particularly when comparing the output layer with the earlier layers. We have discovered that contrasting the early layers with the final layer can yield a more factual distribution over specific tokens.
We provide a demonstration in the attached PDF to further reveal how the SED mechanism benefits from this approach.
1. By contrasting the final layer with each of the early layers, we estimate an inner distribution:
$$
\bar{t}^{(n)} = \frac{1}{w^{(n)}} (\bar{t}^{(n)}_1, \bar{t}^{(n)}_2, ..., \bar{t}^{(n)}_i, ..., \bar{t}^{(n)}_d)
$$
This is more precise and can be demonstrated by comparing Figure 1(a) and Figure 2(a) in the attached PDF. Our analysis in Section 2.4, Question 1, also delves deeper into this aspect. For most early layers, the estimated inner distribution tends to assign a higher probability to the correct tokens. However, DoLa's estimates are imprecise, leading to a heavy reliance on the selection of candidate layers. As shown in Figures 2(a) and 2(b), DoLa's choice to contrast the final layer with the zeroth layer results in both incorrect and correct tokens having the same probability.
2. For different layer estimates, when we ensemble different layers, we do not simply average them. Instead, our SED method determines weights by calculating the cosine similarity, identifying layers with imprecise estimates. Thus, their influence is diminished, as illustrated in Figures 1(a) and 1(b), where the weights are reduced for layers that misestimate the inner distribution.
3. Our approach integrates the inner distribution with the original distribution, as opposed to DoLa's method, which simply replaces the original distribution with the inner one. This integration is discussed in Section 2.4, Question 2.
> **"While SED has been tested on multiple datasets, there may be a need for further evaluation on an even broader range of datasets to ensure the method's generalizability."**
Thank you for this valuable suggestion. We have conducted additional experiments on more realistic open-ended generation datasets: HotpotQA, Natural Questions (NQ), and TriviaQA. We adopt additional evaluation metrics: Exact Match (EM) and F1.
| Model | HotpotQA EM | HotpotQA F1 | NQ EM | NQ F1 | Trivia EM | Trivia F1 |
|-----------------|:-----------:|:-----------:|:-----:|:-----:|:---------:|:---------:|
| Llama 2 7B chat | 19.6 | 20.1 | 21.8 | 20.4 | 44.4 | 44.3 |
| + DoLa | 20.4 | 21.3 | 23.5 | 21.5 | 45.2 | 45.3 |
| + Sed(ours) | **20.9** | **21.5** | **24.4** | **22.2** | **47.6** | **46.3** |
| Llama 2 13B chat| 23.8 | 21.7 | 33.1 | 28.9 | 63.0 | 60.9 |
| + DoLa | 24.5 | 23.2 | 33.1 | 28.9 | 63.2 | 61.5 |
| + Sed(ours) | **25.0** | **24.5** | **34.6** | **31.6** | **63.3** | **62.2** |
The results show that our method improves performance on more realistic open-ended generation tasks.
Sincerely,
Authors
---
Rebuttal 2:
Comment: Dear Reviewer,
Thank you for your thoughtful comments and the time you've invested in reviewing our manuscript. In response to your suggestions, we would like to provide some context and additional explanations regarding the error bars or other measures of statistical significance in our initial submission.
The reason we did not include error bars in our initial manuscript is that we followed the general settings from the recent studies on factuality decoding, such as DoLa [1], ITI [2], AD [3], and ICD [4] methods. These papers typically do not report error bars either. This is mainly because current factuality decoding approaches are more focused on the paradigm of greedy decoding, where outputs are deterministically selected based on maximum likelihood. Consequently, the output variability is inherently limited by the deterministic nature of the decoding method. Therefore, to maintain a fair comparison with these established methods, we also chose not to include error bars in our analysis.
Considering your advice on statistical significance, we have explored the potential for variability due to different data subsets by employing the bootstrap method, which involves multiple resamplings of the same dataset. In our revised analysis, we calculated the 95% confidence intervals (95%CI) using the bootstrap method with 1,000 bootstrap samples on Factor Dataset. Each sample was generated by randomly resampling the data, with replacement, to simulate the effects of data variability.
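The percentile-bootstrap procedure described above can be sketched as follows (the per-example 0/1 scores here are synthetic stand-ins for benchmark correctness labels; the paper's exact resampling details may differ):

```python
import numpy as np

def bootstrap_ci(scores, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the mean score."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    means = np.empty(n_boot)
    for b in range(n_boot):
        # Resample the dataset with replacement and record the mean accuracy.
        sample = rng.choice(scores, size=scores.size, replace=True)
        means[b] = sample.mean()
    lo, hi = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Synthetic per-example correctness (0/1) standing in for benchmark results.
rng = np.random.default_rng(1)
scores = rng.binomial(1, 0.66, size=500)
lo, hi = bootstrap_ci(scores)
print(f"95% CI: [{100 * lo:.2f}, {100 * hi:.2f}]")
```

With 1,000 resamples, the 2.5th and 97.5th percentiles of the bootstrapped means give the reported 95% confidence interval.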
We have reported the results of different methods using this approach in the table below:
| | 95%CI | | 95%CI |
|------------------|-------------------|------------------|-------------------|
| llama2-7B-base | [56.44,59.89] | llama2-7B-chat | [54.90,58.48] |
| + DoLa | [61.39,64.76 ] | + DoLa | [54.81,58.32] |
| + SED | **[65.56,68.98]** | + SED | **[63.46,66.77]** |
| llama2-13B-base | [61.88,65.43] | llama2-13B-chat | [60.35,63.79] |
| + DoLa | [55.21,58.89] | + DoLa | [56.21,59.75] |
| + SED | **[69.17,72.41]** | + SED | **[65.40,68.74]** |
Our finding shows,
1. **Superiority of Our Method (SED)**: Across all configurations, whether using the LLaMA 7B or 13B models, our approach SED consistently achieves higher lower bounds and upper bounds in the confidence intervals compared to the base models and those augmented with DoLa only. This consistent outperformance suggests that the SED significantly improves the model's accuracy.
2. **Non-Overlapping Confidence Intervals**: In most cases, the confidence intervals of our SED-enhanced method do not overlap with those of the other methods. This lack of overlap is statistically significant as it indicates that the improvements observed with our method are not due to random variations within the data, but are a result of the SED augmentation.
The clear separation and higher confidence intervals associated with our method suggest that the differences in performance are statistically significant.
**Lastly, we sincerely appreciate the time and effort you have dedicated to reviewing and discussing our paper. Your suggestions have been immensely valuable to our research, and we plan to incorporate all the discussions and new results into our revised paper. Thank you so much. If you have any further questions or concerns, whether about the methodology, presentation, or additional experimental results, do not hesitate to raise them. We are committed to addressing your concerns and meeting your expectations. Once again, we deeply appreciate your feedback and look forward to your suggestions.**
Sincerely,
Authors
[1] Yung-Sung Chuang et al., "Dola: Decoding by contrasting layers improves factuality in large language models," 2024.
[2] Kenneth Li et al., "Inference-time intervention: Eliciting truthful answers from a language model," 2023.
[3] Shiqi Chen et al., "In-context sharpness as alerts: An inner representation perspective for hallucination mitigation," 2024.
[4] Yue Zhang et al., "Alleviating hallucinations of large language models through induced hallucinations," 2023.
Title: Supplementary explanation
---
Rebuttal 3:
Title: Reviewer, please take a look at author response.
Comment: Hello Reviewer Mgbd,
Please take a moment to read and acknowledge the author's response to your review.
Thanks, Area Chair
---
Rebuttal 4:
Title: Dear Area Chair and Reviewer
Comment: Dear Area Chair,
**Thank you so much for your time and efforts in facilitating the review and discussion process. We really appreciate it.**
Dear Reviewer,
We sincerely appreciate your time and efforts in reviewing and discussing our paper. Your insights have been incredibly constructive and have significantly enhanced our work. In our rebuttal, we provided detailed explanations addressing your concerns, including:
1. Providing results on the computational cost to demonstrate that our method is acceptable for real-time applications, along with the reasons for its efficiency.
2. Providing more theoretical analysis and demonstrations to further elucidate how our method works and why it is effective.
3. Following your valuable suggestions, we have included more experimental results on more realistic open-ended generation scenarios.
4. Following your valuable suggestions, we have also included a discussion of error bars and statistical significance to ensure the reliability of our method.
**Thank you so much for your suggestions. We will definitely incorporate all the discussions and new results in our revision. Should you have any questions, whether about further explanations, additional results, or any other aspects that you find unclear, do not hesitate to raise them. We are committed to resolving any issues to your satisfaction. Thank you so much!**
Sincerely,
Authors
---
Rebuttal Comment 4.1:
Title: Dear Reviewer Mgbd, Could We Take a Minute of Your Time?
Comment: Dear Reviewer Mgbd,
We hope our rebuttal addresses your concerns effectively. As we are approaching the end of the author-reviewer discussion period, we would like to see if you have any remaining questions or comments you'd like us to clarify/discuss. We understand that you are very busy, so we would really appreciate it if you could take a minute to review our rebuttal, as suggested by the Area Chair.
In our rebuttal, we provided the detailed explanations addressing your concerns, including:
1. Providing results on the computational cost to demonstrate that our method is acceptable for real-time applications, along with the reasons for its efficiency.
2. Providing more theoretical analysis and demonstrations to further elucidate how our method works and why it is effective.
3. Following your valuable suggestions, we have included more experimental results on more realistic open-ended generation scenarios.
4. Following your valuable suggestions, we have also included a discussion of error bars and statistical significance to ensure the reliability of our method.
Your insights would be valuable for our submission, and we would greatly appreciate any further feedback you may provide before the end of the discussion. Thank you so much for your time and efforts! We really appreciate it!
Sincerely,
Authors | Rebuttal 1:
Rebuttal: Dear Reviewers,
We express our gratitude towards all reviewers for their time reviewing our submission and providing constructive feedback. Along with our rebuttal, we are including a PDF file containing figures that further support our analysis of why the SED method is effective. This additional material aims to provide a deeper understanding of the mechanisms underlying SED's efficacy.
Sincerely,
Authors
Pdf: /pdf/fd8b4c50686945a0cb262ddcc46b4500d29a0951.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Approximation-Aware Bayesian Optimization | Accept (spotlight) | Summary: The authors present an extension of the sparse Variational Gaussian Process framework (SVGP) that is better suited for Bayesian optimization (BO) tasks. This is achieved by a new optimization criterion (EULBO) which aims to optimize the parameters of SVGPs s.t. the training data is fit well while achieving a better performance in selecting new solution candidates. Two concrete instantiations of EULBO (with Expected Improvement and Knowledge Gradient) are derived. An extensive empirical evaluation is conducted to demonstrate the effectiveness of the proposed approach.
Strengths: - though being dense, the paper is clearly written and straightforward to follow
- the proposed approach is technically solid and mathematically sound
- a relevant problem is addressed by an elegant solution
- both methodology and experiments are presented well
Weaknesses: ## Methodology
- there are some details unclear/missing regarding figure 1:
- which points were used as inducing points?
- why is the uncertainty relatively constant at the right side of the plot (left plot)? Depending on the choice of the inducing points, I'd expect the predicted variance (and thus the EI) to increase towards the right edge of the plot
- what was the exact problem definition leading to Fig. 1? It would be good to see the exact parameterizations for reproducibility as it would help to gauge whether the motivational example is rather cherry-picked or whether the shown issue manifests across different parameterizations
- missing $t$ in Eq. 2? shouldn't it be $\pi(f | \mathcal{D}_t)$?
- in the equation after line 221 it seems that the sum does not depend on $j$. shouldn't it be $\mathbf{x}_j$?
## Experiments
- why were the baselines only applied in conjunction with EI? (see line 259)
- it seems that, depending on the choice of the objective and method, there are quite significant differences in the variance of EULBO. For example, in Fig. 2 "Media Molecules 1", TuRBO + ELBO EI has much higher variance than TuRBO + EULBO EI. Do you account this observation to EULBO?
Technical Quality: 4
Clarity: 4
Questions for Authors: see weaknesses
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: the authors discussed the limitations of their approach in a detailed way
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Figure 1: which points were used as inducing points?
The inducing points are initialized to a set of points selected uniformly at random in the search space and then they are learned via gradient descent by minimizing the loss (ELBO or EULBO) when the GP model is trained on the data.
> Figure 1: why is the uncertainty relatively constant at the right side of the plot (left plot)? Depending on the choice of the inducing points, I'd expect the predicted variance (and thus the EI) to increase towards the right edge of the plot
In the large interval between the rightmost inducing points, the variance remains fairly constant at the prior variance. At the rightmost inducing point, the variance dips, and then to the right of this rightmost inducing point, the variance increases back up to the prior variance. We think this is fairly expected behavior, and perhaps the dip in variance at the rightmost inducing point was missed due to the relatively large observation noise in this data? We are happy to discuss this with you in more detail.
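This variance behavior can be reproduced with a toy exact GP conditioned on a handful of observed locations standing in for the inducing points (an illustrative sketch with an assumed RBF kernel and invented point locations, not the paper's SVGP): the posterior variance dips at the rightmost point and returns to the prior variance both in the large gap between points and far to the right.

```python
import numpy as np

def rbf(a, b, ls=0.5, var=1.0):
    """RBF kernel between 1-D point sets a and b."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

# Points clustered on the left plus one isolated point on the right, mimicking
# the inducing-point layout described above.
x_obs = np.array([-2.0, -1.5, -1.0, 2.0])
noise = 0.1
K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
K_inv = np.linalg.inv(K)

def post_var(x_star):
    """Exact GP posterior variance at a single test location (prior var = 1)."""
    k_s = rbf(np.atleast_1d(float(x_star)), x_obs)
    return 1.0 - (k_s @ K_inv @ k_s.T).item()

v_at_point = post_var(2.0)   # at the rightmost point: variance dips
v_between  = post_var(0.5)   # mid-gap: close to the prior variance
v_far      = post_var(5.0)   # far right: back to the prior variance
print(v_at_point, v_between, v_far)
```

The dip at the rightmost point is narrow relative to the gap, so with noisy data it is easy to miss visually, which matches the explanation above.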
> Figure 1: what was the exact problem definition leading to Fig. 1? It would be good to see the exact parameterizations for reproducibility as it would help to gauge whether the motivational example is rather cherry-picked or whether the shown issue manifests across different parameterizations
We agree that reproducibility is important for this example problem as well as for other experiments. We plan to include a simple notebook with code to quickly reproduce Figure 1 in our code release which will be available on github once the paper is accepted. We specifically designed the toy dataset to illustrate the point of the paper (large amount of data on the left, very little on the right so that an SVGP with only 4 inducing points would underfit on the right so that the large objective value out right would be missed). Our hope is that this figure is illustrative and aids reader understanding, which we will make more clear in the next version. Of course, the actual experimental results in the paper are actual challenging benchmark problems and not designed for our method. Thank you for the suggestion!
> missing 𝑡 in Eq. 2? shouldn't it be 𝜋(𝑓|𝐷𝑡)?
Thanks for catching this. We will fix this in the next version.
> in the equation after line 221 it seems that the sum does not depend on 𝑗. shouldn't it be 𝑥𝑗?
The maximum (denoted by “max”; not “maximize”!) operation in Line 221 is taken over j.
> why were the baselines only applied in conjunction with EI? (see line 259)
EI is the most commonly used acquisition function in the BO literature, often because it is cheap to compute and performs comparably with KG and other acquisition functions. A common motivation in the literature for choosing EI over KG is that KG is significantly more expensive to compute. Running EULBO provides a unique opportunity to make computing KG much more efficient, such that it becomes approximately as cheap as EULBO with EI (see the last paragraph of Section 3.3). This is not the case for the other baselines, so running them with KG would have been much more expensive. Beyond KG and EI, we note that the EULBO is limited by construction to acquisition functions that admit decision-theoretic formulations, i.e., they must be phrasable as posterior expected utilities. This appears to exclude some popular acquisition functions like UCB.
> it seems that, depending on the choice of the objective and method, there are quite significant differences in the variance of EULBO. For example, in Fig. 2 "Media Molecules 1", TuRBO + ELBO EI has much higher variance than TuRBO + EULBO EI. Do you attribute this observation to EULBO?
This is not something that we have explored specifically, but one plausible explanation is that EULBO consistently converges to a high-performing region of the search space with similar high-scoring molecules, while ELBO converges to a wider variety of lower-scoring regions of the search space, thus resulting in higher variance and lower final objective values. In other words, EULBO may just have lower variance because it consistently performs better on this problem.
---
Rebuttal Comment 1.1:
Title: Answer to Rebuttal
Comment: Thank you very much for addressing the points I raised!
The rebuttal clarified my points.
There is only one question left from my side regarding motivation: Could you provide intuition on whether there are certain circumstances in which using ELBO instead of EULBO leads to heavy performance deterioration of BO with SVGPs? In the experiments, one can see that there are problem instances in which EULBO leads to significantly better results than ELBO. Still, in other problem instances, the difference is not that large.
Overall, I'm happy to keep my score.
---
Reply to Comment 1.1.1:
Title: Regarding your question about motivation
Comment: This is a good question that we're not sure we have enough data to confidently answer. The benchmark problems we consider in this paper are a mix: some were originally introduced in BO papers using non-variational GPs, and others using variational GPs. For the ones that have already used variational GPs, there is obviously some survivorship bias: if SVGPs had catastrophically failed on those problems, we wouldn't have seen them. The other problems that originally did not use variational GPs, like Rover, give us at least some evidence about the performance degradation when switching to SVGPs with the ELBO, and it at least does not seem to be catastrophic. We haven't encountered a problem where SVGPs with the ELBO have totally failed compared to known exact GP results, in part because they are often used in situations where evaluation budgets mandate them. | Summary: The paper proposes a new method for training sparse Gaussian processes (GPs) for large-budget Bayesian Optimization (BO), designed to facilitate the sequential decision tasks inherent in BO. This is achieved by introducing the “expected utility lower bound (EULBO),” which formulates sparse GP training as a joint optimization problem of BO queries and posterior approximation. The proposed framework is compatible with other significant advancements in the field, such as Expected Improvement (EI) and Knowledge Gradient (KG) acquisition functions, TuRBO, and batch optimization. This compatibility was demonstrated across several BO benchmarks.
Strengths: The method proposed in the paper is novel and original. One of the main strengths is how the newly proposed EULBO integrates with important points in the literature. The authors effectively demonstrate how EULBO can be used in various contexts, such as with EI and KG acquisitions and batch optimization. This helps distinguish their work from others in the field. The submission is technically sound, with derivations and experiments convincingly supporting the claims of the paper. The work is significant, and I anticipate it will be extended in the future or that some of the ideas presented will be adopted elsewhere.
Weaknesses: While the paper is generally well-written, many derivations were abbreviated and some of the notation was not properly explained, making the paper more difficult to read than necessary. These issues, however, are easily amendable. I provide some suggestions for improvement in the following subsections.
Technical Quality: 3
Clarity: 2
Questions for Authors: Introduction:
(line 51): '\emph{minorize-maximize}'
Section 2:
(Fig. 1): It would be beneficial to show the SVGP model fit for EULBO as well.
(line 110): Could you define \lambda, m and S?
(Eq. between 114-115): q_\lambda is initially presented with one argument, then two. If the intention is for q_\lambda to represent all approximate distributions (in this case, p(f, u) ), please state this explicitly. It is natural to assume that q_\lambda(.) is just the Gaussian pdf (as defined in line 110), which creates confusion.
(line 116): k(.,.) as a function also needs to be explicitly defined.
Section 3:
(line 123): This should probably refer to the equation between lines 114-115, not Equation 3.
(Eq 4): Please define \Lambda.
(Eq 6): Please use double integrals to indicate nested integration. Additionally, I don't understand the emergence of l(D_t |f) and the normalizing constant Z. I was under the impression that l(.|.) is the Gaussian likelihood as defined in line 113. Notice that D_t, as defined in line 92, contains both x and y.
(line 147): 'corresponds to'
(line 165): Please clarify where exactly the requirement for the utility to be strictly positive plays a role or comes from.
(lines 173-175): Would using Gauss-Hermite quadrature for EI make EULBO significantly slower than ELBO?
(line 189): What is y^{+}_t and what is its relation to y_t?
Question:
Because you formulate the BO queries and GP approximation as a joint optimization problem without necessarily reducing the dimensionality of the problem, how much slower is this method compared to ‘approximation-ignorant’ BO?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 4
Limitations: The authors are open about the limitations of their work, which they leave for future exploration. Specifically, the increased computational complexity seems to be a primary limitation. It would be beneficial if the authors could quantify this increased complexity, perhaps through reporting empirical runtimes or other relevant metrics.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > (line 110): Could you define \lambda, m and S?
$m$ and $S$ are defined as the (learned) mean and covariance of the variational distribution $q(u)$ in our SVGP model. $\lambda$ should have been defined as $\lambda = (m, S)$, a shorthand for “all of the variational parameters,” which we omitted. We will fix this in the next version, and clarify the definition of $m, S$.
> (Eq. between 114-115): q_\lambda is initially presented with one argument, then two. If the intention is for q_\lambda to represent all approximate distributions (in this case, p(f, u) ), please state this explicitly. It is natural to assume that q_\lambda(.) is just the Gaussian pdf.
In Bayesian methods, it is fairly typical to overload the notation of a pdf/measure with its argument. Therefore, $q(u)$ and $q(f, u)$ should be unambiguous per common practice. The subscript $\lambda$ denotes that the distribution $q$ contains trainable parameters $\lambda$. We will clarify this.
> (line 116): k(.,.) as a function also needs to be explicitly defined.
Thank you for pointing this out. We will define the kernel function properly in the next version.
> (line 165): Please clarify where exactly the requirement for the utility to be strictly positive plays a role or comes from.
This is due to the log in Eq 6. Thank you for pointing this out, this is pretty subtle and we don’t draw much attention to it. We will clarify this requirement in the next version.
> (lines 173-175): Would using Gauss-Hermite quadrature for EI make EULBO significantly slower than ELBO?
This is a great question that we missed addressing in the original text. Crucially, the expensive $K_{zz}^{-1}m$ and $K_{zz}^{-1}SK_{zz}^{-1}$ (or $K_{zz}^{-1/2}$ with whitening) solves that dominate both the asymptotic and practical running time of the ELBO and the EULBO are fixed across the log-utility evaluations needed by quadrature (and by Monte Carlo in the q-EULBO case). As a result, the additional work of quadrature is negligible. Because Gauss-Hermite quadrature converges extremely quickly in the number of quadrature sites, it requires only on the order of 10 post-solve evaluations to achieve near machine precision. We will add an explanation of this.
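To illustrate why so few sites suffice, here is a minimal sketch (our own, not the paper's code) of Gauss-Hermite quadrature for an expectation under a Gaussian. In the EULBO setting the integrand would be a log utility evaluated under the predictive marginal at the query point, and each evaluation is cheap once the kernel solves are cached; here we just check the scheme against a closed-form moment.

```python
import numpy as np

def gauss_hermite_expectation(h, mu, sigma, n_sites=10):
    """Approximate E[h(y)] for y ~ N(mu, sigma^2) using n_sites quadrature points."""
    x, w = np.polynomial.hermite.hermgauss(n_sites)  # physicists' Hermite nodes/weights
    y = mu + np.sqrt(2.0) * sigma * x                # change of variables to N(mu, sigma^2)
    return (w * h(y)).sum() / np.sqrt(np.pi)

# Sanity check against a closed form: E[y^2] = mu^2 + sigma^2.
mu, sigma = 1.5, 0.7
approx = gauss_hermite_expectation(lambda y: y ** 2, mu, sigma)
print(approx, mu ** 2 + sigma ** 2)  # agree to near machine precision
```

With 10 sites the rule is exact for polynomial integrands up to degree 19, which is why a handful of post-solve evaluations already reaches near machine precision for smooth utilities.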
> (Eq 6): Please use double integrals to indicate nested integration. Additionally, I don't understand the emergence of l(D_t |f) and the normalizing constant Z. I was under the impression that l(.|.) is the Gaussian likelihood as defined in line 113. Notice that D_t, as defined in line 92, contains both x and y.
Yes, that is correct. $\pi(f,u|D)$ denotes the “normalized” posterior, which is essentially $\ell(D|f,u) p(f,u) / Z$. Plugging these into the derivation of Eq 6 should clarify things.
> (line 189): What is y^{+}_t and what its relation to y_t?
Thank you for catching this! We forgot to define $y^{+}_t$, which is the highest value of $y_t$ observed so far. We will properly define this in the next version.
> Because you formulate the BO queries and GP approximation as a joint optimization problem without necessarily reducing the dimensionality of the problem, how much slower is this method compared to ‘approximation-ignorant’ BO?
In general, you’re right that the added cost of EULBO is largely due to optimization challenges, rather than concerns like quadrature etc. A single EULBO forward and backward calculation has essentially the same cost as an ELBO calculation; however, as we describe in Section 3.5 and our limitations section, we currently “warm start” EULBO optimization by first optimizing the ELBO, leading to increased computational cost. Here are the wall-clock run times for running TuRBO on the Lasso DNA task using the standard ELBO compared to EULBO with EI and EULBO with KG:
| method | execution time (min) |
|---|---|
| ELBO | 184.40 $\pm$ 0.59 |
| EULBO-EI | 267.30 $\pm$ 2.53 |
| EULBO-KG | 296.95 $\pm$ 1.31 |
For the camera ready version of the paper, we plan to add a table of wall-clock runtimes comparing EULBO to all baselines.
---
Rebuttal Comment 1.1:
Title: Rebuttal Acknowledgement
Comment: I would like to thank authors for their response and addressing my concerns. I am happy to keep the assessment of their work the same. | Summary: This paper proposes a modification to sparse variational Gaussian processes (SVGPs) used in Bayesian optimization (BO) to better align the SVGP posterior approximation with the goal of optimizing an acquisition function. The key idea is to jointly optimize the SVGP and the acquisition function using a unified objective called the expected utility lower bound (EULBO). This approach ensures the posterior approximation is well-suited for the downstream decision-making task. The authors derive efficient EULBO objectives for the expected improvement (EI) and knowledge gradient (KG) acquisition functions and demonstrate improved performance over standard SVGPs on several high-dimensional optimization benchmarks.
Strengths: The paper presents a novel approach to aligning SVGP approximations with the goals of BO by jointly optimizing the posterior approximation and acquisition function. This is a creative combination of ideas from variational inference and decision theory.
The proposed method is well-motivated and grounded in sound theoretical principles. The derivations of the EULBO objectives for EI and KG are clear and technically sound. The experiments are comprehensive, covering a range of high-dimensional optimization tasks, and the results convincingly demonstrate the effectiveness of the approach.
The paper is well-written and easy to follow. The authors provide a clear exposition of the problem, the proposed solution, and the experimental setup. The figures and tables effectively communicate the key results.
Scaling BO to high-dimensional spaces is an important problem, and this work represents a significant step towards more efficient and effective BO in such settings. The proposed approach is general and could potentially be applied to other acquisition functions and GP approximations, making it a valuable contribution to the field.
Weaknesses: The paper focuses specifically on SVGPs, but it would be interesting to explore whether the proposed approach can be extended to other sparse GP approximations, even those without a tractable ELBO. This would broaden the applicability of the method and strengthen the contribution.
In Figure 2, the exact GP performs worse than the proposed method and some other baselines in the BO setting. This is counterintuitive, as one would expect the exact GP to be the gold standard when computationally feasible. The authors should provide more discussion and insights into this unexpected result.
The paper presents several variations of the proposed method (e.g., EI-EULBO, KG-EULBO, batch versions), but it lacks a clear recommendation on which variant to use in practice. A more in-depth discussion of the trade-offs between these variants and guidance on when to use each one would enhance the practical value of the work.
Technical Quality: 3
Clarity: 3
Questions for Authors: Can the proposed EULBO approach be extended to other sparse GP approximations beyond SVGPs, even if they do not have a tractable ELBO? If so, what challenges would need to be addressed, and how might the optimization problem be formulated?
Why does the exact GP perform worse than the proposed method and some other baselines in the BO experiments (Figure 2)? Is this due to the limitations of the exact GP in high dimensions, or are there other factors at play?
What are the key factors to consider when choosing between the different variants of the proposed method (e.g., EI-EULBO, KG-EULBO, batch versions)? In what scenarios would one variant be preferred over another?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed the limitations of their work in Section 6, acknowledging the increased computational cost of the EULBO approach compared to standard SVGPs. They also mention the need for multiple optimization tricks and the potential instability of the EULBO optimization problem. These are important limitations that users should be aware of when considering this method.
However, the authors do not discuss any potential negative societal impacts of their work. While the proposed method is primarily methodological and does not pose immediate societal risks, it would be valuable for the authors to briefly comment on any broader implications of improved BO in high-dimensional spaces, such as the potential for misuse or unintended consequences in certain application domains.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The paper focuses specifically on SVGPs, but it would be interesting to explore whether the proposed approach can be extended to other sparse GP approximations, even those without a tractable ELBO. This would broaden the applicability of the method and strengthen the contribution.
> Can the proposed EULBO approach be extended to other sparse GP approximations beyond SVGPs, even if they do not have a tractable ELBO? If so, what challenges would need to be addressed, and how might the optimization problem be formulated?
Our focus on SVGP was motivated by the fact that SVGPs are the most widely used sparse GP approximation in the high-throughput BO literature, but your question is a good one. SVGPs have been extended and improved several times, and it’s a natural question whether the EULBO can be adapted to all of these settings. Our method should be readily applicable when an ELBO exists mathematically but can only be approximated through Monte Carlo or quadrature. For sparse approximations where the objective is not a canonical ELBO, it is possible that a similar variational utility lower-bound could be derived, but it may look very different from this paper. We conjecture that some recent SVGP-like models like ODSVGP admit relatively straightforward EULBO adaptations. Other examples, like Vecchia approximations, would require pretty orthogonal machinery to what we introduce and would probably be interesting in their own right. We are happy to provide more detailed comments on specific sparse GP approximations if the reviewer has one in mind, and will add a discussion to the camera ready version.
> In Figure 2, the exact GP performs worse than the proposed method and some other baselines in the BO setting. This is counterintuitive, as one would expect the exact GP to be the gold standard when computationally feasible. The authors should provide more discussion and insights into this unexpected result.
> Why does the exact GP perform worse than the proposed method and some other baselines in the BO experiments (Figure 2)? Is this due to the limitations of the exact GP in high dimensions, or are there other factors at play?
While we agree that this is counter-intuitive, we point to the fact that the true posterior of an SVGP is not an exact GP. Strictly speaking, SVGPs not only provide a variational approximation, but also modify the GP model to have an additional layer of latent variables. Therefore, there is no reason to expect that the performance of SVGPs would be the same as exact GPs, even if one has access to their true posterior. Additionally, and perhaps most interestingly, Exact GPs arguably have their own “mismatches” between learning and acquisition like the ones pointed out in and addressed by this paper: hyperparameter learning in exact GPs is not done in a utility aware fashion, for example. It seems plausible that, for some problems, utility aware hyperparameter learning might outweigh the loss incurred by using a sparse model. We do note that whether our method is better or worse than exact GPs is unlikely to be consistent across different tasks, and we suspect exact GPs are still quite competitive in settings where they can be used. (Also note the response to Reviewer veTg on a similar question.)
> The paper presents several variations of the proposed method (e.g., EI-EULBO, KG-EULBO, batch versions), but it lacks a clear recommendation on which variant to use in practice. A more in-depth discussion of the trade-offs between these variants and guidance on when to use each one would enhance the practical value of the work.
> What are the key factors to consider when choosing between the different variants of the proposed method (e.g., EI-EULBO, KG-EULBO, batch versions)? In what scenarios would one variant be preferred over another?
The best choice of acquisition function, whether or not to use batch BO, what batch size to use, etc., often varies widely across problems, and it's hard for us to give guidance beyond what exists in the literature: KG is often less myopic than EI, which can be better on some problems and worse on others. Definitively comparing acquisition strategies probably deserves its own investigation. One thing we do note, however, is that the EULBO shifts the balance in the computation-performance trade-off between acquisition functions. For instance, the cost of computing the EULBO amortizes the cost of computing KG, making it a more sensible choice in some settings than it was before (see Section 3.3, last paragraph). But again, whether one should use EI or KG given this tipping of the scale will depend highly on the problem, and this paper probably doesn't definitively answer that question.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, which resolves my concerns and raises some very interesting and insightful discussion on the SVGP. | Summary: The paper proposes a new approach for scaling Bayesian Optimisation (BO) to large datasets. Contrary to previous approaches, which fitted a sparse GP and optimised the acquisition function independently, the paper proposes a method that jointly optimises the variational parameters of the sparse GP and searches for the next point to query. This is done by adding a utility term to the ELBO. The authors discuss how this objective can be connected to generalised Bayesian inference. They evaluate the proposed method on a number of benchmarks with large numbers of datapoints and show it outperforms other baselines.
Strengths: - The paper touches on a very important subject of scaling Bayesian Optimisation to large datasets. I find the proposed solution to be very elegant. I also think the connection to generalised Bayesian Inference is very interesting.
- The method seems to be delivering a significant improvement in sample efficiency over baselines. I believe the choice of benchmarks is sufficiently diverse and the number of baselines compared against is sufficient.
- The paper is well-written and I really like that authors share the details of the tricks they used for improved optimisation of EULBO. I also like the fact that authors are very open about admitting the limitations of the proposed method.
Weaknesses: - As mentioned by the authors, optimisation of the EULBO objective is a bit cumbersome, and it seems this process is also slower than optimisation of the standard ELBO. However, it might be worth sacrificing a bit of computational efficiency for an improvement in sample efficiency.
- While the authors cite previous work proving that the selected action satisfies a convergence guarantee, this does not directly prove anything about the performance of an optimisation process choosing actions in such a way (e.g., a regret bound or convergence to the optimum). The paper could be made much stronger if the authors managed to connect those notions in some way. However, I also do not think it is critical, as the main contribution of the paper is empirical (although it would be greatly appreciated).
Technical Quality: 4
Clarity: 4
Questions for Authors: - Can the authors provide the running time of the proposed method (and baselines) ? It is ok if the proposed method is slower, but it would be good to quantify exactly how much slower it is.
- Do the authors have any expectations on how the method would perform in comparison to an exact GP on a smaller dataset? While I understand that the method is particularly designed for large datasets, it would be interesting to see how the utility-VI-based acquisition strategy compares to standard acquisition functions like EI. It would be nice to have at least such an experiment in the camera-ready version, to judge whether the improvement delivered by the method comes from a better acquisition strategy or better modelling.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Authors admit that the increased difficulty of optimising EULBO is a limitation of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > While the authors cite previous work proving that the selected action satisfies a convergence guarantee, this does not directly prove anything about the performance of an optimisation process choosing actions in such a way (e.g., a regret bound or convergence to the optimum). The paper could be made much stronger if the authors managed to connect those notions in some way. However, I also do not think it is critical, as the main contribution of the paper is empirical (although it would be greatly appreciated).
To the best of our knowledge, non-asymptotic convergence proofs under approximate inference, in both Bayesian optimization and bandits, are rare. Furthermore, most existing works assume that one can explicitly control the fidelity of the approximation or that the true posterior is within the variational family. The benefits of utility-calibrated variational inference, on the other hand, only exist when the approximation is imperfect. Therefore, we are currently unsure how to analyze the theoretical properties of utility-calibrated Bayesian inference. However, we believe this would be a very interesting avenue for future research and will definitely explore this direction. We also point out that some asymptotic analyses of consistency exist, as mentioned in Lines 151-152.
> Can the authors provide the running time of the proposed method (and baselines) ? It is ok if the proposed method is slower, but it would be good to quantify exactly how much slower it is.
Here are the wall-clock run times for running TuRBO on the Lasso DNA task from the paper using the standard ELBO compared to using EULBO with EI and using EULBO with KG:
| method | execution time (min) |
|---|---|
| ELBO | 184.40 $\pm$ 0.59 |
| EULBO-EI | 267.30 $\pm$ 2.53 |
| EULBO-KG | 296.95 $\pm$ 1.31 |
We agree that providing a full table of average wall-clock runtimes for EULBO and all baselines would be useful, we will plan to add this table to the appendix for the camera ready version. Thank you for the suggestion!
> Do the authors have any expectations on how the method would perform in comparison to an exact GP on a smaller dataset? While I understand that the method is particularly designed for large datasets, it would be interesting to see how the utility-VI-based acquisition strategy compares to standard acquisition functions like EI. It would be nice to have at least such an experiment in the camera-ready version, to judge whether the improvement delivered by the method comes from a better acquisition strategy or better modelling.
We point to the top-left panel of Figure 2, where we present results on the classic Hartmann 6 function. Somewhat surprisingly, our proposed approach does better than exact GPs in this one instance. Because SVGPs are not only a variational approximation but a distinct model from exact GPs due to the use of inducing points, there is no reason to expect that exact GPs are a limiting case for the BO performance of SVGPs. Indeed, exact GPs arguably have their own “mismatches” between learning and acquisition like the ones pointed out in this paper: hyperparameter learning in exact GPs is not done in a utility-aware fashion, for example, while it is with EULBO. In general, we still expect exact GPs to be quite performant in settings where they can be applied. (Also note the response to Reviewer TCW1 on a similar question.)
---
Rebuttal Comment 1.1:
Comment: Thank you very much for responding to my review. I am happy with the response and I believe the paper is worthy of publication. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
UQE: A Query Engine for Unstructured Databases | Accept (poster) | Summary: This paper proposed a Universal Query Engine (UQE) to directly draw insights from unstructured data. For aggregation queries, the paper designed an unbiased sampling algorithm to address the difficulty of applying indexes to virtual columns. Inspired by the workflow of the C++ compiler, the paper also proposed a workflow for optimizing and generating prompts to achieve more efficient queries.
Strengths: 1. This paper addressed an interesting and important problem: by using LLMs, more complex, semantics-related SQL queries can be supported.
2. This paper took efficiency into consideration and proposed a corresponding algorithm and workflow to improve query speed.
3. Provided a theoretical proof of the unbiased sampling algorithm.
4. The idea of introducing a compiler's workflow into this problem to accelerate LLM inference is interesting.
Weaknesses: 1. UQE seems to lack scalability on the Semantic Retrieval task. The latency reported in Table 5 is as high as 38.08 seconds. Also, the paper did not state whether these numbers are total latency or average latency per query. If they are per-query latencies, using UQE for Semantic Retrieval is still far from practical.
2. The cost of the embedding operation in Algorithms 1 and 2, and the cost of Algorithm 2 itself, are not discussed in the paper. Although the authors wrote in the appendix that the g function in Algorithm 2 causes the high latency, no quantitative analysis was provided.
3. Lack of comparison with important baselines. I noticed that references [9] and [33] mentioned in the related work are not included as baselines. Using an LLM as a UDF may significantly slow down queries; however, it may achieve better accuracy on aggregation queries. Thus, I think it is necessary to include them as baselines to further evaluate the trade-off between accuracy and latency made by Algorithm 1.
4. Lack of evaluation of UQE's time cost in the main text of the paper. I noticed those results are reported in the appendix; however, in the database area, latency and cost are significant metrics that should be reported in the main text.
Technical Quality: 3
Clarity: 4
Questions for Authors: I noticed that lc-gpt-4-turbo also has quite a high latency on Semantic Retrieval tasks. Is it possible to further optimize the model itself to improve performance and make such a method practical?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: Yes, the authors have mentioned the limitations of this work, but I suggest providing more comprehensive evaluations and more detailed discussions of UQE's scalability and cost. After all, these are critical issues for practicality.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments, and we do value your opinion regarding the latency issues. Below we try to justify from the use cases and existing baselines, and also provide more experimental results.
### Q1 **"...latency...far distance from practice..."**
The latency is per query and is averaged over 8 runs. We agree that this latency is not suited for real-time queries like those served by SQL engines behind web apps. The type of query where UQE is most useful is massive data analytics, where the alternative solution is usually to train a task-specific ML model, build up the data preprocessing pipeline, and then run SQL, which can easily take data scientists days.
Secondly, our framework offers the flexibility to trade off latency against accuracy. Take the IMDB retrieval experiment as an example:
| Budget $B$ | 256 | 128 | 64 | 32 |
| ---------- | ------------- | ------------- | ------------- | ------------- |
| Latency(s) | 38.08 | 21.61 | 11.11 | 5.84 |
| F1 score | 0.978 ± 0.003 | 0.974 ± 0.005 | 0.921 ± 0.013 | 0.828 ± 0.035 |
And for aggregation:
| Budget $B$ | 128 | 64 | 32 |
| ---------- | -------------- | ------------- | ------------- |
| Latency(s) | 5.83 | 4.28 | 2.93 |
| Error | 5.75% ± 3.43% | 6.84% ± 5.52% | 8.29% ± 7.39% |
So compared to alternative solutions for data analytics, which take days to first preprocess the data into structured columns and then run SQL, UQE is significantly faster.
### Q2 **"...cost of embedding... cost of Algorithm 2...g function..."**
The embedding cost is the same as for vector DBs. This is a preprocessing step and only needs to be done once per dataset. For text we use voyage-2, which costs 0.1 USD per million tokens, and it takes roughly 8h (due to rate limits) to embed these datasets. For images we use Cloud Vertex multi-modal embedding, which takes 12h (due to rate limits).
For a query on IMDB that takes 21s, querying g and updating g take 1.5s and 0.5s respectively. The main source of latency is the sequential nature of the online learning of g; a potential improvement is to leverage batched online learning for more parallelism.
### Q3 **"...baselines.. the trade-off made by Algorithm 1 between accuracy and latency"**
We followed the reviewer's advice to include BINDER (the viper-gpt is similar but for images) in all the experiments, and as shown in our global rebuttal, the performance of BINDER is much worse when given the same LLM budgets (we stop the BINDER execution when running out of budget, as looping over the entire database can easily cost hundreds of dollars, which is not feasible for us).
To see how the accuracy evolves for both UQE and BINDER, we perform the following experiments on aggregation tasks (IMDB) by varying the budget. In the paper we use B=128; in the table below we lower the budget for UQE and raise it for BINDER.
| Budget $B$ | 512 | 256 | 128 | 64 | 32 |
| ---------- | -------------- | ------------- | -------------- | ------------- | ------------- |
| BINDER | 8.11% ± 3.15% | 8.35% ± 5.47% | 13.67% ± 6.24% | - | - |
| UQE | - | - | 5.75% ± 3.43% | 6.84% ± 5.52% | 8.29% ± 7.39% |
We can see that both BINDER and UQE produce more accurate estimates when given more budget, but BINDER needs 16x more LLM calls (512) than UQE (32) to reach the same level of accuracy. We hope this answers the question about the trade-off in Algorithm 1.
### Q4 **"...latency and cost...in the main text..."**
We have provided the cost per query in every table in the main text. We will follow the reviewer's advice to include the latency and also the above trade-off analysis into the main text.
### Q5 **"... lc-gpt-4-turbo also has quite a high latency..."**
lc-gpt-4-turbo is essentially the baseline that feeds the entire content into gpt-4-turbo and asks questions. There are several ways to make it faster (though making the LLM itself run faster is not the focus of this paper):
1. Prefix caching: if the data never changes, one can cache the majority of the KV states of the prompts, saving roughly 2x in cost/latency.
2. In general, models will only get faster and cheaper, e.g., gpt-4o-mini, which came out recently.
---
Rebuttal 2:
Title: We'd love to hear your opinion on the rebuttal
Comment: Dear reviewer E5QU,
We have provided additional experiments and explanations to address your valuable feedback.
Could you please kindly take a look, and let us know if you have further concerns so that we can provide further clarifications if needed?
Thank you so much!
Best, | Summary: This paper proposes a new Universal Query Engine (UQE) that directly interrogates and draws insights from unstructured data collections.
Further, a Universal Query Language (UQL) is proposed as a dialect of SQL that provides full natural language flexibility in specifying conditions and operators.
UQE leverages the ability of LLMs to conduct analysis of unstructured data and achieve efficient and accurate query execution.
Strengths: 1. The idea of designing new query language and query engine for unstructured databases leveraging the power of LLMs is interesting.
2. The paper is also very clear with thorough experiments and analysis.
3. Well-organized presentation of the proposed method and its components.
Weaknesses: 1. Many details about the design and implementation of UQE are missing. For example, the description of the generation of machine-specific code (assembly instructions) during the compilation stage (L251) is not clear. Please clarify the details of this strategy. Additionally, the authors claim that calling the LLM dominates the overall cost per query (L239). Providing detailed information about the LLM-calling stage would be beneficial.
2. The indexing technique introduced by UQE is related to the sampling strategy based on specific queries. However, database indexing is traditionally meant to facilitate quick searches. If each query requires sampling, will this impact efficiency? Additionally, doesn't this compromise accuracy?
3. This paper primarily addresses unstructured datasets, including video and audio. However, the experiments only involve text and images, lacking validation for broader generalizability.
4. Furthermore, there are plenty of multi-modal vector databases, yet the authors only compare the RAG-based query. Can UQE outperform recent multi-modal query databases, such as Milvus and MillenniumDB? If so, demonstrating this would make the evaluation more convincing.
5. Does the query support join operations, such as joining across multiple different modal files?
6. It's hard to validate the results without supplying code/models.
Technical Quality: 2
Clarity: 2
Questions for Authors: Please see the weakness section.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: No discussion of limitations; I suggest the authors add one.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments. Below we try to address the potential misunderstandings, and provide more experimental results for justifications.
### Q1 **"...machine-specific code...clarify the details..."**
The analogy is meant to show how to convert a query (in UQL) into "machine code" (the concrete prompts that work with LLMs). Concretely, for example, a `WHERE cond` statement would be "compiled" to
>*Please analyze the following movie review, and only reply <True> if cond, or <False> otherwise.*
and executed on the rows sampled by the online-learning or unbiased samplers. We provide the full set of prompts in the appendix for your information.
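To make this "compilation" step concrete, here is a minimal Python sketch of the idea. The function names and the exact template text are hypothetical illustrations, not the system's actual implementation (the real prompts are in the appendix):

```python
def compile_where(cond: str, row_text: str) -> str:
    """Hypothetical sketch: 'compile' a UQL `WHERE cond` clause into a
    yes/no classification prompt for an LLM, applied to one sampled row."""
    return (
        "Please analyze the following movie review, and only reply "
        f"<True> if {cond}, or <False> otherwise.\n\n"
        f"Review: {row_text}"
    )


def evaluate_where(llm_reply: str) -> bool:
    # The engine keeps only the rows for which the LLM answered <True>.
    return "<True>" in llm_reply
```

In this sketch the condition is interpolated verbatim into a fixed template, mirroring how a query planner substitutes predicates into a prepared statement.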
### Q1 (cont.) **"...LLM dominates the overall cost per query...Providing detailed information"**
For example, a query on the CLEVR dataset costs 0.2 USD using gpt-4o. The client-side compute only takes 30s of CPU time. On AWS such a machine costs 5.80 USD/month, so 30s of compute is negligible compared to the LLM calls.
### Q2 **"...database indexing...sampling...impact efficiency? ...accuracy?"**
A traditional database can build indexes on the columns that might be conditioned on, so as to avoid a linear scan of the entire database. For unstructured data, since we do not know the query beforehand, no such indexing exists. Vector DBs build embedding-based indexes, but these are only useful for limited types of queries, as we have seen in the experiments.
Based on that, we propose an analogue of indexing for unstructured databases, i.e., sampling techniques that 1) avoid a linear scan; 2) handle complex reasoning; 3) come with statistical guarantees.
So to answer your question, it is actually the sampling (and the online learning) that makes things efficient. There is a rich literature [1] on approximate query engines, and we have shown through comprehensive experiments that our accuracy is much better than the alternatives (e.g., **40x** the F1 score of the reviewer-suggested Milvus for some queries).
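As an illustration of points 1) and 3) above, the following minimal sketch shows budgeted, unbiased aggregation via uniform sampling: instead of running the expensive LLM predicate over all N rows, sample B rows and scale the count up. It deliberately omits the online-learning component that the paper uses to reduce variance; the function name is a hypothetical illustration:

```python
import random


def estimate_count(rows, predicate, budget):
    """Sketch: unbiased estimate of COUNT(rows satisfying predicate)
    using only `budget` predicate evaluations (each evaluation stands
    in for one LLM call). Uniform sampling without replacement gives an
    unbiased estimator when scaled by N / B."""
    sample = random.sample(rows, min(budget, len(rows)))
    hits = sum(1 for r in sample if predicate(r))
    return len(rows) * hits / len(sample)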
### Q3 **"...addresses unstructured datasets, including video and audio. However, the experiments only involve text and images"**
The proposed UQE framework relies on the LLM backend for multi-modal queries, and the optimization done in the paper is agnostic to the modality.
Nevertheless, we provide new experimental results on the Audio MNIST, which contains 30k wav files from 60 speakers pronouncing digits 0-9.
Below we perform the audio semantic retrieval experiments. The query is first converted to audio space using TTS and the corresponding audio embedding is used for MIPS/Milvus search.
| Query | BINDER-gemini | MIPS/Milvus | UQE-gemini |
| ---------------------------------------- | ------------- | ----------- | ------------- |
| "Three" | 0.109 ± 0.022 | 0.445 | **0.839 ± 0.135** |
| "A number whose square equals to itself" | 0.259 ± 0.068 | 0.039 | **0.922 ± 0.024** |
| "The minimum prime number" | 0.107 ± 0.044 | 0 | **0.917 ± 0.025** |
We can see that UQE consistently does better than the alternatives, and it also allows complex queries that require reasoning, whereas embedding-based MIPS or Milvus is limited in the types of queries it can handle.
### Q4 **"...Can UQE outperform ... Milvus and MillenniumDB..."**
In our paper we compared against MIPS (maximum inner-product search) and showed a clear advantage in Table 2. A vector DB like Milvus is designed for fast but approximate nearest-neighbor (ANN) search. By default it uses the inner product as the similarity metric, and is thus expected to approximate the MIPS in our paper (where we do a full vector database scan without approximation).
Nevertheless, we still ran Milvus on all benchmarks; the results are included in the global response, and UQE still wins on almost all query types. MillenniumDB is for graph or structured data queries and thus may not be suitable for comparison. We hope this resolves some misunderstandings.
### Q5 **"...join operations..."**
No; we stated in our conclusion section that table join optimization is not done in this paper. That said, one can still create the virtual columns in UQL via `SELECT` and then use an existing SQL engine for the joins, at a potentially very high cost in language model calls. We hope to optimize the join operation in future work.
### Q6 **"...code/models"**
The models, data, and prompts are all provided already. We will release the code.
### **Limitation section**
We have already clearly stated the limitations of the current work in the conclusion section, including 1) lack of table join optimization; 2) LLM selection; 3) even larger datasets. We will address these in future work.
References:
>[1] A handbook for building an approximate query engine.
---
Rebuttal 2:
Title: We'd love to hear your opinion on the rebuttal
Comment: Dear reviewer rs84,
We have provided additional experiments and explanations to address your valuable feedback.
Could you please kindly take a look, and let us know if you have further concerns so that we can provide further clarifications if needed?
Thank you so much!
Best,
---
Rebuttal Comment 2.1:
Comment: Thanks for your quick response! Most of my concerns are resolved now. I am increasing my score to 5. | Summary: This paper proposes an ambitious new framework for analytics on unstructured databases, the Universal Query Engine (UQE). The authors first present a list of semantics and clauses for querying unstructured databases and propose methods for implementing these functionalities, including indexing and compiling. Experiments on multimodal unstructured data analytics tasks show that UQE outperforms other methods, demonstrating the promise of utilizing UQE in database analytics.
Strengths: - The motivation is quite interesting and necessary to broaden the functionalities of databases.
- The paper is clearly written and well-presented.
- The authors presented the exact scope of the task and proposed methods accordingly.
- The proposed approach is reasonable and performs well within the scope of the work.
Weaknesses: - Could you elaborate more on the difference between BINDER and this approach (line 39)? Is it due to the lack of indexing and compatibility with unstructured information?
- Similar to the first point, the baselines seem a bit weak. Why have the authors not used BINDER as their baseline?
- I want to hear the authors' opinions on how advancements in LLMs and vision models might affect the guarantee that databases consistently return the same results. Traditional database engines, which are widely used, provide this consistency over time. Could these technological advancements affect the reliability of database analytics and, subsequently, compromise their reliability?
Technical Quality: 4
Clarity: 4
Questions for Authors: I have listed my questions in the Weaknesses section.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The limitations of the work are properly stated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for providing insightful and constructive feedback. We provide our response below, as well as extra experimental results in the global response. We look forward to learning your further thoughts.
### **"... difference between BINDER..."**
Yes, as the reviewer pointed out, our focus is on the analogue of "indexing" (i.e., sampling and search) to efficiently implement the unstructured database query engine. BINDER focuses on translating natural language questions into programs with the LLM as a callable function; the execution of this program is delegated to existing SQL implementations. The two can actually work together to deliver a better end-to-end system.
### **"...used BINDER as their baseline?"**
While BINDER and our UQE have different focuses, we provided additional experiments using BINDER on our tasks in the general response. We can see that without the query engine optimization, BINDER would achieve much higher variance and error when performing aggregation queries, and the retrieval F1 is also much lower compared to UQE under the same LLM budgets.
### **"...consistently return the same results...compromise their reliability"**
We fully agree with the reviewer that consistency and reliability are important; that is why our algorithm (especially the one for aggregation queries) optimizes for unbiased, low-variance estimation, which significantly improves on all baselines in this respect. In addition, one can reduce the temperature of the LLMs and use fixed random seeds to make the results reproducible for a given setup.
The reviewer raises another good point: the LLM behind the scenes gets updated, which can cause inconsistency across different LLM versions. This is a generic problem for applications built on top of LLMs, but there are several ways to mitigate it.
1. Result caching: for the same query, one can always cache the intermediate or final results to save compute and keep answers consistent for the same question.
2. Prompt calibration: once a new model is deployed, common practice is to adjust the prompt slightly so that it remains consistent with, or fits better on, the target tasks. The same can be done for database queries, as long as a proper evaluation set is available. In our experiments we prepare multi-modal datasets with ground-truth labels to validate and calibrate the prompts.
3. Even without any further effort, the results may not be that inconsistent in some situations. In our paper we used claude-3-haiku as the backbone; below we use the latest gpt-4o-mini for additional justification. On IMDB we see little variation: the difference caused by switching models is much smaller than that caused by using a worse query engine.
| Task | BINDER | UQE-haiku (in paper) | UQE-gpt4o-mini (new) |
| ----------- | -------------- | -------------------- | -------------------- |
| Retrieval | 0.505 ± 0.030 | 0.978 ± 0.003 | 0.956 ±0.010 |
| Aggregation | 13.67% ± 6.24% | 5.75% ± 3.43% | 6.33%± 4.71% |
Of course, this behavior shift is task-dependent, but we believe that as LLMs advance this variation will shrink and outputs will become more stable with respect to prompts.
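Mitigation 1 above (result caching) could be sketched as follows. This is a hypothetical illustration of the idea, not the system's actual implementation; class and method names are assumptions:

```python
class QueryCache:
    """Sketch of result caching: memoize answers keyed by (model
    version, query) so repeated questions return identical results
    even as the backing LLM evolves across versions."""

    def __init__(self):
        self._store = {}

    def get_or_compute(self, model_version, query, run_query):
        key = (model_version, query)
        if key not in self._store:
            # Only hit the (expensive, possibly nondeterministic) LLM
            # pipeline on a cache miss.
            self._store[key] = run_query(query)
        return self._store[key]
```

A new model version gets a fresh cache key, so calibration against a new backend never silently changes previously returned answers.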
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. It cleared up my concerns.
---
Reply to Comment 1.1.1:
Title: thank you!
Comment: Thank you reviewer 6SvA for your kind reply and your recognition of the paper! | Summary: This paper proposes an unstructured query language based on a small segment of SQL. Its key feature is that it further supports unstructured texts and images because the authors assume that LLMs work on intra-row texts. During the query, the authors propose using online learning on LLM outputs over batches to improve the sample quality in the upcoming batches. Empirical results convey important information; that is, feeding longer context into larger LLMs does not work as well as using smaller LLMs in a more organized way.
Strengths: This paper discusses several possibilities for using LLMs in relational databases with text columns. It reveals a new possibility that one could do something between 1. using LLMs as a black box and letting them handle everything in the context. and 2. using LLMs to parse the question into SQL and let the database engine do the job with much smaller LLMs.
For queries requiring sampling, a simple online learning algorithm is used to improve the sample efficiency.
Weaknesses: - This paper is more like a technical report or a guideline for building certain products. The overall product presented is simplified from the introduction's proposed question. Realizing this system under the strong assumption that LLMs work on intra-row semantic understanding is not too difficult: the virtual column, although it seems to be filled by LLMs, can be realized in several prompts, such as closed-domain classification for WHERE and open-domain classification for GROUPBY.
Technical Quality: 2
Clarity: 2
Questions for Authors: - Given that the output in the virtual column will work with other parts of the SQL, what is the failure rate in output formatting in the experiments?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes,
Flag For Ethics Review: ['Ethics review needed: Environmental Impact']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments. Below we try to address the potential misunderstandings, and provide more experimental results for justifications.
### **"...technical report...the strong assumption that LLMs work on intra-row semantic understanding"**
While generating the virtual column in its entirety and processing it with SQL is a feasible approach, the goal of the paper is to present an efficient system that avoids this kind of linear scan of the database content. The core contribution thus lies in the unbiased, low-variance sampler and the online learning approach that orchestrates well with the LLMs. We therefore focus on providing algorithmic details and proofs (see Appendix A) for these, while placing less emphasis on the database and systems layer. If the reviewer would like to learn more about specific details, please feel free to let us know. We will also release the code.
Regarding the assumption that LLMs work for intra-row semantic understanding, we have examined this carefully. On the IMDB dataset, the GPT model achieves **97%** accuracy on average per review, and many of the errors are actually due to unclear sentiment in the data itself. The accuracy will certainly depend on the difficulty of the question, but a good-enough out-of-the-box solution is a useful starting point, given that the frontier of LLMs is still being pushed.
### **"...failure rate in output formatting..."**
We completely rewrote the entire SQL engine, so error handling is easy to do in our system. As shown in the appendix, most of the time we ask LLMs to output <True> or <False> based on the query. We ran a quick experiment to see how often the LLM fails to generate either of these.
| Dataset | LLM | Violation rate |
| ------- | -------------- | -------------- |
| IMDB | Claude-3-haiku | 0.0% |
| | gpt-4o-mini | 0.8% |
| ABCD | Claude-3-haiku | 0.0% |
| | gpt-4o-mini | 0.2% |
We can see that the above models, despite being small and cheap, are very good at instruction following, and the formatting violation rate can be tolerated with extra error handling. We believe the next generation of LLMs will only be better at instruction following. In fact, as of Aug 6, OpenAI released a new version of gpt-4o (**gpt-4o-2024-08-06**) that claims **100%** accuracy on complex format following.
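The error handling described above could look like the following hypothetical sketch, where a reply containing neither (or both) of the expected tags is flagged as a formatting violation so the engine can retry or discard that sample:

```python
def parse_label(reply: str):
    """Hypothetical sketch of <True>/<False> parsing with violation
    handling. Returns True or False on a well-formed reply, or None
    when the reply is a formatting violation (neither tag, or both)."""
    has_true = "<True>" in reply
    has_false = "<False>" in reply
    if has_true == has_false:   # neither tag present, or both present
        return None             # caller may retry the same row
    return has_true
```

Treating ambiguous replies as retryable rather than guessing keeps the downstream count estimates from absorbing formatting noise.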
---
Rebuttal 2:
Title: We'd love to hear your opinion on the rebuttal
Comment: Dear reviewer uiEW,
We have provided additional experiments and explanations to address your valuable feedback.
Could you please kindly take a look, and let us know if you have further concerns so that we can provide further clarifications if needed?
Thank you so much!
Best, | Rebuttal 1:
Rebuttal: We thank all the reviewers for their insightful feedback, and below we provide experimental updates w.r.t baselines to address the reviewers’ comments.
### **Comparison with baselines like BINDER [1] (review 6SvA and E5QU)**
While [1] and UQE use similar SQL-like languages, the focus is very different. [1] and [2] focus on translating natural language questions into programs with the LLM as a callable function; the execution of this program is delegated to existing SQL implementations. In contrast, UQE replaces the query engine layer, focusing on executing such programs efficiently and accurately.
Nevertheless, we adapted BINDER (or equivalently [2] for images) to our tasks by providing the program directly and executing it until it reached the same API cost as UQE.
Below is the estimation error and variance on aggregation tasks (lower is better).
| Benchmark | Query | lc-GPT4-turbo | BINDER | UQE |
|:---------:|:------------------:|:---------------:|:---------------:|:-------------------:|
| IMDB | sentiment_positive | 49.02% ± 21.23% | 13.67% ± 6.24% | **5.75% ± 3.43%** |
| ABCD | account_access | 69.25% ± 32.82% | 18.99% ± 9.85% | **11.75% ± 9.78%** |
| | single_item_query | 78.42% ± 9.36% | 26.95% ± 22.16% | **12.32% ± 10.53%** |
| AirDialog | book | 47.58% ± 15.24% | 10.15% ± 7.64% | **4.98% ± 2.26%** |
| | no_flight | 47.92% ± 21.62% | 21.08% ± 16.78% | **8.78% ± 8.12%** |
| | no_reservation | 50.54% ± 21.86% | 21.19% ± 12.10% | **7.23% ± 5.40%** |
| Clevr | obj_count < 4 | 22.46% ± 19.35% | 31.04% ± 25.15% | **9.55% ± 8.55%** |
| | # spheres > 3 | 35.72% ± 14.95% | 19.35% ± 13.81% | **15.14% ± 10.71%** |
Below is the F1 score (higher is better) on semantic retrieval tasks. Note that the vector DB (per reviewer rs84, Milvus) performs approximate MIPS, as reported in the paper, so the two behave essentially the same and we merge their results below:
| Benchmark | Query | lc-GPT4-turbo | BINDER | MIPS/Milvus | UQE |
|:---------:|:------------------:|:--------------:|:--------------:|:-----------:|:-----------------:|
| IMDB | sentiment_positive | 0.397 ± 0.041 | 0.505 ± 0.030 | 0.875 | **0.978 ± 0.003** |
| ABCD | account_access | 0.045 ± 0.033 | 0.076 ± 0.017 | **0.961** | 0.940 ± 0.019 |
| | single_item_query | 0.023 ± 0.021 | 0.065 ± 0.017 | 0.266 | **0.935 ± 0.006** |
| AirDialog | book | 0.327 ± 0.0667 | 0.342 ± 0.031 | 0.930 | **0.979 ± 0.010** |
| | no_flight | 0.066 ± 0.037 | 0.144 ± 0.034 | 0.867 | **0.928 ± 0.018** |
| | no_reservation | 0.156 ± 0.075 | 0.145 ± 0.042 | 0.965 | **0.969 ± 0.004** |
| | cancel | 0.006 ± 0.009 | 0.013 ± 0.009 | 0.066 | **0.741 ± 0.205** |
| Clevr | obj_count < 4 | 0.058 ± 0.026 | 0.093 ± 0.031 | 0.023 | **0.897 ± 0.006** |
| | # spheres > 3 | 0.037 ± 0.027 | 0.089 ± 0.017 | 0.145 | **0.859 ± 0.007** |
From the tables above, we can see that UQE achieves much better estimation precision and retrieval effectiveness than BINDER or a straightforward query engine implementation. Meanwhile, we confirmed again that vector DBs, even commercial ones, may not be suitable for some retrieval tasks that require deep reasoning.
To see how the accuracy evolves for both UQE and BINDER, we perform the following experiments on aggregation tasks (IMDB) by varying the budget. In the paper we use B=128; in the table below we lower the budget for UQE and raise it for BINDER.
| Budget $B$ | 512 | 256 | 128 | 64 | 32 |
| ---------- | -------------- | ------------- | -------------- | ------------- | ------------- |
| BINDER | 8.11% ± 3.15% | 8.35% ± 5.47% | 13.67% ± 6.24% | - | - |
| UQE | - | - | 5.75% ± 3.43% | 6.84% ± 5.52% | 8.29% ± 7.39% |
We can see that both BINDER and UQE produce more accurate estimates when given more budget, but BINDER needs 16x more LLM calls (512) than UQE (32) to reach the same level of accuracy.
### **Results on audio database (review rs84)**
Following reviewer rs84's suggestion, we provide new experimental results on Audio MNIST, which contains 30k wav files from 60 speakers pronouncing the digits 0-9.
Below we perform the audio semantic retrieval experiments. The query is first converted to audio space using TTS and the corresponding audio embedding is used for MIPS/Milvus search.
| Query | BINDER-gemini | MIPS/Milvus | UQE-gemini |
| ---------------------------------------- | ------------- | ----------- | ------------- |
| "Three" | 0.109 ± 0.022 | 0.445 | **0.839 ± 0.135** |
| "A number whose square equals to itself" | 0.259 ± 0.068 | 0.039 | **0.922 ± 0.024** |
| "The minimum prime number" | 0.107 ± 0.044 | 0 | **0.917 ± 0.025** |
We can see that UQE consistently does better than the alternatives, and it also allows complex queries that require reasoning, whereas embedding-based MIPS or Milvus is limited in the types of queries it can handle.
>References\
[1] Binder: Binding Language Models in Symbolic Languages\
[2] ViperGPT: Visual Inference via Python Execution for Reasoning | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
C-GAIL: Stabilizing Generative Adversarial Imitation Learning with Control Theory | Accept (poster) | Summary: This paper theoretically analyzes the training dynamics of GAIL, a widely discussed issue in GANs, pointing out that the original GAIL cannot converge to the desired equilibrium. From a control theory perspective, the paper proposes C-GAIL, which achieves asymptotic stability. The paper demonstrates that, compared to several baseline methods, C-GAIL accelerates convergence, reduces oscillations, and more closely approximates the expert distribution on 5 typical MuJoCo control tasks.
Strengths: - Well-written paper with clear objectives. The authors have the intention to share the code.
- The idea is novel for GAIL, although it draws on concepts from GAN.
- Theoretical analysis is complete.
- The method is simple to implement and can be easily applied to various existing GAIL methods.
Weaknesses: - The theoretical analysis is based on quite a few assumptions, and the final implementation method is only a rough approximation of the theoretical results.
- The experimental section is not comprehensive enough. Given that the method is simple and easy to deploy, can its effectiveness be validated in more GAIL variants and environments?
- There is a lack of comparison with the latest GAIL variant algorithms; only GAIL (2016) and GAIL-DAC (2018) are shown in Figures 1, 2, and 3.
- There is a divergence between the theoretical motivation of the paper and the final algorithm implementation.
Technical Quality: 3
Clarity: 2
Questions for Authors: - What are the main differences between C-GAIL and similar methods in GANs? It is recommended to provide a separate discussion.
- The authors claim that the reason for the original GAIL's failure to converge to equilibrium is due to the entropy regularization term. Removing entropy can ensure convergence to equilibrium at the cost of exploration. However, if $V^{\prime}_{\pi}$ is not computable, will it affect the policy's exploration? Compared to directly removing entropy, does C-GAIL show an advantage in exploration?
- The theoretical analysis is based on the original GAIL, which measures JS divergence. For variants like WGAIL, LS-GAIL, and f-GAIL, can convergence and stability still be guaranteed?
- How long is each expert demonstration in the experiments? How were the expert demonstrations obtained?
- In Figure 2, what causes the significant drop in the blue line in the Hopper environment? Were the curves plotted using the averages of multiple experiments?
- $\bar{s}$ and $\bar{a}$ in line 89 are undefined.
- Table 2 lacks the expert reward baseline as a reference.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The limitations of the method have been outlined in the discussion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We hope our response might provide sufficient evidence that our method is general enough to consider upgrading your score. In particular please note the additional experiments on a new environment, and the three additional variants (two already included in Appendix F, one new in the global rebuttal).
**Q1. From theory to practical settings**
A1. Please see our global review A3.
**Q2. More environments**
A2. We have included a set of Atari results under the global rebuttal. Please let us know any specific additional environments you'd like to see.
**Q3. Latest GAIL variants**
A3. While the results in the paper's main body indeed focus on GAIL and GAIL-DAC, Appendix F Table 2 already includes comparisons with two other recent GAIL variants -- BC-GAIL and WAIL. We will make these results more prominent in the next version of the paper. In addition, we have included a diffusion variant in the global rebuttal. Please let us know any specific additional variants you'd like to see.
**Q4. Differences between C-GAIL and similar methods in GANs?**
A4. Compared with GANs, the policy generator in GAIL involves an MDP transition, which results in a much more complicated dynamical system, induced by a policy acting in an MDP rather than a static data-generating distribution. Prior theoretical analyses and controllers are therefore inapplicable. We adopt a different analysis (different ODEs) and different control techniques (local linearization) to present a new stability guarantee, controller, and theoretical results for GAIL's dynamical system.
**Q5. Exploration for entropy term**
A5. Thank you for this comment. Inclusion of the entropy term is indeed critical for allowing the generator policy to explore the state space. C-GAIL maintains this entropy term while additionally adding a controller to stabilize the system. Removing the entropy term directly, as you suggest, is an interesting proposal, but we expect it would cause the algorithm to collapse to a local minimum without fully exploring the space.
**Q6. Theoretical guarantee for other GAIL variants**
A6. Our work presents a methodology to analyze the training process as a dynamical system and design a controller to stabilize its training. To design a controller for another GAIL variant, one would start from its objective function and take derivatives to obtain the corresponding dynamical system. The only difference is in the design of the generator's controller to ensure the local linearization theorem applies; the controller for the discriminator remains the same.
In this manner, since our implementation of C-GAIL only adds a controller for the discriminator, C-GAIL is compatible with WAIL, LSGAIL, and f-GAIL. We have provided results compared with WAIL in Appendix F.
**Q7. Length of expert demonstration and how obtained**
A7. Our main experiments follow the same settings as GAIL-DAC [1], which in turn followed vanilla GAIL [2] in using 50 timesteps per trajectory. The expert policies were trained with TRPO.
**Q8. Drop in Hopper Figure 2, number of random seeds**
A8. This is a good observation. We are not certain why there is a sudden dip in the Wasserstein distance for Hopper. Possibly, because the distance is computed between a small number of expert and generated trajectories, there is noise in our measurement, leading to these unexpected spikes.
Figure 2 is computed by averaging five random seeds. We apologize for the confusion caused by not originally including error bars. We present the standard deviations as a table in global rebuttal Q4 and will modify the figure.
**Q9. Undefined notations**
A9. Thank you for pointing it out. We will properly define them to avoid confusion. They are substitutes notation for $s$ and $a$ since we have the initial condition $s_0 = s$ and $a_0 = a$.
**Q10. Table 2 missing expert return**
A10. Thank you for noticing this. We will add an extra row for expert return. Since we are conducting on the same Mujuco tasks as Figure 1 and Table 1, the expert return is the same.
[1] Ilya Kostrikov, Kumar Krishna Agrawal, Debidatta Dwibedi, Sergey Levine, and Jonathan Tompson. Discriminator-actor-critic: Addressing sample inefficiency and reward bias in adversarial imitation learning. arXiv preprint arXiv:1809.02925, 2018.
[2] Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In Advances in Neural Information Processing Systems, pp. 4565–4573, 2016.
---
Rebuttal Comment 1.1:
Title: Thanks for your clarification
Comment: Thanks for your rebuttal. I still think that there is a significant difference between the theoretical foundation of the paper and the actual algorithm presented. However, competitive experimental results have alleviated this concern. I decided to improve my score to 5.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response! We are pleased you have found value in our experimental results. | Summary: This paper proposes a stabilized version of the Generative Adversarial Imitation Learning (GAIL) through control theory, addressing limitations related to achieving equilibrium and slow convergence. The authors conduct a theoretical analysis using control theory to investigate the properties that influence equilibrium achievement. Empirical validation demonstrates that the proposed modifications improve performance and training stability.
Strengths: - The paper addresses an important problem in the stability of GAIL, which is crucial for the practical implementation of imitation learning techniques.
- The paper provides a theoretical foundation, analyzes the GAIL objective, and demonstrates its limitations and proposed improvements.
- The experimental results validate the theoretical claims, showing improved performance and stability in training.
Weaknesses: - It is unclear how the one-step GAIL, a simplified version, generalizes to the full GAIL framework and impacts policy learning.
- The practical settings used in the experiment do not guarantee convergence, raising questions about the need for evaluating one-step GAIL.
- The paper mentions using a linear negative feedback controller to reduce oscillations but lacks details on its implementation and usage.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How does the simplified one-step GAIL compare to the full GAIL in terms of performance and convergence?
2. Can the authors provide a quantitative analysis of training oscillations shown in Figure 1 and correlate them with training stability? It would be beneficial to quantify these oscillations, as they are a major contribution to the paper. It is mentioned 3 times for Walker2d at line 296, but how is this number computed? Similarly, 10x is mentioned in line 298. Additionally, how do these oscillations indicate training stability, and what is the correlation?
3. How is the controller designed and implemented in the system evaluated? Is there a way to automate the design of controllers u_1 and u_2, and what are the empirical hyperparameters involved?
4. In Figure 2, what is the high distance value on the y-axis for C-GAIL-DAC? Could the authors explain the initial higher values?
5. The number of demonstrations (four) mentioned (line 273) and used in the paper seems small. Can the authors elaborate on how this number suffices for learning effective policies?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes. The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We're pleased you have seen value in our new control-theoretic approach to understanding and stabilizing GAIL. We appreciate several nuanced observations you raised, and respond to these below.
**Q1. One-step vs full GAIL**
A1. Theoretically, our controllers work under one-step setting. We follow the insights from one-step setting and extend our controller to full GAIL as C-GAIL. It is difficult to know how to study the difference between the one-step and full setting analytically -- if able to do this, we could directly design controllers for the full setting! It is an interesting direction for future work. However, we have empirically studied the effectiveness of our C-GAIL under the full setting through our experiments. Also see our response A2 to R jmhJ, which notes similar approximations have been used in popular methods such as TRPO, and the global rebuttal A3.
**Q2. Quantifying oscillations**
A2. This is a good point. We have used the term "oscillations" informally in the text, without defining it rigorously. A direct way to interpret this is as variance in the reward curves across runs -- viewing the size of the shaded standard deviation in Figure 1 would be one way to quantify this (or reading off the final standard deviations in Table 2). Another way is to consider the oscillations between successive training time steps -- we do not have a single metric capturing this, but it can be viewed on Figure 1 also.
**Q3. Design and implementation of the controllers**
A3. Note that our controllers $u_1$ and $u_2$ only need to be derived once for a given dynamical system. Automating their design in an arbitrary new dynamical system is an interesting direction for future work, but beyond the scope of this paper. We have chosen to use additive controllers, which are implemented as an addition term to the objective functions (line 238-239). This is summarized in Algorithm 1. Notice that C-GAIL only introduces one additional hyperparameter $k$ (this is one of the attractions of our method). We provide an ablation study in appendix E, finding that C-GAIL is not particularly sensitive to the choice of $k$.
**Q4. High initial value on C-GAIL-DAC**
A4. Thank you for this interesting observation. To reiterate; Figure 2 presents the state Wassertein distance for GAIL-DAC and C-GAIL-DAC. The state Wassertein distance measures the difference between expert and generator distributions at a given point in training. Higher values indicate a larger difference between expert and generator. As noted by the reviewer, in 4/5 environments C-GAIL-DAC briefly has a higher (worse) metric than GAIL-DAC, though in the long run always ends up lower (better).
We are not certain why this occurs, particularly as the reward curves are almost always strictly better for C-GAIL-DAC (Figure 1). Intuitively, since our controller for the discriminator encourages convergence to $ \frac{1}{2}$, the corresponding generator may have a larger distance to the expert one initially. However, as the discriminator approaches its equilibrium, the generator may become stable too as a response.
**Q5. Number of expert demonstrations**
A5. Thank you for this feedback -- we will add a discussion on this in the next version of the paper. We used four expert demonstration to be consistent with the original GAIL-DAC paper [1]. As we understand, one of the advantages of GAIL is that it requires less expert trajectories than other methods (e.g. BC). It's possible that other environments and GAIL variants may require larger numbers of demonstrations (it also depends on the length of each trajectory). We do provide an ablation in Figure 3 with up to 18 demonstrations.
[1] Ilya Kostrikov, Kumar Krishna Agrawal, Debidatta Dwibedi, Sergey Levine, and Jonathan Tompson. Discriminator-actor-critic: Addressing sample inefficiency and reward bias in adversarial imitation learning. arXiv preprint arXiv:1809.02925, 2018. | Summary: The paper formulated training process of GAIL as a dynamic system. From control theory’s view, authors pointed out GAIL would not converge to the desired state where the generator perfectly matches with the expert policy and the discriminator cannot distinguish generated from expert trajectories. Hence, authors proposed a new regularizer to stabilize GAIL.
Strengths: 1. I have appreciated the logical framework regarding training process of GAIL as a dynamics system, where more control theory tools could be introduced in.
2. Theoretical analyses are solid, making proposed regularized more convincing.
3. The topic selection is meaningful, revealing the issue of training instability in GAIL. Proposed controller is quite necessary for stabilizing GAIL.
Weaknesses: 1. This work showed GAIL cannot converge to the only desired state because of biased entropy term. Is it possible that entropy term introduces subtle bias in but improves stability a lot? The case will make your work meaningless. I think more experiment should be interpreted from this view.
2. The GAIL actually minimizes JS divergence between $\rho_{\pi_{E}}$ and $\rho_{\pi}$. For simplicity, the proposed 'C-GAIL' substituted $\rho_{\pi}$ with $\rho$. Does it ignore differences on state distribution? Why the performance still be good?
3. Did this work just substitute a biased regularizer with an unbiased one? It seems that authors should refer success of experiments only to strong convexity of applied controller. I think more details should be supplemented for making your analysis meaningful.
4. More advanced benchmarks should be compared in the experimental section.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is there a missing parenthesis in the formula at line 181? $\tilde{V}_D(D, \pi)=\iint_0 p(s) \pi(a \mid s) \log D(s, a)+\pi_E(a \mid s) \log (1-D(s, a)) d s d a,$
2. It seems tricky that theoretical analysis is done under a strict condition, where one-time step environment is considered and controllers are applied both on policy and discriminator, but original policy objective is applied for updating in implementation, although experimental results are good. Could you explain it more, making it convincing?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have discussed the limitations and future work in Section 7.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review, we are pleased to have communicated the value of our control-theoretic approach. We hope our clarification around the biased-ness of GAIL, and addition of further experiments, might encourage an increase your evaluation of the paper.
**Q1. The entropy term and bias**
A1. We don't believe it's correct to say "the entropy term introduces subtle bias in but improves stability". The role of the entropy term is to allow the generator to explore the state-action space and discover how to reproduce the expert trajectories. But it does not improve stability directly. We have proved that the entropy term prevents the system from converging to the desired global equilibrium, it is not clear to us that there is some other "biased" equilibrium that it is pushed to instead -- empirical evidence comes from the oscillations in the GAIL-DAC training curves (Figure 1) suggesting there is no alternative more stable local equilibrium.
**Q2. Ignoring differences on state distribution and performance**
A2. We indeed ignore the effect of policy changes to the state distribution (switching $\rho_{\pi_E}$ with $\rho$) and only consider the influence of $\pi$ on actions in our one-step setting. We believe this is not uncommon in RL theory to simplify analysis. One example is TRPO [1], which similarly allows the original policy distribution to approximate the updated distribution and provides a theoretical justification for this approximation (Eq2 to Eq6 in its paper).
Notice that the methodology of starting with simplified setting in theory and extend it to general setting in practice is also commonly used in GANs (global rebuttal Q3).
Our intuition for why performance is still good with only a controller on the discriminator is that C-GAIL pushes the discriminator to its equilibrium at a faster speed by introducing a penalty controller centered at $\frac{1}{2}$. This leads GAIL’s training to converge faster with a smaller range of oscillation, and match the expert distribution more closely.
**Q3. Biased regularizer and strong convexity**
A3. We do not believe "bias" is the right term here. In control theory we aim for certain equilibria. As per our above response, it's not clear that GAIL with entropy converges to any equilibrium. Further more, we are not replacing this entropy term, but introducing an additional regularizer that does make the system stable around the desired equilibrium.
We are unsure what the convexity comment refers to. None of our results rely on this assumption.
**Q4 More benchmarks**
A4. Thank you for this suggestion. We have added additional experiments as per the global rebuttal.
**Q5. Missing parenthesis**
A5. Thank you for pointing it out. We will modify it.
**Q6. One-step setting and lack of policy regularizer**
A6. Please see our response A2 and global rebuttal A3.
[1] John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. ICML, 2015.
---
Rebuttal Comment 1.1:
Comment: Authors -- I appreciate your thorough response and am inclined to increase my score from 5 to 6.
---
Reply to Comment 1.1.1:
Comment: Thank you for your time. We are glad that you find our response helpful. | Summary: The paper titled "C-GAIL: Stabilizing Generative Adversarial Imitation Learning with Control Theory" addresses the challenge of training instability in Generative Adversarial Imitation Learning (GAIL), a method used to train a generative policy to imitate a demonstrator's behavior. The authors analyze GAIL's optimization from a control-theoretic perspective and identify that GAIL does not converge to the desired equilibrium due to the entropy term in its objective. To resolve this, they propose Controlled-GAIL (C-GAIL), which introduces a differentiable regularization term to the GAIL objective to stabilize training. Empirical results demonstrate that C-GAIL improves upon existing GAIL methods, including GAIL-DAC, by accelerating convergence, reducing oscillation, and more closely matching the expert's policy distribution. The paper contributes a novel control-theoretic approach to stabilize adversarial training in imitation learning, offering both theoretical insights and practical algorithmic advancements.
Strengths: 1. The paper introduces a novel application of control theory to stabilize the training process of Generative Adversarial Imitation Learning (GAIL). By analyzing GAIL from a control-theoretic perspective, the authors provide a deeper understanding of the optimization challenges inherent in GAIL and propose a theoretical solution that ensures asymptotic stability.
2. The authors develop Controlled-GAIL (C-GAIL), a practical algorithm that incorporates a regularization term derived from control theory. This algorithm is shown to improve the stability and convergence of GAIL in empirical tests, offering a tangible advancement that can be applied to existing GAIL methods to enhance their performance.
3. The paper provides extensive empirical evidence to support the effectiveness of C-GAIL. Through experiments on MuJoCo control tasks, the authors demonstrate that C-GAIL achieves faster convergence, reduces the range of oscillation in training, and more closely matches the expert's policy distribution compared to both the original GAIL and other variants, showcasing the robustness and practical applicability of their approach.
Weaknesses: 1. While the paper proposes a theoretically sound controller for stabilizing GAIL training, the practical implementation of the controller is only applied to the discriminator's objective function. The paper acknowledges that the generator's portion of the controller, which would require knowledge of the expert policy, is not used during training. This limitation means the full potential of the control-theoretic approach is not realized in practice.
2. The paper formulates GAIL training as a continuous dynamical system for the purpose of stability analysis. However, in actual practice, the updates to the generator and discriminator are discrete. The discrepancy between the theoretical model and the practical implementation could potentially affect the real-world applicability of the proposed controller.
3. The stability guarantees provided by the paper rely on certain assumptions, such as specific conditions on the hyperparameters of the controller. These assumptions might not hold in more general settings or across different problem domains, which could limit the broad applicability of the results. Additionally, the paper does not fully explore how violations of these assumptions might impact the performance of C-GAIL.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. While the paper demonstrates the effectiveness of C-GAIL in the context of MuJoCo control tasks, how well does the proposed control-theoretic approach generalize to other domains, such as autonomous driving or game playing, where the dynamics of the environment and the complexity of the tasks might be significantly different?
2. Given that the practical updates in GAIL training are discrete whereas the theoretical model assumes a continuous dynamical system, what is the impact of this discrepancy on the long-term stability and performance of the learned policies, especially in tasks with high stochasticity or where the environment reacts non-linearly to actions?
3. The paper mentions that a proper selection of the hyperparameter k is crucial for the effectiveness of the C-GAIL controller. Can the authors provide more insights on how to determine the optimal values for these hyperparameters in a data-driven manner, without relying on extensive hyperparameter tuning, and how sensitive is the performance of C-GAIL to these choices?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: 1. The paper formulates GAIL as a continuous dynamical system for the purpose of stability analysis, but in practice, updates to the generator and discriminator are discrete. This discrepancy between the theoretical model and practical implementation may impact the applicability of the theoretical results in real-world scenarios.
2. The practical implementation of the controller is applied only to the discriminator's objective function. The paper does not provide a method for incorporating a controller for the policy generator, which would require knowledge of the expert policy that is not available during training.
3. The stability guarantees are based on certain assumptions, such as hyperparameter conditions. The paper does not fully explore how the results might be affected when these assumptions do not hold, which could limit the generalizability of the findings to different settings or problem domains.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper. We are pleased we were able to communicate the strengths of the work effectively. We value your feedback on the gap between theory and practice. We have included a general response to this point in the global rebuttal, and here offer more targeted comments. We hope this, along with further experiments in game playing environments, and clarifying the importance of hyperparameter selection, might warrant an uplift in your score.
**Q1 Theoretical controller to practice**
A1. As noted by the reviewer, we were unable to implement the derived generator regularizer since we don't have access to the expert policy. We would like to emphasize that the *theoretical contribution* made by our work remains -- our theory is non-trivial, offering a new way to study GAIL, and insight into what an ideal generator regularizer would look like, which may inspire future work. We showed empirically that the discriminator alone is still of significant practical benefit by itself.
**Q2. Continuous vs Discrete system**
A2. Gradient flow [1] is a widely used technique across optimization theory to transform discrete gradient step updates to a continuous differential equation. This is critical to allow systems to be studied analytically. We agree with the reviewer that understanding what is lost in this conversion is of interest. However, given the ubiquity of the technique we feel it is beyond the scope of our work to study this in depth in our specific setting. We do note that our empirical results, which use discrete updates (algorithm 1), show the benefit of our continuous study does transfer to the discrete setting.
**Q3. Hyperparameter assumption requirements**
A3. Assumption 4.2 specifies an allowed range of hyperparameters in order for convergence to be guaranteed in theory. It is not clear how impactful this is when choosing $k$ at implementation time, since one needs to know $c$ ($p(s)$) for each state. So the allowed bounds are more valuable from a theoretical perspective.
**Q4. Experiments on other tasks**
A4. Thank you for this suggestion. We have added game environments (Atari) under the global rebuttal.
**Q5 Selection of hyperparameters**
A5. Thank you for this comment, which suggests we have not communicated the effect of the hyperparameter $k$ clearly. We provided an ablation study on $k$ in Figure 4. This shows that C-GAIL is effective under a wide range of $k$ values, from 0.1 to 10, with only marginal gains to be found by tuning it. As such, one of the benefits of our method is the insensitivity to $k$ and lack of need for tuning. We will emphasize this point more clearly in the next version.
[1] Conley, C., 1988. The gradient structure of a flow: I. Ergodic Theory Dynam. Systems, 8. | Rebuttal 1:
Rebuttal: # Global Rebuttal for common questions
Thanks to all reviewers for their constructive feedback. We were pleased our work received a favorable assessment. Whilst we address each reviewer's questions individually, this global rebuttal summarizes our response to common points highlighted by several reviewers. 1) Whether C-GAIL's improved stability holds in further environments. 2) How our controlled variant interacts with other SOTA AIL methods. 3) Clarifying the gap between our theory and practical implementation. 4) We computed standard deviations from Figure 2.
**Q1. Additional environments**
A1.
As noted by R PPsU, DQNe & 9gMV, our experiments focused on continuous control in Mujoco tasks. To explore whether C-GAIL brings benefit in other environments we have run additional experiments using (vanilla) GAIL in several Atari games. C-GAIL achieves higher final reward on 4/5 games, always with lower variance.
|| BeamRider | Pong | Q*bert | Seaquest| Hero
|---| -------- | ------- |------- |------- |------- |
Expert (PPO)| 2637.45 $\pm$ 1378.23| 21.32 $\pm$ 0.0| 598.73 $\pm$ 127.07 | 1840.26 $\pm$ 0.0| 27814.14 $\pm$ 46.01|
GAIL |1087.60 $\pm$ 559.09 |-1.73 $\pm$ 18.13| -7.27 $\pm$ 24.95| 1474.04 $\pm$ 201.62| 13942.51 $\pm$ 67.13|
C-GAIL | 1835.27 $\pm$ 381.84 | 0.34 $\pm$ 8.93 | 428.87 $\pm$ 12.72 | 1389.47 $\pm$ 80.24| 23912.73 $\pm$ 32.69
Table R1. Final reward in five Atari tasks [1]. Mean and standard deviation over ten runs. Based on the implementation here: https://github.com/yunke-wang/gail_atari.
**Q2. Additional GAIL variants**
A2.
R 9gMV requested a comparison with more GAIL variants. R PPsU brought our attention to a new family of diffusion-based GAIL algorithms. We were curious to investigate whether the C-GAIL regularizer would bring benefit in this new class of GAIL methods. Run in Mujoco, the table below presents results. Whilst the final performance for both the controlled and non-controlled variants all roughly match (or exceed) the expert return, the standard deviation of C-DiffAIL is consistently smaller than DiffAIL.
|| Hopper | HalfCheetah | Ant | Walker2d|
|---| -------- | ------- |------- |------- |
Expert| 3402| 4463| 4228| 6717|
DiffAIL | 3382.03 $\pm$ 142.86 | 5362.25 $\pm$ 96.92| 5142.60 $\pm$ 90.05 | 6292.28 $\pm$ 97.65|
C-DiffAIL| 3388.28 $\pm$ 41.23 | 4837.34 $\pm$ 30.58 | 4206.64 $\pm$ 36.52 | 6343.89 $\pm$ 33.67
Table R2. Final reward with 1 trajectory in diffusion-based GAIL [2], both vanilla and controlled variant. Mean and standard deviation over five runs. Based on the implementation here: https://github.com/ML-Group-SDU/DiffAIL.
**Q3. From theory to practice**
A3.
Several reviewers correctly noted differences between the assumptions made in our theory, and the practical implementation of our algorithm. These include moving from a continuous flow to discrete updates (R DQNe), studying the one-step setting (R qTQn), only implementing the discriminator regularizer (R DQNe). (R 9gMV also notes this more generally.)
While we provide a specific responses to each assumption separately in our individual responses, we wanted to make a more general response here. We'd like to emphasize that theoretical studies of deep learning usually make simplifying assumptions in order to make progress analytically. This approach of beginning from simplified theory and applying insights it to a practical setting has often been found to be valuable. For example, approaches to stabilize GANs often follow this approach [3][4][5].
**Q4. Figure 2 error bars**
A4. Figure 2 is computed by averaging five random seeds. We did not include error bars in the original version. We present the standard deviation as table below and will modify our figure in the next version. Note that the standard deviations of C-GAIL-DAC are nearly always lower than for GAIL-DAC.
| | 0.2M | 0.4M | 0.6M | 0.8M | 1M |
|--------------|------|------|------|------|----|
| Half-Cheetah | 1.32| 0.95 | 0.83 | 0. 86| 0.81 |
| Walker | 0.65 | 0.42 | 0.46 | 0.41 | 0.43 |
| Reacher | 0.53 | 0.46 | 0.53| 0.55 | 0.52 |
| Ant | 0.87 | 0.75 | 0.79 | 0.76 | 0.82 |
| Hopper | 0.37 | 0.29 | 0.34 | 0.67 | 0.46 |
Table R3. Standard deviation for the state Wasserstein distance of GAIL-DAC
| | 0.2M | 0.4M | 0.6M | 0.8M | 1M |
|--------------|------|------|------|------|----|
| Half-Cheetah | 0.77 | 0.49| 0.44 | 0.36 | 0.38 |
| Walker | 0.52 | 0.37 | 0.36 | 0.35 | 0.36 |
| Reacher | 0.61 | 0.41 | 0.37 | 0.39 |0.38 |
| Ant | 0.65 | 0.62 | 0.58 |0.58 | 0.56 |
| Hopper | 0.23 | 0.20 | 0.16 | 0.14 | 0.15 |
Table R4. Standard deviation for the state Wasserstein distance of C-GAIL-DAC
[1] Wang et al., 2021. Learning to weight imperfect demonstrations. ICML.
[2] Wang et al., 2024. DiffAIL: Diffusion Adversarial Imitation Learning. AAAI.
[3] Mescheder et al., 2018. Which training methods for GANs do actually converge? ICML.
[4] Xu et al., 2020. Understanding and stabilizing GANs’ training dynamics using control theory. ICML.
[5] Luo et al., 2023. Stabilizing GANs’ training with brownian motion controller. ICML. | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper addresses the problem of unstable training of Generative Adversarial Imitation Learning (GAIL). To this end, the paper studies the convergence of GAIL from a control-theoretic perspective and proposes to employ a regularization term for the discriminator loss function, which can stabilize the training. The experiments in the locomotion domain, e.g., Half-Cheetah and Walker2D, show that the proposed method leads to learned policies that match experts better, achieve better performance, and converge faster. I believe this work provides insightful analyses, presents a promising method, and sufficiently evaluates the proposed method. Hence, I recommend accepting this paper.
Strengths: **Motivation and novelty**
- The motivation for stabilizing GAIL training is convincing.
- Studying this from a control-theoretic perspective is novel to the best of my knowledge.
**Clarity**
- The overall writing is clear.
**Experimental results**
- The experiments in the locomotion domain, e.g., Half-Cheetah and Walker2D, show that the proposed method leads to learned policies that match experts better, achieve better performance, and converge faster.
Weaknesses: **Figure 2 standard deviation**
- Are the results reported in Figure 2 aggregated from five random seeds? Since Figure 2 does not show standard deviation, I am unsure if the gaps are statistically significant.
**Table 1 Hopper results**
- I am wondering why BC (2830) outperforms the expert (2631) in Hopper.
**Limited domains for evaluation**
- The evaluation is limited to locomotion, e.g., Half-Cheetah and Walker2D. Experimenting with the proposed method and the baselines in other domains, such as navigation (grid world or point maze in D4RL), robot arm manipulation (OpenAI Fetch or Shadow Dexterous Hand), and games (Atari) would significantly strengthen the results.
**Visualized state distributions**
- I appreciate the authors showing the state Wasserstein distance between expert and learned policies in Figure 2. I feel it would be informative to visualize the state distributions of expert and learned policies. For example, we can use a grid world navigation task with discrete state and action spaces. Then, we can visualize the state distributions of expert and learned policies as heatmaps and put them side by side for comparison. I believe this would make the claim that C-GAIL-DAC can match the expert state distributions better and more convincing.
**Related work**
- Including the descriptions of more recent IL methods could make the related work more comprehensive, such as
- Diffusion policy (https://arxiv.org/abs/2303.04137v4 https://arxiv.org/abs/2403.03954)
- Consistency Policy (https://arxiv.org/abs/2405.07503)
- Diffusion BC (https://arxiv.org/abs/2302.13335)
- DiffAIL (https://arxiv.org/abs/2312.06348) / DRAIL (https://arxiv.org/abs/2405.16194v1)
Technical Quality: 3
Clarity: 3
Questions for Authors: See above
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive review. We are delighted that we were able to communicate the value of our work. We address your questions below, and have provided several new results in the global rebuttal.
**Q1. Figure 2 standard deviation**
A1. The results in Figure 2 are indeed aggregated from five random seeds. Thank you for pointing out that we did not report error bars on the plot. We present the standard deviation as tables in global rebuttal Q4 and will modify our figure.
**Q2. Table 1 Hopper results exceed expert**
A2. This is a good observation, and we do not have a comprehensive answer for why the agent average slightly surpasses expert performance. This also happened in the new DiffAIL results. It's possible that the agent overfits to one of the more successful demonstration trajectories. However, we notice that the expert return is within the error bars of our method.
**Q3. Additional experiment domains**
A3. We agree that our results could be more impactful with further domains. We have added in additional environment on Atari as per the global rebuttal.
**Q4. Additional IL algorithms in related work**
A4. Thank you for pointing out this gap in our related work, which we will add to the next version of our paper. Moreover, inspired by the reviewer's comment, we have conducted new experiments on DiffAIL in the global rebuttal.
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal by Authors
Comment: I appreciate the authors for reporting the error bars, and providing additional Atari experiments on and comparisons to DiffAIL. I believe this paper presents solid contributions and should be accepted. Hence, I increased my score to 8 (strong accept).
---
Reply to Comment 1.1.1:
Comment: We are pleased we have been able to further strengthen the paper by incorporating your feedback. Thank you again for your time. | null | null | null | null | null | null |
Chat-Scene: Bridging 3D Scene and Large Language Models with Object Identifiers | Accept (poster) | Summary: This paper proposes a 3D Multimodal Large Language Model (MLLM) designed to perceive and represent 3D scenes at the object level. To interpret individual object instances, the authors develop object identifiers to convert the 3D scene into a series of distinguishable object tokens and present object-centric representations using foundation models. Experiments are conducted on various 3D scene-language tasks.
Strengths: 1. Introducing Large Language Models (LLMs) into 3D perception and representation is a valuable and innovative research direction.
2. Leveraging foundation models to extract 3D and 2D embeddings shows significant potential for enhancing the performance and capabilities of the 3D MLLM model.
Weaknesses: 1. The concept of object identifiers is not new, as similar methods have been previously introduced, such as in "Set-of-Mark Prompting Unleashes Extraordinary Visual Grounding in GPT-4V."
2. The authors claim the proposed model enables "efficient object referencing and grounding abilities," but this efficiency is not evaluated in the experiments. Furthermore, experiments on 3D referring expression segmentation are not provided, making this claim hard to substantiate.
3. It is unclear how the proposed object-centric representations address the problem of 3D data scarcity.
4. Table 2 lacks comparisons with several notable works, such as the state-of-the-art method CORE-3DVG (NeurIPS 2023) on the ScanRefer dataset. More existing methods should be included for a comprehensive evaluation.
Technical Quality: 2
Clarity: 2
Questions for Authors: Please refer to the weaknesses section.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 1
Limitations: The authors have discussed the limitations and potential societal impact of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## W1: Difference with Set-of-Mark.
- **Different ways to introduce object identifiers.** Set-of-Mark attaches object identifiers directly onto the image, relying entirely on the multimodal LLM’s OCR capability to perceive the identifiers from the image. This method is indirect and can introduce ambiguity, especially when there are many objects in the image. Our method explicitly assigns the identifiers to each object in the language prompt. Intuitively, it is easier for the LLM to understand the link between an object and its identifier from the text.
- **Inefficiency of adapting Set-of-Mark to 3D.** Adapting Set-of-Mark for 3D perception requires cross-view clustering of the segmented masks (from SAM) and labeling object identifiers on multi-view images. However, currently there is no MLLM that can handle a sequence of images with a good OCR capability.
- **Prompt-based vs Training-based.** Set-of-Mark is a prompt-based method, which could produce uncontrollable outputs. Our training-based method can stably handle a series of tasks including grounding, captioning, and QA.
Finally, our SOTA performance across various 3D benchmarks demonstrates the effectiveness of our proposed use of object identifiers. Thus, the similar concept of object identifiers in Set-of-Mark does not diminish the contribution or novelty of our work.
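For intuition, the way identifiers are assigned in the language prompt can be sketched as follows (a simplified illustration; the token names and the prompt template are placeholders, not our exact implementation):

```python
# Simplified sketch: assign an identifier token to each detected object and
# interleave it with a per-object feature slot in the prompt. In the real
# model, each "<featK>" placeholder would be replaced by the object's
# projected 3D/2D embedding at the LLM's input layer. Token names and the
# template below are hypothetical.

def build_scene_prompt(num_objects, question):
    """Compose a prompt linking each object to its identifier token."""
    object_slots = " ".join(
        f"<obj{k}> <feat{k}>" for k in range(num_objects)
    )
    return (
        "The scene contains the following objects: "
        f"{object_slots}. {question}"
    )

prompt = build_scene_prompt(3, "Which object is the chair next to the table?")
```

Because the link between object and identifier is stated in text, the LLM can answer grounding queries by emitting identifier tokens (e.g., "<obj2>") rather than regressing coordinates.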
## W2.1: Evidence of object referencing and grounding abilities.
Section 3.3, along with Figures 2 and 3, illustrates how object identifiers are utilized during interactions with LLMs. Users can reference objects with identifiers for tasks like 3D dense captioning (Scan2Cap), while the LLM responds with identifiers to ground objects for both single object grounding (ScanRefer) and multiple object grounding (Multi3DRefer). The superior performance of our model on benchmarks such as Scan2Cap, ScanRefer, and Multi3DRefer, demonstrates its efficient object referencing and grounding abilities.
## W2.2: Results of 3D referring expression segmentation.
**Table A. Evaluation results of 3D referring expression segmentation on ScanRefer.**
| | Nr3D | | | | | Sr3D | | | | |
|---|:---:|---|---|---|---|:---:|---|---|---|---|
| | **Overall** | **Easy** | **Hard** | **View Dep** | **View Indep** | **Overall** | **Easy** | **Hard** | **View Dep** | **View Indep** |
| 3DVG-Trans | 40.8 | 48.5 | 34.8 | 34.8 | 43.7 | 51.4 | 54.2 | 44.9 | 44.6 | 51.7 |
| TransRefer3D | 48.0 | 56.7 | 39.6 | 42.5 | 50.7 | 57.4 | 60.5 | 50.2 | 49.9 | 57.7 |
| MVT | 59.5 | 67.4 | **52.7** | **59.1** | 60.3 | 64.5 | 66.9 | 58.8 | 58.4 | 64.7 |
| 3D-VisTA | 57.5 | 65.9 | 49.4 | 53.7 | 59.4 | 69.6 | 72.1 | **63.6** | 57.9 | 70.1 |
| **Ours** | **63.9** | **75.7** | 52.6 | 53.8 | **69.2** | **73.1** | **78.3** | 60.8 | **66.6** | **73.4** |
Table A shows that our method surpasses previous baselines for 3D referring expression segmentation on ScanRefer. It is important to note that we did not provide referring expression segmentation results on ScanRefer simply because it is not a common evaluation metric for this dataset. Most previous baselines assess accuracy based on the IoU between the predicted boxes and the ground-truth boxes. The comparison based on box IoU can already demonstrate our model’s grounding ability.
## W3: Why proposed object-centric representations alleviate the problem of 3D data scarcity.
As discussed in Section 1 (Lines 72-84), training robust scene-level representations typically requires a large amount of paired scene-language data, which is difficult to obtain. To overcome this challenge, we represent scenes using object-centric representations derived from well-trained 3D and 2D encoders. Benefiting from pre-training on at least millions of samples, the 3D encoder excels at extracting spatial and shape attributes from point clouds, while the 2D encoder adeptly extracts rich appearance features of objects from multi-view images. We then combine the well-trained 3D and 2D representations explicitly at the object level, along with the object identifiers, to comprehensively represent the whole scene.
Unlike previous 3D LLMs such as LEO[20] and 3D-LLM[22], which require constructing additional data (nearly a million samples) for pre-training or alignment, our method achieves state-of-the-art performance without additional alignment data. Since we adopt similar instruction tuning techniques to theirs, the biggest difference between our model and theirs is the design of the representations. Thus, our superior performance with less data indicates that our proposed object-centric representations alleviate the problem of 3D data scarcity to some extent.
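For intuition, the object-level fusion of the two frozen encoders can be sketched as follows (a simplified illustration; the dimensions and the linear-projection design are placeholders, not our exact implementation):

```python
import numpy as np

# Simplified sketch: each detected object contributes one 3D token and one 2D
# token, obtained by projecting the frozen encoders' per-object features into
# the LLM's embedding space. All dimensions below are hypothetical, and the
# random matrices stand in for learned projection layers.
rng = np.random.default_rng(0)
DIM_3D, DIM_2D, DIM_LLM = 512, 768, 4096
W3 = rng.standard_normal((DIM_3D, DIM_LLM)) * 0.01  # learned proj. (placeholder)
W2 = rng.standard_normal((DIM_2D, DIM_LLM)) * 0.01  # learned proj. (placeholder)

def object_tokens(feats_3d, feats_2d):
    """(num_objects, dim) per-object features -> (num_objects, 2, DIM_LLM) tokens."""
    return np.stack([feats_3d @ W3, feats_2d @ W2], axis=1)

tokens = object_tokens(rng.standard_normal((5, DIM_3D)),
                       rng.standard_normal((5, DIM_2D)))
```

Only the projection layers (and the LLM, via instruction tuning) need scene-language supervision; the encoders themselves keep the knowledge from their large-scale object-level pre-training, which is why less paired 3D data suffices.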
## W4: Lack of comparison with SOTA methods.
In Table 2, we've included previous SOTA methods for each dataset:
- **ConcreteNet [45]** with the highest Acc\@0.5 on **ScanRefer**,
- **M3DRef-CLIP [63]** with the highest F1\@0.5 on **Multi3DRefer**,
- **Vote2Cap-DETR++ [10]** with the highest CIDEr\@0.5 on **Scan2Cap**,
- **Scene-LLM [17]** with the highest CIDEr on **ScanQA** and the highest EM on **SQA3D**.
In Tables 6–10 in the appendix, we provide comprehensive comparisons on each dataset by including additional SOTA models.
CORE-3DVG is indeed a missing reference for ScanRefer. However, although it surpasses our method by 1.25% on Acc\@0.25, our method still demonstrates superior grounding performance with a 6.39% higher Acc\@0.5. We will include CORE-3DVG as a baseline for comparison in the future version.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for providing the rebuttal. However, I noticed that the performance of the proposed method on hard objects is subpar, as shown in Table A. Additionally, the comparison involving pre-trained 2D and 3D encoders—compared to methods without pre-trained encoders like CORE-3DVG—raises some questions, particularly since the performance is even lower than CORE-3DVG on Acc@0.25. Therefore, I will revise my rating to a borderline reject, but I remain inclined towards rejection.
---
Reply to Comment 1.1.1:
Comment: Thank you for your prompt response.
We would like to respectfully draw your attention to our **main contribution** (L11-15, L92-94), which is the *unification of various 3D tasks*, including grounding, captioning, and QA, within an LLM-based model, rather than the development of specialized models for individual tasks. Most existing works, including CORE-3DVG (which is limited to 3D visual grounding), are designed as task-specific models or require task-specific fine-tuning. We believe that creating a 3D generalist model is a significant direction for future research.
For your concerns about performance comparison:
1) Please note our model's leading performance on the *overall* metric, the **primary index** in 3D grounding, rather than focusing solely on the *hard* metric:
- In Table A, without employing specialized model designs or adjusting specific hyperparameters for a single dataset, our model achieves the best *overall* performance compared to other works.
- Compared to CORE-3DVG, our model demonstrates superior performance (+6.39%) in *Overall* Acc@0.5 and competitive performance in *Overall* Acc@0.25 in ScanRefer.
2) Please note our model's leading performance **across all remaining benchmarks** rather than focusing on a single benchmark. Given this, we believe our model provides a solid baseline for the community in developing 3D generalist models.
Therefore, we think that focusing solely on a sub-metric from a particular dataset while overlooking our main contribution is not entirely appropriate. We would greatly appreciate it if you could reconsider your rating in light of this. | Summary: The paper proposes a new representation for 3D multimodal LLMs, a family of foundation models that repurpose LLMs to receive multimodal (visual and linguistic) input. Specifically, the paper advocates for an object-centric representation, where objects are first discovered (detected or segmented) with an off-the-shelf model, then they are fed as tokens to the LLM.
This happens in the following way: i) objects are discovered in 3D using a 3D detector; ii) for every object we get a language identifier (e.g., <obj1>), a local point cloud and a 2D mask (by projecting the 3D mask back to multiple 2D views); the language identifier, featurized local point cloud and featurized 2D segment form tokens, that are fed to an LLM in the form of prompt.
This formulation unifies many visual-language tasks as text generation. As a byproduct, using task-specific prompts, the model can be trained on several tasks jointly, leading to improved performance. The results show quantitative gains on visual grounding, captioning and VQA.
Strengths: 1. (Presentation) The paper is very clearly presented and easy to understand. The writing makes the right claims and all the useful details and questions are answered in the main paper.
2. (Contribution) The submission addresses an important problem, which is to ground the knowledge of LLMs in the visual world. The proposed scene tokenization is not novel per se, but its combination with MLLMs is a nice feature. Also, several important details, such as appropriate featurization using both multi-view 2D and 3D, are very useful to see, because MLLMs haven't shown such good results so far using different input representations.
3. (Soundness) The results indeed show that the proposed approach manages to use MLLMs in an effective way. More importantly, the comparisons include baselines that also train multiple tasks simultaneously, so the gain doesn't seem to come from more data alone. Although we cannot conclude that this architecture is better than baselines which may have trained on a single task only, it seems to be the strongest multi-task approach now.
Weaknesses: 1. (Contribution) The paper enters the debate of the right representation for visual-language (VL) understanding without giving a clear answer, unfortunately due to current benchmarks' limitations. Object-centric transformers for VL tasks have been proposed long ago (see VilBERT, VisualBERT, LXMERT, OSCAR, UNITER etc for 2D VL understanding), but then got superseded by one-stage approaches like MDETR. The issue with two-stage object-centric models is the definition of a vocabulary. Open-vocabulary methods cannot really rely on detector bottlenecks, since they, by design, have a limited vocabulary. We cannot possibly enumerate all the concepts a user may refer to.
That said, in 3D VL understanding and especially grounding, most approaches are indeed object-bottlenecked. This can be attributed to ScanNet being the base dataset for most 3D VL benchmarks. ScanNet limits the scope of in-domain approaches to few classes. One cannot for example detect the legs of a chair using ScanNet, since parts of objects are not annotated. This is true for both one-stage and two-stage approaches, as long as they are trained on ScanNet. But among the two directions, the one more promising to extrapolate on a broad domain seems to be the non-bottlenecked one.
However, the object-centric tokenization is not bad, it provides a nice abstraction of the scene. My concern is that a detector will not cover some useful parts of the scene and, as a result, that part of the scene won't be visible at all to the model. This can be due to imperfect prediction or even limited concept vocabulary. Given how impressive the VLMs' generalization is, limiting them with in-domain detectors may be handicapping them.
2. (Soundness) It seems that a lot of quantitative gain comes from using 2D features. While it is fair to use 2D features in the proposed approach, I'm not sure whether the main factor of good results is the use of object identifiers, 2D features or multi-tasking through the unification of the output space. For example, an approach that does the same unification (with different prompts of course) and uses scene tokens and multi-view features (for example by unprojecting 2D features from multiple views and performing some voxelization) would be the best baseline to validate the importance of object identifiers. Which baseline or ablation represents that direction? Right now, we can mainly conclude that 2D features are very important and that the format of object identifiers is better than other alternatives, but we cannot safely conclude that object identifiers are the most useful component.
Another interesting fact that I noticed in this paper is that the best-performing models on grounding (ScanRefer) are the ones trained only on ScanRefer. In fact, the proposed approach is the only multi-task approach that beats those models. I believe it would be useful to also see some results on Nr3D and Sr3D for several reasons. First, the margins on grounding are a bit narrow, so we don't know whether the most competitive approaches lack due to less data or architecture. Evaluating on more datasets could provide some more evidence. Second, Nr3D and Sr3D use ground-truth boxes. It's good to see what is the limit of the proposed approach if a part of perception is perfect.
Technical Quality: 3
Clarity: 3
Questions for Authors: Overall, I appreciate the presentation of the paper, its claims, good results and interesting technical contributions. At the same time, thinking broader of the field, I'm not sure whether this paper provides enough evidence towards the right direction; going through an in-domain bottleneck may drastically limit the generality modern LLMs/VLMs can offer. Moreover, the ablations lack some targeted experiments that would help us verify the proposed components.
I would appreciate if the response can clarify/add some of the following:
* Discussion on the scope of using object-bottlenecked 3D VL models for the general field (beyond evaluating on ScanNet).
* Discussion on the relation of this work to previous object-centric tokenization approaches from the 2D VL domain.
* Adding results on Nr3D/Sr3D.
* Adding some more targeted ablations that switch off object identifiers.
I will adapt my score after the discussion.
---------------------------------------------------
Post-rebuttal, some concerns are addressed and I'm increasing my score from 4 to 6
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## W1.1: Concerns about the recent trend of one-stage replacing two-stage methods in 2D.
Thanks for pointing out a promising future direction. However, one-stage models require large-scale training; for example, MDETR used 1.3M image-text pairs for pre-training. Given the currently limited 3D data (1200 scenes and 150K language annotations), our SOTA performance across various benchmarks highlights the effectiveness of our two-stage architecture. Data scaling and the exploration of end-to-end architectures should be left as future work.
## Q1.1 & W1.2: Discussion on the scope of using object-bottlenecked 3D VL models.
Firstly, it's important to note that open-vocabulary is not claimed as a contribution in our paper, nor do the previous baselines (either one-stage or two-stage) include formal open-vocabulary evaluations.
As discussed in the Limitation section (Lines 594-599), we acknowledge the limitation of relying on object detectors. Based on object detectors, our object identifiers can refer to object-level instances or clustered objects but fail to represent part-level concepts. We will clearly state this in the revised version.
Although part-level detectors such as PointNeXt[a] could be adopted to extract part-level instances, we still lack well-trained part-level encoders and related benchmarks. It is important to highlight that current large-scale 3D datasets are object-level, such as Objaverse[b] and OmniObject3D[c], and a large-scale, high-quality concept-level dataset is lacking. Therefore, the exploration of open-vocabulary/concept-level evaluations should be left for future work.
[a] PointNeXt: Revisiting PointNet++ with Improved Training and Scaling Strategies. NeurIPS 2022.
[b] Objaverse: A Universe of Annotated 3D Objects. CVPR 2023.
[c] OmniObject3D: Large-Vocabulary 3D Object Dataset for Realistic Perception, Reconstruction and Generation. CVPR 2023.
## Q1.2: Zero-shot evaluation on other datasets.
**Table A. Evaluation results on scene-language tasks 3RQA, 3RDialog, and 3RPlan, based on 3RScan.**
| Method | 3RQA | 3RDialog | 3RPlan |
|:---:|:---:|:---:|:---:|
| LEO (zero-shot) | 35.8 | 25.5 | 23.4 |
| **Ours (zero-shot)** | **36.2** | **32.7** | **30.9** |
| LEO (fine-tuned) | 51.9 | 73.3 | 81.1 |
| **Ours (fine-tuned)** | **55.8** | **82.1** | **93.5** |
To assess the generalizability of our model, we follow the precedent set by the 3D LLM method LEO[20] and conduct an evaluation on their proposed tasks: 3RQA, 3RDialog, and 3RPlan. These tasks are built upon the 3RScan[d] dataset, which belongs to a different domain than ScanNet. We use the same detection results as LEO and leverage our pre-trained weights (including the projection layer and LLM pre-trained on ScanNet). The results shown in the table above demonstrate our model’s zero-shot capabilities on 3RScan.
[d] RIO: 3D Object Instance Re-Localization in Changing Indoor Environments. ICCV 2019.
## Q2: Discussion on the previous object-centric tokenization approaches.
Compared to previous object-centric tokenization methods, our approach introduces a novel way to incorporate a sequence of {object identifier, object features} into the LLM, enabling it to solve different tasks in a unified question-answering format.
For instance, ViLBERT uses an additional task head to predict matching scores for object grounding, while VisualBERT ranks entities by comparing attention weights. These methods require task-specific designs and heads for different tasks, which is impractical for real-world human-assistant interactions.
In contrast, our method establishes a direct link between the object and its identifier, allowing the LLM to respond with an object identifier as the grounding result. This approach can naturally extend to more complex tasks, such as multiple object grounding (outputting several identifiers for multiple grounding results) and grounded captioning (producing complex captions interleaved with identifiers as the grounding result).
## Q3 & W2.2: Results on Nr3D/Sr3D.
**Table B. Evaluation results on Nr3D/Sr3D.**
| | Nr3D | | | | | Sr3D | | | | |
|---|:---:|---|---|---|---|:---:|---|---|---|---|
| | **Overall** | **Easy** | **Hard** | **View Dep** | **View Indep** | **Overall** | **Easy** | **Hard** | **View Dep** | **View Indep** |
| 3DVG-Trans | 40.8 | 48.5 | 34.8 | 34.8 | 43.7 | 51.4 | 54.2 | 44.9 | 44.6 | 51.7 |
| TransRefer3D | 48.0 | 56.7 | 39.6 | 42.5 | 50.7 | 57.4 | 60.5 | 50.2 | 49.9 | 57.7 |
| MVT | 59.5 | 67.4 | **52.7** | **59.1** | 60.3 | 64.5 | 66.9 | 58.8 | 58.4 | 64.7 |
| 3D-VisTA | 57.5 | 65.9 | 49.4 | 53.7 | 59.4 | 69.6 | 72.1 | **63.6** | 57.9 | 70.1 |
| **Ours** | **63.9** | **75.7** | 52.6 | 53.8 | **69.2** | **73.1** | **78.3** | 60.8 | **66.6** | **73.4** |
The results show that our model achieves state-of-the-art performance compared to previous expert models.
## Q4 & W2.1: Importance of object identifiers.
**Table C. Ablation study on object identifiers.**
| | ScanRefer | Multi3dRefer | Scan2Cap | ScanQA | SQA3D |
|---|:---:|:---:|:---:|:---:|:---:|
| w/o obj identifiers | 22.8 | 25.6 | 48.9 | 79.8 | 50.0 |
| **Ours** | **50.2** | **52.4** | **77.1** | **88.4** | **55.5** |
Object identifiers are crucial to our model's design. They allow users to reference objects for tasks such as 3D dense captioning (Scan2Cap), while the LLM uses these identifiers for grounding objects in both single object grounding (ScanRefer) and multiple object grounding (Multi3DRefer).
Without object identifiers, it is necessary to adopt an alternative method for grounding and dense captioning. Following 3D-LLM[22], we add special location tokens to represent bounding boxes in the LLM. Thus, the model can output bounding boxes for grounding tasks and input bounding boxes for dense captioning. As shown in the table above, training the model without object identifiers reveals a significant decline in performance, particularly in grounding and captioning tasks.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: I would like to thank the authors for their effort in the rebuttal. My main concerns were regarding the object bottleneck, references to related work, additional evaluations, more targeted ablations.
* Discussion regarding the object bottleneck: while we don't expect a single paper to solve the debate, this paper positions itself in favor of two-stage approaches for limited-data setups. That indeed opens a new discussion given recent advances in mixed 2D-3D training [1], but I agree that it is an avenue for future work.
* Additional connections to related work were added.
* Additional evaluations on ReferIt3D were added.
* One additional ablation was added, but I'm not sure I get all the details. Are encoded bounding boxes fed to the LLM? Some description of this baseline would be helpful. If that's the case, one related approach is LanguageRefer [2]. I was also imagining that scene tokens (after some voxelization) could be fed directly to the transformer.
It would also be interesting to ablate the effect of multi-tasking. Is the architecture strong on its own or it benefits a lot from the unification of the output space? For that, some single-task results would be useful, e.g. on grounding.
For the final version, please consider adding the additional discussions on the broader position (two-stage vs one-stage), related work, and additional results and ablations. If possible, including single-task results would provide great insight into disentangling the architecture design from output unification. While the latter is well studied for NLP, it is not well studied for 3D VL understanding.
Under these conditions, I'm increasing my score from 4 to 6.
[1] ODIN: A Single Model for 2D and 3D Segmentation, 2024
[2] LanguageRefer: Spatial-Language Model for 3D Visual Grounding, 2021
---
Rebuttal 2:
Title: Thank you for your review
Comment: Thank you for your detailed comments and constructive suggestions. We appreciate your recommendations regarding additional discussions on model architecture and related object-centric methods, as well as the suggestion to include more results and ablation studies. We will incorporate these in the final version of the paper. Below, we want to further address your concerns.
**Details of the ablation study on object identifiers:**
Following 3D-LLM[1], we use 6 discrete tokens to represent a bounding box. Specifically, we add 1000 special tokens (<LOC000>, <LOC001>, ..., <LOC999>) to the language model's vocabulary to discretely represent numeric values in the [0, 1] range (coordinates are normalized to [0, 1]). For example, a bounding box defined as (x=0.234, y=0.467, z=0.129, w=0.301, h=0.235, l=0.189) would be represented as: <LOC234> <LOC467> <LOC129> <LOC301> <LOC235> <LOC189>. This method has shown effectiveness in 2D models such as OFA[2] and Pix2Seq[3]. However, both our experiments and 3D-LLM's results indicate that the location tokens are not well learned due to the lack of 3D data.
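For intuition, this discretization can be sketched as follows (a simplified illustration of the location-token scheme; the exact rounding and clamping details are our simplification, not necessarily 3D-LLM's implementation):

```python
# Simplified sketch of the location-token baseline described above: each
# normalized coordinate in [0, 1] is quantized into one of 1000 bins and
# replaced by a special token <LOCnnn>. Rounding/clamping choices here are
# assumptions for illustration.

NUM_BINS = 1000  # <LOC000> ... <LOC999>

def box_to_loc_tokens(box):
    """Map a box (x, y, z, w, h, l), each value in [0, 1], to <LOCnnn> tokens."""
    tokens = []
    for value in box:
        idx = min(int(round(value * NUM_BINS)), NUM_BINS - 1)  # clamp 1.0 -> 999
        tokens.append(f"<LOC{idx:03d}>")
    return " ".join(tokens)

print(box_to_loc_tokens((0.234, 0.467, 0.129, 0.301, 0.235, 0.189)))
# → <LOC234> <LOC467> <LOC129> <LOC301> <LOC235> <LOC189>
```

In this baseline the LLM must learn the geometry of these 1000 symbols from scratch, which is why it underperforms given scarce 3D data, whereas our identifiers only require linking a token to an already-detected object.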
LEO[4] and LL3DA[5] choose to feed the encoded feature of a bounding box (or a single point) to the LLM to represent the user-referred object. However, this design cannot be directly applied to outputting a bounding box; therefore, these models are unable to perform grounding tasks.
It is worth noting that Scene-LLM[6] feeds scene tokens (after voxelization) directly to the LLM, as you suggested. However, this model is only evaluated on QA tasks, as it lacks the ability to reference specific objects for captioning or grounding.
Consequently, our design, which employs object identifiers, is pioneering in unifying these tasks among 3D MLLMs. Notably, our model even achieves superior performance compared to expert models, highlighting its potential as a promising direction for LLMs in 3D tasks.
**Multi-tasking ablation:**
Thanks for the advice. The comparison between multi-task and single-task training could provide more insight into our method.
| Method | Acc\@0.25 | Acc\@0.5 |
|:---:|:---:|:---:|
| single-task training | 50.8 | 46.3 |
| multi-task training | 55.5 | 50.2 |
We conducted an experiment with single-task training on ScanRefer. The result in the table above shows that joint training on multiple tasks enhances performance. However, the comparison might not be entirely fair, as the reduced data for single-task training also leads to fewer training steps per epoch; adjusting hyperparameters could slightly improve single-task performance. Nevertheless, we will include a more comprehensive comparison between single-task and multi-task training in the final version.
[1] 3D-LLM: Injecting the 3D World into Large Language Models. NeurIPS 2023.
[2] OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework. ICML 2022.
[3] Pix2seq: A Language Modeling Framework for Object Detection. ICLR 2022.
[4] An Embodied Generalist Agent in 3D World. ICML 2024.
[5] LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning. CVPR 2024.
[6] Scene-LLM: Extending Language Model for 3D Visual Understanding and Reasoning. arXiv 2024.
---
Rebuttal Comment 2.1:
Title: Thank you for the additional clarifications
Comment: Thank you for the additional clarifications, I believe that the paper is now more complete and I vote for acceptance. | Summary: This paper proposes a 3D MLLM that can understand the 3D environment at the object level. The proposed work designs object identifiers that are projected into language token space and can be understood by LLMs, which unifies several 3D scene understanding tasks into the same format. These object identifiers give a more natural way for humans and MLLM to reference specific objects of interest in a 3D scene. The paper extracts features from 3D scenes using several 2D and 3D pretrained models and finetunes the MLLM on 3D scene understanding tasks. Experiments were conducted on several benchmarks based on ScanNet and the results show the proposed method achieves state-of-the-art performance, surpassing prior arts by a large margin.
Strengths: 1. The paper decomposes a 3D scene into objects and develops object tokens to represent them for interaction with LLMs. This is a more natural way for people and LLMs to refer to object entities in the scenes.
2. The paper conducts extensive experiments across several 3D scene understanding tasks (Grounding, VQA, and Captioning), showing the versatility of the proposed method. The reported results show that the proposed method achieves state-of-the-art performance and outperforms the existing expert and LLm-based models by large margins.
3. The paper also conducts a study on video input using a 2D video instance tracking model, showing that the proposed method can still work without the 3D model.
4. The paper is well-written and the code is provided in the supplementary material.
Weaknesses: 1. The performance of the proposed method is highly reliant on the ability of the pretrained 2D and 3D models (Mask3D, DINOv2, DEVA). Specifically, since the scene is represented as discrete object tokens, some fine-grained information existing in the full 3D scenes is lost. For example, if the 3D segmentation model wrongly merges 2 chairs into one, there is no way for the proposed method to recover. In this regard, I encourage the authors to provide some case studies about how the model fails (or succeeds) in this way.
2. Although multiple tasks are tested, they are all based on the same underlying dataset, ScanNet. Thus it's unclear how the proposed method works on other datasets. While this is maybe because the ScanNet has most established benchmarks on these 3D scene understanding tasks, I still want to see how the proposed method works on other datasets (e.g. some outdoor datasets from AV literature, or simply the next iteration of ScanNet, ScanNet++). Such evaluation can be quantitative or qualitative. It's also interesting to see how the model trained on ScanNet only can transfer to other datasets.
3. I'm interested to see the zero-shot, open-vocabulary generalization of the proposed method. As the proposed method is based on several foundation models, it probably has the ability in such settings. I won't consider this as a major weakness though, as this is not claimed as a contribution in the paper.
Minor issues, typos, and grammar errors:
* Ln 156, "due to due to".
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Ln 183 mentions that DINOv2 has superior handling of local features within images. Why and how does this matter in the 3D scene understanding task? How does it influence the final performance? The relevant ablations study is currently lacking.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors provide a discussion on limitations and societal impact in the appendix. Yet they don't provide qualitative examples or discussion on the failure cases, which I highlighted in the weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## W1: Case study of the reliance on pre-trained detectors.
Please refer to Figure 1 in the PDF attached to the “Author Rebuttal”.
We provide several qualitative cases where the detected objects are imperfect (such as incomplete point clouds or an object being separated into two or more parts). Despite the direct influence of incomplete masks on grounding quality, there are successful cases in captioning and QA tasks. The model's ability to perceive the surroundings of the objects allows it to infer the correct captions or answers.
It is worth emphasizing that previous SOTA models such as LEO[20], 3D-VisTA[68], and M3DRef-CLIP[63] are also two-stage models reliant on pre-trained detectors, and they face the same challenge as we do when the detector fails.
## W2: Zero-shot evaluation on other datasets.
**Table A. Evaluation results on scene-language tasks 3RQA, 3RDialog, and 3RPlan, based on 3RScan.**
| Method | 3RQA | 3RDialog | 3RPlan |
|:---:|:---:|:---:|:---:|
| LEO (zero-shot) | 35.8 | 25.5 | 23.4 |
| **Ours (zero-shot)** | **36.2** | **32.7** | **30.9** |
| LEO (fine-tuned) | 51.9 | 73.3 | 81.1 |
| **Ours (fine-tuned)** | **55.8** | **82.1** | **93.5** |
Considering the lack of 3D scene understanding benchmarks built upon other datasets, we follow the precedent set by the 3D LLM method LEO[20] and conduct an evaluation on their proposed tasks: 3RQA, 3RDialog, and 3RPlan. These tasks are built upon the 3RScan[a] dataset, which belongs to a different domain than ScanNet. We use the same detection results as LEO and leverage our pre-trained weights (including the projection layer and LLM pre-trained on ScanNet). The results shown in the table above demonstrate our model’s zero-shot capabilities on 3RScan.
[a] RIO: 3D Object Instance Re-Localization in Changing Indoor Environments. ICCV 2019.
## W3: About zero-shot open-vocabulary generalization.
Firstly, it's important to note that open-vocabulary is not claimed as a contribution in our paper, nor do the previous baselines include formal open-vocabulary evaluations.
Table A above reveals some zero-shot open-vocabulary abilities of our model on novel scenes in 3RScan. To achieve real open-vocabulary capabilities, we could adopt open-vocabulary detectors such as SAMPro3D[b] and Open3DIS[c], and scale up the language data to train a robust MLLM. This remains a valuable direction in this field, and we leave it for future work.
[b] SAMPro3D: Locating SAM Prompts in 3D for Zero-Shot Scene Segmentation. arXiv 2023.
[c] Open3DIS: Open-vocabulary 3D Instance Segmentation with 2D Mask Guidance. CVPR 2024.
## Q1: How do DINOv2 features work in the 3D scene understanding task?
As described in Appendix A (Ln 531-541), we extract 2D representations from the area of projected masks in multi-view images of each object. DINOv2, trained with self-supervision objectives at both image and patch levels, captures detailed local information such as shape and texture, providing fine-grained perception abilities. This aligns with our design goal of extracting rich object-centric representations. The ablation results in Table 4 demonstrate the importance of these 2D representations in our final model.
**Table B. Ablation study on the 2D encoder.**
| | ScanRefer | Multi3dRefer | Scan2Cap | ScanQA | SQA3D |
|---|:---:|:---:|:---:|:---:|:---:|
| | **Acc\@0.5** | **F1\@0.5** | **C\@0.5** | **CIDEr** | **EM** |
| w/ CLIP | 46.3 | 48.7 | 73.1 | 85.0 | 53.9 |
| w/ DINOv2 | **50.2** | **52.4** | **77.1** | **88.4** | **55.5** |
We also add an ablation study replacing DINOv2 with CLIP, which was trained with image-level contrastive learning and tends to neglect rich pixel-level details. The results show that using the CLIP encoder leads to lower performance, particularly on grounding tasks.
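As a rough sketch of the mask-area pooling idea described in Appendix A (all names and data below are hypothetical placeholders, not the authors' actual pipeline, which operates on DINOv2 patch grids over multi-view images):

```python
def pool_masked_patches(patch_feats, mask):
    """Average the patch features that fall inside a projected object mask.

    `patch_feats`: H x W grid of feature vectors (lists of floats);
    `mask`: H x W grid of 0/1 indicating the projected object region.
    An illustrative sketch only, not the authors' extraction code.
    """
    dim = len(patch_feats[0][0])
    total = [0.0] * dim
    count = 0
    for row_f, row_m in zip(patch_feats, mask):
        for feat, m in zip(row_f, row_m):
            if m:  # patch lies inside the projected mask
                total = [t + x for t, x in zip(total, feat)]
                count += 1
    return [t / count for t in total] if count else total

# Toy 2x2 patch grid with 2-dim features; the left column is masked in.
feats = [[[1.0, 0.0], [3.0, 2.0]],
         [[5.0, 4.0], [7.0, 6.0]]]
mask = [[1, 0], [1, 0]]
print(pool_masked_patches(feats, mask))  # → [3.0, 2.0]
```

The pooled vector would then stand in for the object's 2D representation before projection into the LLM embedding space.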
---
Rebuttal Comment 1.1:
Comment: Thanks a lot for the response!
I think all my concerns have been addressed. I have also briefly gone through other reviews and found no strong reasons to change my mind.
Thus I'm keeping my rating at 7 - accept.
---
Reply to Comment 1.1.1:
Comment: We are pleased to have addressed all your concerns and appreciate your decision to keep your rating! | Summary: This paper aims to enhance the efficiency in interpreting individual object instances and improve referencing and grounding capabilities for intricate scene comprehension. The method decomposes the input 3D scene into object identifier tokens. Experimental results on 3D scene-language datasets demonstrate the effectiveness of the proposed approach.
Strengths: 1. The paper is well-written and easy to follow.
2. The experimental results on 3D scene-language datasets show the performance improvement and effectiveness of the proposed method.
3. The training schema of the model is single-stage yet effective on downstream tasks.
Weaknesses: 1. The authors do not provide details about the detectors, encoders, multi-modal inputs, and LLMs used in other methods in Table 1. These designs have already built a high-performance baseline; for example, the baseline results in Table 2 exceed other methods in Table 1. It is difficult to judge the fairness of the comparisons in Table 1.
2. This paper lacks results on the ScanQA test set.
3. The ablations are insufficient. The paper lacks ablations on different sizes of LLM and on the training and fine-tuning schema.
4. The paper lacks details on the computation and time cost of the proposed method.
5. This paper does not provide sufficient discussion of object-centric representation learning methods in 3D vision-language. For example, [a] explored object-centric representation learning with contrastive learning, and object-level tokens are used in LEO[22], 3D-LLM[20], etc.
[a] Vision-Language Pre-training with Object Contrastive Learning for 3D Scene Understanding, AAAI2024
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. More ablations to show the effectiveness of object identifiers, such as the order of identifiers, or combining only a 3D object token embedding and a 2D object token embedding for each object without a special identifier token.
2. More ablations on the LLM used for 3D-VL learning, such as different types of LLMs, different sizes of LLM, and multimodal large models such as LLaVA[b] or other 3D LLMs[22].
3. Details about the training set, such as the scale of the 3D, 2D, and language data, and the statistics of identifiers.
4. Details about the inputs at inference, e.g. will all objects of a scene be combined as input, and how are they combined?
[b] Visual Instruction Tuning, NeurIPS 2023
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors provide the limitations about the reliance on pre-trained models, data scarcity, and the experience hallucinations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## W1: Details about multi-modal inputs, detectors, encoders, and LLMs used in other methods.
Please refer to Table 1 in the attached PDF in “Author Rebuttal”.
## W2: Results on ScanQA test set.
| | Test w/ object | | | | Test w/o object | | | |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| **Method** | **CIDEr** | **B-4** | **METEOR** | **ROUGE** | **CIDEr** | **B-4** | **METEOR** | **ROUGE** |
| ScanQA | 67.29 | 12.04 | 13.55 | 34.34 | 60.24 | 10.75 | 12.59 | 31.09 |
| Multi-CLIP | 68.70 | 12.65 | 13.97 | 35.46 | 63.20 | 12.87 | 13.36 | 32.61 |
| 3D-VLP | 70.18 | 11.23 | 14.16 | 35.97 | 63.40 | **15.84** | 13.13 | 31.79 |
| 3D-VisTA | 76.6 | **16.0** | 15.2 | 38.6 | 62.6 | 11.9 | 12.9 | 32.8 |
| 3D-LLM | 69.6 | 11.6 | 14.9 | 35.3 | - | - | - | - |
| LL3DA | 78.16 | 13.97 | 16.38 | 38.15 | 70.29 | 12.19 | 14.85 | 35.17 |
| **Ours** | **94.04** | 14.38 | **19.65** | **44.53** | **79.79** | 11.38 | **17.10** | **39.34** |
## W3 & Q2: Ablations about LLMs and training schema.
| | | ScanRefer | Multi3dRefer | Scan2Cap | ScanQA | SQA3D |
|---|---|:---:|:---:|:---:|:---:|:---:|
| **LLM** | **Training Schema** | **Acc\@0.5** | **F1\@0.5** | **C\@0.5** | **CIDEr** | **EM** |
| OPT-1.3B | one-stage | 47.6 | 48.3 | 75.7 | 86.2 | 54.3 |
| Tiny-Vicuna-1B | one-stage | 49.5 | 50.1 | 76.2 | 86.7 | 54.5 |
| Vicuna-7B | one-stage | 50.2 | 52.4 | 77.1 | 88.4 | 55.5 |
| Vicuna-13B | one-stage | 49.9 | 51.7 | **78.2** | **89.1** | **55.8** |
| Vicuna-7B | two-stage | **51.1** | **53.1** | 73.7 | 85.6 | 55.3 |
1) **Size of LLM**: We compared various sizes of LLMs: Tiny-Vicuna-1B, Vicuna-7B, and Vicuna-13B. The 7B model achieves the best performance on grounding tasks, while the 13B model excels in captioning and QA tasks. The 13B model does not show significant performance gains over the 7B model, suggesting that the current task scope and data scale do not challenge larger LLMs.
2) **Type of LLM**: We tested OPT-1.3B, which performs slightly worse than Tiny-Vicuna-1B of similar size. As for multimodal large models like LLaVA and LEO, they basically use the same LLM backbone (Vicuna-7B) and projection layer (MLP) as ours. However, directly adapting their pre-trained weights to our model is unsuitable because we use different 2D or 3D representations.
3) **Training schema**: We compare one-stage training with two-stage training (fine-tuning on each dataset). Two-stage training improves performance on grounding tasks but significantly decreases performance on captioning and QA tasks. The performance decline in captioning and QA tasks may be due to these tasks being easier to converge, leading to overfitting during further fine-tuning.
## W4: Computation and time cost.
| Method | Data Preparation (per scene) | GPU Usage | Training Time |
|---|:---:|:---:|:---:|
| 3D-LLM (BLIP-2) | ~ 15 min | 64 * V100 | ~ 1 day |
| 3D-LLM (Flamingo) | ~ 15 min | 8 * A100 | ~ 7 days |
| **Ours** | < 1 min | 4 * A100 | ~ 8 hours |
Compared to 3D-LLM[22], our model's simpler design significantly reduces computation and time costs.
## W5: Discussion about object-centric representation.
Firstly, it’s important to note that object-centric representation is not claimed as a contribution in the paper. We state that our study is based on well-trained encoders, and our contribution is in how to incorporate a sequence of {object identifier, object features} into the LLM, which can solve various 3D scene-language tasks in unified formats. More comparisons among different object-centric representations can be left as future work.
## Q1: Importance of object identifiers.
| | ScanRefer | Multi3dRefer | Scan2Cap | ScanQA | SQA3D |
|---|:---:|:---:|:---:|:---:|:---:|
| | **Acc\@0.5** | **F1\@0.5** | **C\@0.5** | **CIDEr** | **EM** |
| fixed order | 49.6 | 51.5 | 76.2 | **88.7** | 55.4 |
| w/o obj identifiers | 22.8 | 25.6 | 48.9 | 79.8 | 50.0 |
| **Ours** | **50.2** | **52.4** | **77.1** | 88.4 | **55.5** |
**Order of Object Identifiers**: In our actual implementation, we randomize the order of object identifiers during training to minimize the influence of any inherent order distribution. Our results show that using a fixed order of identifiers yields slightly poorer performance compared to using a random order.
**Removing Object Identifiers**: Object identifiers are crucial to our model's design. They allow users to reference objects for tasks such as 3D dense captioning (Scan2Cap), while the LLM uses these identifiers for grounding objects in both single object grounding (ScanRefer) and multiple object grounding (Multi3DRefer). Without object identifiers, it is necessary to adopt an alternative method for grounding and dense captioning. Following 3D-LLM[22], we add special location tokens to represent bounding boxes in the LLM. Thus, the model can output bounding boxes for grounding tasks and input bounding boxes for dense captioning. As shown in the table above, training the model without object identifiers reveals a significant decline in performance, particularly in grounding and captioning tasks.
## Q3: Details about training data.
| Data type | Size |
|---|---|
| 3D Scene | 1201 |
| Point Cloud / Scene | 145K |
| Images / Scene | ~ 80 |
| Image Resolution | 640*480 |
| Language | 155K |
| Identifiers | 100 |
As a comparison, LEO[20] used 1.2M language training data, and 3D-LLM[22] used 700K, which are several times more than our training data (155K).
## Q4: Details about the inputs at inference.
All objects detected by the 3D detector are combined as input. For each object, the extracted 3D and 2D representations are projected into the LLM’s embedding space, becoming 3D and 2D tokens. Consequently, each object is represented by three tokens: an identifier token, a 3D token, and a 2D token. As illustrated in Figure 2, these objects form a token sequence of length 3$n$, where $n$ is the number of objects. For more details, refer to Section 3.3 for the prompt template and Section 3.2 for feature extraction.
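The per-object interleaving described above can be sketched as follows (a toy illustration with placeholder names and string values; the actual model feeds projected embedding vectors, not strings, into the LLM):

```python
# Hypothetical sketch of building the inference-time input sequence:
# each detected object contributes an identifier token, a 3D token,
# and a 2D token, giving a sequence of length 3 * n for n objects.

def build_object_token_sequence(objects):
    """Per detected object, emit: identifier token, 3D token, 2D token."""
    tokens = []
    for obj in objects:
        tokens.append(("ID", obj["id"]))          # identifier token, e.g. <OBJ007>
        tokens.append(("TOK_3D", obj["feat3d"]))  # projected 3D representation
        tokens.append(("TOK_2D", obj["feat2d"]))  # projected 2D representation
    return tokens

# n = 3 detected objects -> token sequence of length 3 * n = 9
scene = [{"id": i, "feat3d": f"f3d_{i}", "feat2d": f"f2d_{i}"} for i in range(3)]
seq = build_object_token_sequence(scene)
print(len(seq))  # → 9
```

The resulting sequence would then be concatenated with the prompt template (Section 3.3) before being passed to the LLM.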
---
Rebuttal Comment 1.1:
Comment: We appreciate all your suggestions that help us improve our paper. As the deadline for discussion is approaching, we would be glad to address any remaining questions or concerns. Please let us know if there are any further points you'd like to discuss! | Rebuttal 1:
Rebuttal: ### **[Our Contributions]**
We are glad to find out that the reviewers generally acknowledge our contributions:
(Contribution)
- The combination of object-centric representations and MLLMs is a nice feature [KdPm], a valuable and innovative research direction [G2or], and it is a natural way for people and LLMs to refer to object entities using object identifiers [n84p].
- Appropriate featurization using both multi-view 2D and 3D features is very useful to see, because MLLMs haven't shown such good results so far using different input representations. [KdPm, G2or]
(Soundness)
- The training schema of the model is single-stage yet effective on downstream tasks. [2kgC]
- The experimental results show our state-of-the-art performance across various tasks, indicating the effectiveness of our method. [2kgC, n84p, KdPm]
- The paper also conducts a study on video input using a 2D video instance tracking model, showing that the proposed method can still work without the 3D model. [n84p]
- The code is provided in the supplementary material. [n84p]
(Presentation)
- The paper is very clearly presented and easy to follow. [2kgC, n84p, KdPm]
### **[New Experiments and Results]**
In this rebuttal, we have added more supporting results to address reviewers' concerns.
- Details about multi-modal inputs, detectors, encoders, and LLMs used in other methods. [2kgC]
- Results on ScanQA test set. [2kgC]
- Ablation of object identifiers. [2kgC, KdPm]
- Details about training data. [2kgC]
- Zero-shot evaluation on 3RScan datasets. [n84p, KdPm]
- Ablation of 2D encoder. [n84p]
- Results on Nr3D/Sr3D datasets. [KdPm]
- Results of 3D referring expression segmentation on ScanRefer dataset. [G2or]
Thank you again for your constructive comments. We would be happy to answer and discuss if you have further concerns.
Pdf: /pdf/10e29fa5de7ee0b954627b6bf537841515693cd3.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Multi-LLM Debate: Framework, Principals, and Interventions | Accept (poster) | Summary: The paper tackles the issue of echo chambers in multi-agent debate (MAD), a critical area that has not been sufficiently explored theoretically in current MAD research, as the authors claim. The authors propose three interventions – diversity pruning, quality pruning, and refuting misconceptions – to address this problem.
Strengths: Positive aspects:
1. The paper addresses the important issue of echo-chamber effects in multi-agent debates.
2. The authors propose theorems to clarify concepts, attempting to provide a theoretical foundation for their work.
3. The paper is written in an accessible manner, making it easy to understand for readers.
Weaknesses: Major critiques in several areas:
A. Theoretical Issues:
A.1 Underdeveloped theoretical foundation: While the authors propose theorems, many appear trivial or state obvious conclusions (e.g., Theorems 5.1 and 5.2).
A.2 Ambiguous parameter representation: The representation of θ in Assumption 4.1 is unclear, raising questions about how model parameters are represented and changed.
A.3 Shared latent space issues: Lemma 4.2 doesn't adequately address how different LLMs would share their latent space.
B. Methodological Concerns:
B.1 Simplistic multi-agent dialogue setup: The model fails to fully leverage the potential of a multi-LLM ensemble, using it more like traditional ensemble schemes.
B.2 Unclear latent concept model: The origin and nature of latent concepts in Section 2 lack clarity.
B.3 Lack of quantifiability: The formulation in Section 4 cannot be quantified, leading to reliance on heuristics.
C. Implementation and Experimental Issues:
C.1 Disconnect between theory and implementation: Section 6 presents questionable formulations, particularly in the use of KL divergence and reliance on sentence embedding as a proxy.
C.2 Outdated model selection: Experiments use older models instead of more recent, capable ones.
C.3 Limited experimental insights: Results only show significant improvement in the first round of voting.
C.4 Questionable example: The example in Section 5.2 doesn't effectively illustrate the point about misconceptions.
Conclusion: While the paper addresses an important issue and attempts to provide theoretical grounding, it falls short in several areas. The multi-LLM debate framework presented doesn't demonstrate its full potential for sophisticated decision-making tasks. The intervention strategies and control parameters lack necessary granularity. Despite the paper's accessibility, the gap between theoretical concepts and practical application remains significant, with many proofs being trivial or of limited practical value.
Recommendation for future work: Focus on developing more sophisticated decision-making tasks, refine intervention strategies, and use state-of-the-art language models for experiments. Additionally, strengthen the connection between theoretical concepts and practical implementations to enhance the paper's overall impact and relevance.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Suppose KL represents KL divergence. KL divergence is asymmetric and unbounded. Why not use other metrics?
2. In the experiment section, would you consider adding ablation tests to determine which of your intervention strategies are effective?
3. Can you clarify how you pass the distribution of latent concepts between LLMs?
4. Please find more related papers in multi-LLM debate or dialogue. You missed a couple of key papers.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: NO! Limitation section missing.
The authors on the checklist state that they submitted limitations, but they did not.
This paper could have been desk-rejected.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### A Theoretical Issues:
**[Theoretical Results]** We aim to provide theoretical results that help explain why specific behavior would be observed in the debate process. For example, Theorem 5.2 is intended to help understand why one would observe tyranny of the majority in LLM debate. While it is not surprising to observe tyranny of the majority in debate, we are the first to establish this observation rigorously. Our result complements the main findings of [1], which outlines a similar phenomenon in in-context learning; when many of the examples share an underlying pattern, models are more likely to give responses that also have that shared underlying pattern (Figure 1 in [1] outlines the basic idea). We don't wish to overrepresent the impact of our results, but we do genuinely believe that the theorems we provide are informative.
**[Parameter Representation and $\theta$]** We agree that in isolation, the representation of $\theta$ in Assumption 4.1 is unclear. We discuss the role of $\theta\in \Theta$ in detail, starting on line 89. To address this, we propose to change the language of Assumption 4.1 to read:
**Assumption 4.1** For a given latent concept space $\Theta$, the probability of generating response $\mathbf{z}_i^{(t+1)}$ is conditionally independent of both the responses $Z^{(t)}$ and the task $\mathbf{x}$ when given any concept $\theta\in\Theta$ and model parameters $\phi_i$, i.e., $\mathbb{P}\big(\mathbf{z}_i^{(t+1)}|\theta, \mathbf{x}, Z^{(t)}, \phi_i\big) = \mathbb{P}\big(\mathbf{z}_i^{(t+1)}|\theta, \phi_i\big)$.
**[Communication]** Models communicate with each other only through tokens and do not share their latent space.
### B Methodological Concerns
**[Simplistic Multi-Agent Dialogue]** We agree that debate as a paradigm fails to fully leverage the potential of multiple LLMs. Our main goal with this paper is to provide a more rigorous formalization of debate. Given the growing number of papers focusing on multi-LLM debate (which all use a very similar setup to ours), we believe that our work is valuable.
We would also like to point out that debate is quite different from traditional ensemble methods. Stopping debate at round $t=0$ would be equivalent to a traditional ensemble. With that said, we would be very interested in seeing general multi-LLM interactions explored further, though this is outside the scope of our work.
**[Latent Concepts and a Reliance on Heuristics]** We follow the latent concept model of [1, 2]. The true latent concept space will not be accessible; we use proxies such as sentence embeddings. We fully agree that our method's weakness is its reliance on heuristics. However, our theory informs the development of these heuristics. In our experiments, these heuristics are effective.
### C Implementation and Experimental Issues
**[Theory and Implementation]** We use our theory to help guide and motivate the development of our heuristics. While we use proxies for statistical distance (namely, the distance between sentence embeddings), the choice of this proxy, as well as what we do with that proxy, are the direct result of our theoretical results. Beyond just informing the heuristics, our theoretical results also help provide some intuition as to why these heuristics work. In particular, it may not be obvious a priori why diversity pruning effectively reduces tyranny of the majority (as demonstrated in Figure 2). Both Theorem 5.2 and the formulation of diversity pruning (given on line 232) help explain this effectiveness more intuitively than empirical observation alone.
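For intuition, here is a minimal greedy sketch of a diversity-pruning-style selection over toy embedding vectors. The Euclidean distance between embeddings stands in for the statistical-distance proxy; the greedy rule and all names are illustrative assumptions, not the paper's exact procedure:

```python
import math

def dist(u, v):
    # Euclidean distance between embedding vectors, standing in for the
    # statistical-distance proxy over latent concepts (illustrative choice).
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def diversity_prune(embeddings, k):
    """Greedily keep k responses whose embeddings are pairwise far apart.

    A hypothetical sketch: each step adds the response farthest (in min
    distance) from everything already kept, so near-duplicate responses
    from a majority of agents are pruned away.
    """
    keep = [0]  # seed with the first response (arbitrary tie-break)
    while len(keep) < k:
        best, best_gain = None, -1.0
        for i in range(len(embeddings)):
            if i in keep:
                continue
            gain = min(dist(embeddings[i], embeddings[j]) for j in keep)
            if gain > best_gain:
                best, best_gain = i, gain
        keep.append(best)
    return sorted(keep)

# Three near-duplicate responses (a "majority") and one distinct response:
embs = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]]
print(diversity_prune(embs, 2))  # → [0, 3]: the distinct response survives
```

The example shows, mechanically, why such pruning counteracts tyranny of the majority: duplicated majority responses contribute little pairwise distance and are dropped first.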
**[Model Choice]** We use Llama-3, Llama-2, Mistral-v0.2, and GPT-3.5. Both Llama-3 and Mistral-v0.2 are state-of-the-art models. While we agree that Llama-2 (which has been replaced by Llama-3) and GPT-3.5 are not state-of-the-art, they are still quite capable models. Our experiments show that our method is effective on a diverse set of models, including quite capable models such as Llama-3.
**[Improvement in the First Round]** Our method achieves improvement over the baseline across all rounds (not just the first round). For both vanilla-debate and our method, the largest average gain is at round $t=1$. Both methods continue to improve over subsequent rounds (Figure 4). Ultimately, we observe significant improvement over vanilla-debate (in most cases) even after 10 rounds of debate (Table 1).
**[Example about misconceptions]** Our example of misconceptions indicates that misconceptions represent a particular type of error, specifically erroneous associations between more fundamental underlying ideas (concepts).
**[Choice of KL Divergence]** While we could choose other distance metrics, we choose KL divergence specifically to capture the surprise (in a technical sense of the word) when using one of the responses $\mathbf{z}_i$ as a predictor of $\theta$ compared to using the original task $\mathbf{x}$ as a predictor of $\theta$. Under this interpretation, the asymmetry of KL is desirable for quality pruning and misconception refutation. In the case of diversity pruning, because we sum over all pairs of $\mathbf{z}_i, \mathbf{z}_j$, a symmetry of KL is lost. We agree that the unbounded nature of KL can be undesirable.
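To make the asymmetry concrete (toy numbers only; these are not distributions from the paper), a minimal sketch:

```python
import math

def kl(p, q):
    # KL(p || q) = sum_i p_i * log(p_i / q_i): the extra surprise incurred
    # when q is used to model samples drawn from p. Asymmetric in (p, q).
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy distributions over three latent concepts (illustrative numbers only):
p = [0.9, 0.05, 0.05]      # a confident predictor of the concept
q = [1/3, 1/3, 1/3]        # an uninformative predictor

print(round(kl(p, q), 2))  # → 0.7
print(round(kl(q, p), 2))  # → 0.93, so KL(p||q) != KL(q||p)
```

Which direction is computed thus matters: the two orderings quantify different "surprises", which is why the asymmetry can be desirable for quality pruning and misconception refutation.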
**[Ablation of Each Intervention]** We provide an ablation of each intervention in Table 3 of the supplement and point to this table on line 314 of the main text.
**[Related Work]** There are quite a few relevant papers that have come out after the submission deadline that we plan to add. If you know of any other papers that we have missed, please let us know. We find this area deeply intriguing and would be very grateful for pointers towards any interesting work that we may have overlooked.
**References**
[1] Xie, Sang Michael, et al. "An explanation of in-context learning as implicit Bayesian inference." ICLR (2022).
[2] Hui Jiang. A latent space theory for emergent abilities in large language models. arXiv preprint arXiv:2304.09960, 2023.
---
Rebuttal Comment 1.1:
Title: related work, etc.
Comment: Thank you for providing clarifications. Because of the tight schedule, let me provide a textbook relevant to your work for consideration and discussion. The book was used in a university class and is known in the NLP community. Its chapters 4 to 9, and especially chapters 5 and 6, are highly relevant to this paper. E.g., chapter 6 justifies using cross-entropy, earth-mover, mutual information, and other metrics in addition to the KL.
https://www.amazon.com/dp/B0D8NGQZFR
---
Reply to Comment 1.1.1:
Title: Authors' Response
Comment: Thank you for the pointer to this textbook! We were not aware of this book at the time of submission. We agree that this text is highly relevant to any paper investigating multi-LLM debate. We plan to cite this book and make special mention of chapters 5 and 6; specifically, we propose to add the following text to our related work section:
> “An overview of debate techniques and analysis can be found in [Chang 24]. Chapter 5 outlines a debate procedure that dynamically controls the amount of disagreement between debating LLMs. Chapter 6 outlines the way in which LLM responses change during the debate process; distributions over responses tend to converge (approximately) to consensus as debate rounds increase.”
As you point out, chapter 6 offers justification for studying distributions of LLM responses via several metrics including KL. For clarity we would like to mention that we study distributions over latent concepts (see Lines 233, 237 and 260 of our paper) rather than distributions over LLM responses (as is the case in [Chang 24]). Further differentiating our work is the use of these distributions to then optimize the debate procedure.
If we can provide any additional clarification, please let us know.
[1] Edward Y. Chang (2024). The Path to Artificial General Intelligence – Insights from Adversarial LLM Dialogue.
---
Rebuttal 2:
Comment: Dear Authors: Thank you for your response. I will adjust the rating accordingly.
Dear Area Chair,
Please determine whether the omission of the "limitations" section in the submitted checklist is a significant issue. Consistency in review policy is crucial for fairness, especially given that both of our NeurIPS submissions were desk-rejected due to checklist-related issues.
Thank you.
-------------------
Dear Authors,
According to the NeurIPS policies, any violation of the formatting instructions (as indicated in the Call for Papers: https://neurips.cc/Conferences/2024/CallForPapers -> Formatting instructions) results in a desk rejection. Your paper was rejected since the NeurIPS paper checklist is part of these formatting instructions (see https://neurips.cc/Conferences/2024/CallForPapers -> Formatting instructions -> Point 3). Unfortunately, we are unable to allow further submissions or amendments to submissions as the system is now closed.
We are aware that this is not a decision you expected, but we must follow the NeurIPS policies. The decision is final and cannot be reverted.
---
Rebuttal Comment 2.1:
Comment: Dear reviewer,
Thank you for carefully reading the paper and raising the issue. However, it is not your role to decide whether the paper gets desk-rejected or not. I'll check with SAC about this, but I do not think this is a serious enough issue for desk rejection.
---
Rebuttal 3:
Title: Author Followup
Comment: We are sorry to hear the unfortunate desk rejections from your end, and thank you for reminding us of this. In the future, we will pay closer attention to it.
At the time of submission, we had a different interpretation of the limitations requirement. While a separate section is encouraged, we thought the paper flows better when we discuss the limitations directly in the main text, and labeling "Yes" simply states that we are aware of our limitations and have discussed them in the main text (in the checklist, we added the justification: "We mention the limitation of our approach in the conclusion section and experiment section").
That said, we would like to be clear that we fully agree with the reviewer's comments regarding our discussion of limitations; our initial discussion of limitations was insufficient and should be provided in its own section (please see our general response for our proposal).
---
Rebuttal 4:
Title: Misunderstanding
Comment: I pasted the statement rejecting my papers. Of course, I have no authority whatsoever to make that decision.
Good luck! | Summary: The main questions, that is tackled in this paper is: How can the debate of LLMs be influenced to ensure the best possible outcome. This question is broken down into three parts.
The first part is a theoretical model of debate between LLMs. The model is very intriguing. From the presentation it is not clear why it should be limited only to the debate of LLMs. It would be very interesting to see how the presented model of debate relates to models of debate from other fields, e.g. sociology.
The second part describes three methods of interfering with the outputs of the models to ensure that they do not converge to a suboptimal solution.
The third part shows the results of experiments.
Strengths: The models, assumptions, theorems, and conclusions are presented very well and are easy to follow, even for readers who may not be so familiar with the mathematics.
I would like to see if the model of debate could also be applied to other settings.
The experiments confirm the theory, i.e. the methods of interfering show a clear improvement in the performance of the systems.
Weaknesses: There are some typos in the document that need to be cleaned up.
The introduction to section 5 seems incomplete. Line 166 ends mid-sentence. There seems to be a grammar mistake and a superfluous ")" in line 158.
Technical Quality: 4
Clarity: 3
Questions for Authors: I had a hard time understanding why Diversity Pruning is referred to as maximizing the mutual information. Intuitively, maximizing mutual information would mean that by knowing X you know everything, or a lot, about Y. Here, you are doing the opposite. You are maximizing the difference between D(Theta|z_i) and D(Theta|z_j); hence, knowing z_i should not give you much information on how z_j influences Theta. I would appreciate it if you could help me resolve my misunderstanding.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors claim that they have addressed the limitations. I was not able to find that section. Please point out, where you discuss the limitation of your models and interventions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **[Applying Debate to Other Settings]** We think that applying debate to settings beyond QA tasks would be very interesting. As of now, the majority of works on debate focus on QA tasks (while some investigate translation). We hope that our framework inspires further generalization to other settings.
**[Typos and Grammar]** Thank you for pointing these out. We will address these in the final version of our paper.
**[Diversity Pruning and Mutual Information]** It would be more accurate to say that we are maximizing the amount of unique information present in the responses $Z'^{(t)}$, i.e., maximizing the information entropy present in $Z'^{(t)}$. Our use of the term "mutual information" arose during editing. Initially, we had said, "the amount of unique information mutually present" and then edited this phrase down to “mutual information” on lines 35 and 232, without realizing that we had inadvertently invoked a technical term conveying something other than our intention. This is an oversight on our part; we will modify the language around diversity pruning from "mutual information" to "information entropy".
---
Rebuttal Comment 1.1:
Title: Answer
Comment: Thank you very much for the rebuttal and addressing my comments. I am looking forward to the final version of the paper. | Summary: The paper presents a framework for multi-LLM debate, gives a detailed overview on drawbacks within multi-LLM debate and proposes several interventions to improve debates. The authors show the effectiveness of their interventions in several experiments on four benchmarks.
Strengths: The task of multi-LLM debate is interesting, and important to further improve LLMs responses. The authors gave a detailed overview on effects that can occur within multi-LLM debate and proposed effective interventions to improve debate on four benchmarks.
Weaknesses: The related work section would be more readable if the authors' names were mentioned instead of the reference numbers, as in: "[19] focuses on debate when each model..."
KL denotes the Kullback–Leibler divergence, which should be noted, even if it seems obvious.
There are several Mistral models. I would specify the model further (Mistral 7B).
Figure 3 is a bit too small and the two lines are the same color, so it's hard to tell them apart.
You chose Debate as the main baseline tool. A detailed explanation of the method to understand the differences of the approaches is missing.
- The method and task are both called "debate", which is confusing.
small errors:
- line 82 should be "that agent i provides response zi(t+1), is given by".
- no error, just a small suggestion: in line 86 write (1) and (2) instead of 1) and 2) which makes it more readable in my opinion
- line 94: task x and its associated answer y (singular); otherwise I would not understand why x denotes multiple tasks.
- line 189: do you mean property instead of propriety?
- it seems that Diversity Pruning should maximize the KL-divergence and not the mutual information as stated in lines 232 and 233. In the diversity pruning we are maximizing over the answers Z so it should be Z ⊂ Z(t) instead of Y.
- What is S (z ∈ S) in the formula for misconception refutation?
- line 292: Z(0) instead of Z0)
- Figure 3: small error underneath the graphic: you wrote pairwise similarity twice
Technical Quality: 3
Clarity: 2
Questions for Authors: - Maybe I overlooked the explanation in the paper, but what does alpha denote in Formula (3)?
- How does the misconception refutation work exactly? How should a model be able to identify misconceptions in the answer of another model?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: An explicit section that mentions limitations of the work is missing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **[Citations and Specificity]** Thank you for pointing this out. We agree with both points regarding the related work. We will add named citations rather than numbered citations. The specific model versions are provided in Table 2 of the supplement, but we will add this information to line 280 of the main body.
**[Figure 3]** We will make this figure larger and change the dashed green line to a dashed blue line to further distinguish these values.
**[Difference Between Debate and Our Method]** Our method (namely our three interventions) is an in-processing modification to the original debate procedure in which we leverage a latent concept space $\Theta$ to modify the responses $Z^{(t)}$ before these responses are used at the next round $t+1$. We provide an overview of the vanilla debate procedure (proposed in [Du, Yilun, et al.]) on line 73 of our paper. To help clear up this point, we propose to cite [Du, Yilun, et al.] at the beginning of our discussion on debate (line 73), and to state after line 88 that “The key distinction between our method and vanilla debate is that we will leverage a latent concept space (discussed next) to modify the responses $Z^{(t)}$ in-between rounds.”
**[Method and Task are Both Called Debate]** We agree that overloading the term debate can cause confusion. We propose the following fix: we will refer to the debate baseline [Du, Yilun, et al.] used in our experiments as "vanilla-debate" (VD), and we will refer to our method as either "Ours" or "Debate via Latent Concepts" (DLC).
**[Small Errors]** Thank you for pointing these out. We will make these fixes in the final version of our manuscript.
**[Role of $a$ in Formula (3)]** In Formula (3), the variable $a$ is an "answer extraction" function explained on line 80 of our paper. If $\mathbf{z}$ is a string, then $a(\mathbf{z})$ is the extracted answer from that string. For example, suppose the question is "What color is the sky?" and $\mathbf{z} =$ "During the day, the sky is blue.", then $a(\mathbf{z}) =$ "blue." In practice, $a$ is implemented either through regular expression matching (as in the case of BoolQ, MMLU, and MathQ) or an LLM judge (as is the case in TruthfulQA).
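As a hypothetical illustration of the regex-based variant of $a$ (the exact patterns used for BoolQ/MMLU/MathQ are not given in the rebuttal), one might write:

```python
import re

def extract_answer(response):
    """Toy answer-extraction function a(z) for a yes/no task.

    Returns the extracted answer string, or None when no answer is found.
    The pattern below is an illustrative assumption, not the paper's code.
    """
    match = re.search(r"\b(yes|no)\b", response.lower())
    return match.group(1) if match else None
```

For benchmarks with free-form answers, such as TruthfulQA, this regex step would be replaced by an LLM-judge call.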
**[Misconception Refutation]** In practice, misconception refutation works as follows. A model $f_1$ provides a response $\mathbf{z}$. A second model $f_2$ is then asked to identify a list of misconceptions in $\mathbf{z}$. The first model $f_1$ is then reprompted for an updated response that addresses the misconceptions raised by $f_2$ ($f_1$ and $f_2$ can be the same model). The intuition for why this works is very similar to why methods such as self-refinement (also called self-reflection) are effective [Madaan, Aman, et al.]; judging is often an easier task than generating. However, it is also important to note that there is a limit to how well LLMs can self-identify their own mistakes. As pointed out by [Tyen, Gladys, et al.], LLMs can have a difficult time identifying their own errors in reasoning tasks, but (as the title suggests) can fix erroneous reasoning steps when told that a given step is wrong (even without being told what the correct step should have been). This observation might indicate that misconception refutation scales with the effectiveness of $f_2$ (responsible for misconception identification) more than the effectiveness of $f_1$.
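The loop described above can be sketched as follows, where `f1` and `f2` are placeholders for LLM calls and the prompt templates are illustrative assumptions rather than the paper's exact prompts:

```python
def misconception_refutation(f1, f2, question):
    """One round of the refutation loop: draft, critique, revise.

    f1 and f2 are callables mapping a prompt string to a response string;
    they may be the same underlying model.
    """
    # Step 1: f1 drafts an initial response.
    draft = f1("Answer the question: " + question)
    # Step 2: f2 lists misconceptions it finds in the draft.
    critique = f2("List any misconceptions in this response:\n" + draft)
    # Step 3: f1 revises its draft to address the critique.
    return f1(
        "Question: " + question + "\nYour previous response: " + draft
        + "\nRevise your response to address these misconceptions:\n" + critique
    )
```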
**References**
[1] Du, Yilun, et al. "Improving factuality and reasoning in language models through multiagent debate." arXiv preprint arXiv:2305.14325 (2023).
[2] Madaan, Aman, et al. "Self-refine: Iterative refinement with self-feedback." Advances in Neural Information Processing Systems 36 (2024).
[3] Tyen, Gladys, et al. "LLMs cannot find reasoning errors, but can correct them!." arXiv preprint arXiv:2311.08516 (2023).
---
Rebuttal Comment 1.1:
Comment: Thank you for considering all my remarks for adapting your final manuscript.
I will adjust my rating accordingly. I am just confused about the comment from reviewer 1QQG who pointed out that the paper should have been desk rejected because of the missing limitations sections. Is the last comment an official response from the area chairs?
---
Reply to Comment 1.1.1:
Title: That last comment was not for this paper, but copied from somewhere else
Comment: Thank you for your feedback.
The last comment from reviewer 1QQG is **not** for this paper. We believe it is a paragraph copied from the reviewer either from their end or from somewhere else.
As we replied to reviewer 1QQG, at the time of submission, we had a different interpretation of the limitation requirement (Does the paper discuss the limitations of the work performed by the authors?). While a separate section is encouraged in the guideline, we thought the paper flows better when we discuss the limitations right in the main text, and labeling the answer as "Yes" simply states that we are aware of our limitations and have discussed them in the main text. Indeed, in the checklist, we had the following justification: "Justification: We mention the limitation of our approach in the conclusion section and experiment section".
Please let me know if there are further questions. | Summary: In this paper, the authors establish a theoretical framework for multiagent debate. The authors find that LLMs are susceptible to issues such as leaning toward majority opinion (tyranny of the majority) and error expansion during debates (shared misconceptions between models). Thus, the authors propose three types of interventions, including debate selection and debate modification to mitigate these issues. Results show that LLMs improve on 4 common datasets, BoolQ, MMLU, TruthfullQA, Math.
Strengths: 1. Multiagent debate shows promising improvement of LLMs. However, most of the papers are on the empirical level. This paper provides a theoretical framework that gives a more accurate analysis of what we should focus on.
2. The phenomena of the tyranny of the majority and shared misconceptions between models are very interesting. They show how large language model bias occurs when debating without human participation.
3. The empirical results prove the theory and improve the original debating framework by a large margin.
Weaknesses: 1. Some assumptions seem too strong. It is assumed that each agent can only see one round of history. However, empirically, the agent can see all of its own dialogue. For Assumption 4.1, how could the model respond without the reaction of other models? For instance, other models may react with opposite results.
2. Lacks some details on experiments. How do you force the model to affirm and then prompt m of the models (out of 11) to provide responses (Line 287)? Besides, for Figure 2, what is the dataset used, and is it averaged across all 4 datasets?
3. Since debating is already API-costly, I would expect a time estimation for the proposed algorithm on one sample and how it compares with the original debating.
Technical Quality: 2
Clarity: 3
Questions for Authors: Some presentation typos:
line 292 $Z^{(0)}$
Figure 3 caption is repeating
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: I encourage the authors to write limitations and broader impacts sections independent of the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **[Theoretical Assumptions]** While we agree that we make some stylized assumptions, we believe that these assumptions are reasonable in the context of debate and LLMs. For example, we formulate debate in a Markovian manner (i.e., each agent can only see one round of history). This Markovian property is not unique to our paper and is common in most papers on LLM debate (e.g., [Du, Yilun, et al.]). It would be interesting to consider cases where agents have access to longer histories, especially with the rapidly growing context windows of modern models (e.g., 128K token window of the newly released Llama 3.1).
As for Assumption 4.1, we would like to provide some clarification. At each round $t$, the question $\mathbf{x}$ and the previous responses $Z^{(t-1)}$ are formatted into a prompt, which is passed to the model. Assumption 4.1 says that the model's generation at time $t$ (namely $\mathbf{z}^{(t)}$) is independent of the question and the other models' responses at time $t$ **given** the latent embedding $\theta$ and model parameters $\phi$. This assumption can be thought of as saying that once a prompt has been passed through the encoder of the model, the original prompt (i.e., $\mathbf{x}$ and $Z^{(t-1)}$ in our case) is no longer needed.
**[Majority Answers and Figure 2]**
For the results in Figure 2, we force the models to provide specific answers using special target prompts (an example for BoolQ is provided below). We design these prompts to be as similar to the original prompts as possible (see line 576 of the supplement for the original BoolQ prompt).
We use the targeted prompt 20 times for each target answer to obtain a diverse set of responses. Any response that does not provide the target answer is removed. We will add this explanation to the experiment section (after the first sentence of line 285) and the following example prompt to Section B.1 of the supplement (after line 617).
```
You will be given a yes-no question which is based on a passage.
You should use the passage to provide an answer of {_TARGET_ANSWER_}.
You should give a brief justification for that answer,
and you must provide a final answer {_TARGET_ANSWER_}.
\n Question: {_QUESTION_}
\n Passage: {_PASSAGE_}
```
The values shown in Figure 2 are averages over all questions in each dataset (see line 273 for these values). We show one plot for each dataset, denoted by each plot's title in Figure 2.
If anything else is unclear about the experimental design, please let us know; we are always eager to improve the readability of our paper.
**[Running Time]** This is a good point; debate as an inference procedure can be costly, and our method introduces additional overhead. We believe that the performance gain from our method justifies the increased inference time. Below is a table showing the average run time for a single question (in seconds) for both the vanilla debate method and each of our interventions. We use 6 models and report the average time to complete 1 question, averaged over 3000 questions from BoolQ. The comparisons are vanilla debate (VD), diversity pruning (DP), quality pruning (QP), misconception refutation (MR), and a combination of all three interventions outlined by Algorithm 1 (All 3).
| Model | VD | DP | QP | MR | All 3 |
| ---------------- | ------- | -------- | ------- |-------- | -------- |
| $6\times$GPT-3.5 | 2.3 | 2.6 | 2.5 | 4.8 | 4.3 |
| $6\times$Llama-3 | 5.2 | 4.6 | 3.8 | 10.3 | 7.1 |
| $6\times$Llama-2 | 13.8 | 11.2 | 6.6 | 27.1 | 18.0 |
| $6\times$Mistral | 6.3 | 4.9 | 4.5 | 12.8 | 9.4 |
When combining all three of our interventions, we see that the additional overhead compared to vanilla debate is not too severe. This is due primarily to the fact that diversity pruning and quality pruning remove responses from $Z^{(t)}$, which shortens the prompt lengths at round $t+1$, which in turn speeds up the inference time of each model. We will add this table to Section B of the supplement and reference/discuss this table on line 312 of our experiments section.
For Llama-2, Llama-3, and Mistral, we use a single Tesla-V100-32GB GPU, an Intel i9 CPU, and the vLLM acceleration library (https://docs.vllm.ai/en/latest/). For GPT-3.5, we use the OpenAI API and the AsyncOpenAI class to handle the prompts in batches of 500 (https://github.com/openai/openai-python).
**References**
[1] Du, Yilun, et al. "Improving factuality and reasoning in language models through multiagent debate." arXiv preprint arXiv:2305.14325 (2023).
---
Rebuttal Comment 1.1:
Comment: Thanks for the point-by-point in-depth analysis and thoughtful explanations. I appreciate that my concerns on W2 and W3 are mostly addressed. I encourage the authors to include the analysis in the final version. Thanks! | Rebuttal 1:
Rebuttal: # General Rebuttal
Thank you for taking the time to provide feedback on our work.
**[Limitations]** While we discuss our work's limitations within the main body, we realize that this discussion is insufficient and needs to be more overt, focused, and in its own section. We propose adding the following section right before our conclusion section. We will move Figure 1 to the supplement to make room for this additional section.
**Limitations and Impact** While we do aim to address some of the fundamental issues of multi-LLM debate, such as tyranny of the majority, there are several factors that one needs to consider when adopting our framework or using our interventions. Firstly, our theoretical results leverage a latent concept space $\Theta$, which may not be accessible in practice, necessitating the use of proxies such as sentence embeddings. This can particularly limit the applicability of quality pruning and diversity pruning; these two interventions are less effective in domains where sentence embeddings are less meaningful (e.g., arithmetic questions). Additionally, these interventions can increase the inference time of debate (this is particularly true for misconception refutation). While our work aims to provide some insights into the debate procedure, there is much that still needs to be explored. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
UniFL: Improve Latent Diffusion Model via Unified Feedback Learning | Accept (poster) | Summary: This paper introduces UniFL, a novel approach for improving diffusion models through unified feedback learning. The objective of UniFL is to enhance visual generation quality, preference aesthetics, and inference acceleration. To achieve this, the paper proposes three key components: perceptual feedback learning, decoupled aesthetic feedback learning, and adversarial feedback learning. UniFL is designed to be a two-stage training pipeline that can be applied to various models and yields impressive improvements in both generation quality and acceleration. The paper provides experimental results demonstrating the effectiveness of UniFL in terms of generation quality and acceleration. Additionally, the paper discusses the potential broader impacts and ethical implications of advancements in image generation techniques.
Strengths: 1. The writing of this article is commendable, with a well-structured format.
2. The author conducted a large number of visualization experiments to illustrate the focus and effectiveness of the method, which is very attractive.
3. The proposed method serves as a plug-and-play indeed improves the performance on both SD15 and SDXL.
Weaknesses: 1. There are some minor LaTeX formatting issues: for example, there should be a consistent space before parentheses in abbreviations, the quotes in the Appendix should be implemented using `’, and “our” should be “Our” in line 130.
2. The values of hyper-parameters such as \alpha_d are not explicitly stated in the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Have the authors explored the impact of different aesthetics on generated images?
2. Are the user studies enough to prove the effectiveness of each module in this field?
3. Why not compare with SDXL-IR and SDXL-DPO in Ablation on Acceleration Steps?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your kind words about our good writing, sufficient experiments, and the effectiveness of our method. We would like to answer the proposed questions in the following:
1. **Minor Typos**: Thanks for the suggestion, we will modify these places in our revised version.
2. **Selection of hinge coefficient $\alpha_d$**: Please refer to the Author Rebuttal part and Fig.4 of the global rebuttal PDF for more details.
3. **Impact of different aesthetics during generation**: The impact of different aesthetic reward models on the generated images can be clearly observed in Fig.3 in our main paper. As can be seen, with the integration of color aesthetic rewards, the colors of the images generated by UniFL are more harmonious and natural. This enhancement is particularly pronounced in the acceleration scenario, where LCM and Turbo exhibit color deterioration characterized by darker, grayer, and vaguer colors, contrasting with our method's maintenance of a lively color palette. The effects of the detail and layout reward model are also notable. For instance, when considering the prompt `A bloody may cocktail`, the image generated by UniFL showcases intricate background details while maintaining a well-balanced layout with the foreground, resulting in a visually appealing composition. Furthermore, the lighting aesthetic reward ensures atmospheric lighting in the image, as opposed to the flat center background lighting observed in alternative methods.
4. **Effectiveness of User Study**: Due to the subjective nature of image quality judgment, the most effective way to evaluate the performance of a model in the context of text-to-image generation so far is still user evaluation. This is also the standard practice of various classical text-to-image works such as SDXL[1], SDXL-Turbo[2], etc. Moreover, to increase reliability, we involve a considerable number of users (10 users) in our quality evaluation study, which we believe can validate our conclusion relatively accurately.
5. **Acceleration ablation with DPO/ImageReward**: We did not compare the accelerated version of our method with SDXL-IR and SDXL-DPO as _they are methods tailored for generation quality optimization rather than inference acceleration_. There is no few-step optimized version of SDXL from ImageReward or DPO for a fair comparison with our method on inference acceleration. Even so, the unaccelerated version of SDXL optimized via our approach still displays superior image quality compared to these methods, as evidenced by our experiments.
We hope our rebuttal can address your concerns.
[1] SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis
[2] SDXL-Turbo: Adversarial Diffusion Distillation
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I have checked the rebuttal and tend to keep my score. | Summary: Considering that current diffusion models still suffer from several limitations, this paper aims to propose a unified framework, UniFL to address the main existing challenges by applying feedback learning. To respectively solve the issues of visual distortion, poor aesthetic appeal, and inefficient inference, this paper demonstrates different sub-modules, namely Perceptual Feedback Learning, Decoupled Feedback Learning, and Adversarial Feedback Learning. By fully leveraging the ability of Perceptual Feedback Learning to fine-tune the diffusion during the two training stages, the proposed model aims to achieve both remarkable generation quality and speed simultaneously. Sufficient experiments, including extensive ablation studies for different proposed sub-modules, have proved the validation and effectiveness of the model’s design.
Strengths: The proposed idea about a combination of feedback learning and diffusion models is well introduced with necessary background information and is sufficiently motivated.
The three main issues remaining for diffusion, namely inferior quality, lack of aesthetics, and slow inference speed are well addressed, keeping the following methodology sections’ logic pipeline clear and understandable.
The proof of the methodology part is adequately explained with abundant support of pipeline figures and pseudocode that help the reader get full access to the novelty insight.
Sufficient experiments over different domains, especially the wide range of ablation studies, have been carried out validly which demonstrate the effectiveness of all the proposed feedback learning modules, which enhances persuasiveness.
Weaknesses: The proposed qualitative visualization comparison mainly focuses on justifying the overall generation style and structural correctness. However, a main issue that may occur during inference speed-up is detail loss. It would be appreciated if the authors could provide further visual comparison results demonstrating the model's ability to keep the visualization results consistent with the adjectives in the text prompt.
Also, according to Figure 7, an unwelcome issue can be found: in the image generated by UniFL for the text prompt "A cream-colored labradoodle wearing glasses and a black beret teaching calculus at a blackboard", the labradoodle mistakenly wears a black tie, which is never mentioned in the text prompt. This shows an unexpected trend of overfitting and inconsistency. Further explanation of the cause of such results should be provided.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weakness part.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Though there are still some blemishes part for some of the proposed results, generally the experiment part is well-organized, and filled with abundant ablation studies for the proposed modules.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your kind words about our well-motivated method, sufficient experiments, and good writing. We would like to answer the proposed questions in the following:
1. **Visualization on T2I alignment after acceleration**: We visualize the text-to-image (T2I) alignment performance of SDXL accelerated via our method on the widely used benchmark prompt set, DrawBench, and provide some results in Fig.3 of the global rebuttal PDF. It clearly demonstrates that, even with a reduced number of inference steps, our approach maintains superior text-image consistency under various circumstances after acceleration, including attribute binding, object counting, and counterfactual prompt inputs.
2. **Explanation of the unexpected generation**: It should be noted that although it is not what the user expected, there is nothing wrong with the model generating this black tie, as the input prompt does not explicitly specify that a tie is undesired, and all the items mentioned in the prompt (e.g., cream-colored labradoodle, glasses, black beret) have already been correctly generated. In other words, the model is free to generate additional items beyond what the input prompt specifies. One possible reason for the model to generate the black tie in this case is data bias residing in the training samples. That is, there are vast numbers of training examples of teachers lecturing at a blackboard, where most of the teachers wear ties. Such co-occurrence bias makes the model easily generate an extra tie when it comes to a character in front of a blackboard. A possible solution is to input an additional negative prompt to suppress such undesired behavior.
We hope our rebuttal can address your concerns.
---
Rebuttal Comment 1.1:
Comment: Thank you for providing the response. After reading it, I have decided to keep the original score. | Summary: The work introduces Unified Feedback Learning (UniFL), a unified framework to enhance diffusion models through feedback learning. It addresses three main challenges in diffusion models: visual quality, aesthetic appeal, and inference efficiency. UniFL comprises perceptual feedback learning, decoupled feedback learning, and adversarial feedback learning.
- Perceptual Feedback Learning utilizes existing perceptual models, such as an instance segmentation model, to fine-tune diffusion models on a specific aspect.
- Decoupled Feedback Learning decomposes the general aesthetic concept into color, layout, lighting, and detail. It fine-tunes the model by reward feedback learning in all these sub-categories.
- Adversarial Feedback Learning exploits a general reward model as a discriminator to improve the generation quality of fewer denoising steps.
Experiments demonstrate that UniFL improves the performance of diffusion models like SD1.5 and SDXL in terms of generation quality, aesthetic preference, and inference speed.
Strengths: - The paper is clear and easy to follow
- The proposed method utilizes different priors from other perceptual models to improve diffusion models.
- New aesthetic human feedback dataset is proposed.
Weaknesses: - The overall method is a modification of ReFL.
- The perceptual feedback learning part is limited by the selected models, which seem to only fine-tune concepts used to train these perceptual models.
- The performance is not significantly superior to other methods according to Table 1.
Technical Quality: 3
Clarity: 3
Questions for Authors: - If users want to generate images containing concepts not shown in COCO dataset, but shown in LAION, it seems perceptual feedback learning part will not work.
- If users want to fine-tune the diffusion model in several different perceptual aspects simultaneously, will the priors from different perceptual models interrupt each other?
- Weaknesses above
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes, the authors addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your kind words about our writing and contribution to the feedback dataset curation. We would like to answer the proposed questions in the following:
1. **Comparison with ReFL**: There are significant differences between our method and ReFL. Specifically: 1) Given the input prompt, ReFL starts with pure noise to obtain the denoised image and then imposes the reward scoring. This limits their usage to the reward models pre-trained on the collected preference data for the reward guidance. In contrast, _we incorporate an extra image condition for noise initialization and use the perceptual label of this condition image for feedback supervision on the denoised image_. Such formulation allows us to leverage the prior knowledge of a wide range of existing perceptual models to perform feedback fine-tuning. 2) In ReFL, a single coarse-grained preference reward model is applied, which may encounter the rewarding conflict, resulting in insufficient fine-tuning. As a comparison, _we develop multiple decoupled aesthetic reward models, which enable more effective fine-grained aesthetic preference alignment_. 3) A core finding exhibited in ReFL is that it can only be applied to later denoising time steps during fine-tuning with the reward model as the reward model cannot reward correctly in the early steps. By contrast, _we introduce adversarial feedback supervision, which effectively removes this limitation on optimization steps, allowing us to apply feedback fine-tuning to any denoised time step_, thereby achieving model inference acceleration. Overall, these novel designs make our approach distinct from ReFL and yield a more flexible and effective feedback fine-tuning framework for LDM.
2. **Fine-tune concepts limitation in PeFL**: On the one hand, although we apply the SOLO instance segmentation model trained on the COCO dataset (80 categories), we observe that PeFL exhibits exceptional generalization capability, and the generation performance of many concepts not shown in the COCO dataset is also boosted significantly, as shown in Fig.2 of the global rebuttal PDF. We believe that the LDM can be guided to learn general and reasonable structure generation via PeFL optimization. On the other hand, the proposed perceptual feedback learning is a very general module, and it is very straightforward to replace the closed-set model SOLO with other open-set instance segmentation models such as Ground-SAM to achieve further improvement for concepts in the wild (e.g., concepts in the LAION dataset).
3. **About the performance in Tab.1**: We argue that our method demonstrates notable improvements compared to other approaches as outlined in Tab.1. It is essential to emphasize that different metrics possess unique properties, and _absolute enhancement should not be the sole criterion for evaluation_. For instance, the inherent limitations in the fine text-to-image alignment of the CLIP model make it challenging to achieve significant advancements in CLIP scores, as it cannot capture nuanced content changes. Consequently, even with tailored text-to-image alignment feedback fine-tuning, ImageReward achieves a mere 0.004 CLIP score improvement with SD1.5. Therefore, _it is more reasonable to assess the relative improvements_. Specifically, UniFL obtains a 2$\times$ CLIP score improvement over the SD1.5 base (0.005 _vs_ 0.01) compared with the second best method (i.e., SD15-DS), which showcases the superiority of our method. A similar case holds for other metrics, where UniFL obtains relatively more notable improvement upon the base model. Moreover, our method also displays an obvious advantage over other methods in the extensive user study.
4. **About multiple perceptual aspects optimization with PeFL**: We show the results of multi-aspect optimization on style and structure through PeFL in Fig.1 of the global rebuttal PDF. As can be seen, incorporating these two distinct objectives does not hurt the effectiveness of either. Take the prompt ``a baby Swan, graffiti`` as an example: integrating the style optimization via PeFL upon the base model successfully aligns the image with the target style, and further integrating the structure optimization objective retains the correct style while exhibiting more complete structural details (e.g., the feet of the Swan).
We hope our rebuttal can address your concerns.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for your rebuttal. After reviewing your rebuttal, it addressed most of my concerns. I tend to increase my rating to Borderline accept. | Summary: This paper proposes a framework to enhance visual quality, aesthetic appeal, and inference efficiency using various methods, including perceptual feedback learning, decoupled feedback learning, and adversarial feedback learning. Good experimental results are observed.
Strengths: The concept of perceptual feedback learning is promising, as it can leverage various pretrained expert models to improve learning.
The idea of decoupled feedback learning makes sense, as it allows the model to focus on fine-grained details in the images.
Weaknesses: The author claims that previous works primarily focus on individual problems through specialized designs and proposes a unified framework. However, three different methods are designed to solve different problems. Why is it called unified?
In Line 145, it states that image content is incorporated as an additional condition for guidance. However, as shown in Figure 1, no extra inputs are added to the diffusion model.
In the experiment, only an instance segmentation model is used for perceptual feedback learning. Have the authors tried other types of models? It would be interesting to see some analysis on this aspect.
For decoupled feedback learning in Equation 5, why is a hinge loss used instead of a winner-loser loss as in Equation 4? What do the data annotations look like? How to choose the hyper-parameters of the hinge loss?
What are the details of the semantic-based prompt filter and nearest neighbor prompt compression described in Line 189 for active prompt selection?
In Line 202, it states that samples with low inference steps tend to be too noisy to obtain correct rewarding scores in previous methods. How does the proposed adversarial feedback learning improve this and how does it accelerate inference?
Technical Quality: 2
Clarity: 2
Questions for Authors: see weakness
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: see weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your kind words about the design of perceptual feedback learning and decoupled feedback learning in our method. We would like to answer the proposed questions in the following:
1. **Clarification on unified design**: We claim our method is a unified design as all the modules are seamlessly integrated under the unified feedback learning framework. Specifically, under this framework, we developed different reward mechanisms (namely perceptual reward, decoupled reward, and adversarial reward), which are effectively combined to mitigate the problems of low fidelity, poor appeal, and inefficient inference of existing LDMs simultaneously, for the first time, via the **_unified rewarding then fine-tuning paradigm_**. In contrast, existing works exhibit high heterogeneity in the way they address these defects of LDMs (e.g., changing the network structure or altering the sampling scheduler), making it challenging for these methods to be well combined to achieve comprehensive improvement.
2. **Additional image condition**: We emphasize that _the introduced additional condition of PeFL refers to the image content incorporated into the noise initialization_, instead of an extra control image input as in ControlNet. Specifically, we highlight the perceptual feedback learning (PeFL) with the additional image input in Fig.1 (left) via the **blue solid line**. As depicted in Fig.1, this image is first encoded and then injected with noise (the injection process is not illustrated in Fig.1 for clarity, but detailed in Algo.1), then sent to the diffusion model for denoising; the denoised image is then supervised by the perceptual label (instance/semantic segmentation mask) of the input image. As a comparison, below the input image for PeFL, the decoupled feedback learning starts with pure noise, as indicated by the yellow solid line.
3. **More instantiation of PeFL**: We have presented two more case studies of perceptual feedback learning with other types of perceptual models, including the semantic segmentation model (i.e., DeepLab-V3) and the style extraction model (i.e., VGG-16), in the Appendix attached to our main paper. The results (Fig.8 and Tab.2 for style, Fig.9 and Fig.10 for layout) demonstrate that PeFL can effectively exploit the prior image knowledge embedded in these expert models to boost the performance of style generation (increasing the style response rate) and semantic layout optimization (more preferred layouts).
4. **Loss formulation in Eq.5**: We choose the hinge loss instead of the winner-loser loss in Eq.5, as _it describes the reward-tuning procedure, wherein there are no winner and loser samples for pair-wise loss calculation_. Specifically, in this process, a sample is generated from pure noise given a prompt and then scored via the already well-trained reward models. Eq.5 then encourages the generated image to gain a higher reward score, and the hinge coefficient helps to avoid an unboundedly increasing reward, preventing the LDM from overfitting to the reward model. The winner-loser loss is only used in the reward model training stage, where the reward model is trained to increase the score of the preferred sample while lowering that of the unpreferred one, aligning with human preference.
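For concreteness, the contrast between the two losses can be sketched as follows. This is our illustrative paraphrase of the rebuttal's description; the function names, the exact hinge form, and the role of `alpha_d` are assumptions, not the paper's code:

```python
import numpy as np

def winner_loser_loss(r_win, r_lose):
    """Bradley-Terry-style pairwise loss for reward-model TRAINING:
    pushes the preferred sample's score above the unpreferred one's."""
    return float(-np.log(1.0 / (1.0 + np.exp(-(r_win - r_lose)))).mean())

def hinge_reward_loss(r_generated, alpha_d):
    """Hinge-style loss for reward TUNING of the diffusion model:
    raises the generated sample's reward, but the loss goes to zero once the
    score exceeds the coefficient alpha_d, so the reward cannot grow unboundedly."""
    return float(np.maximum(0.0, alpha_d - r_generated).mean())
```

The hinge form applies at tuning time precisely because no winner/loser pair exists there; the pairwise form only appears while training the reward model itself.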
5. **Preference data annotations**: We provide some examples of the collected annotated preference data in Sec.B.1(Fig.11) in the Appendix of our main paper.
6. **More details of active prompt selection**: We provide more details of the semantic-based prompt filter and nearest neighbor prompt compression during active prompt selection in Sec. B.2 in the Appendix of the main paper. In short, the semantic-based prompt filter selects prompts with rich semantic meaning via rule-based filtering, while the nearest neighbor prompt compression is designed to suppress redundant prompts by checking semantic similarity in the embedding space to ensure diversity.
7. **Selection of hinge coefficient $\alpha_d$**: Please refer to the Author Rebuttal part and Fig.4 of the global rebuttal PDF for more details.
8. **Analysis on Adversarial Feedback Learning**: We provide a detailed analysis of the effect of adversarial feedback learning on acceleration in our ablation study section (L316). In summary, incorporating adversarial feedback plays two roles: (1) With the diffusion model and the adversarial reward model updated adversarially, the diffusion model is not prone to overfitting; the synchronously updated discriminator forces the diffusion model to evolve continuously, enjoying the reward guidance for a longer duration. (2) The denoised images under low inference steps are forced to be clearer via the strong adversarial objective and thus can be correctly rewarded by the aesthetic reward model. Given these two benefits, the generation quality of low-step images steadily improves via the reward guidance, achieving inference acceleration.
We hope our rebuttal can address your concerns. | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for their constructive comments. We are delighted that the core contributions of our proposed method are regarded as promising and effective (R-qtQN, R-dHXQ, R-af1S), novel (R-dHXQ), the paper is well-motivated (R-dHXQ) and well-written (R-nwEj, R-dHXQ, R-af1S). All the comments will be addressed in the revised paper. We would like to answer some common questions in the following:
**R-qtQN, R-af1S: Selection of hinge coefficient $\alpha_d$**: We customized the hinge coefficient for each aesthetic reward model based on their reward distributions on the validation set. As illustrated in Fig.4(left) of the global rebuttal PDF, there are clear margins in the reward scores between preferred and unpreferred samples. Moreover, such margin varies across these dimensions, which emphasizes the importance of the decoupled design. Empirically, we set the reward hinge coefficient to the average reward scores of the preferred samples to encourage the diffusion model to generate the sample with higher reward scores. Taking the reward model on color as an example, we ablate such coefficient selection. As depicted in Fig. 4(right) of the global rebuttal PDF, setting $\alpha_{color}$ too small resulted in marginal improvement due to limited guidance, while excessively large hinge coefficients led to over-saturation in images. Conversely, selecting a coefficient around the average reward score of the preferred samples yielded optimal results. A similar trend was observed in the layout and lighting aesthetic dimensions, except for the detail dimension. Notably, a slightly lower $\alpha_{detail}$ sufficed to achieve satisfactory results for the detail reward model, whereas a higher coefficient tended to introduce more background noise. This phenomenon is likely attributed to the substantial reward score margin between preferred and unpreferred samples for detail dimension, where a high coefficient could lead to overwhelming guidance toward the target reward dimension.
We hope our rebuttal can address the concerns about these questions.
Pdf: /pdf/b829596f85a06b2a31bf67d461cce58826208791.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Retentive Network | Reject | Summary: The paper proposes the Retentive Network (RetNet) as a foundation architecture for large language models. RetNet has a multi-scale retention mechanism with three computation paradigms: parallel, recurrent, and chunkwise recurrent.
The retention mechanism starts with a recurrent modeling formulation and derives a parallel formulation. It maps input vectors to state vectors recurrently and implements a linear transform to encode sequence information. Then, it makes the projection content-aware by using learnable matrices. The retention layer is defined using these matrices and a complex position embedding, combining causal masking and exponential decay along relative distance.
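The recurrence-to-parallel equivalence described above can be sketched numerically. This is a hedged illustration with arbitrary dimensions (not the paper's implementation), using the decay form `D[n, m] = gamma^(n-m)` for `n >= m`:

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 5, 4          # sequence length and head dimension (illustrative)
gamma = 0.9
Q, K, V = (rng.standard_normal((T, d)) for _ in range(3))

# Recurrent form: S_n = gamma * S_{n-1} + K_n^T V_n,  o_n = Q_n S_n
# (O(1) state per decoding step)
S = np.zeros((d, d))
out_rec = np.zeros((T, d))
for n in range(T):
    S = gamma * S + np.outer(K[n], V[n])
    out_rec[n] = Q[n] @ S

# Parallel form: O = (Q K^T * D) V, with a causal, exponentially decaying mask D
D = np.array([[gamma ** (n - m) if n >= m else 0.0 for m in range(T)]
              for n in range(T)])
out_par = (Q @ K.T * D) @ V

assert np.allclose(out_rec, out_par)   # the two paradigms coincide
```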
It achieves low-cost inference, efficient long-sequence modeling, comparable performance to Transformers, and parallel training. Experimental results show its superiority in language modeling, inference cost, and training throughput.
Strengths: 1. The RetNet also shows competitive performance in language modeling and knowledge-intensive tasks compared to other Transformer variants and has the potential to replace Transformers for large language models.
2. Achieves significantly better inference efficiency in terms of memory, speed, and latency.
Weaknesses: 1. The paper presents the scaling curves of RetNet and Transformer with model sizes ranging from 1.3B to 6.7B, concluding that RetNet is favorable in terms of size scaling and starts to outperform Transformer when the model size is larger than 2B. However, it does not provide a detailed explanation for this trend. Understanding the underlying reasons for this performance difference with increasing model size could provide more insights into the effectiveness of RetNet and its potential advantages over Transformer.
2. The use of $\gamma$ in the RetNet may appear somewhat heuristic. The paper assigns different $\gamma$ for each head in the multi-scale retention (MSR) module and keeps them fixed among different layers. While this approach is used to achieve certain effects, such as enhancing the non-linearity of the retention layers and improving the model's performance, the specific rationale for choosing these values and the potential impact on the model's behavior could be further explained.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can you provide more insights into why RetNet starts to outperform Transformer when the model size is larger than 2B? What specific characteristics or mechanisms of RetNet contribute to this improved performance at larger scales? Have you tried different learning rates to investigate their impact on the scaling of the RetNet model? If so, what were the results and how did they affect the model's performance and scalability?
2. A more detailed discussion on the selection of $\gamma$ and its effects on the model's performance, as well as how it compares to other possible approaches or values, would be beneficial in providing a deeper understanding of the RetNet's functionality. Additionally, exploring the sensitivity of the model to different values of $\gamma$ or conducting experiments to justify the choice could strengthen the argument for using this particular heuristic.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: none
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your positive comments.
>Q1: Can you provide more insights into why RetNet starts to outperform Transformer when the model size is larger than 2B?
A1: Because of the recurrent nature of the proposed method, the dimension of the "hidden states" is critical for model performance, which is similar to the concept of LSTM hidden size. We find that expanding the width is beneficial for retention, especially for smaller sizes. As the model size increases, the hidden size becomes larger.
---
>Q2: Have you tried different learning rates to investigate their impact on the scaling of the RetNet model? If so, what were the results and how did they affect the model's performance and scalability?
A2: We follow the learning rate schedule used by Transformers for fair comparisons in the paper. The suggested investigation would be valuable to obtain the relation between optimal learning rate and model size.
---
>Q3: Use of $\gamma$? How is $\gamma$ determined?
A3: As described in Line 66 and Eq. (3), the decay term $\gamma$ is introduced by diagonalizing the matrix $A$. As shown in Table 3, the ablation "without $\gamma$ decay" performs worse than the proposed retention, indicating the effectiveness of our design. The physical meaning of $\gamma$ is a relative position bias, as in Alibi[1] and xPos[2]. Following the above works, we assign different decay values to the heads. Moreover, the decay speed is similar to that of previous position embedding works, rather than the result of a heuristic search over $\gamma$.
[1] Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation
[2] A Length-Extrapolatable Transformer
We hope the above explanation clarifies the rationale behind our designs.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply. I decided to maintain my original score. | Summary: The authors propose a linear attention model called RetNet for language modeling, which has a linear training complexity and constant inference complexity.
Strengths: 1. RetNet has both linear time complexity and constant inference memory complexity.
2. RetNet has a chunk recurrent form which can be beneficial for speculative decoding.
Weaknesses: 1. The authors introduce a new term called "Retention," but this is essentially the same as Linear Attention without the denominator, which has already been proposed in [1].
2. Lack of comparison with the baselines on open source pretraining data. All the training experiments are conducted on in-house data mixtures, which harms the reproducibility.
3. The paper doesn't compare RetNet with other linear attention model (such as GLA, RWKV, Mamba) on downstream tasks with standard metrics instead of perplexity. Table 2 only include RetNet and Transformer. The efficiency measurment of RetNet+ is absent.
4. The evaluation on MMLU/Qasper is using perplexity but not the widely-used accuracy/F1 metric. The perplexity results don't necessarily mean that the model can make correct choices for the samples in MMLU, and has less guidance for the model's downstream performance.
5. Missing citations: The authors should also cite [1] for the normalization after retention, and discuss the details of the triton implementation of RetNet and its difference from the implementation in the Flash Linear Attention [2] library.
[1] Zhen Qin, Xiaodong Han, Weixuan Sun, Dongxu Li, Lingpeng Kong, Nick Barnes, and Yiran Zhong. The devil in linear transformer. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 7025–7041, Abu Dhabi, United Arab Emirates, Dec. 2022. Association for Computational Linguistics.
[2] Yang, Songlin and Zhang, Yu. FLA: A Triton-Based Library for Hardware-Efficient Implementations of Linear Attention Mechanism. https://github.com/sustcsonglin/flash-linear-attention
Technical Quality: 2
Clarity: 2
Questions for Authors: Can you also add the results on MMLU and Qasper with the standard metrics besides perplexity?
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: No, the authors should have a limitation section to point out the strong assumptions of their approximation of self-attention and relative position embedding.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >Q1: The authors introduce a new term called "Retention," but this is essentially the same as Linear Attention without the denominator, which has already been proposed in `The devil in linear transformer`.
A1: The term is proposed to avoid confusion with the pioneering work "Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention", where linear attention includes the denominator. We can add a discussion of `The devil in linear transformer` (Transnormer) to the paper. We also list several key differences as follows.
- We derive the design starting from the recurrent view, rather than empirically modifying the original attention.
- The theoretical derivations naturally include position embedding and the decay term ($\gamma$), which is critical for stable training and good performance. As shown in Table 3, the ablation "without $\gamma$ decay" performs worse than the proposed retention. Directly using the QKV implementation cannot easily converge as expected at larger scales. The Transnormer model additionally interleaves diagonal blocked sparse softmax attention.
- The three equivalent computation paradigms, i.e., parallel, recurrent, and chunkwise recurrent representations, are also important features of retention.
- Although the overall forms seem similar, the specific equations are still different. More importantly, the modeling and training behaviors are very different, based on the theoretical nature of the proposed method.
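On the three equivalent computation paradigms listed above, the less obvious one is the chunkwise recurrent form: parallel within each chunk, recurrent across chunks. A hedged numerical check (illustrative only, not the paper's kernel) that it matches the fully recurrent form:

```python
import numpy as np

rng = np.random.default_rng(1)
T, d, B = 8, 4, 4    # sequence length, head dim, chunk size (illustrative)
gamma = 0.9
Q, K, V = (rng.standard_normal((T, d)) for _ in range(3))

# Reference: fully recurrent retention
S = np.zeros((d, d))
ref = np.zeros((T, d))
for n in range(T):
    S = gamma * S + np.outer(K[n], V[n])
    ref[n] = Q[n] @ S

# Chunkwise: within-chunk parallel part plus a cross-chunk state contribution
D = np.array([[gamma ** (i - j) if i >= j else 0.0 for j in range(B)]
              for i in range(B)])
S = np.zeros((d, d))
out = np.zeros((T, d))
for c in range(0, T, B):
    q, k, v = Q[c:c+B], K[c:c+B], V[c:c+B]
    inner = (q @ k.T * D) @ v                                  # within-chunk
    cross = (q @ S) * gamma ** np.arange(1, B + 1)[:, None]    # past chunks
    out[c:c+B] = inner + cross
    # carry state forward: S <- gamma^B S + sum_j gamma^(B-1-j) K_j^T V_j
    decay = gamma ** (B - 1 - np.arange(B))
    S = gamma ** B * S + (k * decay[:, None]).T @ v

assert np.allclose(out, ref)   # chunkwise form reproduces the recurrent one
```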
We appreciate that you point out this issue. We are open to discussing these key elements and Transnormer in the paper.
---
>Q2: Lack of comparison with the baselines on open source pretraining data. All the training experiments are conducted on in-house data mixtures, which harms the reproducibility.
A2: The training corpora used in the paper are all publicly available. They can be easily downloaded online for academic purposes. We use the same training data across models for fair comparisons.
---
>Q3: The paper doesn't compare RetNet with other linear attention model (such as GLA, RWKV, Mamba) on downstream tasks with standard metrics instead of perplexity. Table 2 only include RetNet and Transformer.
A3:
- As shown in Table 1, we compare RetNet with Hyena, RWKV, Mamba, H3 on fine-grained language modeling evaluation and MMLU answer perplexity. The fine-grained language modeling evaluation correlates well with downstream tasks.
- As pointed out in [1][2], robustly evaluating accuracy metrics needs larger model sizes; otherwise, the accuracy metrics tend to be affected by the "emergence" issue. Transformer is still a quite strong architecture, so we compare with Transformer at the 6.7B model size in Table 2, rather than scaling up all the variants to 6.7B, for robust comparisons. The metric protocol is by design for more scientific evaluation.
We hope the above explanation clarifies the rationale behind our experiment designs.
[1] Are Emergent Abilities of Large Language Models a Mirage?
[2] Understanding Emergent Abilities of Language Models from the Loss Perspective
---
>Q4: The efficiency measurment of RetNet+ is absent.
A4: The efficiency measurement of RetNet+ can be obtained by interpolating the efficiency results of the hybrid blocks, depending on the ratio of the mixed blocks, which are provided in the main content.
---
>Q5: The evaluation on MMLU/Qasper is using perplexity but not the widely-used accuracy/F1 metric. The perplexity results don't necessarily mean that the model can make correct choices for the samples in MMLU, and has less guidance for the model's downstream performance.
A5: The metric protocol is by design for more scientific evaluation. As pointed out in [1][2], robustly evaluating accuracy metrics needs larger model sizes; otherwise, the accuracy metrics tend to be affected by the "emergence" issue. In comparison, we report accuracy numbers for scaling-up settings, such as Table 2 (i.e., scaling up size) and Table 5 (i.e., scaling up training tokens and size).
[1] Are Emergent Abilities of Large Language Models a Mirage?
[2] Understanding Emergent Abilities of Language Models from the Loss Perspective
---
>Q6: Missing citations: The authors should also cite [1] for the normalization after retention, and discuss the details of the triton implementation of RetNet and its difference from the implementation in the Flash Linear Attention [2] library.
A6: We can add them in the camera-ready version. | Summary: This paper presents Retentive Network (RetNet), a family of efficient models that incorporate exponential decay within a linear attention-like structure. RetNet shares similarities with state-space models and linearized attention, enabling both training parallelism and O(1) inference cost. Additionally, RetNet supports chunk-wise parallel computation for efficient long-sequence training. Experimental results demonstrate RetNet achieves performance comparable to Transformers and outperforms other efficient variants on language modeling and vision tasks.
Strengths: - The structure of RetNet is easy to understand and follow
- RetNet exhibits promising training and inference efficiency, and is able to scale up to 6B.
- Comprehensive evaluation on both language and vision tasks, highlighting its generalizability.
Weaknesses: - Some experiments could be improved
- Some claims may be misleading
- RetNet's performance lags behind Transformers at smaller model scales, suggesting it might be more demanding in terms of capacity and compute resources for optimal performance. This trade-off should be carefully considered and analyzed.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The inference results in Figure 5 start with 2048. What’s the inference speed for shorter sequences?
2. The claim that "None of the previous work can achieve strong performance and efficient inference at the same time compared with Transformers" is overly strong and potentially misleading. Recent advancements in efficient modeling, such as Mamba, have demonstrated better scaling properties than Transformers.
3. It would be great to include the training loss curve for both Transformer and RetNet.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I didn't see serious problems.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive review.
>Q1: The inference results in Figure 5 start with 2048. What's the inference speed for shorter sequences?
A1: RetNet's inference speed remains almost constant across lengths, and Transformers' speed can be extrapolated according to Fig 5. As shown in Figure 5(c), we also compare the inference latency of Transformer with length 1024 (yellow). It shows there is still an improvement for sequences shorter than 2048.
---
>Q2: The claim that "None of the previous work can achieve strong performance and efficient inference at the same time compared with Transformers" is overly strong and potentially misleading. Recent advancements in efficient modeling, such as Mamba, have demonstrated better scaling properties than Transformers.
A2: We can improve this part as suggested to take more recent advances into consideration.
---
>Q3: It would be great to include the training loss curve for both Transformer and RetNet.
A3: We saved the loss curves in Tensorboard. We can provide them for future research work. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Off-Policy Selection for Initiating Human-Centric Experimental Design | Accept (poster) | Summary: The paper presents the First-Glance Off-Policy Selection (FPS) framework, aimed at improving policy selection for human-centric systems (HCSs) like education and healthcare, by addressing participant heterogeneity. FPS groups participants with similar traits, augments each sub-group with trajectories generated by a variational auto-encoder (VAE), and selects the optimal policy based on an estimator of the policy selection criteria. The proposed method is tested in a real-world intelligent education (IE) system and demonstrates significant improvement in the learning outcomes by personalizing tutoring policies based on initial student states. The framework's effectiveness was also demonstrated in selecting optimal treatment policies for sepsis patients in a simulated healthcare environment.
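As a rough, heavily hedged sketch of the per-sub-group selection step the summary describes (the estimator, function names, and data shapes below are illustrative simplifications; FPS's actual selection criterion and VAE-based trajectory augmentation are more involved):

```python
import numpy as np

def is_estimate(trajs, target_policy, behavior_policy, gamma=1.0):
    """Trajectory-wise importance-sampling OPE over (state, action, reward) tuples."""
    values = []
    for traj in trajs:
        rho, value = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            rho *= target_policy(a, s) / behavior_policy(a, s)
            value += gamma ** t * r
        values.append(rho * value)
    return float(np.mean(values))

def select_policy_per_group(groups, policies, behavior_policy):
    """For each participant sub-group, keep the candidate policy whose
    off-policy estimate is highest."""
    return {
        name: max(policies, key=lambda pi: is_estimate(trajs, pi, behavior_policy))
        for name, trajs in groups.items()
    }

# Tiny demo: behavior is uniform over actions {0, 1}; each candidate policy
# deterministically prefers one action.
prefer_0 = lambda a, s: 1.0 if a == 0 else 0.0
prefer_1 = lambda a, s: 1.0 if a == 1 else 0.0
uniform = lambda a, s: 0.5
groups = {
    "K1": [[(None, 0, 1.0)]],   # sub-group where action 0 was rewarded
    "K2": [[(None, 1, 1.0)]],   # sub-group where action 1 was rewarded
}
chosen = select_policy_per_group(groups, [prefer_0, prefer_1], uniform)
```

In this toy setting, each sub-group ends up assigned the policy that matches its rewarded action, mirroring the idea of tailoring the selected policy to each sub-group's data.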
Strengths: - The paper introduces a new framework that selects policies for each new participant joining the cohort based solely on the initial state.
- The proposed method has been tested in a real-world IE system with 1,288 students and in a simulated healthcare environment.
- Although each component of the proposed framework has been previously studied, the framework itself demonstrates advantages over baseline methods concerning the addressed problems.
Weaknesses: - The proposed method is applicable only to problems with a finite number of policies, as policy selection is based on evaluating each candidate target policy separately.
- The real-world experiment was not a randomized controlled trial that directly compared the proposed method against baseline methods. Instead, in the IE system, a policy was randomly assigned to each student. The authors then tracked the outcomes for students in each subgroup who were assigned the policy recommended by each method.
Technical Quality: 3
Clarity: 3
Questions for Authors: - What is the error bar in Figures 1 and 2? Is it the standard deviation, standard error, or confidence interval? If it is the standard error, the difference between the baseline and the proposed method is small compared to the variation within each method.
- In Figure 1(b), most methods overestimate the reward with the proposed estimator. Could the authors explain the reason for this? Is there a systematic bias in the estimator?
- In Figure 2, FPS and the best possible baseline combinations perform exactly the same in subgroups K1 and K2. Is this because the two methods assign the same policy to every student in these first two subgroups?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your time and efforts on evaluating our work. Please find our point-by-point response below.
Q1. The proposed method is applicable only to problems with a finite number of policies, as policy selection is based on evaluating each candidate target policy separately.
R1. Good question. We focused on a common scenario when deploying RL policies to real human participants in practical human-centric systems, where all deployed policies have to be highly regulated and scrutinized by departmental committees enforcing the guidelines for human-centric experiments; as a result, the total number of deployed policies is generally limited [1-4].
Q2. The real-world experiment was not a randomized controlled trial that directly compared the proposed method against baseline methods. Instead, in the IE system, a policy was randomly assigned to each student. The authors then tracked the outcomes for students in each subgroup who were assigned the policy recommended by each method.
R2. Unfortunately, we had to follow the pre-defined guidelines agreed upon with a departmental committee, whereby each student had to be treated equally during the experiment (i.e., they either opted in for testing out all the methods or none). As a result, we were not able to run typical randomized controlled experiments. However, the chi-squared test was employed to check the relationship between policy assignment and sub-groups, and it showed that the policy assignments across sub-groups were balanced, with no significant relationship (p-value = 0.479), as described in Appendix A.2.
Q3. What is the error bar in Figures 1 and 2? Is it the standard deviation, standard error, or confidence interval? If it is the standard error, the difference between the baseline and the proposed method is small compared to the variation within each method.
R3. The error bar represents the standard error. Though the difference may not be large for the potentially high-performing sub-groups (e.g., $K_1$ & $K_2$), we observed that the baselines can even have a negative effect on some sub-groups ($K_3$ & $K_4$), as in Figure 2, which is undesired in human-centric experiments. In empirical human-centric scenarios, such as education, the behavior policy is generally highly regulated by department committees strictly following guidance for human-centric experiments, such that the target policy would not be dramatically opposed to the behavior policy, suggesting the underlying assumption that the divergence between behavior and target policies could be intrinsically bounded. We will definitely clarify this in the camera-ready version, if accepted.
Q4. In Figure 1(b), most methods overestimate the reward with the proposed estimator. Could the authors explain the reason for this? Is there a systematic bias in the estimator?
R4. This is probably due to the unobserved confounding existing in human-centric systems, which leads methods that assume sequential ignorability for the behavior policy to overestimate or underestimate; similar findings were observed in [5-6].
Q5. In Figure 2, FPS and the best possible baseline combinations perform exactly the same in subgroups K1 and K2. Is this because the two methods assign the same policy to every student in these first two subgroups?
R5. Both FPS and baselines selected the same policy for sub-groups $K_1$ and $K_2$.
We hope these answers provide some explanations to address your concerns and showcase that our work is solving a significant challenge in a satisfying manner. We are happy to answer any followup questions or hear any comments from you.
References
[1] Mandel et al. Offline policy evaluation across representations with applications to educational games. AAMAS 2014.
[2] Gao et al. Offline policy evaluation for learning-based deep brain stimulation controllers. International Conference on Cyber-Physical Systems 2022.
[3] Zhou et al. Hierarchical reinforcement learning for pedagogical policy induction. International Conference on Artificial Intelligence in Education 2019.
[4] Abdelshiheed et al. Leveraging deep reinforcement learning for metacognitive interventions across intelligent tutoring systems. International Conference on Artificial Intelligence in Education 2023.
[5] Namkoong, et al. Off-policy policy evaluation for sequential decisions under unobserved confounding. NeurIPS 2020.
[6] Fu et al. Benchmarks for Deep Off-Policy Evaluation. ICLR 2021.
---
Rebuttal Comment 1.1:
Title: Mid-point check-in
Comment: As we are stepping into the 2nd half of the discussion period, should the reviewer have any follow-ups, we will try our best to address them in time. If satisfied, we would greatly appreciate the reviewer to update the reviews/acknowledge our responses. We sincerely thank the reviewer again for the efforts devoted to the review process, allowing the work to be thoroughly evaluated and discussed.
Sincerely,
Authors of submission 2308 | Summary: This paper studies off-policy selection in healthcare settings where users are heterogeneous and in situations where new users can appear in the policy deployment phase. To deal with new participants, this paper proposes a two-stage evaluation procedure: (1) learning a partitioning function of users and (2) choosing different policies for each subgroup via OPE. The proposed method (sub-group partitioning) achieves better performance than random user partitioning.
Strengths: - **Reasonable approach**: Using similar features and trajectory augmentation sounds like a reasonable approach, given that the dataset is sparse and only initial states are available for some users.
- **Simple and easy-to-implement algorithm**: The proposed method is not too complicated, and it seems to be easily implementable in real-world applications.
- **Ablations are informative**: Ablations, especially FPS-noTA and FPS-P, are instructive in telling the benefits of TA and sub-group partitioning, respectively.
Weaknesses: - **Feasibility of sub-group partitioning**: Section 2.2 (Definition 2.3) states the objective function of sub-group partitioning, and it indicates that the partitioning process requires knowledge about V^{\pi} for every candidate policy. In my understanding, this procedure itself requires OPE. However, looking at Algorithm 1, it seems the algorithm determines partitioning before applying OPE. How is this sub-group partitioning actually conducted? If using OPE, I believe this partitioning procedure can have a high variance. If simple clustering based on the features of initial states is used, the partitioning may not align with the objective function.
- **Variance in the sub-group partitioning phase**: Related to the above point, how does this algorithm deal with the variance in the sub-group partitioning phase? Also, using silhouette scores is distance-based and does not consider the variance. It would be useful if there is a way to determine the number of clusters (M) in a data-driven way, taking both bias and variance into account.
- **Improvement over Keramati et al. (2022) seems incremental**: The paper states that the benefit of the proposed method over Keramati et al. (2022) is that the proposed method does not require full-trajectory information, and thus applicable when only initial states are accessible. I understand the practical advantages, however, the technical progress is not convincing, as the way the proposed method overcomes the challenges in sub-group partitioning *without* full trajectory is not well-described. Also, if Keramati et al. (2022) is a skyline, how does the performance of the proposed method differ from Keramati et al. (2022)? A more detailed discussion of comparing the proposed method with existing work would be appreciated.
Keramati et al. (2022): Identification of Subgroups With Similar Benefits in Off-Policy Policy Evaluation. Ramtin Keramati, Omer Gottesman, Leo Anthony Celi, Finale Doshi-Velez, Emma Brunskill. CHIL, 2022.
Technical Quality: 2
Clarity: 2
Questions for Authors: - How does the proposed method estimate the value during the sub-group partitioning phase?
- How does the proposed method deal with the variance in the data partitioning?
- If Keramati et al. (2022) is a skyline (with full knowledge about trajectory under the logging policy), how much can the proposed method achieve?
- This may be due to random seeds, but I wondered why FPS-P can be quite pessimistic while FPS and FPS-noTA are rather optimistic.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Ambiguity about the sub-group partitioning phase and light comparison with Keramati et al. (2022) (one of the most related work). See weaknesses for the details.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your time and efforts on evaluating our work. Please find our point-by-point response below.
Q1. How is this sub-group partitioning actually conducted?
R1. We used an off-the-shelf algorithm called Toeplitz inverse covariance-based clustering (TICC) [1], as described in Appendix A.3.5, to obtain the initial partitioning and jumpstart the subsequent iterative optimization toward objective (1). It may be our writing that caused this confusion, and we will clarify it in the camera-ready version, if accepted. Please see the response below for the concern over variance.
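To make the two-stage flow concrete, here is a minimal sketch of the second stage, selecting a policy per sub-group from per-trajectory OPE estimates. This is illustrative only and not the authors' TICC-based pipeline; the function name, the array shapes, and the assumption that per-trajectory value estimates are already computed are all hypothetical.

```python
import numpy as np

def per_group_selection(group_ids, traj_values):
    """Pick, for each sub-group, the candidate policy with the highest mean
    OPE estimate over that group's logged trajectories.

    group_ids   : (n,) int array, sub-group label per trajectory
    traj_values : (n, P) array, per-trajectory value estimate of each of the
                  P candidate policies (e.g., from importance sampling)
    Returns a dict {group label: index of selected policy}.
    """
    selection = {}
    for g in np.unique(group_ids):
        # Mean estimated value of every candidate policy within this group.
        group_means = traj_values[group_ids == g].mean(axis=0)
        selection[int(g)] = int(np.argmax(group_means))
    return selection
```

In this toy form, two sub-groups can end up with different selected policies precisely when their per-group mean estimates rank the candidates differently.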
Q2. Variance in the sub-group partitioning phase:
> Q2.1. How does this algorithm deal with the variance in the sub-group partitioning phase?
R2.1. We thank the reviewer for this thoughtful comment, which prompted us to carry out additional analyses that led to additional findings. Specifically, we alter the total number of sub-groups, $M \in \{4,5,6\}$, and re-run the sub-group partitioning. We observe that only a minimal percentage of students change groups across different $M$'s -- see the table below. This may not be surprising: in empirical human-centric scenarios such as education, policies are generally highly regulated and scrutinized by departmental committees strictly enforcing guidelines for human-centric experiments, so neither the behavior nor the target policies deployed at scale would greatly interfere with students' learning. This implies the underlying assumption that the divergence between behavior and target policies is intrinsically bounded. Such a finding is important and can potentially generalize to common human-centric environments; we plan to pursue this avenue in broader contexts, both empirically and theoretically, in the future.
| Changes of # of Sub-groups | 3->4 | 4->5 | 5->6 | 3->5 | 3->6 | 4->6 |
|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|
| Perc. of students changing sub-groups | 2% | 4% | 3% | 6% | 10% | 9% |
> Q.2.2. Using silhouette scores is distance-based and does not consider the variance. It would be useful if there is a way to determine the number of clusters (M) in a data-driven way, taking both bias and variance into account.
R2.2. Given that our work is the first to target and solve a practical challenge often encountered in OPS, we presented the framework in a straightforward manner, and the silhouette score has been broadly employed and exhibits good performance across human-related tasks [1-3]. However, we greatly appreciate this comment and agree that accounting for both bias and variance in partitioning would be an interesting topic to investigate in the future.
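For readers unfamiliar with the criterion, the silhouette coefficient referenced here can be computed directly. Below is a pure-NumPy sketch (an illustration under stated assumptions, not the paper's implementation); one would evaluate it for each candidate number of clusters $M$ and keep the maximizer.

```python
import numpy as np

def silhouette(X, labels):
    """Mean silhouette coefficient of a partition.
    For each point i, s(i) = (b_i - a_i) / max(a_i, b_i), where a_i is the
    mean distance from i to its own cluster (excluding i) and b_i is the
    smallest mean distance from i to any other cluster. Values near 1
    indicate compact, well-separated clusters."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    scores = []
    for i in range(n):
        own = labels == labels[i]
        if own.sum() < 2:  # singleton clusters score 0 by convention
            scores.append(0.0)
            continue
        a = D[i, own].sum() / (own.sum() - 1)  # D[i, i] = 0, so i is excluded
        b = min(D[i, labels == c].mean() for c in set(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))
```

As the reviewer notes, this criterion is purely distance-based; it scores cluster geometry and carries no notion of the variance of downstream value estimates.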
Q3. Improvement over Keramati et al. seems incremental:
> Q3.1. I understand the practical advantages, however, the technical progress is not convincing, as the way the proposed method overcomes the challenges in sub-group partitioning without full trajectory is not well-described.
R3.1. We respectfully disagree that the work is incremental over Keramati et al. Though the theoretical variance-bound analyses in both works may be grounded in [4-5], please note that the objective of our work is to identify, from a set of candidate policies, the policy that can possibly work best for each arriving individual in an HCS, **without access to any prior offline data or a full trajectory collected for that individual**, which addresses a common bottleneck in empirical human-participated studies. In contrast, Keramati et al. assumed that there exists a population-level (optimal) policy and then identified the sub-group that may benefit most from that policy, which is the opposite of our problem statement. Moreover, they required the entire trajectory of each individual to be given before sub-grouping, which would not be practical for real-world HCSs, as the initial state is the only observation available when an individual arrives at the HCS, right after which a group needs to be assigned (e.g., doctors need to sketch out a diagnosis/treatment plan soon after patients step into the triage room).
> Q3.2. how does the performance of the proposed method differ from Keramati et al.?
R3.2. As mentioned above, given the complicated real-world human-centric environments and the highly limited information on each incoming individual, it is hard to directly apply the method of Keramati et al. in our empirical experiments: the assumptions it requires would not be met, nor would trajectories beyond the initial state be available for its sub-grouping step.
Q4. Why FPS-P can be quite pessimistic while FPS and FPS-noTA are rather optimistic.
R4. There could be unobserved confounding in human-centric experiments, which leads methods that assume sequential ignorability of the behavior policy to underestimate or overestimate; similar findings were observed in [6].
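The estimator behavior described here can be illustrated with a minimal ordinary importance-sampling sketch (generic OPE material, not the paper's estimator); the docstring notes where the sequential-ignorability assumption enters.

```python
import numpy as np

def ordinary_is(behavior_probs, target_probs, returns):
    """Ordinary importance-sampling OPE: weight each trajectory's return by
    the product of per-step action-probability ratios. The estimate is
    unbiased only under sequential ignorability; if an unobserved
    confounder drives the logged actions, the recorded behavior
    propensities misstate the true ones and the estimator can
    systematically over- or underestimate.

    behavior_probs, target_probs : (n_traj, horizon) arrays of the
        probability each policy assigns to the logged action at each step
    returns : (n_traj,) array of observed trajectory returns
    """
    ratios = np.prod(target_probs / behavior_probs, axis=1)
    return float(np.mean(ratios * returns))
```

When the target policy equals the behavior policy, the ratios are all 1 and the estimate reduces to the empirical mean return, a useful sanity check.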
We hope our responses sufficiently addressed your concerns and clarified that our work is solving a significant practical challenge effectively. We are happy to respond to any follow-ups you may have.
References
[1] Toeplitz inverse covariance-based clustering of multivariate time series data. KDD 2017.
[2] Domain generalization via model-agnostic learning of semantic features. NeurIPS 2019.
[3] A Hierarchical Clustering algorithm based on Silhouette Index for cancer subtype discovery from genomic data. Neural computing and applications 2020.
[4] Learning bounds for importance weighting. NeurIPS 2010.
[5] Policy optimization via importance sampling. NeurIPS 2018
[6] Off-policy policy evaluation for sequential decisions under unobserved confounding. NeurIPS 2020.
---
Rebuttal Comment 1.1:
Title: Mid-point check-in
Comment: As we are stepping into the 2nd half of the discussion period, should the reviewer have any follow-ups, we will try our best to address them in time. If satisfied, we would greatly appreciate the reviewer to update the reviews/acknowledge our responses. We sincerely thank the reviewer again for the efforts devoted to the review process, allowing the work to be thoroughly evaluated and discussed.
Sincerely,
Authors of submission 2308 | Summary: This paper introduces First-Glance Off-Policy Selection (FPS), a novel approach to off-policy selection (OPS) in human-centric systems (HCSs) such as healthcare and education, where the heterogeneity among participants requires personalized interventions.
FPS addresses this by segmenting participants into sub-groups based on similar traits and applying tailored OPS criteria to each sub-group. This method is evaluated in two real-world applications: intelligent tutoring systems and sepsis treatment in healthcare. The results demonstrate significant improvements in both learning outcomes and in-hospital care, highlighting FPS's ability to personalize policy selection and enhance system performance.
Strengths: - **Novel Approach**: The introduction of the First-Glance Off-Policy Selection (FPS) framework is a significant innovation. By systematically addressing participant heterogeneity through sub-group segmentation, FPS offers a fresh perspective on OPS in human-centric systems (HCSs).
- **New Problem Formulation**: The paper tackles the unique challenge of selecting policies for new participants without prior offline data. This problem formulation is distinct from existing OPS/OPE frameworks, which typically assume homogeneity among agents.
- **Methodological Rigor**: The paper demonstrates a high level of methodological rigor. The use of variational auto-encoding (VAE) for trajectory augmentation and the development of an unbiased value function estimator with bounded variance reflect thorough and robust algorithm design.
- **Comprehensive Experiments**: The experimental evaluation is extensive, covering both real-world educational systems and healthcare applications. This diversity in testing scenarios strengthens the validity of the results. The paper provides a detailed analysis of the results, including comparisons with various baselines and ablation studies. This thoroughness ensures that the findings are well-supported and credible.
Also, the paper is well-organized, with clear sections that guide the reader through the problem formulation, methodology, experiments, and results. Each part builds logically on the previous one.
Weaknesses: The paper is generally well-written. I will combine the weaknesses and questions into one section.
1. Assumption of Independent Initial State Distributions
The FPS framework assumes that the initial state distributions for each participant are independent and can be uniformly sampled from the offline dataset. This assumption may not hold true in real-world scenarios where participants’ initial states can be influenced by various contextual factors and past interactions. The independence assumption may oversimplify the complexity of human-centric systems and may lead to suboptimal policy selections. The authors may consider addressing the potential dependencies in initial states.
2. Lack of Consideration for non-stationary state transition
FPS focuses on the initial state for policy selection without considering longitudinal data that captures the progression of participant states over time. This, in the AI community's language, is a non-stationary state transition issue faced by meta-RL. Is it possible that the state transition for each patient is also independent of each other? In other words, chances are that the state transition for each patient $p_i$ is sampled from a distribution. Would this FPS work in such a case?
Technical Quality: 4
Clarity: 3
Questions for Authors: Questions are combined in the weaknesses section. Please see above.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The paper does not thoroughly address FPS's scalability and applicability in large-scale, real-world settings, especially in healthcare. I suggest the authors clarify the scalability and emphasise the importance of clinical guidance. One to two sentences will be enough.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and efforts on evaluating our work, and your positive comments that the paper is making an important impact. Please find our point-by-point response below.
Q1. Assumption of Independent Initial State Distributions The FPS framework assumes that the initial state distributions for each participant are independent and can be uniformly sampled from the offline dataset. This assumption may not hold true in real-world scenarios where participants’ initial states can be influenced by various contextual factors and past interactions. The independence assumption may oversimplify the complexity of human-centric systems and may lead to suboptimal policy selections. The authors may consider addressing the potential dependencies in initial states.
R1. Thank you for the insightful comment. The independence assumption we made was inspired by the use of uninformative priors in Bayesian methods, as we do not have much prior knowledge of sub-grouping's potential outcomes before the experiment starts. We agree with the reviewer that investigating potential confounders in initial states would be important -- we plan to do so in separate future work, as capturing them in practical scenarios can be very challenging; e.g., a few works in the literature attempt to address that issue alone [1-3].
Q2. Lack of Consideration for non-stationary state transition FPS focuses on the initial state for policy selection without considering longitudinal data that captures the progression of participant states over time. This, in the AI community's language, is a non-stationary state transition issue faced by meta-RL. Is it possible that the state transition for each patient is also independent of each other? In other words, chances are that the state transition for each patient 𝑝𝑖 is sampled from a distribution. Would this FPS work in such a case?
R2. Great point. Initially we did not consider the progression/transition of participant states, since in the overall scenario our work considers, each individual needs to be assigned a group upon arrival, where the initial state is the only observation available. However, we agree that our framework can be extended to also consider each participant's follow-up visits once they become available. If the state transitions are also independent of each other, a simple variation of our method may work: one could re-run the partitioning algorithm at each new visit, which would not violate the assumptions needed by our work. Further, in the future we plan to explore temporally dynamic sub-group partitioning, where the agent considers all of a participant's historical states upon each visit; this would make the problem setup more challenging, as it falls more under a POMDP schema. We greatly appreciate both of these comments, as they helped us shape future work along this avenue and led us to consider more challenging, interesting, and realistic setups. We also hope that our work, if accepted, could inspire more researchers to recognize and emphasize the empirical challenges of deploying RL/OPS in real-world systems.
Q3. The paper does not thoroughly address FPS's scalability and applicability in large-scale, real-world settings, especially in healthcare. I suggest the authors clarify the scalability and emphasise the importance of clinical guidance. One to two sentences will be enough.
R3. We sincerely appreciate your suggestion. We will add the following discussion in the camera-ready, if accepted, i.e., *"Compared to IE systems, HCSs in healthcare would be considered even more high-stakes, which may further limit the options (i.e., policies) available to facilitate sub-grouping experiments due to stricter clinical experimental guidelines. However, FPS has demonstrated its capabilities over a real-world experiment that involved >1,200 participants with years of follow-ups, showing its efficacy and scalability toward more challenging systems and larger cohorts as in healthcare, as the assumptions needed by FPS do not change fundamentally across these two systems. Moreover, potential underlying confounders may exist across patients' initial states in healthcare, and it is also important to consider inputs from healthcare professionals during sub-grouping. As a result, one may further extend our framework in such a direction, allowing it to function better in the healthcare domain."*
We hope these answers provide some explanations to address your concerns and showcase that our work is solving a significant challenge in a satisfying manner. We are happy to answer any followup questions or hear any comments from you.
References
[1] Namkoong et al. Off-policy policy evaluation for sequential decisions under unobserved confounding. NeurIPS 2020.
[2] Xu et al. An instrumental variable approach to confounded off-policy evaluation. ICML 2023.
[3] Tennenholtz et al. Off-policy evaluation in partially observable environments. AAAI 2020.
---
Rebuttal Comment 1.1:
Title: Mid-point check-in
Comment: As we are stepping into the 2nd half of the discussion period, should the reviewer have any follow-ups, we will try our best to address them in time. If satisfied, we would greatly appreciate the reviewer to update the reviews/acknowledge our responses. We sincerely thank the reviewer again for the efforts devoted to the review process, allowing the work to be thoroughly evaluated and discussed.
Sincerely,
Authors of submission 2308 | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
On the Expressive Power of Tree-Structured Probabilistic Circuits | Accept (poster) | Summary: The paper studies how the expressive power of probabilistic circuits (PCs) whose underlying structure is a tree compares to that of PCs whose structure is a DAG. This is motivated by the fact that algorithms for learning PCs usually construct trees, and thus may fail to take advantage of the potentially more expressive nature of DAGs. The present work shows that DAGs are strictly more powerful in the sense that there are polynomials that can be expressed as DAG-structured PCs of polynomial size but require super-polynomial size as tree PCs. On the other hand, the gap is not exponential, since DAG-structured PCs can be transformed into tree PCs of sub-exponential size, as shown in the paper.
Strengths: The paper is well-motivated and, in general, written well.
PCs are an active area of research.
The results are novel to my knowledge and give theoretical contributions on both lower and upper bounds.
Weaknesses: For the lower bound, depth constraint $o(\log n)$ needs to be imposed on the tree-structure. As such, the lower bound appears to have very few practical implications, since we rarely need the tree PC to be that shallow. Admittedly, the authors briefly discuss this limitation in the paper.
Sometimes, the presentation could be more polished: For example, symbols $v$, $w$, $u$, $v'$, $v_1$, $v_2$, and $t$ are all used for denoting nodes. Could the notation be somehow made more consistent? Some other examples are listed in the next question.
Technical Quality: 3
Clarity: 3
Questions for Authors: In Sec. 1.1, you state that "[the] restriction on the graph of using nodes with at most two children ... is not necessary for PCs", suggesting that nodes in PCs can have multiple children in your setting. On the other hand, you define decomposability for product nodes with exactly two children. It would be good to clarify in the paper which of these is the case.
In Sec. 1.2, you briefly mention the notion of expressive efficiency. Perhaps here (or somewhere else), it would be relevant to cite the paper [34] on probabilistic generating circuits (PGCs), since they subsume PCs for Boolean random variables. Also related is their follow-up paper (Broadrick et al., 2024; full reference below), where they show that allowing PCs to have negative weights in sum nodes makes them as expressive as PGCs.
Lemma 3.2 seems like folklore to me, since it is well-known how circuits of arbitrary fan-in can be transformed into, say, fan-in 2.
The linebreak in Lemma 3.6 can be confusing to the reader.
Algorithm 1: Definition of $m_2$ is unclear, since $w$ is not specified.
Oliver Broadrick, Honghua Zhang, Guy Van den Broeck:
Polynomial Semantics of Tractable Probabilistic Circuits. CoRR abs/2402.09085 (2024)
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Authors discuss the limitations briefly but sufficiently.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for acknowledging our contributions, and we deeply appreciate for pointing out the cluttered notation and the limitations of the conditional lower bound. The following is our responses for the questions.
> In Sec. 1.1, you state that "[the] restriction on the graph of using nodes with at most two children ... is not necessary for PCs", suggesting that nodes in PCs can have multiple children in your setting. On the other hand, you define decomposability for product nodes with exactly two children. It would be good to clarify in the paper which of these is the case.
In Definition 2.1, the definition of decomposability is intended for ANY pair $v_1$, $v_2$ of a product node's children, rather than implying that they are its only children. We will emphasize this in our final draft.
> In Sec. 1.2, you briefly mention the notion of expressive efficiency. Perhaps here (or somewhere else), it would be relevant to cite the paper [34] on probabilistic generating circuits (PGCs), since they subsume PCs for Boolean random variables. Also related is their follow-up paper (Broadrick et al., 2024; full reference below), where they show that allowing PCs to have negative weights in sum nodes makes them as expressive as PGCs.
Thanks for pointing out the references. We will properly acknowledge their contributions in our final draft.
> Lemma 3.2 seems like folklore to me, since it is well-known how circuits of arbitrary fan-in can be transformed into, say, fan-in 2.
We agree. We added this result for the sake of completeness, because our proof for the upper bound result uses previous results in circuit complexity based on circuits with fan-in two. We will add a clarification stating its relation with existing results in our final draft.
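For completeness, the folklore reduction can be sketched in a few lines (an illustration, not the paper's construction); `binarize` is a hypothetical helper that rewrites an n-ary node as a chain of fan-in-2 nodes, and `evaluate` checks that the computed value is unchanged.

```python
def binarize(op, children):
    """Rewrite an n-ary circuit node as a left-deep chain of fan-in-2 nodes.
    Nodes are (op, left, right) tuples; leaves are plain values. The node
    count grows only linearly, which is why the reduction is folklore; for
    a decomposable product node, the children's scopes are pairwise
    disjoint, so every intermediate product stays decomposable."""
    node = children[0]
    for child in children[1:]:
        node = (op, node, child)
    return node

def evaluate(node):
    """Evaluate a binary circuit of '+' and '*' nodes over numeric leaves."""
    if not isinstance(node, tuple):
        return node
    op, left, right = node
    lv, rv = evaluate(left), evaluate(right)
    return lv + rv if op == "+" else lv * rv
```

Note that the chain construction increases the depth of the rewritten node, which is one reason depth-sensitive results (as in the paper's lower bound) must be stated carefully.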
> The linebreak in Lemma 3.6 can be confusing to the reader.
Thanks for pointing out this. We will update the typeset to fix this issue.
> Algorithm 1: Definition of $m_2$ is unclear, since $w$ is not specified.
Thanks for pointing out this. The definition of $m_2$ is indeed dependent on the choice of a pair $(u, w)$. We will move the line “Fix $m_2$ … ” down to ensure clarity.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I'll keep my score. | Summary: This paper considers the structure expressive power of Probabilistic Circuits (PCs) from a theoretical perspective. Specifically, this paper explores how PCs with directed acyclic graph (DAG) structures and those with tree-like structures contrast each other in terms of circuit size and expressive power. The contributions include theoretical derivations for an upper bound and a conditional lower bound for DAG-structured and tree-structured PCs. First, the authors show that there exists a sub-exponentially-sized tree-structured PC to represent the same network polynomial as a DAG-structured PC. Second, they show that a tree-structured PC must have a super-polynomial size to represent the same network polynomial as a DAG-structured PC given a restriction on the tree depth.
Strengths: - This paper is well-organized and logically coherent.
- This paper provides novel theoretical insights into a relatively under-explored topic (DAG-structured PCs).
- The theoretical results and algorithmic methods are well-presented.
Weaknesses: - While the results are interesting, I am a bit skeptical about the applicability of this work in structure learning of PCs. In practice, challenges in PC structure learning often arise from issues such as defining appropriate learning objectives, addressing overfitting, and improving the computational efficiency of learning algorithms. The findings of this paper appear to be of little help in addressing these issues.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Could the authors provide some new insights on how this work can help further research in PC structure learning?
- When taking into account the computational costs for learning DAG-structured and tree-structured PCs, how does one manage the trade-offs between these costs, the expressiveness of the circuit, and the resulting circuit size?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have discussed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are sincerely grateful to the reviewer for acknowledging the novelty of our work and its presentation, and we deeply appreciate the insightful questions.
> Could the authors provide some new insights on how this work can help further research in PC structure learning?
We agree with the reviewer that our main focus of this paper is theoretical, but we hope that our results on clarifying the size separation between tree-structured and DAG-structured PCs can inspire future algorithmic innovation on directly learning DAG-structured or deep tree-structured PCs from data. As we state in the second paragraph of Section 1 Introduction, existing research on structure learning algorithms (e.g. references 1, 8, 12, 16, 21, 26) for PCs either only learns shallow tree-structured PCs or DAG-structured PCs by using tree-structured PCs as intermediates. Neither line of work can fully exploit the size advantages of DAG-structured PCs. Hence, from this perspective, our work suggests and encourages future research into designing algorithms to directly learn DAG-structured PCs from data, and this should also help in terms of generalization of PCs, due to the size reduction.
> When taking into account the computational costs for learning DAG-structured and tree-structured PCs, how does one manage the trade-offs between these costs, the expressiveness of the circuit, and the resulting circuit size?
Balancing the trade-offs among those criteria indeed requires careful consideration. In particular, there are two significant points to consider.
- Tree-structured and DAG-structured PCs are equally expressive, in the sense that there does not exist a distribution that can be represented by one but not the other. Therefore, expressiveness of a circuit’s output usually does not raise concerns.
- There indeed exists a potential tradeoff between the size of PCs and the complexity of learning algorithms. While to the best of our knowledge there is no existing structure learning algorithm that can directly learn a DAG-structured PC from data, in principle one could control the trade-off based on the underlying complexity of the distribution to be learned.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I am happy to keep my score. | Summary: This paper studies the expressive power (i.e., expressive efficiency) of tree-structured probabilistic circuits (PCs). Specifically, this paper shows that:
- Any decomposable PC over n random variables can be transformed into a tree-structured PC of depth O(log n) with n^{O(log n)} nodes.
- A super-polynomial separation between tree-structured PCs of depth o(log n) and decomposable PCs. Specifically, the authors show the existence of a network polynomial that can be computed by a poly(n)-size decomposable PC but the size of any tree-structured PC of depth o(log n) is lower-bounded by n^{\omega(1)}.
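As a concrete aid for the notion of a network polynomial computed by a decomposable PC, here is a small evaluation sketch (illustrative only; the tuple encoding below is hypothetical and not from the paper).

```python
def pc_value(node, x):
    """Evaluate a decomposable PC given as nested tuples and return the
    network polynomial's value at assignment x (a tuple of 0/1 values).
    ('leaf', i, b) is an indicator [x_i == b]; ('*', c1, c2) is a product
    node over disjoint variable sets; ('+', [(w, c), ...]) is a weighted
    sum node. Sharing sub-tuples between parents yields a DAG rather than
    a tree."""
    kind = node[0]
    if kind == "leaf":
        _, var, val = node
        return 1.0 if x[var] == val else 0.0
    if kind == "*":
        return pc_value(node[1], x) * pc_value(node[2], x)
    return sum(w * pc_value(child, x) for w, child in node[1])
```

With normalized sum weights and decomposable products, the values over all assignments sum to 1, so the circuit represents a probability distribution.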
Strengths: - Though the proof of the upper bound relies heavily on the existing depth-reduction algorithm proposed by Valiant et al. [32] and Raz and Yehudayoff [25], it is significant for the study of PCs to show that this depth-reduction algorithm preserves decomposability.
- The lower bound result is already very close to showing an unconditional super-polynomial separation between tree-structured PCs and decomposable PCs and eventually leading to a tight bound.
Weaknesses: n/a
Technical Quality: 4
Clarity: 3
Questions for Authors: n/a
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are sincerely grateful to the reviewer for appreciating our contributions. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Learning-Augmented Dynamic Submodular Maximization | Accept (poster) | Summary: Authors consider submodular maximization with cardinality constraint
in dynamic setting: Algorithm sees a series of $n$ insertions and deletions
of elements and has to maintain a subset of active elements
maximizing given submodular function.
Best known algorithms for this problem by Lattanzi et al., Monemizadeh,
and Banihashem et al., achieve approximation ratio 0.5-eps
with amortized update time polylogarithmic in $n$ and $k$, where
$n$ is the length of the input stream and $k$ denotes the cardinality
constraint.
In this paper, authors propose an algorithm which receives predictions
about insertion and deletion time of each element beforehand.
This allows the algorithm to precompute an update strategy assuming that the
predictions are correct.
Given a parameter $w$, authors define the prediction error $\eta$ as the number
of elements whose predicted insertion or deletion time is not within
$w$ time steps from the real one.
Their main result is an algorithm with approximation ratio 0.5-eps
whose amortized update time is polylogarithmic in $\eta$, $w$, and $k$.
With $\eta = o(n)$, their algorithm achieves an asymptotic improvement
over the existing algorithms which do not use predictions.
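The prediction error defined above is simple to state concretely. The following sketch (illustrative names, and a simplified model where every element has both a predicted and an actual insertion/deletion time) counts the elements whose prediction is off by more than $w$:

```python
def prediction_error(predicted, actual, w):
    # predicted/actual: element -> (insertion_time, deletion_time)
    # An element contributes to eta if either its insertion or its
    # deletion time is predicted more than w steps away from the true one.
    eta = 0
    for e, (p_ins, p_del) in predicted.items():
        a_ins, a_del = actual[e]
        if abs(p_ins - a_ins) > w or abs(p_del - a_del) > w:
            eta += 1
    return eta
```

With exactly correct predictions (or a large enough window $w$), the error is zero and the algorithm's update time drops accordingly.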
Strengths: * result seems strong and requires introduction of new ideas as well
as proving new properties about existing algorithms
Weaknesses: * the predictions used are quite verbose:
having all predictions ahead of time is quite a restrictive requirement.
Authors pose utilization of predictions which come one by one as an open
problem.
Technical Quality: 3
Clarity: 3
Questions for Authors: * where does dependence on epsilon appear in your bounds in Theorem?
* is the algorithm of Lattanzi the only existing algorithm which is
threshold-based, or is this property more common among existing techniques?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Clearly explained.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments!
- *"where does dependence on epsilon appear in your bounds in Theorem?*"
Our approximation is 1/2 - epsilon. In addition, the query complexity per update during the streaming phase and the query complexity during the precomputation phase both have a polynomial dependence on epsilon.
- *"is the algorithm of Lattanzi the only existing algorithm which is threshold-based, or is this property more common among existing techniques?*"
Algorithms that use threshold-based ideas are very common in submodular maximization. One of the first such algorithms is by Badanidiyuru and Vondrák in their 2013 paper “Fast Algorithms for Maximizing Submodular Functions”.
---
Rebuttal Comment 1.1:
Comment: thank you for your answers. | Summary: The authors studied monotone submodular maximization under a cardinality constraint in a dynamic model where predictions of insertions and deletions are given. In submodular maximization, a ground set of elements is given together with a function that assigns a value to any subset of these elements. A function is submodular if adding an element to a smaller set contributes at least as much value as adding it to a larger set. A function is monotone if its value never decreases as elements are added. In the dynamic model, elements are inserted and removed, and only elements that are inserted but not yet deleted can be selected. The error $\eta$ is defined as the number of elements whose actual insertion or deletion time differs from the prediction by at least $w$. The goal is to find, after each update, a subset of size at most k from the current elements that maximizes the value of the monotone submodular function while minimizing the number of query calls.
Previous works on dynamic monotone submodular maximization under cardinality constraints (without predictions) achieve a 1/2 approximation factor using O(polylog(n)) or O(k⋅polylog(k)) query calls. There are also works on dynamic monotone submodular maximization with predictions under matroid constraints, which are a generalization of cardinality constraints, achieving a ∼0.32 approximation factor. In this paper, the authors design an algorithm for dynamic monotone submodular maximization under cardinality constraints using predictions that achieves a 1/2 approximation factor with only $O(\mathrm{poly}(\log\eta, \log w, \log k))$ query calls per update. When the predictions are poor, the algorithm’s complexity is as bad as that of the algorithm without predictions, which has O(polylog(n)) query complexity, since $\eta$ would become $\Theta(n)$.
They use precomputation to compute a solution based on predictions for each time and leverage previous works on the dynamic model and delete-robustness model to update their solution according to real insertions and deletions. They also run experiments, comparing their query complexity with the dynamic algorithm. The results show that when prediction errors are small, their algorithm requires significantly fewer query calls. However, when the predictions are poor, the query complexity is comparable to that of the dynamic algorithm.
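As a concrete illustration of the monotone submodular functions discussed in this summary, coverage functions are a standard textbook example (this is generic background, not the paper's construction):

```python
def coverage(sets, chosen):
    # f(S) = number of ground elements covered by the union of the chosen sets.
    covered = set()
    for i in chosen:
        covered |= sets[i]
    return len(covered)

sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5}}
# Monotone: adding a set never decreases the coverage value.
# Submodular (diminishing returns): set 1 adds more on top of {0}
# than on top of the larger set {0, 2}.
gain_small = coverage(sets, {0, 1}) - coverage(sets, {0})        # 1
gain_large = coverage(sets, {0, 1, 2}) - coverage(sets, {0, 2})  # 0
```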
Strengths: Recently, many learning-augmented algorithms have been developed for dynamic and online models. They are interesting because, in real applications, we sometimes have an idea of how elements will change, raising the question of how we can improve our algorithms by leveraging these predictions. In this work, the authors improved the query complexity, eliminating the dependency on n and reducing the dependency on k to only logarithmically. It is interesting how query complexity can be improved using predictions, and it would be even more interesting if the approximation factor could also be improved.
Weaknesses: There were different algorithms for this problem without using predictions: one using O(polylog(n)) (the first algorithm) and the other using O(k⋅polylog(k)) (the second algorithm) query calls. Since the authors' algorithm has polylog(k) in its query complexity, if k is as large as $\Theta(n^\epsilon)$ for a constant $\epsilon$, their algorithm is not better than the first algorithm. If k is small, the second algorithm is not that bad. Still, in the second case, the authors' algorithm is slightly better, but we should note that it also requires O(n⋅polylog(n)) precomputation.
## Comments for the authors:
- Line 37: cited [30] twice
- Some sentences are too long and make it hard to read. For example, look at the sentences from lines 60 to 63, and the next sentence.
Technical Quality: 3
Clarity: 4
Questions for Authors: I see that you mentioned Chen and Peng [10] proved that a dynamic algorithm for this problem needs poly(n) query complexity, but does their approach apply when we have good predictions? I understand this question may be difficult and don’t expect the authors to answer it, but if they can answer affirmatively, it makes their result much stronger.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments!
- *"Line 37: cited [30] twice"*
- *"Some sentences are too long and make it hard to read. For example, look at the sentences from lines 60 to 63, and the next sentence."*
Thank you, we have fixed the double citation and shortened these sentences to “A dynamic algorithm with predictions consists of two phases. During the precomputation phase at t = 0, the algorithm uses the predictions to perform queries before the start of the stream. During the streaming phase at time steps t > 0, the algorithm performs queries, and uses the precomputations, to maintain a good solution with respect to the true stream. In this model, there is a trivial algorithm that achieves a constant update time when the predictions are exactly correct and an O(u) update time when the predictions are arbitrarily wrong. Here, u is the update time of an arbitrary algorithm A for the problem without predictions.”
- *"I see that you mentioned Chen and Peng [10] proved that a dynamic algorithm for this problem needs poly(n) query complexity, but does their approach apply when we have good predictions? I understand this question may be difficult and don’t expect the authors to answer it, but if they can answer affirmatively, it makes their result much stronger."*
The result from Chen and Peng that shows that poly(n) query complexity per update is necessary to achieve an approximation better than 1/2 does NOT hold when we have perfect predictions. In particular, it is possible to precompute solutions S_t that achieve a 1-1/e approximation for each time step during the precomputation phase and then use these solutions during the streaming phase while having 0 queries per update when the predictions are exactly correct. It is an interesting question for future work whether a 1-1/e approximation with an update time faster than poly(n) can be achieved not only when the predictions are exactly correct but also when the prediction error is small.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I don’t have any further concerns. | Summary: The paper studies the monotone dynamic submodular maximization problem under a cardinality constraint $k$ in the framework of algorithms with predictions. The authors consider a prediction model where the insert and delete times of the elements are predicted at time 0, and for any window size $w$, the prediction error $\eta$ is defined to be the number of elements whose actual insertion or deletion times differ from the predicted insertion or deletion times by more than $w$. Their main result is an algorithm that produces a $(1/2-\epsilon)$-approximate solution at each time step with expected amortized update time $O(\text{poly}(\log \eta, \log w, \log k))$ and preprocessing time $\tilde{O}(n)$.
Strengths: * The problem studied is important and interesting to the NeurIPS community
* The result is strong, and the prediction model and the notion of error are natural. The algorithm is robust in the sense that it can handle an arbitrary number of elements with low prediction error (that fall within the $w$ window of their actual insertion/deletion time) and a reasonable number of elements with high error (whose predictions are off by more than $w$, counted by $\eta$). Moreover, their algorithm can handle elements that are not predicted to arrive but actually show up in the input sequence (these elements contribute to $\eta$). The performance of the algorithm also degrades gracefully as the prediction error increases.
* While the subject is inherently complicated as there are lots of parameters involved, the authors did a good job keeping everything clear and precise so that it is relatively easy to follow. The notation is also good. Moreover, the warm-up algorithm helps the reader understand some of the main ideas of the final algorithm.
Weaknesses: * In the case where $k = o(\log n)$ and the prediction error $\eta = \Omega(n)$, the update time of the algorithm is worse than the update time $O(k \cdot \text{polylog}(k))$ achieved in reference [7]. Thus, in some cases, the algorithm performs worse than a worst-case algorithm without predictions.
Technical Quality: 3
Clarity: 4
Questions for Authors: * Why is not the algorithm with update time $O(k \cdot \text{polylog}(k))$ presented in [7] compared to your learning-augmented algorithm theoretically or empirically?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors state their theoretical results formally, describing all assumptions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments!
- *“In the case where k=o(logn) and the prediction error η=Ω(n), the update time of the algorithm is worse than the update time O(k⋅polylog(k)) achieved in reference [7]. Thus, in some cases, the algorithm performs worse than a worst-case algorithm without predictions.” and “Why is not the algorithm with update time O(k⋅polylog(k)) presented in [7] compared to your learning-augmented algorithm theoretically or empirically?”*
The reviewer is correct that there are cases where our update time is worse than the update time O(k⋅polylog(k)). Our main goal was to achieve a worst-case update time that is at most the O(poly(log n, log k)) update time achieved by Lattanzi et al. We believe that achieving a worst-case update time that is at most the O(k⋅polylog(k)) update time achieved by Banihashem et al., while also achieving an improved update time when the predictions are accurate, is an interesting open question. At the time when we performed the experiments, the paper by Banihashem et al. was not yet online. We plan on adding their algorithm as a benchmark for our experiments in the next version of the paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. | Summary: This paper studied the dynamic submodular maximization problem with predictions. The goal of the problem is to maintain a high-quality solution in the presence of insertions and deletions. The main contribution is leveraging predictions, in the form of the pattern of insertions and deletions, to accelerate the update time of dynamic submodular maximization algorithms.
Strengths: 1. The paper is well-written and easy to read. The proposed solution and analysis, especially the connection to the robust submodular maximization problem, are technically sound.
2. The results improve upon existing ones when the prediction is accurate.
Weaknesses: 1. The paper does not provide enough details on how predictions are obtained in real life. For example, it lacks specific information about machine learning algorithms that could potentially be used to make the predictions. This is concerning because the current notation of prediction error is defined in a worst-case manner. If there exists one prediction with poor accuracy, it might dramatically hurt the results. To fully benefit from their results, a very robust and stable prediction algorithm is required.
2. The tightness of their results is unclear. As mentioned in the introduction, if the prediction is 100% accurate, there exists a trivial algorithm that requires constant update time. However, even with zero prediction error, their bound does not reduce to a constant.
3. A more natural benefit of predictions is achieving an enhanced approximation ratio. While the authors mention existing studies along this direction, it is worth exploring whether predictions can improve the approximation ratio, using the same update time as algorithms without predictions.
4. Generally, incorporating predictions often sacrifices worst-case performance, which is especially true if the prediction is inaccurate. It would be beneficial for the authors to comment on this, specifically addressing to what extent bad predictions might hurt the update time of their algorithms.
5. The last comment is more of a clarification question instead of a comment. In the dynamic setting, why not simply use the state-of-the-art offline algorithm, such as the classic greedy algorithm, to solve the maximization problem in each round? I understand that this might incur higher update time, but if we consider a special case of the dynamic setting where there is only one round, it seems that the update time cannot be lower than the state-of-the-art offline algorithm. Is there any assumption regarding the length of the time horizon, for example, the number of rounds must be at least O(n)?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weakness section.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments!
- *“If there exists one prediction with poor accuracy, it might dramatically hurt the results.”*
We believe that there might have been a misunderstanding about the prediction error. The prediction error is the number of elements whose predicted insertion and deletion times were not sufficiently accurate. In particular, if the predicted insertion and deletion times of one element are completely wrong, then this only increases the prediction error eta by one. Thus, it is not the case that “if there exists one prediction with poor accuracy, it might dramatically hurt the results.”
- *“The paper does not provide enough details on how predictions are obtained in real life.”*
Regarding how predictions could be obtained, consider the product recommendation application where a platform only wants to display items that are in stock. Platforms often have accurate predictions about when an item will go from in-stock to out-of-stock (due to current inventory and rate of purchase) or from out-of-stock to in-stock (due to inventory information provided by seller).
- *"The tightness of their results is unclear. As mentioned in the introduction, if the prediction is 100% accurate, there exists a trivial algorithm that requires constant update time. However, even with zero prediction error, their bound does not reduce to a constant."*
The reviewer is correct that our update bound is not constant even when the prediction error is zero. With the following simple change, our algorithm achieves a constant update time when the predictions are exactly correct, while also maintaining its current guarantees: (1) as additional precomputations, also compute a predicted solution S_t for each time t assuming the predictions are exactly correct, (2) during the streaming phase, as long as the predictions are exactly correct, return the precomputed predicted solution S_t. At the first time step where the predictions are no longer exactly correct, switch to our main algorithm in the paper.
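The two-phase modification described here can be sketched as follows (the interface names are illustrative assumptions, and `fallback_solve` stands in for the paper's main prediction-robust algorithm):

```python
def stream_with_predictions(true_stream, predicted_stream,
                            predicted_solutions, fallback_solve):
    # Serve the precomputed solutions S_t while the true stream matches
    # the predictions exactly (0 queries per update); at the first
    # mismatch, switch to the robust algorithm for the rest of the stream.
    matched = True
    for t, update in enumerate(true_stream):
        if matched and (t >= len(predicted_stream)
                        or update != predicted_stream[t]):
            matched = False
        if matched:
            yield predicted_solutions[t]
        else:
            yield fallback_solve(true_stream[:t + 1])
```

When predictions are exactly correct this returns only precomputed solutions, matching the constant-update-time claim above.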
- *"A more natural benefit of predictions is achieving an enhanced approximation ratio. While the authors mention existing studies along this direction, it is worth exploring whether predictions can improve the approximation ratio, using the same update time as algorithms without predictions."*
We agree that using predictions to improve the approximation is also an interesting direction. We note that the best potential improvement we can hope to achieve with a polynomial-time algorithm is to go from 1/2 to 1-1/e, which is the best approximation achievable in the offline setting.
- *"Generally, incorporating predictions often sacrifices worst-case performance, which is especially true if the prediction is inaccurate. It would be beneficial for the authors to comment on this, specifically addressing to what extent bad predictions might hurt the update time of their algorithms."*
As mentioned in lines 86-87, even when the prediction error is arbitrarily large, the update time of our algorithm asymptotically matches the O(poly(log n, log k)) amortized expected query complexity from Lattanzi et al. Thus, asymptotically, incorporating predictions does not cause our algorithm to sacrifice worst-case performance in comparison to this previous work.
- *"The last comment is more of a clarification question instead of a comment. In the dynamic setting, why not simply use the state-of-the-art offline algorithm, such as the classic greedy algorithm, to solve the maximization problem in each round? I understand that this might incur higher update time, but if we consider a special case of the dynamic setting where there is only one round, it seems that the update time cannot be lower than the state-of-the-art offline algorithm. Is there any assumption regarding the length of the time horizon, for example, the number of rounds must be at least O(n)?"*
There is no assumption regarding the length of the time horizon. As mentioned by the reviewer, simply using offline greedy in each round would lead to slower update time. We note that the notation n in dynamic submodular maximization refers to the length of the stream (and not the size of the ground set). We also note that, since it is assumed that at time t=0 there are no active elements, the total number of elements that can be active at any time t is at most t. If there is only one time step (which is what we believe the reviewer means by “only one round”), then there is at most one element that was inserted, and the maximization problem over one element is trivial.
We believe that we have addressed all the reviewer’s concerns. If there remains any concern, we would be happy to answer those during the reviewers-authors discussion phase.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer L8zN,
Thank you again for your helpful suggestions. Since the author-reviewer discussion period is almost over, we wanted to see if our response addressed all your concerns. We would be happy to answer any follow-up questions you might have. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Toward Efficient Inference for Mixture of Experts | Accept (poster) | Summary: The authors propose three techniques to speed up inference of mixtures of experts: 1) dynamic gating: during the all-to-all exchange process, the authors enable sending different number of tokens to each expert, which requires sorting and sending an extra message about the number of tokens; 2) expert buffering optimization: a last-in-first-out scheme to offload experts to CPU memory; and 3) load balancing optimization: greedily selecting experts with the lowest expected load.
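Of the three techniques summarized above, the expert-buffering idea is the easiest to illustrate in isolation. Below is a toy sketch of a fixed-capacity GPU-side expert cache with last-in-first-out eviction; all names and the exact policy details are illustrative, not the paper's implementation:

```python
class ExpertBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.on_gpu = []  # stack of expert ids; most recently loaded is last

    def fetch(self, expert_id):
        # Returns (expert_id, transferred): transferred is True when the
        # expert had to be copied from CPU memory into the GPU buffer.
        if expert_id in self.on_gpu:
            return expert_id, False       # hit: no CPU-GPU transfer needed
        if len(self.on_gpu) >= self.capacity:
            self.on_gpu.pop()             # LIFO eviction back to CPU memory
        self.on_gpu.append(expert_id)
        return expert_id, True
```

The point of such a buffer is to keep only a subset of experts resident on the GPU, trading occasional CPU-GPU transfers for a smaller memory footprint.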
Strengths: 1. The authors show consistent speed up over competing methods.
2. The authors show detailed latency and memory analysis in their appendix.
Weaknesses: 1. The techniques are not very substantial. The authors propose a collection of techniques that speed up inference, but the techniques are not thematic. As a result, I feel this paper may be much better suited for systems/industry tracks, where it would be significantly more relevant.
2. The authors do not provide performance analysis (everything is about throughput and latency). From my understanding (which may be wrong), dynamic gating and load balancing should affect performance, and I would be curious to know how it affects that.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. The authors show analysis for language modeling and machine translation. How about other tasks, e.g., summarization?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Limitations are sparsely discussed throughout the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the review and suggestions. The reviewer raised several questions which we address in order:
- **Is this paper suitable for NeurIPS?**
We note that per the [call for papers](https://neurips.cc/Conferences/2024/CallForPapers), NeurIPS infrastructure track calls for submissions related to libraries, improved implementation and scalability, and distributed solutions, and we have submitted our paper to the infrastructure track. We believe our improved MoE implementation is a perfect match for this track.
- **On performance impact.**
Dynamic gating and load balancing will have no negative impact on the accuracy, perplexity, or BLEU score of the model. See common response 2 for a more detailed explanation.
- **On summarization performance.**
We note that summarization is increasingly performed by language models via zero-shot prompting (e.g. [1, 2]). Therefore, we believe experiments on summarization would be similar to those for language models.
[1] Wu, Jeff, et al. "Recursively summarizing books with human feedback." arXiv preprint arXiv:2109.10862 (2021).
[2] Adams, Griffin, et al. "From sparse to dense: GPT-4 summarization with chain of density prompting." arXiv preprint arXiv:2309.04269 (2023).
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. After carefully reading the reviews and responses, I plan to increase my score to 6. | Summary: This paper addresses the challenges of efficient inference for Mixture-of-Experts (MoE) models. The authors identify key inefficiencies in MoE inference, particularly in language modeling and machine translation tasks. They propose three main optimization techniques: dynamic gating, expert balancing, and expert load balancing. The authors implement and evaluate these optimizations, demonstrating improvements in inference throughput and memory usage compared to existing methods.
Strengths: - The paper provides a detailed characterization of MoE workloads, identifying specific sources of inefficiency.
- The proposed techniques address key challenges in MoE inference.
- The optimizations show substantial gains in throughput.
Weaknesses: - The paper lacks evaluation of existing representative MoE configurations, such as DeepSeek-MoE. Compared to traditional top-1 and top-2 gating, DeepSeek-MoE selects a larger number of experts, which could potentially impact inference performance. This omission limits the comprehensiveness of the study's comparative analysis.
- The proposed method demonstrates throughput advantages over MegaBlock only with large batch sizes. For smaller batch sizes, MegaBlock remains superior. This limitation somewhat restricts the applicability of the proposed method across different usage scenarios.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Is there a comprehensive experimental comparison available? Figure 3 in the main text only compares the throughput of different methods. However, the latency and memory usage of these methods are not reported. A more complete analysis including these metrics would provide a fuller picture of the proposed method's performance relative to existing approaches.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: - The placement of most experimental figures in the appendix creates difficulties for readers. While this is understandable given the length constraints of conference submissions, it does impact the paper's readability. If additional space becomes available, the authors should consider including some of the key results in the main text. This would improve the flow of the paper and allow readers to more easily grasp the main findings without constantly referring to the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the very detailed review and suggestions. Here are our responses to each point raised by the reviewer:
- **Including evaluation on representative configurations such as DeepSeek-MoE (2401.06066).**
We appreciate the reviewer’s suggestion to include a discussion on the DeepSeek-MoE architecture. We will add citations and a detailed discussion in a later version of this paper. It is difficult to have a fair comparison with DeepSeek-MoE, because (1) the released DeepSeek-MoE implementation assumes single-device deployment and does not support expert parallelism, and (2) the released DeepSeek-MoE implementation is based on the Huggingface TGI framework, whereas our work is based on NVIDIA Megatron, and the custom kernels involved made it hard for us to perform an apples-to-apples comparison.
We believe that our optimizations can also be applied to the DeepSeek-MoE architecture. Compared to traditional MoE Transformer architecture discussed in the paper, DeepSeek-MoE incorporates two additional strategies – (1) fine-grained expert separation and (2) the idea of shared experts. Below, we discuss the interaction of the two strategies and our optimizations.
1. Our optimizations are relevant for DeepSeek-MoE because, even when the MoE layer activates multiple experts (e.g., six in 16B model), many of MoE’s inefficiencies remain. Expert sparsity remains a problem because the total number of experts is large, especially due to DeepSeek-MoE’s approach to fine-grained expert separation. In this setting, our optimizations for expert buffering and load balancing will reduce memory use and latency for multi-device inference.
2. Using shared experts is tangential and our optimizations could be extended to support them. For example, expert buffering would lock any shared experts into the cache and prevent their eviction from GPU memory to CPU memory.
- **On limitation of proposed optimization on small batch sizes compared to Megablock.**
Please check the common response 1 for a detailed response and discussion.
- **On detailed results for memory and latency.**
Detailed memory usage for each model and optimization combination can be found in Figure 11, Sec. D.2 of the Appendix. In our experiments, the sequence length for each task was fixed, making throughput effectively the inverse of mean latency, so we omitted a separate figure for it. We will provide a detailed table of the measured mean latency in future revisions.
- **On readability and figure placing.**
We appreciate your understanding that the figures were moved to the appendix due to space constraints. We agree that the flow of the paper can be improved by relocating key figures back to the main text. We will prioritize moving more results to the main text as space permits in future versions.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed reply. I'll keep my original score. | Summary: This paper dives deep into the Mixture of Experts architecture, trying to identify its weaknesses and inefficiencies while coming up with novel solutions to improve the architecture in terms of token throughput, memory use, and load balance. The authors find that the static gating function that assigns tokens to each expert contributes most to the high latency and high memory footprint of MoE models. To this end, the authors come up with dynamic gating, which improves the token throughput and reduces memory usage. Additionally, the authors add other optimizations such as expert buffering for better usage of GPU cores and load balancing that takes into account the difference in token distribution at train and inference time. The authors demonstrate the efficacy of their method on Language Modeling and Machine Translation tasks.
Strengths: 1. I believe the paper is expertly written and provides excellent motivation for coming up with methods for making inference of MoE models efficient. I especially enjoyed reading Section 4 wherein each and every design choice is explained in detail.
2. The optimisations introduced in the paper are quite novel and lead to insane gains in throughput and memory usage. These optimisations are simple and can be easily used in practice for serving large MoE models. The results are pretty strong as well in terms of speedups gained.
Weaknesses: 1. While there are super detailed experiments on throughput and memory usage and I understand that those are the main talking points of the paper, I would have appreciated if there were some details about how the perplexity or BLEU scores were impacted by adding these optimisations. Some detailed insights into performance which we care about would have been great.
2. The authors have not mentioned limitations of their work anywhere.
Technical Quality: 4
Clarity: 3
Questions for Authors: Check weaknesses.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The authors do not have a limitations section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are encouraged to hear that the reviewer found our work to be thorough and our method effective. Here are our responses to each point raised by the reviewer:
- **On details about how the perplexity or BLEU scores were impacted by adding these optimizations.**
Our optimizations will not negatively impact the perplexity or BLEU scores of the proposed model. Please check our common response 2 for a detailed explanation.
- **On creating a separate limitation section.**
Due to the space limit, we chose not to create a separate limitations section, but we have mentioned limitations of our method in the main text as well as the Appendix. Specifically, limitations of our method include:
- Dynamic gating may suffer from kernel launching costs, resulting in slightly higher latencies compared to Megablock when the batch size is small. We discuss this on lines 273-277, and a more detailed discussion can be found in App. D.1.
- Expert buffering trades latency for a smaller memory footprint: additional CPU-GPU communication is necessary for offloading the memory. We mitigate this effect by designing a buffering strategy based on the LLM activation pattern and applying load balancing. Discussion can be found on lines 311-313.
- We discussed other failed approaches we tried in Appendix Section E.
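As a minimal illustrative sketch of the expert buffering idea (our own assumption about one possible policy, not the system's actual implementation), the following keeps a fixed number of experts resident on the "GPU" and fetches the rest from "CPU" memory on demand, evicting the least recently used resident expert. The class and method names are hypothetical:

```python
# Illustrative sketch (hypothetical policy): cache a bounded number of
# experts "on GPU", fetching the rest from host memory on demand with
# least-recently-used (LRU) eviction.
from collections import OrderedDict

class ExpertBuffer:
    def __init__(self, cpu_experts, gpu_slots):
        self.cpu = cpu_experts          # expert_id -> weights (host copy)
        self.gpu = OrderedDict()        # resident experts, LRU-ordered
        self.slots = gpu_slots          # how many experts fit on the GPU

    def fetch(self, expert_id):
        if expert_id in self.gpu:
            self.gpu.move_to_end(expert_id)        # cache hit: refresh recency
        else:
            if len(self.gpu) >= self.slots:
                self.gpu.popitem(last=False)       # evict least recently used
            self.gpu[expert_id] = self.cpu[expert_id]  # CPU -> GPU copy
        return self.gpu[expert_id]

buf = ExpertBuffer({i: f"weights_{i}" for i in range(8)}, gpu_slots=2)
buf.fetch(0); buf.fetch(1); buf.fetch(0); buf.fetch(2)
# expert 1 was the least recently used resident expert, so it was evicted
```

In the real system, the buffering strategy is driven by the observed LLM activation pattern rather than a plain LRU rule; this sketch only conveys the latency-for-memory trade mentioned above.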
We hope that these responses help to clarify our points!
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. I will maintain my score of 7. | Summary: This paper analyzes the behavior of standard MoE Transformer workloads and points out the bottlenecks in inference latency and memory usage. It then introduces a Dynamic Gating policy, instead of static-size computation, to improve the efficiency of the gating operation. It also proposes Expert Buffering, which offloads inactive experts to the CPU, and Load Balancing, which distributes experts and data in a more balanced way, to cooperate with the Dynamic Gating policy and further improve efficiency. Experiments on LM and MT demonstrate the good performance of the proposed method.
Strengths: 1. The authors conduct a very detailed, systematic, and sound analysis of the bottlenecks in MoE and identify the modules that affect efficiency, which provides strong motivation for the proposed method.
2. In view of the inefficiency of the gating operation, the authors propose a dynamic gating policy, along with expert buffering and load balancing, to systematically improve efficiency.
3. The experiments are convincing to address the motivation and support the methods.
Weaknesses: 1. In Section 4, line 155, the notations S, C, E, D lack explanation when they first appear. Although I can guess their meaning from the surrounding context, it took me time to understand them.
2. In Figure 3, it shows that when the batch size is less than 32, Dynamic Gating performs worse than Megablock. During inference, especially online serving, I suspect many requests have batch size = 1; would it be better to use Megablock in that case?
Technical Quality: 4
Clarity: 3
Questions for Authors: N/A
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your enthusiastic and encouraging review of our work. Below are our responses to each point and question raised in the review:
- **The meaning of notation S, C, E and D in Section 4.**
We appreciate the reviewer pointing out that these notations are not clearly defined when they appear. Here, S represents the sequence length, C represents the capacity factor, E represents the total number of experts, and D represents the dimension of each token. In a future revision, we will add definitions of these notations to both Section 4 and Figure 2 to enhance clarity.
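A minimal sketch of how these quantities typically relate in capacity-based MoE gating (illustrative only; the formula and the concrete values below are common conventions, not taken from the paper):

```python
# Illustrative sketch: how S, C, E, D typically interact in a
# capacity-based MoE gating step (not the paper's code).
import math

def expert_capacity(S: int, C: float, E: int) -> int:
    """Tokens each expert can accept: the capacity factor C times the
    even share S/E, rounded up."""
    return math.ceil(C * S / E)

S, C, E, D = 2048, 1.25, 16, 1024   # sequence length, capacity factor,
                                     # number of experts, token dimension
cap = expert_capacity(S, C, E)       # tokens per expert under static gating
# With static gating, each expert's buffer is a fixed (cap, D) tensor,
# regardless of how many tokens are actually routed to it.
buffer_shape = (cap, D)
```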
- **Dynamic Gating performs worse than Megablock when the batch size is less than 32.**
Please check our common response 1 for more details. | Rebuttal 1:
Rebuttal: We would like to thank reviewers for providing us with valuable feedback.
We have taken note of the concerns raised by each reviewer and addressed them in detail. Here, we first provide responses to the most commonly shared questions, followed by a detailed response to each reviewer's concerns in the rebuttal.
- **Dynamic Gating performs worse than Megablock when the batch size is less than 32.** (BGdS, JXaG):
While Dynamic Gating performs worse than Megablock when the batch size is less than 32, we argue that such cases are actually rare during inference, especially online serving. Performing inference with a small batch size will limit throughput due to the lower computational intensity of the workload, which is demonstrated in Fig. 4. Therefore, to improve resource utilization, request serving will be separated into multiple steps, and during each step, requests will be dynamically **re-organized into large batches** (i.e., Continuous Batching), as proposed by [1]. This technique has been adopted by recent LLM serving systems, such as vLLM [2] and TGI [3]. In a production system, multiple requests arrive at the node with short intervals between their arrival times, creating numerous opportunities for continuous batching. Therefore, we argue that the **performance on larger batch sizes is more important** for online serving, and our technique outperforms Megablock in such a scenario. We also note that a detailed discussion on the reason for the performance trend is provided in Appendix D.1.
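The continuous batching idea can be sketched as follows (a deliberately simplified illustration; `serve_step` and `max_batch` are hypothetical names, not part of any serving system's API):

```python
# Illustrative sketch of continuous batching: at each serving step,
# whatever requests are waiting are regrouped into one large batch,
# instead of running each request at batch size 1.
from collections import deque

def serve_step(queue: deque, max_batch: int):
    batch = []
    while queue and len(batch) < max_batch:
        batch.append(queue.popleft())
    return batch                 # forwarded through the model together

pending = deque(range(100))      # 100 requests arrived close in time
first = serve_step(pending, max_batch=32)   # one large batch of 32
```

Because arriving requests are continuously merged this way, the effective batch size at each step stays large, which is the regime where dynamic gating outperforms Megablock.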
- **Impact of our optimizations towards the performance of the transformer model, in terms of perplexity, BLEU scores, or other metric of interest.** (dmgJ, JXaG, t1yX)
When expert capacity (the number of tokens accepted by an expert) is large, our optimized model is **mathematically equivalent** and as accurate as the baseline. When expert capacity is small, our optimized model may be more accurate (because it never drops tokens) than the baseline (which can drop tokens). Note that in baseline studies, evaluation is performed with large expert capacity and tokens are not dropped.
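To illustrate why a model that never drops tokens can only match or exceed the static-capacity baseline, here is a simplified routing sketch (our own illustration, not the paper's code): static gating truncates each expert's token list at a fixed capacity, while dynamic gating sizes buffers to the actual assignment, so no token is lost.

```python
# Illustrative sketch: static gating drops overflow tokens at a fixed
# capacity; dynamic gating keeps every token (capacity=None).
from collections import defaultdict

def route(assignments, capacity=None):
    buckets = defaultdict(list)
    dropped = []
    for tok, exp in enumerate(assignments):
        if capacity is not None and len(buckets[exp]) >= capacity:
            dropped.append(tok)          # static gating: overflow is lost
        else:
            buckets[exp].append(tok)
    return buckets, dropped

assignments = [0, 0, 0, 1, 0, 1]          # expert chosen per token
static, lost = route(assignments, capacity=2)   # tokens 2 and 4 dropped
dynamic, none_lost = route(assignments)         # no capacity cap: no drops
```

With a large enough capacity the static path also drops nothing, which is why the two are mathematically equivalent in that regime.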
[1] Yu, Gyeong-In, et al. "Orca: A distributed serving system for Transformer-Based generative models." 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI 22). 2022.
[2] Kwon, Woosuk, et al. "Efficient memory management for large language model serving with paged attention." Proceedings of the 29th Symposium on Operating Systems Principles. 2023.
[3] https://huggingface.co/docs/text-generation-inference/index | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
DI-MaskDINO: A Joint Object Detection and Instance Segmentation Model | Accept (poster) | Summary: This work investigates the detection-segmentation imbalance issue in MaskDINO. It proposes DI-MaskDINO model with the residual double-selection mechanism to alleviate the imbalance. The framework mainly includes De-Imbalance and Balance-Aware Tokens Optimization. Experiments prove the effectiveness.
Strengths: 1. The finding of detection and segmentation imbalance at the beginning of MaskDINO is interesting.
2. Experiments prove the effectiveness on various benchmarks.
3. Overall, the whole framework is clear and easy to follow.
Weaknesses: 1. The motivation of De-imbalance module requires more verification. The authors claim "The token interaction is the key point to make sure that the secondly-selected tokens are beneficial for detection" in L169. But there are no experiments or theory to prove this claim or design. This makes the reviewer confused why the token interaction can play such a role. It's better to add some experiments or heatmap visualization.
2. The Balance-Aware Tokens Optimization module contains several components. It's essential to give experimental analysis of each design.
3. This work is built on MaskDINO, it is important to generalize this proposed manner to other decoder-based methods. It can further validate the generality of the proposed approach.
4. Experimental results on COCO test set should be provided for fair comparisons.
Technical Quality: 3
Clarity: 3
Questions for Authors: My main concern is the motivation and analysis of the designed component. Please refer to the Weakness section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations are discussed in the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are greatly encouraged by your positive comments, including "the finding of detection and segmentation imbalance at the beginning of MaskDINO is interesting" and "the whole framework is clear and easy to follow".
**[W1]** Thank you very much for the insightful observation. To more clearly explain the effect of token interaction, we first analyze it from a theoretical perspective and then demonstrate its effect experimentally.
(1) Theoretical analysis
Each token actually corresponds to a patch (remarkably smaller than an object in most cases) in the image [52], and the bounding box of an object is regressed by integrating the multiple patches (belonging to the same object) that have global patch-to-patch spatial relations; thus the detection task genuinely needs to learn the interaction relations between patches. In contrast, the dense all-pixel supervision for the segmentation task mainly focuses on local pixel-level similarity with the GT mask [25], hence the segmentation task does not depend on patch-to-patch relations as strongly as the detection task does. Via token interaction, different tokens representing the patches (belonging to the same object) can interact with each other to learn global geometric, contextual, and semantic patch-to-patch relations, benefiting the perception of object bounding boxes. Therefore, executing token interaction makes the De-Imbalance module more beneficial for detection.
[25] Feng Li, Hao Zhang, Huaizhe Xu, Shilong Liu, Lei Zhang, Lionel M Ni, and Heung-Yeung Shum. Mask dino: Towards a unified transformer-based framework for object detection and segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3041–3050, 2023.
[52] Sixiao Zheng, Jiachen Lu, Hengshuang Zhao, Xiatian Zhu, Zekun Luo, Yabiao Wang, Yanwei Fu, Jianfeng Feng, Tao Xiang, Philip HS Torr, et al. Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6881–6890, 2021.
(2) Experimental analysis
To validate the effectiveness of token interaction, we test the performance of our model under two configurations: w/o token interaction and w/ token interaction. The experiment results are reported in Tab.R4 of the rebuttal PDF file. The configuration with token interaction yields higher performance, demonstrating the effect of token interaction.
**[W2]** Thanks for your suggestion. In DI-MaskDINO, De-Imbalance (DI) module is the critical design to alleviate the detection-segmentation imbalance, generating balance-aware query $Q\_{bal}$ in Eq.4. Balance-Aware Tokens Optimization (BATO) module serves as an auxiliary design to utilize $Q\_{bal}$ to optimize the initial feature token $T\_{i}$ and finally compute balance-aware feature tokens $T\_{bal}$. The effect of BATO is validated by the diagnostic experiments on main modules, and the results are reported in Tab.4 of the original version. We can observe that, when BATO is enabled, the performance obtains significant improvement, demonstrating the effectiveness of BATO.
The main component of BATO is the Guiding Token Generation (GTG), which uses $Q\_{bal}$ to generate the overall guiding token $T\_{g}$. $T\_{g}$ is then used to optimize $T\_{i}$ via Multi-Head Cross-Attention. Multi-Head Cross-Attention is a fundamental module that cannot be disabled (i.e., if it were disabled, $T\_{i}$ would be fed directly to the transformer decoder as MaskDINO does). Therefore, in the diagnostic experiments on the BATO module, we only evaluate the effect of GTG by testing the performance of the model under the configurations with/without GTG. The experiment results are reported in Tab.6 of the original version.
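As a rough illustration of this cross-attention step (single-head and NumPy-based for brevity; the model uses multi-head cross-attention, and the shapes and the query/key roles below are our assumptions, not the paper's specification):

```python
# Minimal single-head cross-attention sketch (illustrative only).
# The initial feature tokens T_i attend to guiding tokens derived from
# Q_bal, producing optimized tokens T_bal of the same shape as T_i.
import numpy as np

def cross_attention(queries, keys_values):
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)        # (Nq, Nk)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ keys_values                         # (Nq, d)

rng = np.random.default_rng(0)
T_i = rng.standard_normal((16, 8))   # initial feature tokens (hypothetical sizes)
T_g = rng.standard_normal((4, 8))    # guiding tokens generated from Q_bal
T_bal = cross_attention(T_i, T_g)    # balance-aware feature tokens
```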
**[W3]** Thank you very much for this comment. Based on the structural characteristics of existing transformer-based joint object detection and instance segmentation models, our proposed De-Imbalance module (the core of our model) can be readily applied to other models. Transformer-based joint object detection and instance segmentation models share a similar architecture, consisting of a backbone, transformer encoder, transformer decoder, and prediction heads. The De-Imbalance module takes the output of the transformer encoder as input, and its output is then taken as the input of the transformer decoder. Therefore, adding the De-Imbalance module to an existing model will not hurt its main structure. Similarly, our proposed Balance-Aware Tokens Optimization module can also be directly added between the De-Imbalance module and the transformer decoder. Therefore, our proposed modules can be easily applied to other transformer-based joint object detection and instance segmentation models. We would like to carefully explain why generality is not prioritized in the original version: though the generality of a model is important, the primary goal of our work is to make the model achieve SOTA results.
**[W4]** Thank you very much for the suggestion. Accordingly, we compare DI-MaskDINO with MaskDINO on COCO test-dev. The results are summarized in Tab.R5 of the rebuttal PDF file, which demonstrates the effectiveness and robustness of DI-MaskDINO. We note that the test-dev evaluation server is only available for the object detection task, since testing requires uploading a JSON prediction results file to the evaluation server provided by the COCO dataset website (https://cocodataset.org/?spm=a2c6h.12873639.article-detail.137.34fa30e89NSdTI#upload), and the website does not provide an evaluation server for the instance segmentation task. Therefore, the performance of instance segmentation is not reported.
---
Rebuttal Comment 1.1:
Comment: Thanks for the author's rebuttal. I have read the rebuttal and other reviews. It resolves most of my concerns, so I'd like to change my rating to Borderline accept. | Summary: This paper focuses on the detection-segmentation imbalance issue and proposes the DI module with the residual double-selection mechanism to alleviate the imbalance; moreover, Balance-Aware Tokens Optimization (BATO) is proposed to guide the optimization of the initial feature tokens. The proposed method, termed DI-MaskDINO, achieves SOTA results in both object detection and instance segmentation.
Strengths: 1. The authors claim that the performance of object detection lags behind that of instance segmentation from the beginning transformer decoder is interesting;
2. The proposed method achieves SOTA results in both object detection and instance segmentation;
3. The paper is clear, and the experimental results are detailed.
Weaknesses: The technological innovation is somewhat limited.
Technical Quality: 2
Clarity: 3
Questions for Authors: The authors implement De-Imbalance module by stacking several self-attention and Multi-Head Cross-Attention. However, it could be seen as one-layer transformer encoder / decoder. The current phenomenon cannot clarify whether the performance improvement stems primarily from mitigating the detection-segmentation imbalance issue or from the increase in parameters/layer. The authors need to provide evidence to support this claim. For example, they could show that a 6-layer DI-MaskDINO outperforms a 7-layer MaskDINO, etc.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your positive comments on our work, such as "the proposed method achieves SOTA results" and "the paper is clear and the experimental results are detailed". There are two concerns regarding technological innovation and whether the performance improvement stems primarily from mitigating the detection-segmentation imbalance issue. Our responses to the two concerns are as follows.
**[W1Q1]** "The technological innovation is somewhat limited. De-Imbalance module could be seen as one-layer transformer encoder / decoder."
**Response:** From the technological view, De-Imbalance module is composed of several self-attention and cross-attention. However, we carefully explain that self-attention and cross-attention are basic units to build our De-Imbalance structure. Therefore, **the technological innovations of De-Imbalance module mainly lie in its explanation, simplicity, and effectiveness, which contributes new inspiring thought to the community to push forward the study of fundamental object detection and instance segmentation tasks.** The detailed analyses are as follows.
(1) De-Imbalance module is explainable and effective
In transformer-based joint object detection and instance segmentation models, the two tasks are closely-related and mutually-affected, and the mutual effects could be positive or negative. The imbalance brings the negative impact on model performance. As shown in Fig.1(a) of the original version, at the beginning layer, object bounding boxes do not fit well with object mask, which will hinder the cooperation of the two tasks, leading to the negative effect. In contrast, the interaction under a balanced state makes the two tasks mutually-beneficial.
However, most joint object detection and instance segmentation models (e.g., SOLQ and SOIT) do not consider how the interaction between the two tasks affects model performance. We focus on the imbalance issue through studying the SOTA joint object detection and instance segmentation model MaskDINO, and we propose the De-Imbalance module to alleviate the imbalance. It is noted that the detection-segmentation imbalance means that the performance of object detection lags behind that of instance segmentation at the beginning layer of the decoder. Therefore, the core idea of the De-Imbalance module is to strengthen the detection performance to alleviate the imbalance at the beginning layer of the decoder. By narrowing the imbalance between the two tasks at the beginning layer, the two tasks can rapidly reach a mutually beneficial state, which contributes to improving performance. In addition, a large number of experiments (e.g., Tab.3-5 in the original version) have proven the effectiveness of the De-Imbalance module.
In summary, the De-Imbalance module contributes to handling the long-standing, naturally existing imbalance issue between object detection and instance segmentation, which is commonly ignored in previous works. Although the De-Imbalance module is composed of basic network units (e.g., self-attention), it has been proven to be explainable and effective.
(2) The simplicity of our model
To analyze the simplicity of our model, we compare the parameters of DI-MaskDINO configured with different numbers of decoder layers with those of MaskDINO, and the results are summarized in Tab.R3 of the rebuttal PDF file. It is worth noting that our model has only 6 decoder layers (52.3M), while MaskDINO contains 9 decoder layers (52.1M). Our model achieves higher performance at the cost of only 0.2M additional parameters. We can also observe that our model with 3 decoder layers achieves comparable performance to MaskDINO with 9 decoder layers (i.e., 45.8 vs. 45.7 on $AP^{box}$ and 41.3 vs. 41.4 on $AP^{mask}$), saving 4.5M parameters at the same time.
In addition, alleviating the imbalance contributes to accelerating convergence. As shown in Tab.1 of the original version, DI-MaskDINO presents a significant advantage under the training condition of epoch = 12, which suggests that our model converges faster. This is because alleviating the imbalance issue promotes collaboration between the two tasks, allowing them to rapidly reach a mutually beneficial state.
**[Q1]** "Whether the performance improvement stems primarily from mitigating the detection-segmentation imbalance issue or from the increase in parameters/layer."
**Response:** Thank you very much for the insightful comment. According to your comment, we conduct the experiments to validate that the performance improvement does not stem from the increase in parameters/layer. In detail, we test the performance of DI-MaskDINO configured with 3, 6, and 9 decoder layers and compute the parameters of corresponding configurations, respectively. The results are reported in Tab.R3 of the rebuttal PDF file, indicating that the performance improvement does not stem from the increase in parameters or decoder layers. Specifically, our model with 3 decoder layers has achieved comparable performance with MaskDINO, and 4.5M (52.1-47.6) parameters are reduced at the same time. In contrast, the performance improvement stems from mitigating the detection-segmentation imbalance issue, which could be evidenced by the experiment results in Tab.4 of the original version. When enabling De-Imbalance module, the overall performance significantly increases (i.e., from 45.6 to 46.4 on $AP^{box}$ and from 41.2 to 42.1 on $AP^{mask}$).
---
Rebuttal 2:
Comment: The author's rebuttal has solved most of my concern, and I'd like to change my rating to Borderline accept. | Summary: This paper initially observes that in the current state-of-the-art model MaskDINO, the performance of object detection lags behind instance segmentation at the initial layer of the transformer decoder, resulting in a performance imbalance phenomenon.
To explore whether this "performance imbalance issue" is a factor that restricts the effectiveness of the detector, the authors propose the DI-MaskDINO model, which introduces two key components: the De-Imbalance (DI) module and the Balance-Aware Tokens Optimization (BATO) module, to alleviate the performance imbalance between detection and segmentation tasks.
Strengths: - The paper identifies a novel issue: the performance imbalance that exists between object detection and instance segmentation, a problem that is less discussed in existing literature. This work is of significant importance for advancing research in the fields of object detection and instance segmentation, particularly by providing new perspectives and solutions for dealing with performance imbalance issues.
- The proposed model demonstrates certain improvements over the baseline on the COCO and BDD100K benchmark tests, which to some extent indicates that the "performance imbalance issue" is a factor that restricts the effectiveness of detectors, and the logic of the paper is coherent.
Weaknesses: - The paper lacks an in-depth analysis of the "performance imbalance problem"; it is supported only by experimental observations, which may be coincidental.
Technical Quality: 3
Clarity: 3
Questions for Authors: - In Experiment Table 1, why were AP_S, AP_M, and AP_L not tested separately for MaskDINO at epochs 12 and 24?
- In Experiment Table 2, why is there no comparison on the BDD100K validation set for the SwinL backbone?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - Although the paper has proposed the "performance imbalance issue," it only conducted experimental observations on MaskDINO. Can other detectors also improve model performance by addressing the "performance imbalance issue"? The paper does not generalize this issue to a more general level, which would make it more universally applicable.
- The paper's discussion of the "performance imbalance issue" is only focused on the initial layer of the transformer decoder and does not explore whether other layers of the decoder may also have the "performance imbalance issue."
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **[W1]** We are greatly encouraged by your positive comments: "the paper identifies a novel issue, and this work is of significant importance for advancing research in the fields of object detection and instance segmentation, particularly by providing new perspectives and solutions for dealing with performance imbalance issues". The experimental analyses of the "imbalance" have been conducted, as shown in $\S$ 4.3.1 of the original version. To further address your concern regarding the "imbalance", we make an in-depth analysis from the theoretical perspective. The analysis begins with an explanation of the performance imbalance at the beginning layer of the transformer decoder, followed by an analysis of the essential reasons causing the "imbalance".
The "imbalance" in our paper is a concise summary of the phenomenon that the performance of object detection lags behind that of instance segmentation at the beginning layer of the transformer decoder. The reasons are manifold.
Firstly, the individual characteristics and supervision manners of detection and segmentation tasks lead to the "imbalance''. Object detection is a coarse-grained region-level regression task to accurately locate the bounding box of an object, while instance segmentation is a fine-grained pixel-level classification task to correctly classify each pixel of an object. Therefore, object detection relies on more global information that reveals geometric, contextual, and semantic relations of different patches belonging to an object. However, at the beginning layer, the global information is limited, thus the performance of object detection lags behind.
In addition, the supervision for object detection is sparse (i.e., a 4D vector of x, y, w, and h), while the supervision for instance segmentation is dense (i.e., a vector with hundreds or thousands of dimensions corresponding to GT mask pixels). The sparse supervision is weak, but it encourages a model to learn the relatively global contexts of different patches belonging to an object, which is challenging at the beginning layer. In contrast, the dense supervision is strong, and it is suitable for a model to achieve local pixel-level classification. Therefore, the supervision manners also sharpen the "imbalance" at the beginning layer.
**[Q1]** Thank you very much for kindly pointing out this detail. We fetch the results from [25] (i.e., MaskDINO) at epochs 12 and 24, and $AP^{box}\_{S}$, $AP^{box}\_{M}$, and $AP^{box}\_{L}$ are not provided in [25]. We are also unable to obtain the model weights for testing since the weight files are not available.
[25] Feng Li, Hao Zhang, Huaizhe Xu, Shilong Liu, Lei Zhang, Lionel M Ni, and Heung-Yeung Shum. Mask dino: Towards a unified transformer-based framework for object detection and segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3041–3050, 2023.
**[Q2]** Thank you very much for the insightful observation. In the original version, the experiments on BDD100K dataset are treated as additional experiments, thus we only conduct the experiments using ResNet50 backbone on BDD100K dataset. According to your comment, we conducted additional experiments on BDD100K using Swin-L backbone. The results are reported in Tab.R2 of the rebuttal PDF file, from which we can observe that our model presents the advantage.
**[L1L2]** Thank you very much for pointing out the limitations, which inspire us to optimize the model. Regarding generality, we do not extend the imbalance issue to other models, since the primary goal of our work is to achieve SOTA performance. In the future, we will further study the generality issue. Regarding the imbalance issue in other decoder layers, the imbalance gradually weakens as the number of decoder layers increases, thus it is not prominent in the later decoder layers.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response; my concerns are resolved. I raise the score to Accept. | Summary: The paper starts from an observation regarding imbalance in the intermediate results between detection and instance segmentation, which motivates the authors to propose DI-MaskDINO, which tries to alleviate the imbalance through the De-Imbalance module and the Balance-Aware Tokens Optimization module. Evaluated on the COCO and BDD100K benchmarks, the proposed DI-MaskDINO achieves better results than the MaskDINO baseline.
Strengths: - The paper is generally well-written and technically solid
- The starting point of det/seg imbalance sounds interesting
- Extensive experiments and achieve improvement over strong baseline MaskDINO
Weaknesses: - The paper claims improvement over SOTA results, yet it seems that the MaskDINO-SwinL is only reported under 12 epochs setting, while in MaskDINO paper they report results under 50 epochs setting with AP_mask=52.3, AP_box=59.0. I do not see why in tab.1 epochs = 12/24/50 are all reported for R50 backbone but only epochs = 12 are reported for SwinL backbone, making the SOTA claim less comprehensive and less convincing.
- The motivation stems from the observation of det/seg imbalance. Yet, I do not see why the performance gap between det and seg at the first layer illustrates the imbalance. If the absolute AP value is used to measure the imbalance between det and seg, then why is AP_box < AP_mask at the first layer yet AP_box > AP_mask at the last layer? Furthermore, it is also possible that DI-MaskDINO just improves the performance of both branches, which naturally reduces the performance gap (e.g., improving det from 10 to 20 may be of similar difficulty to improving seg from 15 to 22, but the absolute gap is reduced). In short, I believe there is a lack of a solid definition and study of the "imbalance" problem, and the current comparison based on absolute AP values from the first layer is not convincing to me.
- Minor: I see the reported FPS is significantly different from the numbers reported in MaskDINO, which I assume could be due to hardware differences. Please explain how the FPS is measured.
- Minor: The paper provides an anonymous link to the code/model, yet the provided repo is empty.
Technical Quality: 3
Clarity: 3
Questions for Authors: Though I hold several concerns as illustrated in the weaknesses, I feel the paper is generally technically good, with a performance improvement over the strong baseline MaskDINO. Thus my initial rating is borderline accept. My major concern lies in the motivation on imbalance, which is not so convincing to me; I look forward to the authors' rebuttal for further illustration or another more convincing measurement.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No other limitations as far as I know
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are greatly encouraged by your positive comments, including "paper is generally well-written and technically solid" and "the starting point of det/seg imbalance sounds interesting".
**[W1]** We supplement the experiment under the condition of epoch = 50 when the backbone is Swin-L. The results are shown in Tab.R1 of the rebuttal PDF file, indicating that DI-MaskDINO exhibits superiority over MaskDINO. It is noted that the experiments with the Swin-L backbone are conducted on the same 4 A6000 GPUs with a batch size of 4 (the maximum batch size that 4 A6000 GPUs support). The batch size is smaller than that in the MaskDINO paper (i.e., batch size = 16) and the 4 A6000 GPUs have weaker computational power than 4 A100 GPUs, thus the results we reproduced are lower than those in the original MaskDINO paper (i.e., $AP^{box}=59.0$, $AP^{mask}=52.3$). In the original version, DI-MaskDINO is compared with MaskDINO under three different settings of epochs = 12/24/50 when the backbone is ResNet, which has demonstrated the superiority of DI-MaskDINO to a certain extent. Therefore, when the backbone is Swin-L, we only tested the setting of epoch = 12.
**[W2]** To better answer your question, we first explain the definition of "imbalance" and the reason for using the absolute AP to measure it. Then, we analyze why $AP^{box}$ is smaller than $AP^{mask}$ at the beginning layer. Finally, we explain why $AP^{box}$ is larger than $AP^{mask}$ at the last layer.
(1) Explanation of "imbalance"
Specifically, "imbalance" summarizes the phenomenon that the performance of object detection lags behind that of instance segmentation at the beginning layer of the transformer decoder. Therefore, "imbalance" is best understood as a concise summary of this phenomenon. To quantitatively evaluate the imbalance, we previously considered aligning $AP^{box}$ and $AP^{mask}$ to (0,1) to better clarify the relative AP value. However, it might be controversial to decide which AP value corresponds to 1. If we define AP = 100 as corresponding to 1, then the relative AP is equivalent to the absolute AP. Therefore, we directly use the absolute AP to measure the imbalance.
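As a minimal arithmetic illustration of this point (the AP values below are hypothetical, not from the paper), rescaling absolute AP into (0, 1) by a fixed maximum only divides every gap by the same constant, so the absolute-AP gap carries the same information:

```python
def to_relative(ap, ap_max=100.0):
    # map absolute AP in [0, 100] onto (0, 1) by a chosen maximum;
    # with ap_max = 100 this is a fixed rescaling by 1/100
    return ap / ap_max

ap_box, ap_mask = 20.1, 26.4          # hypothetical beginning-layer values
gap_abs = ap_mask - ap_box            # imbalance measured with absolute AP
gap_rel = to_relative(ap_mask) - to_relative(ap_box)
# gap_rel is exactly gap_abs / 100: both measures rank layers identically
```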
(2) Why is $AP^{box}$ smaller than $AP^{mask}$ at the beginning layer?
Firstly, the individual characteristics and supervision schemes of the detection and segmentation tasks lead to the "imbalance". Object detection is a region-level regression task that locates the bounding box of an object, while instance segmentation is a pixel-level classification task that classifies each pixel of an object. Therefore, object detection relies on more global information that reveals the contextual relations of different patches in an object. However, at the beginning layer, the global information is limited, so the performance of object detection lags behind.
In addition, the supervision for object detection is sparse (i.e., a 4D vector of x, y, w, and h), while the supervision for instance segmentation is dense (i.e., a vector of hundreds or thousands of dimensions over GT mask pixels). The sparse supervision is weak, but it encourages a model to learn relatively global contexts, which is challenging at the beginning layer. In contrast, the dense supervision is strong and suitable for local pixel-level classification.
(3) Why is $AP^{box}$ larger than $AP^{mask}$ at the last layer?
Firstly, as mentioned above, object detection relies on more global information. As the number of decoder layers increases, the global information becomes richer, benefiting object detection.
Secondly, object detection is a coarse-grained task that only needs to locate the four vertices of an object bounding box, while instance segmentation is a fine-grained task that requires accurately classifying a large number of pixels of an object mask. Therefore, viewed overall, object detection is relatively simpler. As shown in Tab.1 of the original version, for various models (Mask RCNN, HTC, SOLQ, and MaskDINO), $AP^{box}$ is much higher than $AP^{mask}$, indicating that it is easier for object detection to achieve higher final performance. Therefore, the performance of object detection is higher than that of instance segmentation at the last layer.
(4) The significance of our work
The above analysis reveals the significance of our work. The detection branch and segmentation branch share a unified query, so the two tasks interact and affect each other. At the beginning layer, as shown in Fig.1(a) of the original version, object bounding boxes do not fit well with object masks, which hinders the cooperation of the two tasks and leads to a negative interaction effect. By narrowing the imbalance between object detection and instance segmentation at the beginning layer, the two tasks can rapidly reach a mutually beneficial state, which helps improve performance, as validated by the experimental results in Tab.1 of the original version. In addition, our model also reduces the number of model parameters, as validated by the supplemented experimental results in Tab.R3 of the rebuttal PDF file, which show that our model configured with 3 decoder layers achieves performance similar to MaskDINO configured with 9 decoder layers while reducing parameters by 4.5M (52.1-47.6).
**[W3]** For fairness, the FPS of all models is tested on the same 4 RTX 3090 GPUs when the backbone is ResNet-50. When the backbone is Swin-L, the FPS is tested on the same 4 A6000 GPUs, since 4 RTX 3090 GPUs cannot support the Swin-L backbone.
**[W4]** Despite using an anonymous link, we are still concerned about violating the double-blind reviewing policy. Therefore, we only provide an empty repository to demonstrate our willingness to release the code/model. We will definitely release the code/model if the paper is accepted.
Rebuttal: We sincerely thank the reviewers for their constructive comments and suggestions.
We are particularly encouraged that the reviewers unanimously acknowledge **our work is interesting** (aN5C, 5ayP, TyXi, and KbFf). Reviewers commend us for **achieving state-of-the-art results** (aN5C, 5ayP, TyXi, and KbFf). Reviewers comment positively on **writing and presentation** (aN5C, TyXi, and KbFf). Specifically, reviewer 5ayP remarks that "**the work is of significant importance for advancing research in the fields of object detection and instance segmentation**". Reviewer aN5C notes that our paper is "**technically solid**". Reviewer KbFf finds "**the whole framework is clear and easy to follow**".
Pdf: /pdf/e3d7e0c811c4d51213644e71ecf1c4ae94e82b65.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Verified Code Transpilation with LLMs | Accept (poster) | Summary: This paper proposes an LLM-based approach (LLMLIFT) to building verified lifting tools. LLMLIFT leverages LLMs to generate both code and proof annotations together. For four real-world DSLs, LLMLIFT not only outperforms previous symbolic-based tools in both the number of benchmarks transpiled and transpilation time, but also requires significantly less effort to build.
Strengths: For the important task of automatic code transpilation to DSLs, this paper demonstrates a novel approach to generating and verifying the generated code using LLMs. While using LLMs, the key idea of VL is retained: Python is used as the IR, which addresses the shortcomings of direct LLM-based translation. The experimental results improve upon existing methods.
Weaknesses: Some statements are not clear or accurate:
1. An important contribution and innovation of this paper is generating and verifying the generated code using LLMs. However, to be precise, the LLM just seems to generate loop invariants for validation, which are then combined with validators.
2. Contributions should be distinguished from strengths.
The experiments do not seem to clearly illustrate the advantages of using LLMs to validate generated code as proposed in this paper. For example, to what extent did this LLM-based validation phase improve the experimental results?
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Compared with traditional VL, does the LLM-based VL proposed in this paper have any other advantages besides being better at finding IR expression sequences? For example, what are the specific advantages of generating loop invariants compared with traditional methods?
2. It is mentioned that the model uses PS to generate invariants, then whether it will lead to the error of the invariant due to the error of PS generation?
3. Did LLMLIFT make any adjustments after Boolean Feedback?
4. How to verify correctness if no loops are included in the program.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: Are there DSLs that are hard to define with Python as IR?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **The LLM just seems to generate loop invariants for validation, which are then combined with validators. Compared with traditional VL, does the LLM-based VL proposed in this paper have any other advantages besides being better at finding IR expression sequences?**
To clarify, LLMLift does not directly use LLMs for code validation. Instead, LLMs are used to guess program summaries and loop invariants, which are then passed to a theorem prover to check the validity of the generated program and its proof.
The challenge of finding a correct program summary or an equivalent program in a domain-specific language (DSL) is significant. For example, in our tensor processing domain, the search space for some problems can reach approximately 100,000 expressions for a 20-line input program. Traditional symbolic tools depend on domain-specific heuristics to manage this complexity, whereas LLMLift replaces the entire search phase with a straightforward structured prompt, as detailed in the paper.
The program verification community has devoted substantial effort over the years to address the problem of automatically inferring loop invariants. Various approaches have been proposed, including template-based synthesis [1], example-driven methods [2], and machine learning techniques [3]. What sets LLMLift apart is that it bridges two traditionally separate lines of work—program synthesis and formal verification—allowing us to leverage the strengths of both. By integrating LLMs in our iterative framework, we are able to generate candidate loop invariants and program summaries more flexibly and efficiently than traditional methods. To the best of our knowledge, LLMLift is the first approach that can simultaneously solve *both* the translation and loop invariant generation problems while generalizing across multiple domains.
This capability not only significantly reduces the human effort required to build these techniques but also provides formal guarantees on the output of the LLMs—a task that has traditionally been very challenging.
**It is mentioned that the model uses PS to generate invariants, then whether it will lead to the error of the invariant due to the error of PS generation?**
In LLMLift, there is a possibility that the PS guessing phase might result in an incorrect program summary, as the model initially validates only for syntactic correctness. Since the loop invariants are generated based on this potentially incorrect PS, the generated invariants could also be incorrect. However, these erroneous solutions are caught and rejected during the verification phase of LLMLift. In this phase, a theorem prover is used to check for the semantic equivalence of the generated PS and invariants with the original source program (as described in how we prove equivalence earlier). If the generated PS and its associated invariants do not semantically match the source program, they are discarded. We provide examples of programs rejected by the theorem prover in Figure 10.
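To make this reject-and-retry behavior concrete, here is a small, self-contained sketch (our own illustration, not LLMLift's actual code) of how a candidate program summary `ps` and invariant `inv` for an array-sum loop would be checked. A real theorem prover proves the verification conditions for all inputs; this toy stand-in only enumerates a tiny finite domain:

```python
from itertools import product

def ps(a):           # candidate program summary: out = reduce_sum(a)
    return sum(a)

def inv(a, i, out):  # candidate loop invariant: out == sum(a[:i])
    return out == sum(a[:i])

def verify(ps, inv, max_n=3, domain=(-1, 0, 1)):
    """Exhaustively check the standard verification conditions on a tiny
    finite domain -- a toy stand-in for the theorem prover."""
    for n in range(max_n + 1):
        for a in map(list, product(domain, repeat=n)):
            if not inv(a, 0, 0):                 # initiation: inv holds on entry
                return False
            for i in range(n):                   # preservation: inv survives one step
                out = sum(a[:i])                 # the unique state satisfying inv here
                if inv(a, i, out) and not inv(a, i + 1, out + a[i]):
                    return False
            out = sum(a)                         # state at loop exit
            if inv(a, n, out) and out != ps(a):  # postcondition: inv implies PS
                return False
    return True

print(verify(ps, inv))                           # True: candidate accepted
print(verify(lambda a: max(a or [0]), inv))      # False: wrong summary rejected
```

The last line shows the rejection path described above: an incorrect summary fails the postcondition check and is discarded, after which the model would be re-prompted.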
**Did LLMLIFT make any adjustments after Boolean Feedback?**
No, we do not make any adjustments after the Boolean feedback. We just prompt the model with the feedback and the solution if the solution is incorrect.
**How to verify correctness if no loops are included in the program.**
See general response.
### References
[1] Pranav et al. Learning invariants using decision trees and implication counterexamples
[2] Saswat et al. Data-Driven Inference of Representation Invariants
[3] Xujie et al. Learning loop invariants for program verification | Summary: This work proposes LLMLift that leverages large language models (LLMs) to perform program transpilation. It first uses a prompt to lift the source program into an intermediate representation of the operators in the target language, called a program summary. Then, it prompts the LLM to generate loop invariants if necessary. The generated program summary and loop invariants are used to verify the equivalence of the source program and the program summary. This process is repeated until a correct program summary is found or the computation budget is reached. Finally, the program summary is rewritten into code in the target language using simple rules. The evaluation was performed on four tasks: distributed computing, network packet processing, TACO, and tensor processing. The results show that LLMLift can correctly transpile more benchmarks, spend less time, and require less effort to develop.
Strengths: Code transpilation is an interesting and important problem for both research and industry. The paper has demonstrated a nice combination of LLMs and verification to obtain provably correct transpiled code. The evaluation is performed on a diverse set of four scenarios. Moreover, I think the approach can have good potential even for translation between mainstream programming languages.
Weaknesses: My main concern with this paper is poor writing, at both the syntactic and semantic levels. The writing needs significant improvement in general. I also have some other concerns about the evaluation.
### 1. Syntactic Writing Issues
I found the following syntactic issues:
- Typos: Line 135 program program, Line 175 Contarary. There might be other typos so I suggest the authors run spell checks.
- The number of arguments received by $\phi$ differs between Equations (1) and (2).
- It is visually difficult to separate the citations from normal text. I suggest adding a pair of parentheses as in the ICML format.
### 2. Semantic Writing Issues
The general idea of the approach is easy to understand, but the paper lacks many technical details:
- How do you verify the equivalence of the source program and the program summary? The paper provides hardly any details on this, but I think this is a non-trivial task.
- How do you produce the target program from the program summary? Again, the paper mentions that this can be done using simple rules without giving any technical details.
- How difficult are the four tasks considered in the evaluation? Providing some end-to-end examples would be helpful.
- Why do you use one-shot prompting for generating program summaries but zero-shot for generating loop invariants?
### 3. Concerns about Evaluation
- LLMLift’s benefit lies mostly in speed. However, it does not solve significantly more benchmarks than the baselines. Why is this the case?
- The paper claims that LLMLift reduces development effort compared to prior rule-based approaches. I am not sure if this is a fair comparison, because the development of LLMs, such as GPT-4 used in the paper, requires significant development effort and computation resources.
- The paper targets only transpilation from mainstream languages to DSLs. What are the main obstacles and efforts needed to extend LLMLift to transpilation between mainstream languages?
Technical Quality: 2
Clarity: 2
Questions for Authors: Please consider addressing the points raised in the “Weakness” section.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The paper has sufficiently addressed the points concerning limitations and societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **How do you produce the target program from the program summary?**
Once we have the verified program summary in the IR, the code generation phase uses syntax-driven rules to map IR operators to the concrete syntax of the target DSL. Below we show a snippet of a code generation function that translates a tensor processing IR to PyTorch. This code generation function recursively parses the IR operators and maps them to the corresponding DSL operators.
For instance, the `elemwise_add` operator in the IR translates to `torch.add` in PyTorch, and `elemwise_sub` translates to `torch.subtract`. Thus, the IR expression `elemwise_add(elemwise_sub(a, b), c)` becomes `torch.add(torch.subtract(a, b), c)` in PyTorch.
```python
def codegen(expr: Expr):
    if isinstance(expr, Var): return expr.name()
    elif isinstance(expr, Lit): return expr.val()
    elif isinstance(expr, Call):
        f_name, args = expr.name(), expr.arguments()
        if f_name == "elemwise_add":
            return f"torch.add({codegen(args[0])}, {codegen(args[1])})"
        elif f_name == "elemwise_sub":
            return f"torch.subtract({codegen(args[0])}, {codegen(args[1])})"
        ...
```
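A self-contained version of this recursive scheme can be sketched as follows; the tiny `Var`/`Lit`/`Call` classes and the operator table are hypothetical stand-ins for LLMLift's actual IR classes and rules:

```python
class Var:
    def __init__(self, n): self._n = n
    def name(self): return self._n

class Lit:
    def __init__(self, v): self._v = v
    def val(self): return self._v

class Call:
    def __init__(self, n, *args): self._n, self._args = n, args
    def name(self): return self._n
    def arguments(self): return self._args

# syntax-driven rules: one IR operator maps to one target-DSL operator
OP_MAP = {"elemwise_add": "torch.add", "elemwise_sub": "torch.subtract"}

def codegen(expr):
    if isinstance(expr, Var):
        return expr.name()
    if isinstance(expr, Lit):
        return str(expr.val())
    if isinstance(expr, Call):
        args = ", ".join(codegen(a) for a in expr.arguments())
        return f"{OP_MAP[expr.name()]}({args})"
    raise ValueError(f"unknown IR node: {expr!r}")

ir = Call("elemwise_add", Call("elemwise_sub", Var("a"), Var("b")), Var("c"))
print(codegen(ir))  # torch.add(torch.subtract(a, b), c)
```

Because the rules are purely syntax-driven, correctness of the output rests entirely on the verified program summary, not on this final mapping step.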
**Why zero-shot prompting for PS but one-shot for invariants:**
The task of generating program summaries is relatively easy compared to generating loop invariants. The primary instruction for this task is that the language model should combine operators from the given DSL without introducing any external operators. This constraint is easily expressible in natural language.
Generating invariants, on the other hand, is a more complex task for the following reasons:
1) Specific Template: While LLMs might have encountered loop invariants in their training data, the invariants required for verified lifting often follow a specific template that may not be well-represented in the model's training set. Loop invariants in verified lifting must use a particular syntactic structure. As described in Equation 3 of our paper, they should contain expressions over loop variables and include inductive expressions using the output variable.
2) Language Specificity: The generated invariants are in Python instead of SMT-LIB. This specific format requirement makes it easier to implement a pattern-matching parser for translating the invariants to SMT-LIB later in the process.
Describing these structural and syntactic restrictions accurately in natural language is challenging. Hence, we use a one-shot prompt for generating the invariants.
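As an illustration of the template described above (our own hypothetical example, not taken from the paper), an invariant for an array-sum loop combines bounds on the loop variable with an inductive expression relating the output variable to a prefix of the computation, phrased using an IR operator:

```python
def reduce_sum(xs):
    # stand-in for the IR's reduction operator
    return sum(xs)

def invariant(i, n, a, out):
    # loop-variable bounds plus an inductive expression over the output variable
    return 0 <= i <= n and out == reduce_sum(a[:i])

# the invariant holds on entry, after every iteration, and at loop exit:
a, out = [3, 1, 4], 0
for i in range(len(a)):
    assert invariant(i, len(a), a, out)
    out += a[i]
assert invariant(len(a), len(a), a, out)  # at exit this implies out == reduce_sum(a)
```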
**LLMLift does not solve significantly more benchmarks than baselines**
We compare LLMLift against symbolic implementations that were evaluated on a small, diverse set of benchmarks designed to showcase the effectiveness of their specific approaches. As a result, these tools achieve high accuracy on these benchmarks. To demonstrate the scalability of our approach, we created a set of 10 synthetic benchmarks for C2TACO, which are far more complex than those in the original set of benchmarks (see Appendix C). Our experiments show that LLMLift easily scales to handle these more challenging benchmarks, solving all 10 benchmarks in under 2 seconds. In contrast, C2TACO is unable to solve any of the 10 synthetic benchmarks within a 90-minute timeout.
**LLMLift reduces development effort compared to prior rule-based approaches**
One of the key strengths of LLMLift is its ability to scale the search process for VL without relying on domain-specific heuristics. This significantly reduces the human effort required to build VL based compilers, making the technology more accessible across various domains. As detailed in our experimental section, symbolic search techniques heavily rely on domain-specific heuristics for scalability. For instance, C2TACO uses over 1000 lines of code solely to describe these heuristics, which are specific to the TACO IR and cannot be easily reused for other domains. Even with these heuristics, their search fails to scale to more complex benchmarks beyond those used for evaluation in the paper (see Appendix C). In contrast to domain-specific heuristics, the development effort for LLMs is a one-time investment that can be leveraged across multiple domains and tasks. Once trained, these models can be applied to various VL tasks with minimal additional development.
**Main obstacles and efforts needed to extend LLMLift to transpilation between mainstream languages**
The objective of VL is different from translating programs between mainstream languages. Our approach focuses on mapping functional programs to the operators of a DSL in a semantics-preserving manner. As described in the paper, this problem of mapping to DSL is important as a) DSLs often provide a more concise way of expressing the same program compared to imperative implementations, enhancing maintainability and b) DSLs typically offer domain-specific optimizations that can significantly improve performance. However, there are several challenges in this mapping problem. In addition to the search space being huge, LLMLift must not only find a correct translation but also generate a proof of equivalence. This adds another layer of complexity.
Translating between mainstream languages has its own set of challenges:
1. Different languages support various constructs, making direct mapping challenging,
2. Generating accurate verification conditions and formal proofs for equivalence checking across diverse language constructs is complex.
Due to these challenges, prior work in mainstream-to-mainstream translation, such as TransCoder[1], often relies on test-case-based approaches to demonstrate semantic equivalence, rather than formal verification. Such approaches do not provide any correctness guarantee in the generated code, and hence are risky to deploy in practice. On the other hand, LLMLift formally verifies the code which is generated and proves the semantic equivalence with the source program.
### References
[1] Baptiste Roziere et al. Unsupervised translation of programming languages.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Thanks for submitting the rebuttal. I have read it but I believe the paper needs a major revision to reach the bar of NeurIPS. Therefore, I will keep my score. Below I provide a list of suggestions that might be helpful for the next iteration of the paper:
- Improve writing and formalization.
- Include the verification method and the way of generating target programs from program summaries into the appendix.
- Design and present benchmarks that can better demonstrate LLMLift’s advantages, especially in solving more test cases.
- Include ablation studies on important hyperparameters. | Summary: The authors propose an approach named LLMLift, which utilizes large language models (LLMs) to achieve verified code transpilation. LLMLift not only translates a given program into its corresponding equivalent in the target language but also generates proofs for functional equivalence. The paper claims that this method outperforms previous symbolic-based tools in both the number of benchmarks transpiled and transpilation time, while requiring significantly less effort to build.
Strengths: - The paper is highly innovative. It fully explores and leverages the generalization capability, few-shot learning capability, and inherent domain knowledge of LLMs. Additionally, it transforms the code translation task from end-to-end generation into generating PS and INV with LLMs, which cleverly reduces task difficulty and enhances the utilization of LLMs.
- The evaluation is thorough, covering multiple DSLs and comparing against Tenspiler, proposed in 2024, which fully demonstrates the capability of the method.
Weaknesses: - The paper mentions a budget of 50 queries for the program summary (PS) and a budget of 10 queries for each PS, but it lacks an analysis of the success rate and the number of queries used. Providing this information would offer a clearer picture of the efficiency and effectiveness of the approach.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Why is there no performance comparison given in the experimental evaluation? Is it because the task is merely to translate DSLs and is not related to performance issues, or does performance remain similar as long as the DSL is generated?
- How does the approach handle complex features in source languages that do not have direct counterparts in the target DSLs?
- What are the common factors in tests that failed, and how can the approach be adjusted to address these issues?
- Why use GPT-3.5 other than the latest version of GPT in Figure 3?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have not adequately discussed the limitations of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Why use GPT-3.5?**
The use of GPT-3.5 in Figure 3 was to illustrate a key challenge in applying LLMs to the task of verified lifting: the difficulty these models face in generating correct code for DSLs, even when the DSL is not particularly new. Our experiments with GPT-3.5 showed that despite TACO being introduced in 2017, well before the training cutoff date for GPT-3.5 (2021), the model still struggled to generate correct TACO code when used as-is.
This issue persists even with more recent and advanced models. For instance, when we tested the same prompt with GPT-4, which presumably has been trained on a larger corpus potentially including more TACO programs, it still produced an incorrect solution. Specifically, the generated program contained an incorrect usage of the `get` function in the final return statement.
```
int mag_array(int *a, int n) {
    Tensor<int> A({n}, Format{Dense});
    Tensor<int> result(ScalarType::Int);
    for (int i = 0; i < n; ++i) { A.insert({i}, a[i]); }
    A.pack();
    IndexVar i;
    result() = sum(i, A(i) * A(i));
    result.compute();
    return result.getStorage().getValues().get<int>(0);
}
```
Similarly, when we tested Claude 3.5 Sonnet, one of the latest models from Anthropic, with the same prompt, it also returned incorrect code, misusing the sum function.
```
int mag_array(int* a, int n) {
    Format csr({Sparse, Dense});
    Format dense({Dense});
    Tensor<int> A({n}, csr);
    Tensor<int> B({n}, dense);
    Tensor<int> result({}, dense);
    for (int i = 0; i < n; i++) { A.insert({i}, a[i]); }
    A.pack();
    IndexVar i;
    B(i) = A(i) * A(i);
    result() = sum(B(i));
    result.compile();
    result.assemble();
    result.compute();
    return result.at({});
}
```
These results show that even the largest and most advanced LLMs struggle to reliably generate correct code for new DSLs when directly prompted to do so. This highlights the inherent difficulty of the task and illustrates that LLMLift solves a challenging problem.
**How does the approach handle complex features in source languages that do not have direct counterparts in the target DSLs?**
LLMLift currently only supports a subset of the C/C++ and Python languages in source programs. In particular, it does not support any code that uses pointers or objects, as verifying programs with these constructs is challenging. That said, we did not encounter the use of these constructs in any of the benchmarks across the four domains that we evaluated. Moreover, we currently support commonly used data structures including arrays, tuples, maps, vectors, and tensors, which was enough to show good experimental results.
**The paper mentions a budget of 50 queries for the program synthesis (PS) and a budget of 10 queries for each PS, but it lacks an analysis of the success rate and the number of queries used. Providing this information would offer a clearer picture of the efficiency and effectiveness of the approach.**
The average number of tries to reach the correct PS solution is 4.3 and the median is 1, which means that most benchmarks generate the correct PS on the first try. Given a correct PS, it takes an average of 1.77 tries and a median of 1.74 tries to generate the correct invariant.
Strengths: 1. Timely problem considering the emergence of different accelerators.
2. Automated transpilation with integrated feedback and verifier makes the process independent and self-contained.
3. Use of python as an IR improves user readability and is more probable to be accurate given the huge online corpus and training data on the same.
4. Zero-shot approach requires no retraining, but this could lead to issues as well. See weakness and detailed comments.
Weaknesses: 1. DSLs have less representation in the present LLM training dataset (more so lacking correct and efficient codebases).
2. Limited novelty, as there already exists work, and the authors confirm the same, which use LLMs to transpile from one language to another and produce proof annotations separately. The effort in the current work is largely automating and integrating these two steps.
3. Underutilization of the GPT4 model which has a huge memory and compute footprint.
4. The zero-shot approach might not yield very good results if the LLM does not have sufficient training data on the DSL. On the other hand, a solution like RAG is more feasible as it can be finetuned and made context aware with minimum information on the DSL.
5. The end-to-end process of transpilation is not very clear. Although it sounds intuitive, the fine-grained steps are missing in the explanation, leading to a belief that the paper majorly contributes towards an efficient prompt-engineering for doing transpilation rather than having an innovative approach.
6. The evaluation metrics are poorly chosen. Using LoC as a metric of effort is not fair when the model and the framework (GPT-4 and its API, along with the implementation platform) doing the transpilation have resource (compute, memory, LoC, energy, etc.) requirements many orders of magnitude higher than any of the conventional tools like C2TACO.
7. Although the evaluation consists of many benchmarks, we are missing comparisons for many important DSLs like HDLs, GPU programming, OpenCL etc. These languages form the foundation of most of the new generations of the domain specific accelerators design framework.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the part marked using (***) below.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Leveraging generative models to solve the problem of transpilation while providing quantitative estimates of functional correctness seems to be a timely and much-needed solution in today’s tech landscape. But usage of a model like GPT-4 seems to be overkill considering the resource requirements it entails. Using a smaller model trained for sequence-to-sequence conversion, with the target-language library as a database in the case of RAG, could be a more efficient way to achieve the same. In the suggested method, there is sole reliance on the model to do the conversion to the IR and then the target language, but not all DSLs would find the same degree of representation in the training datasets of the models. Using RAG here gives the user more freedom.
The writing does not make it easy for the readers to understand the underlying sub-components and the handshake thereby. For instance, we do not know how the verifier works, the criteria on which it works and its logical complexities.
There seems to be a scalability issue for bigger codebases that would overshoot the supported prompt length of the model. We do not know how the codebase is broken down into cohesive pieces, each forming a prompt. Also, for large codebases we see KV cache management, storage, and communication to different compute nodes as serious problems.
(***) The paper does not include the rationale behind many of their assumptions like:
a. Why python was chosen as the IR, and how much of a benefit (in terms of correctness/ latency) it provides compared to a direct conversion without an IR.
b. Why is accuracy the right metric? Given we do not have much knowledge about the length, functionality and complexity of the source and the target languages. The authors fail to provide a comparison between the transpiled code and the DSL implementation designed by an expert.
c. Why was a temperature of 0.7 chosen and what is the impact of varying the same?
d. Why was GPT-4 specifically chosen given its huge size? And how would the design fare against self-correcting models and AutoGPT-style models?
e. The paper overall misses sensitivity and ablation studies making it difficult, if not impossible, to understand the impact of each of the design components.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Why GPT-4**
With LLMLift, our objective was to demonstrate that LLMs can be effectively used for the task of VL without requiring fine-tuning. We needed a model that was good at two things:
1. Instruction Following: The model needed to generate programs strictly using the defined operators in a DSL.
2. Handling Long-Context: The prompt can grow in size when incorporating all DSL operators and feedback, so the model needed to manage long-context tasks effectively.
GPT-4 was the best model available that met both of these requirements.
To explore whether LLMLift could leverage open-source models or if larger models were necessary, we evaluated LLMLift using two recent open-source models (Llama3 8B and Mistral Nemo 12.2B) and another proprietary model (Claude-sonnet-3.5). Due to budget constraints, we used a subset of benchmarks from the tensor processing domain, which is the most complex among all the DSLs we evaluated. Specifically, we used 12 image processing blend benchmarks. We used the same query budget and temperature settings.
Open-source models, Llama3 and Mistral, did not solve any of the benchmarks. Their solutions use Python constructs outside of the defined IR, causing them to immediately fail our syntactic parser. Claude, on the other hand, successfully solved 9 out of the 12. Llama3 and Mistral are significantly smaller than proprietary models like Claude and GPT-4. The results suggest that larger models are better suited for VL.
**Complexity of source programs**
Lines of code (LOC) are often used as a proxy for evaluating program complexity. In the benchmarks, the average LOC is 14. As described in the paper, our aim is for LLMLift to translate functional programs, and we observe that 10-20 lines of code are typical for these programs in the real world.
**Is RAG a more feasible solution?**
To clarify, LLMLift does not depend on the presence of specific DSLs in the training data of large language models to perform code translation. In fact, we demonstrate that LLMLift can reliably generate code in DSLs that the model may have never encountered during training, for instance our tensor processing DSLs and the TACO IR. With LLMLift, we introduce a structured prompting approach that explicitly includes the semantics of the DSL operators using an IR. The model's task is then to generate a program using only the operators defined in the prompt. Once we have a verified solution in the IR, we apply syntax-driven rules to translate the program into the concrete syntax of the DSL.
RAG-based approaches rely on the source and target programs having similar representations in the encoding space. However, in the context of VL, the source and target languages often have completely different syntactic structures. For example, in the tensor processing domain, the source programs are written in vanilla C++, while the target language involves tensor operations like tensor-tensor arithmetic and tensor-scalar arithmetic. Moreover, multiple lines of code in the source language can be mapped to a single operator in the target (e.g., a loop adding corresponding elements of two vectors can be mapped to element-wise addition operator), and lifting-based transpilation is rarely one-to-one token-wise between the source and target languages. This significant syntactic difference makes it challenging to use a RAG-based implementation for generating a program summary in the DSL.
That said, we believe RAG could be beneficial in LLMLift in a different way—specifically, in selecting examples for few-shot prompting. RAG could be used to retrieve the most relevant examples for a new program, which could then be used to prompt the model. This approach might help reduce the sample complexity of LLMLift.
**For large code bases we see KV cache management to be a problem**
We apologize, but we don't fully understand how KV cache management would be a significant issue in the context of VL. In the VL scenario, we're dealing with individual programs which are just a few hundred lines rather than large codebases, so the issues of KV cache management are not a concern for our current approach.
**Why Python as the IR?**
Python is the most widely represented language in the training data. This ensures that the model can generate Python code reliably with minimal prompting. Python's structured syntax also makes parsing easier. This is beneficial as a) it allows for the development of syntax-driven parsers that can efficiently translate the IR to theorem prover languages, and b) it simplifies the process of translating the IR to the target DSL's concrete syntax.
Direct conversion to a DSL is challenging because many DSLs may not be well-represented in the model's training data (see Figure 3). Moreover, it is also challenging to verify DSL programs using theorem provers as they do not provide support for these languages and it is not trivial to translate the DSL directly to the language supported by them. On the other hand, many theorem provers have existing libraries for handling Python-like syntax, making the verification step of LLMLift easier.
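As an illustration of such a Python-embedded IR (a sketch only — the operator names `ite`, `map`, and `reduce` are assumed from the running example, not the paper's exact operator definitions), the source loop in Figure 1a could be lifted to a map/reduce program summary along these lines:

```python
from functools import reduce

# Hypothetical IR operator (name assumed): if-then-else as a value.
def ite(cond, then_val, else_val):
    return then_val if cond else else_val

# Source loop (C++-style):
#   sum = 0;
#   for (i = 0; i < size(data); i++)
#       if (data[i] < 100) sum += data[i];
#
# Lifted program summary expressed with the IR's map/reduce operators:
def program_summary(data):
    return reduce(lambda a, b: a + b,
                  map(lambda j: ite(j < 100, j, 0), data), 0)

assert program_summary([5, 100, 3]) == 8  # 100 is filtered out
```

A syntax-driven parser can then translate such a summary into the concrete syntax of the target DSL or into a theorem prover's input language.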
**Why is accuracy the right metric?**
Accuracy is the most critical metric for evaluating a transpiler, as the translated program must be semantically equivalent (checked by a formal verifier) to the given source program. Another important metric is the performance of the translated code. Currently, the code generated by LLMLift significantly outperforms its corresponding program in the source language across all domains.
**Why 0.7 as the temperature?**
At lower temperatures, these models generate more deterministic outputs, which limits their ability to explore diverse solutions. Due to budget constraints, we only experimented with temperature 0.7, which is close to the optimal temperature found for code generation in a recent paper [1].
### References
[1] Evaluating Large Language Models Trained on Code. https://arxiv.org/abs/2107.03374 | Rebuttal 1:
Rebuttal: We thank the reviewers for their helpful comments and suggestions. We will incorporate all suggestions and clarify the confusion in our next version. Below, we address some of the common concerns that the reviewers raised.
**How do you verify the equivalence of the source program and the program summary?**
We use Floyd-Hoare Logic (FHL) to establish the validity of generated programs[1]. In FHL, verification problem is represented as a Hoare triple $\{A\} \, P \, \{B\}$, where:
- $A$ is the pre-condition
- $P$ is the program to be executed
- $B$ is the post-condition
To establish the validity of a Hoare triple, we prove that all executions starting from states satisfying $A$, after executing program $P$, result in states satisfying $B$. This involves finding a Boolean predicate called the verification condition ($VC$) that characterizes the set of pre-conditions from which every execution of $P$ leads to a state satisfying $B$. Formally, we need to prove that the $VC$ is true given pre-condition $A$, i.e., $A \rightarrow VC(P, B)$.
Standard techniques exist to generate verification conditions from a given source program[2]. For programs containing loops, an additional predicate called a loop invariant is required. This invariant helps prove that the post-condition remains valid regardless of the number of loop iterations. The inference rules provided by FHL can be encoded into a format that can be fed into automated theorem provers or SMT (Satisfiability Modulo Theories) solvers. This encoding allows for the mechanical checking of any Hoare triple's validity.
Following are the VCs generated for our running example (Figure 1a) in the paper:
_Verification conditions:_
1. Initial: $$Inv(i=0, sum=0, data)$$
2. Preservation:
$$
\begin{gather}
Inv(i, sum, data) \land (i < size(data))
\rightarrow \\\\
Inv(i+1, ite(data[i] < 100, sum + data[i], sum), data)
\end{gather}
$$
3. Termination:
$$
\begin{gather}
Inv(i, sum, data) \land \lnot(i < size(data)) \rightarrow \\\\
PS(sum, data)
\end{gather}
$$
_Generated program summary ($PS$) and $Inv$:_
\begin{align}
PS &: sum = reduce(map(data, \lambda j : ite(j < 100, j, 0)), \lambda a, b : a + b) \\\\
Inv &: (i \geq 0) \land (i \leq size(data)) \\\\
&\quad \land \left(sum = reduce(map(data[:i], \lambda j : ite(j < 100, j, 0)), \lambda a, b : a + b)\right)
\end{align}
_Proof:_
1. Initial Condition:
Before the loop executes, $i = 0$ and $sum = 0$. The invariant expresses sum as the result of a map followed by a reduce operation over the first $i$ elements. Since $i = 0$, the map-reduce operation is applied to an empty list, resulting in a zero sum. Therefore, the invariant holds in the initial state.
2. Preservation Condition:
The preservation condition ensures that the invariant holds throughout all iterations of the loop. This can be shown by induction. Assume the invariant holds at the $i$-th iteration. In the $(i + 1)$-th iteration, map-reduce would compute the sum for the first $i + 1$ elements, incrementing sum with the $(i + 1)$-th element of the input list data if it is less than 100.
3. Termination Condition:
The termination condition requires that the invariant implies the $PS$. When the loop terminates, $i = size(data)$, and both the $PS$ and $Inv$ expressions for sum will be identical, meaning the postcondition is satisfied.
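To make the checking step concrete, here is a small brute-force sanity check of the three verification conditions above, written in plain Python (our illustration; an actual verifier would discharge these conditions for *all* states with an SMT solver rather than by enumeration):

```python
import itertools

# Invariant and program summary (PS) for the running example,
# transcribed into plain Python from the formulas above.
def inv(i, s, data):
    return 0 <= i <= len(data) and s == sum(x for x in data[:i] if x < 100)

def ps(s, data):
    return s == sum(x for x in data if x < 100)

# Check the Initial, Preservation, and Termination VCs on a small
# space of concrete states.
for data in map(list, itertools.product([5, 100, 150], repeat=3)):
    assert inv(0, 0, data)                        # Initial: i = 0, sum = 0
    for i in range(len(data)):
        s = sum(x for x in data[:i] if x < 100)   # a state satisfying Inv
        s2 = s + data[i] if data[i] < 100 else s  # effect of the loop body
        assert inv(i + 1, s2, data)               # Preservation
    s_end = sum(x for x in data if x < 100)
    assert inv(len(data), s_end, data) and ps(s_end, data)  # Termination
```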
**Why is there no performance comparison given in the experimental evaluation?**
The primary objective of verified lifting is to generate semantically equivalent programs in the target DSL from the source. DSLs are inherently designed to offer domain-specific optimizations, and the performance gains observed post-translation are attributable to the implementation of operators within the DSL rather than the translation process itself.
In LLMLift, our aim was to replicate the existing VL-based compilers. We performed a manual verification of LLMLift's output against the corresponding symbolic tools, confirming output equivalence of the two tools. Given this equivalence, the performance gains reported by the original symbolic tools are directly applicable to LLMLift's translations. Performance numbers for some of the domains are as follows:
1. Tensor Processing: The objective in this domain is to generate tensor programs executable on tensor processing backends. Translations to this intermediate representation (IR) yield performance gains of 2.1x (NumPy) and 167.71x (PyTorch) compared to sequential C++ implementations when compiled with GCC -O3.
2. TACO: The TACO compiler generates optimized GPU code. Translating programs to the TACO IR results in a performance gain of 24x compared to sequential C++ programs when compiled with GCC -O3.
3. Distributed Computing: The generated Spark implementations achieved an average speed-up of 15.6x compared to the sequential Java implementations. Additionally, when compared to manually written code by an expert, the generated outputs performed competitively. For more details on the user study, we refer the reader to the paper[1].
It is important to note that finding a program with optimal performance on the target backend would require performing the search phase with specific cost (objective) functions. While finding an equivalent program in the target DSL is already a challenging task, incorporating an optimization function into the search adds another layer of complexity. In addition, defining these cost functions is non-trivial in itself, as they must accurately capture the performance characteristics of the target backend. Currently, even without using cost functions, LLMLift is still able to generate performant code, as described earlier.
### References
[1] C. A. R. Hoare. An axiomatic basis for computer programming.
[2] Mike Barnett et al. Weakest-precondition of unstructured programs. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Synthesize, Partition, then Adapt: Eliciting Diverse Samples from Foundation Models | Accept (poster) | Summary: This paper proposes a method to generate diverse responses from a language model. Given a synthetic finetuning dataset and a test dataset, they calculate the importance of each finetuning example on each test example using methods such as influence functions, and then partition the dataset into several smaller ones. Finally, they finetune several LoRA weights, one per partition.
Strengths: The authors conducted experiments verifying that the proposed SPA, based on influence functions, improves diversity over the baselines.
Weaknesses: It involves finetuning, and needs to finetune several LoRA weights. This would cause difficulty in parallel generation because we need to store multiple checkpoints in memory.
Technical Quality: 3
Clarity: 3
Questions for Authors: How to find the optimal number of partitions in practice? I suppose there should be a sweet spot.
Can the authors report the actual time for calculating the influence function and training the LoRA models for the experiments in this paper?
If you need multiple LoRA weights to generate diverse responses, are you able to still generate them in parallel without a demanding memory requirement?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: .
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful review. We'd like to address your main concerns.
> Parallel Generation and Memory Efficiency
Recent advances in serving multiple LoRA adaptations in parallel significantly mitigate the need to store multiple full checkpoints. Notably, the S-LoRA system [1] demonstrates that it's possible to efficiently serve thousands of LoRA adapters with minimal overhead, using techniques like Unified Paging to manage memory efficiently. Furthermore, FLoRA [2] allows each input example in a minibatch to be associated with unique low-rank adaptation weights, enabling efficient batching of multiple LoRA adapters. In these systems, we only need to store the LoRA weights in memory, which are often low-rank, significantly reducing memory requirements. These advancements suggest that SPA can be implemented without significant memory constraints or parallel generation difficulties. As the field continues to progress, we anticipate even more efficient solutions for managing multiple LoRA adapters.
[1]: Sheng, Y., Cao, S., Li, D., Hooper, C., Lee, N., Yang, S., Chou, C., Zhu, B., Zheng, L., Keutzer, K., Gonzalez, J.E., & Stoica, I. (2023). S-LoRA: Serving Thousands of Concurrent LoRA Adapters. MLSys 2024
[2]: Wen, Y., & Chaudhuri, S. (2023). Batched Low-Rank Adaptation of Foundation Models. ICLR 2024
> Best number of partitions
Determining the optimal number of partitions is indeed an important consideration. This highly depends on the size of the synthetic dataset, with larger datasets typically inducing a higher optimal number of partitions. We conducted an ablation study (Fig. 6 in our paper) examining the effect of the number of partitions on diversity. We found that the diversity score remains relatively stable as the number of adaptations increases from 8 to 12, suggesting that 8 partitions is roughly the sweet spot for the synthetic dataset we used (75k data points). We also observed improvements in diversity when increasing the number of partitions from 4 to 8, though this data was not included in Fig. 6.
> Computational time
We provided a rough complexity in the limitation section. To offer more details: One epoch of fine-tuning on the 75k synthetic data points takes approximately 8 hours using 2 A40 GPUs. We trained the LoRA weights for 4 epochs. Influence function calculation costs roughly 1 epoch of the training time, which is approximately 8 hours. We want to highlight that the main computational cost occurs during the offline partitioning and adaptation phases.
Future work could explore more efficient data attribution methods, such as TRAK [1] and K-FAC [2], which have the potential to substantially reduce this overhead while maintaining the benefits of our approach.
We appreciate your feedback and hope our clarifications have addressed your concerns about parallel generation and memory requirement. If you have any remaining concerns, please let us know. We're grateful for the opportunity to further explain our work.
[1]: Park, S.M., Georgiev, K., Ilyas, A., Leclerc, G., & Madry, A. (2023). TRAK: Attributing Model Behavior at Scale. International Conference on Machine Learning. ICML 2023
[2]: Grosse, R.B., Bae, J., Anil, C., Elhage, N., Tamkin, A., Tajdini, A., Steiner, B., Li, D., Durmus, E., Perez, E., Hubinger, E., Lukošiūtė, K., Nguyen, K., Joseph, N., McCandlish, S., Kaplan, J., & Bowman, S. (2023). Studying Large Language Model Generalization with Influence Functions.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal! I'm still a bit uncertain on "These advancements suggest that SPA can be implemented without significant memory constraints or parallel generation difficulties. As the field continues to progress, we anticipate even more efficient solutions for managing multiple LoRA adapters." Are you able to implement the combination with some of these methods you mentioned and show some results?
---
Rebuttal 2:
Comment: Thank you for your follow-up question. We conducted two simplified scenarios using S-LoRA codebase and vLLM to compare sampling from multiple LoRA components versus a single model.
Our setup uses a synthetic dataset from the S-LoRA repo with 600 prompts, with response lengths ranging from 8 to 512 tokens.
For each prompt, the model generates tokens until the output length matched the response's length. We used Llama-13b with LoRA rank 16, leading to a 10% memory overhead for storing 8 LoRA adapters compared to a single model. When running the serving experiment, the requests are coming with a request rate of 2 per second.
In the first experiment, we simulated a scenario where multiple GPUs were used for serving. Here, the goal was to generate 8 diverse samples per request. We used 8 A100-40G GPUs. For the single model setup, incoming requests (say x and y) were expanded into batches of 8 identical prompts, $[x_1, x_2, ... , x_8]$ and $[y_1,y_2,...,y_8]$ and handled by vLLM’s continuous batching and routing. Usually, it will distribute $[x_1, x_2]$ to GPU-1 job queue, $[x_3, x_4]$ to GPU-2 job queue and so on.
For the SPA model, the batch becomes $[x_i, y_i, LoRA_i]$, where i ranges from 1 to 8. The router then distributed these to different GPU job queues, with each GPU handling a unique prompt-LoRA pair.
This also gives us 8 samples per request. The results showed that this leads to a minor 3% overhead in both throughput (tokens / sec) and latency (secs / output token), primarily due to the additional step of distributing the LoRA adapters. This approach ensures parallel generation but requires multiple GPUs. However, it is a common practice to use multiple GPUs to serve real-world requests.
| 2 req/s | tokens / sec | secs / output token |
|----------------|----------|------------|
| Single | 473 | 0.035 |
| SPA | 460 | 0.036 |
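The request-routing scheme described above can be sketched as follows (a simplified illustration in plain Python, not the actual vLLM/S-LoRA routing code): each request is expanded into one job per LoRA adapter, and jobs are dealt round-robin across the GPU job queues.

```python
import itertools

# Sketch of the SPA routing: request x becomes jobs (x, LoRA_0) ...
# (x, LoRA_7), distributed across GPU queues (names assumed).
def route(requests, n_adapters, n_gpus):
    queues = [[] for _ in range(n_gpus)]
    gpu = itertools.cycle(range(n_gpus))
    for req in requests:
        for i in range(n_adapters):
            queues[next(gpu)].append((req, f"LoRA_{i}"))
    return queues

queues = route(["x", "y"], n_adapters=8, n_gpus=8)
assert all(len(q) == 2 for q in queues)  # 16 jobs spread over 8 queues
```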
In the second experiment, we explored a single-GPU scenario using S-LoRA. This setup also stores all LoRA weights in memory, leading to a similar 10% memory overhead as in the first scenario. In this experiment, we only generated one sample per request for simplicity. This means that for the single model, the metrics should be roughly the same as those obtained when sampling 8 outputs using 8 GPUs in the above scenario. In the SPA setup, each request was coupled with a random LoRA adapter, which is handled by the S-LoRA library. The result below shows that this configuration leads to a ~10% overhead in throughput and latency.
| 2 req/s | tokens / sec | secs / output token |
|----------------|----------|------------|
| Single | 480 | 0.035 |
| SPA | 433 | 0.040 |
The first result indicates that SPA can be implemented efficiently in a multi-GPU setup with almost no performance loss. Even in a single-GPU scenario, leveraging S-LoRA allows us to sample from multiple LoRA adapters without a significant trade-off. This is also an active research field, and we anticipate even more efficient solutions for sampling from multiple LoRA adapters in future work. We hope these results help address your concerns.
---
Rebuttal 3:
Comment: Thank you! I am afraid that I would still keep my score because, at a high level, I am not quite in favor of the idea of having multiple LoRA weights.
---
Rebuttal Comment 3.1:
Comment: Thank you for your engagement throughout this process. Our approach assumes LLMs undergo post-training with synthetic data, a common practice (e.g., Llama 3's iterative post-training). Given the constraints of this paradigm and motivated by the diminishing returns of increasing amounts of synthetic data, sampling from multiple distributions using LoRA is our strategy to outperform a single model tuned on the entire synthetic dataset. This strategy aims to generate diverse, high-quality responses while addressing the challenge of diminishing returns in LLM post-training with synthetic data.
Although you may not be fully convinced, we hope that our paper and the additional experiments we've provided will contribute more insights to the later discussion. | Summary: This paper presents a framework for eliciting diverse outputs from language models while maintaining quality/accuracy. The framework consists of: first partitioning a (synthetic) dataset of supervised instruction tuning data, then training parameter efficient model adapters on each partition, and finally at inference time sampling from each of the the models trained in the prior step.
Within this framework, the paper contributes empirical evaluations of 2 methods (influence functions and token overlap) for partitioning the instruction tuning datasets and measures the effects on both code completion and natural language generation benchmarks. They find that both methods generate more diverse output over random partitioning (i.e. an ensemble of models trained on random subsets of the data), suggesting that the method used to partition is important.
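The partitioning step can be sketched in a few lines (our illustration only — the names and the argmax assignment rule are assumptions; the paper's actual attribution and partitioning details may differ): each training example is assigned to the partition of the test query it most influences.

```python
def partition_by_influence(influence, k):
    """influence[n][m]: attribution score (e.g., influence function or
    token overlap) of training example n on test query m, m < k.
    Assign each example to the query it most influences (assumed rule)."""
    parts = [[] for _ in range(k)]
    for n, scores in enumerate(influence):
        parts[max(range(k), key=lambda m: scores[m])].append(n)
    return parts

scores = [[0.9, 0.1], [0.2, 0.7], [0.6, 0.5]]
assert partition_by_influence(scores, 2) == [[0, 2], [1]]
```

Each resulting partition would then be used to train one LoRA adapter.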
The authors contrast their method with temperature based sampling approaches to generating diverse data from an LLM.
Strengths: * This paper recognizes certain diminishing returns of scaling up dataset size and instead finds alternative uses of large datasets to generate diverse outputs. This is a neat approach to thinking about other ways to use "scaled-up" data.
* Demonstrates that the data partitioning approach does result in significant differences wrt diversity at lower temperatures. In particular, the authors demonstrate that model ensembles of randomly partitioned data do not generate outputs as diverse as those from models trained on partitions generated by more principled approaches.
* The paper is overall well written and easy to read.
Weaknesses: * Evaluation on natural language (non-code) tasks lacks a quality/accuracy metric for the proposed method (Fig 5). While it is great to see that diversity is increased, the authors do state that maintaining quality is important but it is unclear if that is the case for these tasks. It would be helpful to see "standard" eval metrics for some or all of MMLU, BBH, GPQA and WinoGrande using the methods proposed.
* More clarity on the tradeoff the proposed methods make vs high-temperature sampling would be helpful. At least for the code models, it appears that as temperature increases, the diversity of the single model might converge to that exhibited by SPA. If that is the case, some discussion of the tradeoffs involved (e.g., benefits of being able to sample at low temperature) would strengthen the paper.
* It is difficult to know how sensitive the results are to the diversity of the queries used to generate the partitioning matrix. It would be helpful to understand the performance of a partitioning method that is based on the data but doesn't use human-selected test queries (e.g., randomly select 12 prompts from HumanEval and then partition based on that using the influence and token overlap approaches). This is not, however, a fatal flaw by any means.
Technical Quality: 3
Clarity: 4
Questions for Authors: Experiment Design
* How were the 12 queries for each task generated? Could the authors shed any light on what they think important properties of this query set are?
* What are the generation params for Section 5.3? (is it all greedy sampling)
* Also in Section 5.3, why not compare with a single model at high temperature (Similar to the "Single" condition used for the code generation tasks)?
* Results
* How do the charts in figure 4 look as temperature increases even further? Why did you stop at 0.5? The CodeLlama paper reports their main results at temperature=0.8, so it would be useful for readers to see if and where the methods converge in terms of diversity.
* Did you try using fewer than 8 model adapters? While there doesn't seem to be any much improvement observed in using more than 8 adapters in this setting, do you have a sense of how few adapters one could use?
* Metrics
* For the average KL divergence metric, could you explain how you go from token-level probability distributions to a sequence-level probability distribution for Pi and Pj? Or do you mean something else? I'm mostly trying to understand how you handle sequences of different lengths in the computation.
* For the sample diversity/Diversity score metric what value of K (line 273) is used in the experiments? And is the total number of samples generated in each condition the same?
* In figure 4 how many samples are generated for each method to calculate pass@k? And in the case of the SPA methods how is that distributed across the various adapters? My understanding is that there are 8 adapters in this experiment.
* Do the authors have any thoughts on why pass@1 isn't higher for SPA compared to single model?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes. Though the paper could also better acknowledge the extra computational cost at inference time (compared to just sampling multiple times at high temperature).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful review. We'd like to address your main concerns.
> Accuracy on NL tasks
The SPA methods (influence-based, lexical-based, and random) all achieve accuracy similar to the single model, reaching the same conclusion as the pass@1 metric in Fig. 4a. We didn't observe improved pass@k with increased diversity on these tasks, likely because most are multiple-choice questions. For diversity measurement, we asked models to continue generating text even after producing an answer choice. We omitted accuracy results as they didn't provide new insights beyond the code generation tasks. However, we acknowledge that including this data would have provided a more complete picture and will add it to the revised paper.
> Trade-off in high temperature sampling, temperature plot stops at 0.5
You're correct that at higher temperatures, the diversity of single models converges to SPA models. In practice, low temperature or even greedy sampling is often preferred when average sample quality is crucial. As shown in Fig. 4, high temperature leads to a decrease in the pass@1 metric, indicating a drop in sample quality. High temperature sampling may deviate from the learned distribution and produce hallucination or less coherent outputs [1]. We briefly discussed the disadvantages of high temperature in the introduction and will expand it in the revision.
We limited our plots to temperatures up to 0.5 as SPA's primary goal is to maintain average sample quality (measured by pass@1) while achieving diversity. Fig. 4a shows that temperature 0.5 already leads to a significant drop in pass@1. However, we acknowledge the importance of understanding where methods converge in terms of diversity. Our preliminary experiments indicate convergence around temperature 0.8. We will include these higher temperature results in our revision.
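The temperature trade-off discussed here can be seen directly in the softmax: dividing the logits by the temperature T sharpens the next-token distribution for T < 1 and flattens it for T > 1. A minimal illustration (example logit values are ours):

```python
import math

def softmax_t(logits, temperature):
    # Scale logits by 1/T, then apply a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

low = softmax_t([2.0, 1.0, 0.5], 0.2)
high = softmax_t([2.0, 1.0, 0.5], 1.5)
assert low[0] > high[0]  # low T puts more mass on the top token
```

This is why high-temperature sampling buys diversity at the cost of sampling lower-probability (often lower-quality) continuations, degrading pass@1.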
> Number of adapters
The optimal number of adapters highly depends on the size of the synthetic dataset, with larger datasets typically inducing a higher optimal number of adapters. We observed improvements in diversity when increasing the number of adapters from 4 to 8, though this data was not included in Fig. 6, suggesting that 8 adapters is roughly the sweet spot for the synthetic dataset we used (75k data points).
> KL divergence, hyper-parameters, distribution over LoRA adapters
We measured the KL divergence at the first decoding step, which is at the token level. This measures the difference in states across different adaptations after seeing the same prompts. This is not applicable to the single model approach.
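As an illustration of this metric (our sketch, not necessarily the authors' exact implementation), the average pairwise KL over the adapters' first-decoding-step token distributions could be computed as:

```python
import math

def kl(p, q):
    # KL(p || q) for discrete distributions over the same vocabulary;
    # assumes q has full support wherever p does.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def avg_pairwise_kl(dists):
    # dists: one first-decoding-step token distribution per adapter.
    k = len(dists)
    return sum(kl(dists[i], dists[j])
               for i in range(k) for j in range(k) if i != j) / (k * (k - 1))

assert avg_pairwise_kl([[0.5, 0.5], [0.5, 0.5]]) == 0.0  # identical adapters
assert avg_pairwise_kl([[0.9, 0.1], [0.1, 0.9]]) > 0.0   # divergent adapters
```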
For each method (when temperature > 0), we sampled a total of 120 samples to calculate pass@1, pass@k, and diversity score (K=5). This larger sample size helps reduce metric variance. For SPA methods, we distributed samples evenly among adapters (15 samples per adapter). Future work could explore weighted sampling in SPA. The exception is greedy sampling (temp=0 in Fig. 4 and Sec. 5.3), where we only get one sample per adapter, leading to a total of 8 samples.
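For reference, pass@k over such a sample pool is commonly computed with the standard unbiased estimator of Chen et al. (2021); we assume, but cannot confirm from the text, that the authors use this formula:

```python
from math import comb

def pass_at_k(n, c, k):
    # Unbiased pass@k: probability that at least one of k samples drawn
    # without replacement from n generations (c of which pass) is correct.
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

assert pass_at_k(120, 0, 5) == 0.0    # no correct samples
assert pass_at_k(120, 120, 1) == 1.0  # all samples correct
assert abs(pass_at_k(10, 1, 1) - 0.1) < 1e-12
```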
> Why pass@1 is not higher, how to select queries.
SPA is primarily designed to improve diversity, which doesn't necessarily improve average sample quality. Hence, it shows similar pass@1 performance to the single model.
Our principle was to create a diverse query set. We used few-shot prompting with ChatGPT to generate ~20 queries, then manually selected 8 covering a wide range of topics.
> Single model at higher temp in Sec. 5.3
We didn't include comparisons with single models at higher temperature because, as shown in Fig. 4a, higher temperatures lead to sample quality drops. However, we acknowledge that including this comparison could provide valuable insights. In our revision, we will add this comparison and discuss the trade-offs.
We appreciate your feedback and hope this clarifies your concern. Please let us know if there are any remaining concerns.
[1]: Lee, M. (2023). A Mathematical Investigation of Hallucination and Creativity in GPT Models. Mathematics.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response and clarifications. I overall still think this is an interesting paper!
It might be helpful to post the accuracy scores for the NL tasks that you mentioned in your rebuttal here. If accepted readers may also be interested in the distribution of accuracy across the various adapters.
---
Rebuttal 2:
Comment: Thank you for your positive feedback. Here are the accuracy results:
| Tasks | BBH | GPQA | MMLU | WinoGrande |
|----------------|----------|------------|------------|------------|
| Single | 46.66 | 4.14 | 55.40 | 76.5 |
| SPA | 46.48 | 3.80 | 55.26 | 76.3 |
SPA maintains comparable accuracy to the single model. The GPQA evaluation has high variance; even small changes in the prompt can lead to big differences in accuracy. Despite this, we still included GPQA because it provides a good source for evaluating diversity due to its broad range of topics. We will include the distribution in the next revision as you suggested.
We've also added new experiments in our reply to reviewer 4Daa addressing concerns about parallel generation and memory overhead. We greatly appreciate your thoughtful feedback and hope we have addressed your concerns. We're keen on your support for our submission. | Summary: The paper introduces Synthesize-Partition-Adapt (SPA), a framework designed to generate diverse and high-quality responses from foundation models. SPA uses synthetic data and data attribution methods to partition data into subsets, training multiple model adaptations for these subsets.
Strengths: I believe this paper is studying an important problem. The diversity problem in LLM has received attention recently.
Weaknesses: - The motivation for the three phases (synthesize, partition, then adapt) is not clearly discussed (see my questions). The authors spend more time describing what they are doing than why they are doing so, or why these specific design choices would lead to the desired outcome, namely a diverse sampling technique.
- The experiments only compare with two simple baselines ablated from the proposed methods, omitting existing baselines in the literature.
- The paper omits most of the existing works in the literature regarding diverse sampling for LLMs and only considers simple temperature sampling techniques.
Technical Quality: 1
Clarity: 2
Questions for Authors: - The motivation for the three phases (synthesize, partition, then adapt) is not clearly discussed. How can we ensure that the existing dataset is diverse?
- The framework heavily relies on an existing synthetic dataset whose diversity is already a concern. As the authors mentioned, generating a diverse synthetic dataset remains an open problem with current generative models. Would fine-tuning on these synthetic datasets destroy the generalizability of the models?
- The method relies on $M$ questions requiring varied expert knowledge to compute the influence matrix for partitioning. Do those $M$ questions need to be manually chosen?
- Why would partitioning the dataset using importance scores and then fine-tuning different LoRAs on the K partitions lead to K diverse LoRA components? The connection is vague to me.
- The method relies on a predefined number of partitions K (i.e., fine-tuned LoRA components) at training time. What if, at inference time, we want to generate many more than K diverse responses? Does each fine-tuned model still suffer from the diversity problem?
- During inference, LoRA components are sampled randomly; does this affect the quality of generated responses? What if some components are biased, or learn some skills while losing others? It is not clear which LoRA learns which expertise after partitioning and fine-tuning. Could each component have a weight depending on the question?
- The paper is missing most of the important baselines and citations, I listed some here for reference:
- KL-Divergence Guided Temperature Sampling
- Diversity of Thought Improves Reasoning Abilities of LLMs
- Large Language Models as In-context AI Generators for Quality-Diversity
- Controlled Decoding from Language Models
- Quality-Diversity through AI Feedback
- Language Model Decoding as Direct Metrics Optimization
Confidence: 5
Soundness: 1
Presentation: 2
Contribution: 1
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful review. We'd like to address your main concerns.
> Motivation of why doing SPA and why doing so leads to diverse samples. Role of synthetic data and the generalizability of it.
We would first like to clarify that LLMs must typically go through synthetic data tuning before deployment, whether for domain adaptation, value alignment, or distilling knowledge from a stronger model. SPA is designed to **utilize** this existing synthetic data used in LLM post-training stages (or mid-training stage). Operating within this existing paradigm, SPA addresses the inefficiency of tuning a single model on **increasingly larger** synthetic datasets due to diminishing returns.
The first motivation, as stated in Section 3, is to turn this potential inefficiency into an advantage for diversity from the multi-model perspective. The second motivation is that sampling from multiple distinct distributions (i.e., different model adaptations) naturally yields more diverse outputs than sampling multiple times from a single distribution (i.e., a single model).
This is why SPA leads to diverse samples by creating multiple model adaptations, each trained on a different partition of the synthetic data. Importantly, due to diminishing returns, training on data partitions doesn't significantly affect accuracy, as shown in Sec. 3.
Furthermore, our partitioning strategy, especially when using influence functions, creates subsets of data that induce distinct model skills. This further enhances the diversity of the resulting adaptations (compared to the random and lexical partitions).
Regarding generalizability, since SPA uses data already part of the post-training process, it doesn’t introduce new generalization risks. While our method benefits from more diverse data, it's designed to improve diversity even when working with less diverse synthetic datasets, by being able to sample from multiple distinct distributions.
> More comparisons and related works
Thank you for this feedback. For the additional comparisons, please see the general response. We didn't compare to QD search methods because SPA and QD methods operate on fundamentally different principles. SPA operates at the model level, creating multiple models during an offline process. The main computational cost occurs during the offline partitioning and adaptation phases. This allows for quick, parallel diverse sampling at inference time. In contrast, QD search operates at the generation level, iteratively going through mutation, evaluation, and refinement cycles. It introduces overhead during sampling, making it less suitable for real-time applications. We will also expand our literature review to include all the suggested works and other relevant studies.
> Connection between partitioning and diverse LoRA components
The influence function captures how training examples affect model predictions on specific test queries. Our partitioning strategy groups examples with similar influence patterns, resulting in K clusters. Each cluster consists of synthetic data that most strongly influences a particular test query (detailed at line 257). When fine-tuning LoRA on these individual clusters, each component focuses on the skills emphasized by its associated test query.
Importantly, we don't require extremely diverse LoRA components because the diversity primarily stems from sampling across multiple distinct distributions (motivation #2). Even with random partitioning, this strategy ensures that these distributions are sufficiently different to generate diverse outputs when sampled collectively.
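The grouping rule described above (assign each training example to the probe query it most strongly influences) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name and the toy influence matrix are assumptions, and the influence matrix is assumed to be precomputed.

```python
import numpy as np

def partition_by_influence(influence, K):
    """Assign each training example (row) to the cluster of the test
    query (column) it most strongly influences; one cluster per query."""
    assert influence.shape[1] == K
    return influence.argmax(axis=1)

# Toy influence matrix: 6 training examples x 3 probe queries.
influence = np.array([
    [0.9, 0.1, 0.0],
    [0.2, 0.8, 0.1],
    [0.0, 0.3, 0.7],
    [0.6, 0.2, 0.2],
    [0.1, 0.7, 0.2],
    [0.1, 0.1, 0.8],
])
labels = partition_by_influence(influence, K=3)
# Each partition would then be used to fine-tune one LoRA adapter.
```

Each resulting cluster groups the synthetic examples that most strongly influence one probe query, matching the description at line 257 of the paper.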
> Generating more than K diverse responses
While we train K adapters, SPA is not limited to generating only K diverse responses. Each adapter can produce multiple outputs, especially when combined with other sampling techniques (e.g., temperature sampling). In our experiments, for temperatures > 0, we generated a total of 120 samples to calculate pass@1, pass@k, and diversity scores. For SPA, this meant sampling 15 samples from each of the 8 adapters. In contrast, for the single model baseline, all 120 samples were drawn from the same distribution. This sampling strategy highlights a key advantage of SPA: even when generating many more than K samples, we're still drawing from multiple distinct distributions. This inherently promotes diversity compared to repeatedly sampling from a single distribution.
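The sampling scheme above (15 samples from each of 8 adapters, versus all 120 from one model) can be sketched as below. The `adapters` here are stand-in callables rather than actual LoRA models, and the names are illustrative:

```python
import random

def spa_sample(adapters, prompt, n_total):
    """Draw n_total samples by taking n_total // K from each of the
    K adapters, instead of all n_total from a single distribution."""
    per_adapter = n_total // len(adapters)
    samples = []
    for generate in adapters:
        samples.extend(generate(prompt) for _ in range(per_adapter))
    return samples

# Stand-in "adapters": each index models a distinct fine-tuned distribution.
adapters = [lambda p, i=i: (i, f"{p}#{random.random():.3f}") for i in range(8)]
outs = spa_sample(adapters, "query", n_total=120)  # 15 samples per adapter
```

Because each draw comes from one of K distinct distributions, the pooled sample set is more diverse than repeated draws from a single model, which is the core of motivation #2.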
Moreover, SPA is complementary to other diversity-enhancing methods.
> Quality and bias in randomly sampled LoRA components
Our experiments show that random sampling of LoRA components during inference doesn't significantly affect response quality, as evidenced by our pass@1 results (also because of motivation #1). Our partitioning strategy aims to create balanced subsets that cover different aspects. The additional comparisons in the general response showed that SPA is less biased than Diversity of Thought. However, we acknowledge that some components might specialize more in certain areas. In practice, a held-out dataset can be used to identify and remove overly biased components before deployment. Future research could explore weighted component selection, using a routing strategy similar to that in MoE.
> Selection of M queries
The M questions don't necessarily need to be manually chosen. In our experiments, we used a semi-automated approach: we few-shot prompted ChatGPT to generate ~20 diverse queries and then manually selected 8 of them to cover a wide range of topics (Appendix C).
We appreciate your feedback and hope our clarifications have addressed your concerns about motivation and the empirical results. If you have any remaining concerns, please let us know. We're grateful for the opportunity to further explain our work.
---
Rebuttal Comment 1.1:
Title: Thank you!
Comment: Thank the authors for the rebuttal and conducting additional experiments. For this reason, I increased my score, however, I still tend to rejection for this paper since I'm not really being convinced by diverse sampling process only via multiple LoRA components. Each LoRA component still suffers from the diversity sampling problem since the training procedure for each LoRA component is still the same.
Furthermore, calibrating the quality-diversity trade-off typically depends on the use case. Predefining K before fine-tuning makes it difficult for users to calibrate this trade-off, thus limiting its use in practice. In those cases, we then need to fall back on calibrating this trade-off via temperature, which, as the authors argued, has drawbacks. This also relates to my concerns regarding the random choice of LoRA components during sampling. What if, after fine-tuning, we want to generate the single most probable, high-quality answer?
---
Rebuttal 2:
Comment: We sincerely appreciate your reconsideration and the increased score. We'd like to take the opportunity to address your remaining concerns:
> The same training procedure for each LoRA component leads to less diversity
Our approach operates under the assumption that LLMs undergo post-training using synthetic data, which is a common practice (e.g., iterative post-training in Llama 3 utilizes synthetic data in each round). Under this assumption, we have limited flexibility to modify the training procedure.
Given these constraints, and motivated by diminishing returns, sampling from multiple distributions using LoRA components is our strategy to outperform a single model tuned on the entire synthetic dataset. While each LoRA component on its own may face diversity challenges like you mentioned, it won't be worse than the single model scenario.
Moreover, if modifying the training procedure were allowed, we could incorporate mutual information as one of the objectives when optimizing the LoRAs; this would encourage the LoRA components to be as diverse as possible.
> Quality-diversity trade-off
We agree that calibrating this trade-off may be challenging in practice. On the other hand, SPA offers an improved Pareto frontier, improving diversity even at low temperatures. Users can now achieve good diversity at low temperatures (even when greedy), which was previously difficult. Thus, while SPA doesn't eliminate the trade-off, it provides users with more options.
> Predefined K and component quality, get the most probable sample
We want to clarify that K is not arbitrarily selected but depends on the synthetic dataset size and the strength of diminishing returns. As shown in Fig. 4a, $\frac{1}{K}\sum_{i=1}^{K} \text{Accuracy}(\text{model}_{\phi_i})$ roughly equals the accuracy of a single model $\phi$ fine-tuned on the entire synthetic dataset. The average quality of the LoRA components is roughly the same as that of the single-model baseline.
For cases requiring the most probable answer (only one sample is required), we can conduct an offline evaluation on a held-out dataset before deployment, then sample from the best-performing LoRA component.
We hope our clarifications can further address your concerns. | Summary: This paper aims to improve the diversity of LLM generations. Current methods are unsatisfactory because gains in diversity come at the cost of quality (for instance, worse performance); this is the case for temperature sampling. The authors propose Synthesize, Partition, and Adapt, which consists of taking a synthetic dataset, partitioning it (using influence functions, for instance, and a few high-quality test exemplars), training multiple models on each of these subsets, and then using this collection of models for sampling. They show that this approach achieves better diversity while maintaining good performance on code and MMLU.
Strengths: Overall, I like this paper. I think the question addressed by the authors is central: to me, one of the biggest issues with current LLMs is the lack of diversity in their generations. To my knowledge, this paper is one of the first to propose a method to alleviate this issue, and I hope many other works will follow. The paper is well presented, easy to read, and the results are clear.
Weaknesses: I want to raise a few points regarding the approach:
- **In-context approach?**: One criticism could be that the method seems a bit computationally heavy (because of the data clustering step plus training many models). One of the most commonly used methods in practice to ensure diversity is to select different text chunks and use them to seed the prompt, yielding a different generation each time. This was used, for instance, in the Phi and Cosmopedia approaches to generate synthetic textbooks [1]. In other words, do you think the fine-tuning step is necessary? Wouldn't putting one of the data clusters (created with data attribution) **in-context** be enough?
- **Role of the rank of LoRA**: I imagine the rank of LoRA may play a role here. Intuitively, if the rank is very small, all the models will have similar representations even after fine-tuning; as you increase the rank of LoRA, you may increase the diversity of the generations. Is that correct?
- **Diversity measures**: The diversity measures proposed by the authors are legit. However, maybe another interesting one would be to train a new model on datasets that they have generated with their approach. If the newly trained model gets a better performance, I think this would be the best proof for diversity.
- **Increasing the number of clusters leads to better diversity + performance?**: the current paper just considers the case where there are 8 clusters in the data attribution phase. One experiment that would have been nice is also to understand in the case of code, how pass@5 improves as the number of clusters increase ?
- **Some improvements that may be done for the introduction**: I think it is a shame that the method is not fully presented in the introduction. Maybe it would be good to summarize it in a few words? Besides, I believe that it would be nicer if Figure 3 is in page 2 as a main figure.
[1] Gunasekar, S., Zhang, Y., Aneja, J., Mendes, C. C. T., Del Giorno, A., Gopi, S., ... & Li, Y. (2023). Textbooks are all you need. arXiv preprint arXiv:2306.11644.
Technical Quality: 3
Clarity: 3
Questions for Authors: I mentioned my questions in the weaknesses section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The authors mentioned the limitations of their work and they look sound to me.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful review. We'd like to address your main concerns.
> In-context vs. fine-tuning
We can formulate the difference as sampling from $P_{\phi_1} (y|x)$ and $P_{\phi_1}(y|x')$ for in-context versus $P_{\phi_1}(y|x)$ and $P_{\phi_2}(y|x)$ for our approach ($\phi$ denotes model parameters). We argue that sampling from multiple distinct distributions (SPA) produces more diversity than modifying the prompt. In the general response, we conducted additional experiments comparing SPA with Diversity of Thought, an in-context approach. The results show SPA's significant advantage. Moreover, one could potentially combine them by sampling from $P_{\phi_1}(y|x)$ and $P_{\phi_2}(y|x')$, further enhancing diversity.
Finally, we assume the LLM must go through the synthetic data tuning before deployment, whether for domain adaptation, value alignment, or distilling knowledge from a stronger model. Given the diminishing returns observed with increasing synthetic data, the motivation for SPA remains strong.
> Role of LoRA rank
We agree that your intuition about LoRA rank is correct. A smaller rank would lead to more similar representations, potentially reducing diversity, while a larger rank might enhance it. We used a fixed rank for consistency in our experiments, but exploring different ranks is an interesting direction for future work. Yet, the benefit of sampling from multiple distinct distributions remains valid regardless of the rank.
> Number of adapters
The optimal number of adapters highly depends on the size of the synthetic dataset, with larger datasets typically inducing a higher optimal number of adapters. We observed improvements in diversity when increasing the number of adapters from 4 to 8, though this data was not included in Fig. 6, suggesting that 8 adapters is roughly the sweet spot for the synthetic dataset we used (75k data points).
> Further improvements
Thank you for your valuable suggestions regarding new diversity measures and improvements to the intro section. Regarding the new diversity measure, we'd appreciate clarification on training a new model on the generated datasets, as our single-model adaptation baseline is already trained on the entire synthetic dataset. We'll summarize the method more concisely in the intro and move Fig. 3 ahead in our revision.
We appreciate your positive feedback and hope this addresses your concerns. We kindly ask for your support of our paper.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for responding to my questions. After reading the other reviews, I still believe that the paper is meaningful, has an interesting contribution and may benefit the community. I maintain my score.
---
Rebuttal 2:
Comment: Thank you for your positive feedback. We've also added new experiments in our reply to reviewer 4Daa addressing concerns about parallel generation and memory overhead. We're keen on your support for our submission. | Rebuttal 1:
Rebuttal: We thank all reviewers for their valuable feedback. Here, we address the concern raised by reviewer j8SA regarding limited comparisons.
Our initial comparisons focused on ablations as SPA represents a novel multi-model adaptation method addressing diminishing returns of abundant synthetic data. Unlike most methods that modify the sampling process, SPA operates at the model level, making it complementary to existing methods.
To provide more context, we've conducted additional experiments comparing SPA with KL-divergence guided temperature sampling [1] and Diversity of Thought [2].
For KL-guided temperature sampling, following [1], we maintained two parallel decoding sequences: one with full information and another identical but without the docstring (mainly test cases). At each decoding step, we calculated the KL divergence between the logits of these two sequences' forward passes, using it to rescale the temperature. The newly sampled token was then added to both sequences.
We swept the hyperparameters $T_0 = \\{0.1, 0.2, …, 0.5, 0.8, 1.0\\}$ and $\sigma = \\{1e-3, 1e-2, 0.1, 1, 10\\}$, reporting the best runs for pass@1 and pass@5 separately. Notably, no combination achieved both high pass@1 and high pass@5 simultaneously, as SPA did. The results show that KL-guided temperature scaling generates diverse samples, but at the cost of average sample quality.
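As a rough illustration of the mechanism described above, the sketch below rescales a base temperature by the KL divergence between the next-token distributions of the full and ablated contexts. The exact rescaling rule used in [1] may differ; the function names and the linear rescaling form here are assumptions for illustration only.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a logit vector."""
    z = logits - logits.max()
    p = np.exp(z)
    return p / p.sum()

def kl_scaled_temperature(logits_full, logits_ablated, t0, sigma):
    """Scale the base temperature t0 by the KL divergence between the
    next-token distributions of two parallel decoding contexts.
    (Illustrative rescaling rule; not necessarily the paper's.)"""
    p = softmax(logits_full)
    q = softmax(logits_ablated)
    kl = float(np.sum(p * np.log(p / q)))
    return t0 * (1.0 + sigma * kl)

logits = np.array([2.0, 1.0, 0.1])
# Identical contexts -> KL = 0 -> temperature stays at t0.
t_same = kl_scaled_temperature(logits, logits, t0=0.2, sigma=1.0)
# Diverging contexts -> KL > 0 -> temperature is raised.
t_diff = kl_scaled_temperature(logits, logits[::-1].copy(), t0=0.2, sigma=1.0)
```

When the two contexts agree on the next token, sampling stays near-greedy; when they diverge, the temperature rises, which is why this baseline trades average quality for diversity.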
For DIV-SE, following Fig. 5 of the paper, we generated 8 reasoning-approach templates for HumanEval problems. We used only 3 of them (Iterative, BruteForce, Greedy) in the final evaluation, due to the poor pass@1 performance of the others: the reasoning templates do not generalize well to the diverse coding challenges in HumanEval, though they work well in the math domain. The results show that SPA is better on all metrics when sampled at different temperatures.
These comparisons further show SPA's advantage in sampling from multiple distinct distributions.
| HumanEval | pass@1 | pass@5 | Diversity |
|------------------------|--------|--------|-----------|
| DIV-SE temp 0.1 | 46.78 | 58.70 | 0.72 |
| Influence-SPA temp 0.1 | 49.80 | 69.59 | 0.85 |
| DIV-SE temp 0.2 | 45.61 | 61.70 | 0.79 |
| Influence-SPA temp 0.2 | 48.74 | 69.82 | 0.86 |
| DIV-SE temp 0.3 | 44.57 | 62.01 | 0.84 |
| Influence-SPA temp 0.3 | 47.64 | 70.89 | 0.88 |
| DIV-SE temp 0.4 | 43.63 | 63.34 | 0.88 |
| Influence-SPA temp 0.4 | 46.97 | 71.91 | 0.91 |
| DIV-SE temp 0.5 | 42.01 | 64.36 | 0.91 |
| Influence-SPA temp 0.5 | 46.82 | 72.49 | 0.92 |
| KL-guided temp 0.1 | 49.24 | 60.84 | 0.54 |
| KL-guided temp 0.8 | 42.17 | 66.63 | 0.93 |
[1] Chang, C., Reitter, D., Aksitov, R., & Sung, Y. (2023). KL-Divergence Guided Temperature Sampling. ArXiv, abs/2306.01286.
[2] Naik, R., Chandrasekaran, V., Yuksekgonul, M., Palangi, H., & Nushi, B. (2023). Diversity of Thought Improves Reasoning Abilities of LLMs. ArXiv, abs/2310.07088. | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
Interpretable Generalized Additive Models for Datasets with Missing Values | Accept (poster) | Summary: The paper discusses challenges posed by missing data in important datasets for machine learning models. Existing methods like imputation or using indicator variables for missingness can compromise model interpretability or introduce complexity and reduced sparsity. The authors propose M-GAM, a sparse, generalized additive modeling approach. M-GAM addresses these issues by incorporating missingness indicators and their interactions, while maintaining sparsity through l0 regularization. They demonstrate that M-GAM achieves comparable or better accuracy than existing methods while significantly improving sparsity compared to imputation or straightforward inclusion of indicator variables.
Strengths: - The paper is well-written
- The proposed method is novel
- Extensive experiments are conducted to robustly support the claims.
Weaknesses: - The proposed method is constrained by its reliance on l_0 regularization.
Technical Quality: 2
Clarity: 2
Questions for Authors: N/A
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review! We appreciate your recognition that this work is well written, novel, and contains extensive experiments, and we are open to discussing any additional concerns you might have.
> The proposed method is constrained by its reliance on l_0 regularization.
While $\ell_0$ regularization has historically been difficult to optimize, a key strength of our paper is that it leverages the fast optimization framework from Liu et al. 2022, which introduces several computational tricks that make it quite manageable. This lets us gain the substantial benefits of $\ell_0$ regularization (extreme sparsity in a setting that is prone to producing dense models) without suffering substantial runtime costs.
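For intuition, here is a minimal sketch of the kind of design-matrix augmentation the M-GAM setup implies: zero-filled values, per-feature missingness indicators, and value-by-indicator interaction terms, on top of which a sparse ($\ell_0$-regularized) fit would then select terms. This is an illustrative reconstruction under those assumptions, not the paper's implementation, and the function name is made up.

```python
import numpy as np

def augment_with_missingness(X):
    """Build an augmented design matrix from data with NaNs:
    [zero-filled values | missingness indicators | value x indicator terms].
    A sparse GAM fit on these columns can learn missingness effects directly."""
    miss = np.isnan(X).astype(float)              # m_ij = 1 iff x_ij is missing
    vals = np.nan_to_num(X, nan=0.0)              # observed values, 0 where missing
    inter = vals[:, :, None] * miss[:, None, :]   # x_ij * m_ik interactions
    return np.hstack([vals, miss, inter.reshape(len(X), -1)])

X = np.array([[1.0, np.nan],
              [np.nan, 2.0]])
Z = augment_with_missingness(X)   # 2 + 2 + 2*2 = 8 columns
```

The $\ell_0$ penalty then keeps only the few indicator and interaction columns that actually help, which is how sparsity is preserved despite the quadratic blow-up in candidate terms.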
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I've decided to keep my original score.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your feedback! | Summary: The paper presents M-GAM, a novel generalized additive model that incorporates missingness indicators while maintaining a sparse model via l_0 regularization. Results show that on augmented datasets with values missing at random, M-GAM provides better performance; on real-world (non-augmented) datasets, M-GAM achieves similar performance but is faster than impute-then-predict.
Strengths: - Novel approach for dealing with missing values in GAMs while keeping the model sparse.
- Interesting theoretical results to support the work.
- The paper is generally clear and written well.
Weaknesses: - My understanding is that the setting considered in this paper seems to be limited to GAMs with main effects and not higher-order GAMs (in particular, GAM with pairwise interaction effects, GA2M [Lou et al., KDD 2013]).
- There has been significant work on handling missing values in decision trees (which are also a widely used interpretable model). There is very little on this, both in terms of discussion (e.g., are those approaches applicable to GAMs? does the proposed approach share similarities with existing approaches in decision trees?) and in terms of experimental results (how do decision trees with missing-value handling compare to GAMs with missing values? at the moment only SMIM with a decision tree is considered, and not approaches that are inherent to decision trees).
- The experimental results on real datasets do not demonstrate strong improvement in performance (but there is improvement in other factors, e.g., runtime compared to impute-then-predict).
- The analysis of other models, including decision trees (with and without SMIM), does not include runtimes, so it is not clear whether the new model is also significantly faster than, say, decision trees, logistic regression, or random forests with SMIM.
Technical Quality: 3
Clarity: 3
Questions for Authors: I would appreciate the authors' response to the weaknesses listed above. In addition, I was wondering if the authors have considered incorporating this mechanism into one of the recent neural GAM approaches, e.g., Neural Additive Models [], which could potentially allow for better scalability?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your review, and your recognition of this work’s novelty, theoretical foundations, and clear writing. We have responded to each of your criticisms below, and look forward to any continued discussion.
> My understanding is that the setting considered in this paper seems to be limited to GAMs with main effects and not higher-order GAMs (in particular, GAM with pairwise interaction effects, GA2M [Lou et al., KDD 2013]).
This is generally correct, although M-GAMs could naturally be extended to include interaction effects of an arbitrary order – both between missingness and between observed values, and between different combinations of observed values. However, we note that, even in Lou et al. 2013, higher order GAMs showed substantially improved classification performance relative to main effect GAMs only on digit recognition tasks (“Letter”, “Gisette”) and not on datasets that would be treated as tabular today. We did experiment with additional interaction terms, along the lines of GA2M, but did not find any scenarios where this improved performance. As such, we did not feel comfortable including it in the paper.
>There has been significant work on handling with missing values in decision trees (that are also a widely used interpretable model). There is very little both in terms of discussion (e.g., are the approaches there applicable to GAMs, does the proposed approach share some similarities to existing approaches in decision trees) as well as in terms of experimental results (how do decision trees with missing values compare to GAM with missing values, at the moment only SMIM with decision tree is considered and not approaches that are inherent to decision trees).
This is a fair point. We will add a discussion of this work in the related work section, and we have added comparisons to some decision-tree-based methods that explicitly handle missingness (as implemented in scikit-learn), as shown in the shared response. In general, because these papers introduce changes that are specific to decision trees, such as default traversal paths when data is missing, they are not directly applicable to M-GAM. We found that decision trees with this explicit handling improved runtime and offered interpretability, but generally had poorer performance than M-GAM. Random forests offered comparable performance and runtime, at the cost of interpretability.
In summary, we’re happy to address this work in related work, but it does not decrease the value of our own approach, which performs well and is separate from the decision-tree-inherent methods you’ve mentioned.
> The experimental results on real datasets do not demonstrate strong improvement in performance (but there is improvement in other factors, e.g., runtime compared to impute-then-predict).
While this is true, the fact that M-GAM has comparable performance to other methods on real data means that our runtime and interpretability gains come with no substantial costs. M-GAM quickly provides a sparse, transparent model that is as accurate as complex impute-then-predict baselines, which we see as a strength.
> The analysis on other models, including decision trees (with and without SMIM) does not include runtimes so it is not clear if the new model is also significantly faster from, say, decision trees or logistic regression or random forests with SMIM.
We have added some such comparisons in the global response. Please note that the SMIM framework consists of both imputing missing data and providing missingness indicators, meaning that it is no faster than the given runtimes with imputation. Additionally, these methods that use imputation introduce complexity, sacrificing interpretability in a way that M-GAM avoids.
> I would appreciate the authors response to the weaknesses listed above. In addition I was wondering if the authors have considered incorporating this mechanism into one of the recent neural GAM approaches, e.g., Neural Additive Models [] which will allow potentially for better scalability?
We hadn’t previously, but it's an interesting idea. However, based on the results presented in [1], it seems that NAMs will not actually scale better than M-GAMs. Our current $\ell_0$-regularized method is already quite fast. If we were to discard our approach in favor of a neural GAM with some other sparsity approach, we don't anticipate benefits to scalability, because such an approach would effectively need to fit neural nets for up to (# features)$^2$ shape functions to cover missingness interactions, and a sparsity-regularized neural GAM can take quite a while even for a simple dataset. For example, Table 5 of [1] shows a neural GAM taking ~30 seconds on a 6172-sample, 13-feature dataset, COMPAS, which is slower than our method on the much more computationally difficult FICO dataset (10,459 samples, 23 features), even when we augment the FICO dataset with missingness interactions.
[1] Shiyun Xu, Zhiqi Bu, Pratik Chaudhari, Ian J. Barnett. Sparse Neural Additive Model: Interpretable Deep Learning with Feature Selection via Group Sparsity. ECML PKDD 2023
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I will increase my score by one point.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your feedback, and for increasing your score! | Summary: The paper introduces M-GAM, a novel extension of Generalized Additive Models (GAMs) designed to maintain interpretability while handling datasets with missing features. By incorporating missingness indicators and their interaction terms through ℓ0 regularization, M-GAM balances accuracy and sparsity. The model provides comparable or superior performance to traditional imputation methods while avoiding the complexity and overfitting issues associated with the naive inclusion of missingness indicators.
Strengths: - Very clear explanation in section 3.
- Sparsity and interpretability are highly relevant for practitioners.
Weaknesses: - The title should convey that the study is limited to GAMs
- The runtime comparison in 4.3 is a bit vacuous because it depends on the exact implementation, etc. Maybe looking at computational complexity would be more interesting.
- Empirical evaluation is somewhat limited
Technical Quality: 3
Clarity: 3
Questions for Authors: - l.27, it seems strange to contrast the choice of (not) using missingness with the choice of using l0 regularization. They seem to be rather orthogonal choices.
- Why l0 and not some other regularization?
- It seems questionable to encode reasons for missingness with natural numbers, as these imply an ordering.
- It looks like all other methods have better mean accuracy than M-GAM in Figure 6, Breast Cancer dataset. Is this correct? If so, the text should discuss this rather than glossing over it.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: - The authors state interpretability as one of M-GAM's main advantages. I would like to see this made explicit, e.g., with an example, especially in comparison to existing methods.
- Empirical performance is not great.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful review – we appreciate your recognition that sparsity and interpretability are particularly important for practitioners. We hope we have addressed each of your concerns below, and look forward to any ongoing discussion.
> The title should convey that the study is limited to GAMs
This is a fair point – we plan to update the title to something like “Interpretable Generalized Additive Models for Datasets with Missing Values”. GAMs are currently among the most popular forms of interpretable ML models [1].
[1] https://github.com/interpretml/interpret
> Runtime comparison in 4.3 is a bit vacuous because it depends on the exact implementation, etc. Maybe looking at computational complexity would be more interesting.
While computational complexity would provide an interesting additional kind of comparison, we believe practical runtime is the more relevant metric here because it is more reflective of what users will experience. This is particularly true for methods like FastSparse, where computational tricks substantially improve the algorithm’s practical runtime without necessarily affecting its big O runtime.
While runtime can vary with implementation, we deliberately used implementations that are well established in prior work whenever possible. As such, the results in 4.3 should be fairly reflective of what an actual user would experience, as they would likely use these same implementations.
> Empirical evaluation is somewhat limited
We are unsure in what way this is meant, but note that appendix E contains a variety of additional empirical results, including evaluation over six datasets using a wide variety of baseline methods. Together, sections D and E of the appendix are longer than the entire main paper and consist of empirical evaluation.
> l.27, it seems strange to contrast the choice of (not) using missingness with the choice of using l0 regularization. They seem to be rather orthogonal choices.
Yes, they are orthogonal choices. We think this is just ambiguous phrasing – we meant to contrast handling missingness alongside $\ell_0$ regularization with handling missingness without $\ell_0$ regularization. We will update the wording to make this clearer.
> Why l0 and not some other regularization?
$\ell_0$ regularization is beneficial because it produces very sparse models, and this paper is focused on interpretability. In the context of M-GAMs, this means fewer variables will contribute to model predictions and therefore fewer shape functions will need to be presented. This makes it easier for users to interpret model predictions.
> It seems questionable to encode reasons for missingness with natural numbers, as these imply an ordering.
Good point, we’ll change it to letters (or another notation you suggest that implies no ordering). This does not actually affect the implementation, since M-GAM is only exposed to the binary indicator variable for each missingness reason. If this refers to the missingness encodings in the FICO dataset description, those were not our choice; they come from the creators of the dataset.
> It looks like all other methods have better mean accuracy than M-GAM in Figure 6, Breast Cancer dataset. Is this correct? If so, the text should discuss this rather than glossing over it.
This is true, but reflects an oversight in creating Figure 6. Breast Cancer and MIMIC are both heavily imbalanced, and as such performance is better measured via AUC, as is done in other plots. Accuracy is not reflective of quality for imbalanced datasets. Under this metric, it is not true that the median AUC for M-GAM is worse than that of all other methods, as shown in the combined response.
The class balance for each of the four datasets is as follows:
Breast Cancer: 78.2% negative
MIMIC: 89.6% negative
Pharyngitis: 54.9% negative
FICO: 47.8% negative
> The authors state interpretability as one of M-GAM's main advantages. I would like to see this made explicit, e.g., with an example, especially in comparison to existing methods.
We intended for Figures 1 and 2 to serve as such examples, but believe that additional examples on the datasets we study might help prove this point. We will prepare such examples, and add them to the appendix.
---
Rebuttal Comment 1.1:
Title: Ack
Comment: Thank you for the response. With the additional explanation and background (provided also in responses to other reviewers), I would like to up my overall rating to 6. I hope this paper will be accepted.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your constructive review, and for increasing your score! | Summary: This paper introduces M-GAM, which incorporates concept of missingness into Generalized Additive Model(GAM). Since GAM represents arbitrary function with sum of univariate functions which take each input feature as input, M-GAM maintains sparsity and interpretability for inference with missing data.
Strengths: - Adapting GAMs as a tool for modeling under missingness is new and useful. A common approach for integrating missingness information is concatenating missingness indicators to the input features, but this exhaustively expands the input dimension, which can potentially harm inference in many respects. So I think this approach is smart and efficient.
- It is faster than other baselines with reasonable performance.
- Abundant experiments that support the authors' claims, and a detailed description of the experimental setup.
Weaknesses: - I think the main weakness of M-GAM is its performance, which just barely matches the other baselines' performance. Since it is trained in an end-to-end manner, we can expect more performance gain than impute-then-regress methods, since it directly uses supervision during training. Since informative missingness is rare in real-world datasets, the predictive performance of M-GAM on real-world datasets can limit its applicability.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Why is M-GAM especially good in the MAR setup?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We appreciate your recognition of the novelty of this work and its strong experimental backing. We look forward to discussing any ongoing questions or concerns you may have.
> I think the main weakness of M-GAM is its performance which just barely match other baselines' performance. Since it is trained end-to-end manner, we can expect more performance gain than impute-then regress methods since it directly uses supervision during training. Since informative missingness is rare in real world dataset, predictive performance of M-GAM on real world dataset can harm the applicability.
The goal of this work is to improve interpretability without harming performance, and we accomplished that. M-GAM is consistently among the most performant models on each dataset. Sometimes, it even improves both performance and interpretability. By producing an interpretable model, we allow practitioners to easily identify confounded reasoning and use the model more responsibly. This ability to troubleshoot helps improve overall accuracy of the system during and after deployment, not just performance on a static dataset.
> why M-GAM especially good on MAR setup?
M-GAM is particularly effective in the MAR setup used in the semi-synthetic example because the value is MAR with respect to the label. That is, in that semi-synthetic case, there is some signal about the outcome embedded in whether or not a feature is missing.
This can happen in a variety of settings; for example, imagine conducting a survey to help predict whether individuals care about privacy or not. If we ask for a home address and an individual doesn’t answer the question, resulting in missing data, that might be a strong indicator that they do care about privacy.
---
Rebuttal Comment 1.1:
Comment: Thanks for answering my questions. Your answers are really helpful for my understanding of this paper.
---
Reply to Comment 1.1.1:
Comment: Thanks again for your feedback, we're glad this was helpful! We're happy to discuss any further issues you might have, and we hope you will consider increasing your score if we have addressed them all :) | Rebuttal 1:
Rebuttal: We thank all the reviewers for their thoughtful reviews. We have responded to each reviewer separately. Attached to this message, please find figures used in responses to individual reviewers.
We look forward to further discussion in the days to come.
Pdf: /pdf/198c67d05e608ff91cad783a0f134904bd144e24.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper "Interpretable Machine Learning for Datasets with Missing Values" proposes generalized additive models incorporating missingness indicators and their interaction terms with sparseness ensured by l0 regularization. Therefore, the authors combine GAMs with the Missing indicator method and l0 regularization. The method is compared on real-world data sets with missing values and added synthetic missingness and against a variety of techniques used on imputed data with and without selective addition of missingness indicators.
Strengths: The manuscript is well-written and structured.
Utilizing models of missingness into the ML process is a nice idea.
Weaknesses: I would have run at least 2 distinct experiments: one with synthetic missingness where the ground truth is known and the missingness can be increased. Additionally, I would have liked to see a mix of MCAR and MAR to demonstrate the results when the assumed mechanism is incorrectly chosen.
The authors emphasize the interpretability enabled by M-GAM and figures are shown. It would have been nice to discuss the interpretation in the text.
Technical Quality: 3
Clarity: 3
Questions for Authors: The proof only works since the unobserved noise for k1 and k2 is chosen as it is. "Let k_2<k_1<0.5"
To me this looks like circular reasoning. If I assume the noise is lower for knowing the missingness relationship with the predictor Y than my uncertainty with the oracle, then of course the one with lower noise will reach higher probability. If the assumption is switched or equal signs added, the proposition will fall apart. It seems the proposition needs to be far more limited than it is right now.
In line 108, should it then not say "Corollary 3.2 states that perfectly imputing missing data can reduce the best possible performance of a predictive model, if the noise of the assumed predictor-dependent missingness mechanism is lower than the measurement noise."?
I find this not very surprising. Adding correct information with higher certainty should always be better. Adding correct, certain models of the missingness mechanism is of course preferable to high-noise imputation.
Why did the authors impute the data for RF? If C4.5 is used, then the missingness can be handled without explicit imputation by a change of the impurity function. There are also internal imputation proposals: Stekhoven & Bühlmann (2012), "MissForest—non-parametric missing value imputation for mixed-type data", Bioinformatics, 28, 1.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: see above. I think their proposition fails to emphasize a major assumption to their claim.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We hope we have addressed your primary concern below, and look forward to any further discussion.
>Proof only works since the unobserved noise for k1 and k2 is chosen as it is... If the assumption is switched or equal signs added the proposition will fall apart. It seems the proposition needs to be far more limited than it is right now.
>In 108 should it than not say "Corollary 3.2 states that perfectly imputing missing data can reduce the best possible performance of a predictive model, if the noise of the assumed predictor dependent missingness mechanism is lower than the measurement noise."?...
Proposition 3.1 and Corollary 3.2 are existence claims, which we prove by construction. It is valid and standard to prove such claims by constructing a single viable Data Generating Process (DGP); such a proof does not imply that DGP is the only case where our claim holds. In fact, in situations like the one you've named, where missingness information is higher noise than the rest of the data, there still exist cases where missingness information is needed to achieve the best possible model - we just chose not to focus on them to keep the proof simple. As an example, we have constructed a case where the noise in the missingness indicator is the greatest (k_1<...<k_2<0.5), but the Bayes’ optimal model with missing data is still superior to that with perfect imputation:
Consider a case similar to that used for the proof in proposition 3.1, where we add an additional variable $X_3$ and, as requested, have the noise for the missingness be higher than any other noise.
$$Y = |X_1X_2 - \epsilon_1|, \epsilon_1 \sim \textrm{Bern}(\frac{1}{6})$$
$$M = |Y - \epsilon_2|, \epsilon_2 \sim \textrm{Bern}(\frac{1}{4})$$
$$X_3 = |Y - \epsilon_3|, \epsilon_3 \sim \text{Bern}(\frac{1}{5})$$
We also adjust the probabilities for $X_1$ and $X_2$ being true so that this is a balanced classification problem: $X_1, X_2 \sim \textrm{Bern}(\frac{1}{\sqrt{2}})$, so $X_1X_2 \sim \textrm{Bern}(\frac{1}{2})$
The complete proof for this case is available at the bottom of this rebuttal.
>Why did the authors impute the data for RF? If C4.5 is used than the missingness can be handled without explicit imputation by change of the impurity function. There are also internal impute proposals: Stekhoven & Bühlmann (2012), "MissForest—non-parametric missing value imputation for mixed-type data", Bioinformatics, 28, 1.
Avoiding explicit imputation is a valid alternative baseline, which we have added, as shown in Figure 1 of our shared response. The results of this experiment align with those for previous baselines: random forests and decision trees without imputation do not outperform M-GAM.
Additionally, we note that MissForest is used as an imputation method in Section E.2 of the supplement. We find that MissForest imputation (including MissForest used with a random forest model) does not outperform any of our other impute-then-predict random forest baselines.
Weaknesses:
>I would have run at least 2 distinct experiments: one with synthetic missingness where the ground truth is known and the missingness can be increased. Additionally, I would have liked to see a mix of MCAR and MAR to demonstrate the results when the assumed mechanism is incorrectly chosen.
We don’t actually make any strict assumptions about the missingness mechanism. We regularize towards MCAR through our $\ell_0$ regularization, but this is not a strict assumption (for example, we still handle the MAR setting in figure 3). As per your suggestion to include MCAR missingness, we’ve included a plot where we switch to MCAR instead of MAR missingness for the experiment done in figure 3 (plot included in the general response). We still handle missingness as well as our competitors, though there is no informative missingness that we can take advantage of in this setting.
>The authors emphasize the interpretability enabled by M-GAM and figures are shown. It would have been nice to discuss the interpretation in the text.
We agree that additional discussion of how to interpret an M-GAM would be appropriate – we will add more discussion around Figure 2, and several more visualizations will be added to the supplement.
---
Proof:
The Bayes optimal model with perfect imputation of $X_1$, and no access to $M$, is still simply to predict in accordance with $X_1X_2$, since $P(X_1X_2=Y)=\frac{5}{6}$. When $X_1X_2=X_3$, all available information suggests $X_1X_2$ is correct.
When $X_1X_2\neq X_3$, the Bayes optimal prediction still aligns with $X_1X_2$: $P(Y=1|X_1X_2=1,X_3=0) =\frac{5}{9}>0.5$ and $P(Y=0|X_1X_2=0,X_3=1)=\frac{5}{9}>0.5$.
If we instead only have access to $X_1$ when it is not missing, but we also know when $X_1$ is missing (i.e. we know $M$), then the following approach will perform better than the above model:
$$Y=\begin{cases}
X_3,&\textrm{if $M=1$}\\\\
X_1X_2X_3,&\textrm{if $M=0$}
\end{cases}$$
When $M=1$, we make additional errors relative to the previous approach at rate
$$P(M=1)(P((X_3)\neq Y)-P((X_1X_2)\neq Y))=\frac{1}{2}(\frac{1}{5}-\frac{1}{6})=\frac{1}{60}$$
When $M=0$, we improve our classifier's accuracy by:
\begin{align*}
&P(\text{Imputation model is wrong, model with missingness is right and }M=0)\\\\
&-P(\text{Imputation model is right, model with missingness is wrong and }M=0)\\\\
=&P(X_1X_2\neq Y=X_1X_2X_3, M=0)-P(X_1X_2=Y\neq X_1X_2X_3, M=0)\\\\
=&P(X_1X_2=1,X_3=0,Y=0,M=0)-P(X_1X_2=1,X_3=0,Y=1,M=0)\\\\
=&P(X_1X_2=1)P( Y=0|X_1X_2=1)P(X_3=0|Y=0)P(M=0|Y=0)\\\\
&-P(X_1X_2=1)P(Y=1|X_1X_2=1)P(X_3=0|Y=1)P(M=0|Y=1)\\\\
=&\frac{1}{2}\frac{1}{6}\frac{4}{5}\frac{3}{4}-\frac{1}{2}\frac{5}{6}\frac{1}{5}\frac{1}{4}\\\\
=&\frac{7}{240}\\\\
>&\frac{1}{60}
\end{align*}
So the model that uses missingness outperforms the imputation model.
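This construction is small enough to check mechanically. The following sketch (an addition for verification, not part of the original rebuttal) enumerates the joint distribution exactly with `fractions.Fraction`, treating the product $X_1X_2$ as a single Bern$(\frac{1}{2})$ variable `z`; the names `z`, `e1`, `e2`, `e3`, and `prob` are illustrative choices, not from the paper:

```python
from fractions import Fraction as F
from itertools import product

# Exact enumeration of the construction above. Only the product z = X1*X2
# matters, and z ~ Bern(1/2) by construction; e1, e2, e3 are the noise bits.
joint = {}  # joint distribution over (z, y, m, x3)
for z, e1, e2, e3 in product((0, 1), repeat=4):
    w = (F(1, 2)
         * (F(1, 6) if e1 else F(5, 6))
         * (F(1, 4) if e2 else F(3, 4))
         * (F(1, 5) if e3 else F(4, 5)))
    y = abs(z - e1)    # Y  = |X1X2 - eps1|
    m = abs(y - e2)    # M  = |Y - eps2|
    x3 = abs(y - e3)   # X3 = |Y - eps3|
    k = (z, y, m, x3)
    joint[k] = joint.get(k, F(0)) + w

def prob(pred):
    return sum(w for k, w in joint.items() if pred(*k))

# Imputation model (no access to M): always predict X1X2.
acc_impute = prob(lambda z, y, m, x3: z == y)
assert acc_impute == F(5, 6)

# Even when X3 disagrees, following X1X2 remains optimal: P = 5/9 > 0.5.
assert (prob(lambda z, y, m, x3: y == 1 and z == 1 and x3 == 0)
        / prob(lambda z, y, m, x3: z == 1 and x3 == 0)) == F(5, 9)

# Missingness-aware model: predict X3 if M=1, else X1*X2*X3.
acc_miss = prob(lambda z, y, m, x3: (x3 if m else z * x3) == y)
assert acc_miss - acc_impute == F(7, 240) - F(1, 60)  # net gain 3/240
```

The enumeration reproduces the quantities in the derivation: the missingness-aware model's accuracy exceeds the imputation model's by exactly $\frac{7}{240}-\frac{1}{60}=\frac{3}{240}$.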
---
Rebuttal Comment 1.1:
Comment: Yes, it is formally correct that one can construct situations in which even perfect imputation can be outperformed, as the authors note, in cases where the noise of the features is higher than the noise of the missingness model, and also when M is 0 or 1 with a 50/50 chance as seen in the rebuttal. I still find these situations unrealistic and of very restricted practical use. Many approaches are based on the idea that a missing feature is not essential for the task and that its information is still covered by the remaining observations, accepting some increase in uncertainty. I appreciate that the authors added such methods to the comparison. Furthermore, I suggested making a fully synthetic experiment with controlled and increasing missingness in a toy situation where one can easily follow the method and its interpretation. Only the second part of the sentence mentioned the mix of MCAR and MNAR, which was less important than the first part. Synthetic missingness in real data is always a mix, for which the influence and sensitivity of the method is less accessible. This has not been addressed.
However, besides finding the existence claims on the trivial and less interesting side, I see value in the method itself, also through the discussion with the other reviewers, so I will raise my score accordingly.
I would like to point the authors to censoring in survival analysis as a potentially interesting application area for which missing data and its interpretation is really important. There the datasets are small, very imbalanced, and missingness can be high due to the longitudinal and medical nature.
---
Reply to Comment 1.1.1:
Comment: We appreciate you taking the time to review our response, and for increasing the score.
We’re sorry that you find this setting unrealistic, but we are unaware of any standard definition for what a realistic data generating process looks like. We have shown that, under a variety of different constraints, it’s possible to show that the proposition holds. We've shown that it holds in the most realistic setting we can imagine (missingness < 50%, signal from missingness weaker than from other variables), and include the proof at the bottom of this response. We hope this helps show that the setting for this proposition really is quite broad, even though we have not formally quantified this.
We do believe that, generally, settings with informative missingness are realistic - consider, for example, the result of a medical test being missing because a doctor judged that the patient was not at sufficient risk to need that test; on a survey about politics, an individual skipping questions because they distrust the pollster’s political leaning; or in loan default prediction, a credit report being missing because an individual has no credit history.
We apologize for missing your request that we add synthetic data - we misunderstood the suggestion, and thought that semi-synthetic data with a ground truth missingness mechanism would satisfy your concern. With this clarification, we’re happy to add fully synthetic examples to our paper (although we aren’t able to include those plots today because of the rules concerning the discussion period).
Thank you for the suggestion regarding survival analysis – that does sound like a very relevant setting, and we hope to explore it in the future!
Again, thank you for your feedback and discussion!
------
Proof:
Consider a case similar to that used for the proof in proposition 3.1, where we add an additional variable $X_3$ and have the noise for the missingness be higher than any other noise.
$$Y = |X_1X_2 - \epsilon_1|, \epsilon_1 \sim \textrm{Bern}(\frac{1}{12})$$
$$M = \begin{cases} |Y - \epsilon_2|, \epsilon_2 \sim \textrm{Bern}(\frac{1}{4}), &\textrm{with probability $\frac{1}{2}$} \\\\
0, &\textrm{with probability $\frac{1}{2}$}\end{cases}$$
$$X_3 = |Y - \epsilon_3|, \epsilon_3 \sim \text{Bern}(\frac{1}{11})$$
We also adjust the probabilities for $X_1$ and $X_2$ being true so that this is a balanced classification problem: $X_1, X_2 \sim \textrm{Bern}(\frac{1}{\sqrt{2}})$, so $X_1X_2 \sim \textrm{Bern}(\frac{1}{2})$
Note that we now also have missingness in well under 50\% of the data (missingness happens a quarter of the time).
The Bayes optimal model with perfect imputation of $X_1$, and no access to $M$, is still simply to predict in accordance with $X_1X_2$, since $\mathbb{P}(X_1X_2=Y) = \frac{11}{12}$. When $X_1X_2 = X_3$, all available information suggests $X_1X_2$ is correct.
When $X_1X_2 \neq X_3$, the Bayes optimal prediction still aligns with $X_1X_2$: $\mathbb{P}(Y=1|X_1X_2=1,X_3=0) = \frac{11}{21} > 0.5$ and $\mathbb{P}(Y=0|X_1X_2=0,X_3=1) = \frac{11}{21} > 0.5$.
If we instead only have access to $X_1$ when it is not missing, but we also know when $X_1$ is missing (i.e. we know $M$), then the following approach will perform better than the above model:
$$Y = \begin{cases}
X_3, &\textrm{if $M=1$} \\\\
X_1X_2X_3, &\textrm{if $M=0$}
\end{cases}$$
When $M=1$, we make additional errors relative to the previous approach at rate
\begin{align*}
\mathbb{P}(M=1)(\mathbb{P}((X_3)\neq Y) - \mathbb{P}((X_1X_2)\neq Y))\\\\
&=\frac{1}{4} (\frac{1}{11}- \frac{1}{12})\\\\
& = \frac{1}{528}
\end{align*}
When $M=0$, we improve our classifier's accuracy by:
\begin{align*}
&\mathbb{P}(\text{Imputation model is wrong, model with missingness is right and }M=0)\\\\
&- \mathbb{P}(\text{Imputation model is right, model with missingness is wrong and }M=0)\\\\
= &\mathbb{P}(X_1X_2 \neq Y = X_1X_2X_3, M=0) - \mathbb{P}(X_1X_2 = Y \neq X_1X_2X_3, M=0)\\\\
= &\mathbb{P}(X_1X_2=1, X_3=0, Y=0, M=0) - \mathbb{P}(X_1X_2=1, X_3=0, Y=1, M=0)\\\\
= &\mathbb{P}(X_1X_2=1)\mathbb{P}(Y=0|X_1X_2=1)\mathbb{P}(X_3=0|Y=0)\mathbb{P}(M=0|Y=0)\\\\
&- \mathbb{P}(X_1X_2=1)\mathbb{P}(Y=1|X_1X_2=1)\mathbb{P}(X_3=0|Y=1)\mathbb{P}(M=0|Y=1)\\\\
= &\frac{1}{2}\frac{1}{12}\frac{10}{11}\frac{7}{8} - \frac{1}{2}\frac{11}{12}\frac{1}{11}\frac{5}{8}\\\\
= &\frac{15}{2112}\\\\
> &\frac{1}{528}
\end{align*}
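As with the first construction, this revised DGP can be checked by exact enumeration (again a verification sketch added to this text, not part of the original exchange). The extra binary variable `c` models the 50/50 mixture in the definition of $M$; the names `z`, `c`, `e1`, `e2`, `e3`, and `prob` are illustrative:

```python
from fractions import Fraction as F
from itertools import product

joint = {}  # joint distribution over (z, y, m, x3), with z = X1*X2 ~ Bern(1/2)
for z, e1, c, e2, e3 in product((0, 1), repeat=5):
    w = (F(1, 2)                                  # z
         * (F(1, 12) if e1 else F(11, 12))        # eps1
         * F(1, 2)                                # mixture coin c
         * (F(1, 4) if e2 else F(3, 4))           # eps2
         * (F(1, 11) if e3 else F(10, 11)))       # eps3
    y = abs(z - e1)                 # Y  = |X1X2 - eps1|
    m = abs(y - e2) if c else 0     # M: noisy copy of Y half the time, else 0
    x3 = abs(y - e3)                # X3 = |Y - eps3|
    k = (z, y, m, x3)
    joint[k] = joint.get(k, F(0)) + w

def prob(pred):
    return sum(w for k, w in joint.items() if pred(*k))

assert prob(lambda z, y, m, x3: m == 1) == F(1, 4)   # missingness rate 25%
acc_impute = prob(lambda z, y, m, x3: z == y)
assert acc_impute == F(11, 12)
acc_miss = prob(lambda z, y, m, x3: (x3 if m else z * x3) == y)
assert acc_miss - acc_impute == F(15, 2112) - F(1, 528)  # net gain 11/2112
```

The net accuracy gain of the missingness-aware model here is $\frac{15}{2112}-\frac{1}{528}=\frac{11}{2112}$, confirming the inequality above.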
So the model that uses missingness outperforms the imputation model. | null | null | null | null | null | null |
The Iterative Optimal Brain Surgeon: Faster Sparse Recovery by Leveraging Second-Order Information | Accept (poster) | Summary: This paper proposes a theoretically convergent iterative Optimal Brain Surgeon (OBS) algorithm, which generalizes the classic Iterative Hard Thresholding (IHT)-based algorithms by incorporating approximate second-order information in the sparse projection step. The author also provides practical variants of these algorithms for solving sparse linear regression and model pruning. The experiments show that the proposed algorithm leads to faster convergence than traditional algorithms and improves accuracy.
Strengths: The proposed algorithm is simple yet effective.
Weaknesses: Theoretical analysis does not hold in general for practical problems.
Technical Quality: 3
Clarity: 4
Questions for Authors: Detailed comments are as follows:
1. The author elaborates on the theoretically convergent I-OBS method and provides specific calculations for both practical and theoretical schemes. What is the gap between the two schemes? Can it be verified through experiments or illustrated with examples? What is the specific operator T in the practical scheme?
2. In lines 158 and 169, the specific form of the model \( \phi_t(\theta) \) differs.
3. In the model pruning experiment, the author provides the specific algorithm Alg. 2 for the practical use of I-OBS. By constructing a constrained optimization problem for the learnable parameters \( W \) of each layer to fine-tune the model, the author has verified the method's feasibility through experiments. Is it theoretically feasible?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The author objectively analyzed the limitations of this article.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the valuable questions and positive comments. We address your questions below:
**Q1, Regarding the gap between the practical and theoretical schemes:**
We note that for the theoretical scheme we have $|| \theta_{t+1} - \theta^* ||_2 \leq \left(1+ \sqrt{\frac{L}{\mu}\frac{d-k^*}{d-k}} \right) \frac{M}{2 \mu} || \theta_t - \theta^*||_2^2$ (Theorem 1), and for the practical scheme we have $|| \theta_{t+1} - \theta^* ||_2 \leq \left( 1+ \sqrt{\frac{k^*}{k}} \right) \frac{M}{2 \mu} || \theta_t - \theta^* ||_2^2$ (Lemma 3). In both cases, the upper bound has the form $|| \theta_{t+1} - \theta^* ||_2 \leq C || \theta_t - \theta^* ||_2^2$, where $C$ is a constant. Thus, both schemes give a super-exponential local convergence rate; namely, both algorithms achieve $\epsilon$-error from an initialization with $||\theta_0 - \theta^*||_2 \leq \frac{1}{2 C}$ in $O( \log \log \frac{1}{\epsilon})$ steps. Thus, there is no gap in terms of convergence rate. There is indeed a difference in the convergence radius $\frac{1}{2 C}$, since the constant $C$ differs between the two schemes, but we remark that this difference is minor in practice.
**Q2, In lines 158 and 169, the specific form of the model ( $\phi_t (\theta)$ ) differs**
We thank the reviewer for the detailed review; however, there is no $\phi_t(\theta)$ in line 169. Do you mean the $\phi_t(\theta)$ in line 164?
In line 164, we replace the gradient term $\nabla f(\theta_t)$ with the stochastic gradient term $g_t(\theta_t)$, so that if one solves the proximal problem induced by the $\phi_t(\theta)$ in line 164, one obtains the update of stochastic gradient descent rather than gradient descent.
Please let us know if there are any further questions regarding this point.
**Q3, Regarding the theoretical analysis of the practical implementation of I-OBS**
While we believe it would be interesting to obtain a theoretical guarantee for the practical implementation of I-OBS, we find it difficult to make it end-to-end rigorous, as the underlying constrained optimization problem is NP-hard. In particular, to efficiently solve the constrained optimization problem in line 7 of Algorithm 2, we applied existing heuristic quadratic sparsity solvers, such as OBC or SparseGPT. (In preliminary experiments, we also investigated applying algorithms with approximation guarantees, such as Natarajan’s algorithm [Natarajan95], but found that these are much less scalable and do not provide better solutions.) These solvers use greedy heuristics to efficiently compute the masks, since obtaining the theoretically optimal masks shown in line 7 of Algorithm 1 is practically infeasible. Despite this approximation, we note that our approach obtains high practical performance.
[Natarajan95] Natarajan, Balas Kausik. "Sparse approximate solutions to linear systems." SIAM journal on computing 24.2 (1995): 227-234. | Summary: This work combines second-order curvature information with sparse recovery algorithms to demonstrate, both theoretically and empirically, that the curvature information leads to improved convergence rates and generalization performance in post-training iterative pruning and sparse recovery
Strengths: * This work offers a rigorous analysis of the proposed method, I-OBS, complete with detailed derivations and proofs.
* The authors ground their contributions within the context of the existing literature but clearly identify how it differs from prior works by leveraging second order information.
* Despite the intractability of the theoretical approach, the authors offer a practical formulation that yields improved performance across a diverse range of data domains.
* The paper is rather dense; however, the authors provide all the necessary notation required to follow their derivations.
* This is a well motivated line of inquiry as post-training compression is increasingly important in the era of LLMs. Further, despite requiring additional iterations compared to one-shot pruning methods, I-OBS converges in a small number of iterations (<100 for DeiT and <3 for LLMs).
Weaknesses: * Grounding this work in some practical considerations would improve the overall impact of the paper and may help the reader assess whether this technique is suitable for a desired use case. For instance, adding some overall runtime characteristics for the empirical results would establish an order-of-magnitude estimate of the overall computational requirements.
* In a similar vein, conducting experiments with fine-grained sparsity such as 2:4 would be a nice extension to produce sparse neural networks that can actually be accelerated in practice. As it currently stands, the unstructured networks learned by I-OBS do not offer much in the way of an immediate practical application.
* For the LLM experiments, perplexity has been shown to be a somewhat misleading metric when evaluating compressed LLMs [1]. Ideally, the LLM experiments should be evaluated on downstream tasks such as GLUE or better yet the LLM-KICK benchmark [1].
* Several post-training compression schemes have been proposed recently. Comparisons with methods such as [2-6] would improve my confidence in the significance of this work.
* Several small typos, see suggestions below.
[1] A. Jaiswal, Z. Gan, X. Du, B. Zhang, Z. Wang, and Y. Yang, “Compressing LLMs: The Truth is Rarely Pure and Never Simple.” arXiv, Oct. 02, 2023. doi: 10.48550/arXiv.2310.01382.
[2] T. Dettmers et al., “SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression.” arXiv, Jun. 05, 2023. [Online]. Available: http://arxiv.org/abs/2306.03078
[3] M. Sun, Z. Liu, A. Bair, and J. Z. Kolter, “A Simple and Effective Pruning Approach for Large Language Models.” arXiv, Jun. 20, 2023. doi: 10.48550/arXiv.2306.11695.
[4] Y. Zhang, H. Bai, H. Lin, J. Zhao, L. Hou, and C. V. Cannistraci, “Plug-and-Play: An Efficient Post-training Pruning Method for Large Language Models,” presented at the The Twelfth International Conference on Learning Representations, Oct. 2023. [Online]. Available: https://openreview.net/forum?id=Tr0lPx9woF
[5] Y. Zhang et al., “Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs.” arXiv, Oct. 17, 2023. doi: 10.48550/arXiv.2310.08915.
[6] Y. Ma et al., “AffineQuant: Affine Transformation Quantization for Large Language Models.” arXiv, Mar. 19, 2024. doi: 10.48550/arXiv.2403.12544.
Technical Quality: 3
Clarity: 4
Questions for Authors: ## Questions
* In terms of wall clock time, what was the duration required for the pruning results in Tables 1 and 2 and what hardware configuration was used?
* How do the compressed LLMs compare on downstream tasks such as LLM-KICK or GLUE?
* Can I-OBS be extended to fine-grained sparsity types such as N:M sparsity?
* Are the results in Table 2 obtained with 32- or 16-bit weights? How does a quantized dense model with half the precision as used in Table 2 compare?
## Suggestions
* Use vector graphic formats for Figure 1.
* Potential typos to address:
* L281 & Figure 1 Caption: Refers to topk-WoodFisher, I believe this should be **topk-I-OBS**? Perhaps a prior naming convention?
* Table 2 caption: Phi-1.5M -> Phi-1.5B
* L332: pwer-layer -> per-layer
* L340: mode -> model
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: In general the limitations listed are appropriate. However, I encourage the authors to add some discussion about the practical requirements for their algorithm in terms of time and compute.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review, as well as your valuable questions and comments. We address your questions and concerns below:
**Q1 and W1 Regarding the training time of the pruning results**
For the experiments on ViTs (Table 1 in the paper), it took about 4 hours to run 100 iterations on an Nvidia A100 GPU with 80GB memory. For the LLM experiments, we ran I-OBS on Llama-2 (7B) and Llama-3 (8B) models, and it took around 1.5 hours per iteration (15h for 10 iterations). Unfortunately, we did not record the runtime of the experiments on OPT-125M and Phi-1.5 (Table 2 in the paper).
**Q2 and W3 Evaluating the performance on LLM-KICK or GLUE**
To address your concern and study the scalability of I-OBS, we ran I-OBS on Llama-2 (7B) and Llama-3 (8B) models. Specifically, we apply SparseGPT (50% sparsity) and finetune on the same calibration set used for the Hessian estimates. We evaluate the performance on 5-shot MMLU, one of the tasks from the LLM-KICK benchmark; we did this due to time constraints but will expand to the full benchmark in the next revision.
Experimental results are provided in Table 1 and Table 2 of the PDF file we attached in the global rebuttal.
One can observe that the quality of the sparse solution significantly improves after the first finetuning iteration, thus validating our approach. Moreover, there is a small improvement during the next few iterations, followed by gradual performance deterioration afterward, which we attribute to overfitting on the calibration set. Specifically, I-OBS improves accuracy on MMLU by >1 point, a 20% reduction in error relative to the original dense model.
**W4 Comparison to other works in pruning**
Thank you for the suggestion. As also suggested by reviewer JoZw, we provide a more detailed literature review. Please refer to the answer to **Q2 of reviewer JoZw**. We apologize that we cannot copy the whole response here due to the character limit.
Also, as suggested by reviewer JoZw, we conducted experiments on MobileNetV1 to compare to the CBS method. The results are shown in Table 3 of the attached file in the global rebuttal. Our experiments imply that, starting at 60\% sparsity, our I-OBS pruner outperforms the CBS method on MobileNetV1 by a large margin: 2\% for 60\% sparsity, 5\% for 70\% sparsity and 12\% for 80\% sparsity. For low sparsities (30\% to 50\%), the two methods are comparable, since the accuracy difference is less than 0.5\%.
**Q3 and W1 Extend to fine-grained sparsity type**
The practical implementation of I-OBS indeed extends to N:M sparsity. In particular, we apply OBC or SparseGPT as sparsity solvers for the layerwise pruning problem, and both apply to N:M sparsity. However, for the theoretical results, we are unable to extend the analysis to masks selected based on N:M sparsity; we leave this to future work.
**Q4 Are the results in Table 2 obtained with 32- or 16-bit weights? How does a quantized dense model with half the precision as used in Table 2 compare?**
The results in Table 2 are obtained with bfloat16 (half precision) weights, which is standard. Generally, we found the weight precision to be orthogonal to our results, as sparsification can be applied independently of the baseline weight representation, and always yields similar speedups (the maximum speedup due to 2:4 sparsity is 2x for INT8 and FP16).
More broadly, we believe our iterative approach should be generalizable to compression via quantization as well: specifically, we could modify the projection step to perform quantization rather than sparsification.
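As a rough sketch of the idea of replacing the sparsity projection with a quantization projection (our own illustration, not part of the paper; the function name and codebook are hypothetical), the top-k projection could be swapped for a nearest-codebook projection while keeping the rest of the iterative loop unchanged:

```python
import numpy as np

def quantize_projection(theta, codebook):
    # project each weight onto the nearest value in a fixed codebook,
    # as a drop-in replacement for the top-k sparsity projection
    codebook = np.asarray(codebook)
    idx = np.abs(theta[:, None] - codebook[None, :]).argmin(axis=1)
    return codebook[idx]

print(quantize_projection(np.array([0.1, -0.9, 0.45]), [-1.0, 0.0, 0.5]))
```

Here each weight is simply rounded to its nearest codebook value; a practical quantizer would of course use a learned or data-dependent codebook.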
**Suggestions**
Thank you for the suggestions. We will change the figure format and correct the typos in the revision.
**References**
[CBS] Yu, X., Serra, T., Ramalingam, S., & Zhe, S.. “The combinatorial brain surgeon: pruning weights that cancel one another in neural networks.” ICML 2022
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal and additional experiments, I believe they will improve the manuscript. I have elected to maintain my original score. | Summary: The paper presents a new family of algorithms called Iterative Optimal Brain Surgeon (I-OBS), extending the post-training Optimal Brain Surgeon (OBS) framework to an iterative setting commonly used in sparse recovery. I-OBS algorithms utilize second-order information during the sparse projection step, enhancing convergence guarantees compared to classic algorithms like Iterative Hard Thresholding (IHT).
Contributions of the paper:
* Introduction of I-OBS, which improves classic IHT-based algorithms by incorporating approximate second-order information in the sparse projection step.
* Provision of faster analytical convergence rates for I-OBS under standard first- and second-order smoothness and strong convexity assumptions. It also offers theoretical guarantees for existing practical pruning algorithms such as WoodFisher and OBC.
* Development of practical versions of the I-OBS algorithms that relax theoretical constraints, making them easy to implement and scalable for large problem instances, such as compressing vision and language models.
Strengths: * The paper introduces an algorithm with theoretical guarantees that its predecessors do not provide.
* It also offers a practical computational algorithm that approximates the theoretical one.
* The paper shows the benefit of the proposed algorithm in training sparse linear regression.
* The paper shows empirically that the proposed algorithm is applicable to prune large models in experiments and obtain promising results.
Weaknesses: * The paper does not analyze the differences between the practical version of the algorithm and the theoretical one.
* Since the algorithm uses second-order information, it may have time complexity issues.
* The paper only compares the proposed method with SparseGPT. Hence, other existing methods for pruning can be also added.
Technical Quality: 3
Clarity: 3
Questions for Authors: * How does the algorithm using second-order information compare with one using first-order information?
* Can you provide experiments comparing training time in the pruning process among the proposed method and existing works?
* How does the proposed algorithm apply to the Convolution network?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the valuable questions and comments, we address the questions and comments below:
**W1. The difference between the practical and theoretical version of the algorithm**
We compare the practical and theoretical version of Algorithm 1 below:
(1) The way of choosing the mask is different: for the theoretical version of the algorithm, we choose the mask in an optimal way, which is equivalent to solving the integer programming problem in Line 7 of Algorithm 1; for the practical version, we simply use the top-k mask in place of the optimal mask.
(2) The complexity of the two versions is different: for the theoretical one, solving the integer programming problem in Line 7 of Algorithm 1 is NP-hard, while for the practical version, computing a top-k mask only has $O(d)$ complexity.
(3) The local convergence rate of the two algorithms is the same: in both cases we obtain local convergence of the form $||\theta\_{t+1} - \theta^*||\_2 \leq C||\theta\_t - \theta^*||\_2^2$, which implies an $\mathcal{O}(\log \log(1/\epsilon))$ iteration complexity for achieving an $\epsilon$-error with initialization $||\theta\_0 - \theta^*||\_2 \leq \frac{1}{2C}$.
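To illustrate the complexity gap in point (2), a top-k mask over $d$ weights can be computed with a linear-time partial selection. The following is a generic NumPy sketch (names are ours, not the paper's implementation):

```python
import numpy as np

def topk_mask(theta, k):
    # boolean mask keeping the k largest-magnitude entries of theta;
    # np.argpartition makes this O(d) expected time, in contrast to the
    # NP-hard optimal mask selection of the theoretical algorithm
    mask = np.zeros(theta.shape, dtype=bool)
    idx = np.argpartition(np.abs(theta), -k)[-k:]
    mask[idx] = True
    return mask

print(topk_mask(np.array([0.1, -3.0, 0.5, 2.0, -0.2]), 2))
```

In this toy example the mask keeps the entries -3.0 and 2.0, the two largest in magnitude.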
**W2. The complexity issue by using second-order information**
While second-order information has historically been hard to apply at the scale of deep networks, there have recently been several efficient variants that work in linear time and space (in the model dimension) and can therefore be scaled. Specifically, we apply the recent scalable M-FAC method [M-FAC] to approximate the inverse of the Hessian.
**W3. Comparing I-OBS with other existing methods**
We provide extra experiments that compare our method with [CBS]. In Table 3 of the attached file in the global rebuttal, we provide results for I-OBS applied to the MobileNetV1 model used in STR. Our experimental results imply that, starting at 60\% sparsity, our I-OBS pruner outperforms the CBS method on MobileNetV1 by a large margin: 2\% for 60\% sparsity, 5\% for 70\% sparsity and 12\% for 80\% sparsity. For low sparsities (30\% to 50\%), the two methods are comparable, since the accuracy difference is less than 0.5\%.
**Q1. How does the algorithm using second-order information compare with one using first-order information?**
In I-OBS, we solve a sparse optimization problem at each step with the objective function being the second-order approximation of the loss function; in particular, we have $\theta\_{t+1} = \arg \min_{\theta: ||\theta||\_0 \le k} \; \phi\_t(\theta) + \tfrac{1}{2}||\theta-\theta\_t||^2_{H_t}$, where the Hessian of the objective function appears in the $\tfrac{1}{2}||\theta-\theta\_t||^2\_{H\_t}$ term. In contrast, a first-order method such as the k-IHT studied in [AC/DC] solves a sparse optimization problem at each step with the objective function being the first-order approximation of the loss function, in particular $\theta\_{t+1} = \arg \min\_{\theta: ||\theta||\_0\le k} \; \phi\_t(\theta) + \tfrac{1}{2\eta}||\theta-\theta\_t||^2$, where the Hessian is replaced by a scaled identity.
Regarding the convergence rate, typical first-order methods such as the k-IHT studied in [AC/DC] achieve $\epsilon$-error in $O(\log \frac{1}{\epsilon})$ iterations for any initialization. The I-OBS proposed in this paper achieves $\epsilon$-error in $O(\log\log \frac{1}{\epsilon})$ iterations with initialization $||\theta\_0 - \theta^*||\_2 \leq \frac{1}{2 C}$. In short, I-OBS has a faster local convergence rate, but we are unable to provide a global convergence rate. The difference resembles that between gradient descent and Newton's method.
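For intuition only, here is a toy sketch (our own, not the paper's code) contrasting a k-IHT-style first-order step with a second-order step on a least-squares objective $f(\theta)=\tfrac{1}{2}||A\theta-b||^2$:

```python
import numpy as np

def hard_threshold(theta, k):
    # keep the k largest-magnitude entries of theta, zero out the rest
    out = np.zeros_like(theta)
    idx = np.argpartition(np.abs(theta), -k)[-k:]
    out[idx] = theta[idx]
    return out

def iht_step(theta, A, b, k, eta):
    # first-order (k-IHT-style) update: gradient step, then top-k projection
    grad = A.T @ (A @ theta - b)
    return hard_threshold(theta - eta * grad, k)

def newton_then_project(theta, A, b, k):
    # rough second-order analogue: an exact Newton step on the quadratic,
    # followed by the same top-k projection (the paper instead solves the
    # masked quadratic jointly, so this is only an illustration)
    H = A.T @ A
    grad = A.T @ (A @ theta - b)
    return hard_threshold(theta - np.linalg.solve(H, grad), k)
```

On a noiseless quadratic with full-column-rank $A$, the Newton step lands on the dense minimizer in one shot, so the projection recovers the sparse solution immediately, whereas the gradient step only moves part of the way; this loosely mirrors the $O(\log \frac{1}{\epsilon})$ vs $O(\log\log \frac{1}{\epsilon})$ rate gap discussed above.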
**Q2 Comparing training time in the pruning process among the proposed method and existing works**
Thank you for the suggestion. However, it is difficult to directly compare pruning times, since they depend on the implementation of the algorithms and the device: we would need to adapt the implementations of existing methods to our device, which is time-consuming. We report the pruning time of our methods below:
For the experiments on ViTs (Table 1 in the paper), it took about 4 hours to run 100 iterations on an Nvidia A100 GPU with 80GB memory. For the LLM experiments, as suggested by reviewer T9TW, we ran I-OBS on Llama-2 (7B) and Llama-3 (8B) models, and it took around 1.5 hours per iteration (15h for 10 iterations). Unfortunately, we did not record the runtime of the experiments on OPT-125M and Phi-1.5 models (Table 2 in the paper).
**Q3 Whether the proposed algorithm applies to CNNs**
The proposed methods indeed apply to CNNs. In Algorithm 2, we use OBC as the quadratic sparse solver for the problem defined in line 7 of Algorithm 2, and OBC applies to convolution layers. In fact, the experiments baselining our methods against CBS are on MobileNetV1, and we find that I-OBS improves on the performance of CBS when pruning MobileNetV1.
**References**
[M-FAC] Frantar, Elias, Eldar Kurtic, and Dan Alistarh. "M-FAC: Efficient matrix-free approximations of second-order information." NeurIPS 2021.
[AC/DC] Peste, Alexandra, et al. "AC/DC: Alternating compressed/decompressed training of deep neural networks." NeurIPS 2021.
[CBS] Yu, X., Serra, T., Ramalingam, S., & Zhe, S. "The combinatorial brain surgeon: pruning weights that cancel one another in neural networks." ICML 2022.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thank you for your detailed response.
I will keep my initial rating due to the lack of domain knowledge. I believe Iterative Optimal Brain Surgeon (I-OBS) is a novel contribution and can be widely used.
Best regards, | Summary: Having clarified my concerns in their rebuttal, I have updated my score for acceptance.
----
This paper presents a variant of the classic Optimal Brain Surgeon (OBS) method to iteratively prune multiple weights of a neural network at once, with each step consisting of obtaining a pruning mask for the fraction of weights to be removed and then updating the remaining weights. The update step is framed as a sparse recovery problem to be solved using second-order methods based on the Hessian of the loss function. Two approaches are proposed for the selection of weights to prune as well as for the update of the remaining weights, with one having stronger guarantees and the other being more manageable in practice.
Strengths: In my opinion, the core contribution here is the idea of iteratively changing the pruning mask and updating the remaining weights, which implies that the decision about which weights to prune is revisited at every step. This is a subtle departure from how this is typically done with post-training pruning (i.e., prune once, update once; or prune a smaller amount, update, and then repeat by pruning more).
Moreover, I appreciate that the authors discuss the two steps by providing methods that either have better guarantees (Option 1) or that are more tractable in practice (Option 2).
The connection with sparse recovery is also very interesting, although I wished it was made more explicit. After one paragraph in Section 2, this connection is leveraged for their algorithm but the authors do not provide greater insight on the algorithms used in sparse recovering.
Weaknesses: The authors imply that their work is the first generalization of OBS for simultaneously pruning multiple weights. However, this was already explored before in an ICML 2022 paper as the Combinatorial Brain Surgeon (CBS) [1]. In that paper, the method is also broken down in two steps, the first step selecting which weights to prune (CBS-S), which is equivalent to selecting the support mask in I-OBS, and the second step determining how to update the remaining weights (CBS-U), which is equivalent to optimizing the parameters in I-OBS. Because of the intractability of the problem, CBS-S is also approached in a greedy way, similar to Top$k$ in I-OBS. The main difference would be that CBS-S uses a semi-greedy heuristic to vary the selection of weights, whereas I-OBS updates the weights after fixing a mask and consequently affects which weights would be chosen by the greedy selection in the next iteration.
[1] https://arxiv.org/abs/2203.04466
More generally, the authors only cite fours papers about network pruning in the last decade in their literature review, one of those being a survey and another two being from a same group. For the rest of the paper, they just refer back to these two papers from the same group as the state-of-the-art and do not benchmark with anything else. There has been several papers focused on OBS in the last decade, both before and following [1]. Moreover, there are other pruning methodologies with theoretical guarantees, such as those based on coresets.
The connection with sparse recovery is quite interesting, but it generalizes the algorithm much more than would be needed for OBS. Because the objective function is quadratic, my understanding is that the weight update has a closed-form solution (see discussion in the WoodFisher paper, in [1] regarding CBS-U, and also in lines 211-212 of I-OBS).
Finally, the authors only present results for sparsity 50%, which is not a lot of pruning in practice. For that amount of pruning, magnitude pruning (preserving the half of the weights with largest absolute value) is competitive with more refined techniques, such as WoodFisher and CBS. Because the weight selection done by the authors with Top$k$ is basically magnitude pruning, the results do not provide evidence of the strength of the technique. The technique may be strong, but that needs to be shown in more detail.
Other minor comments about the writing:
6-7: "lack a solid theoretical understanding": this is a bold statement, perhaps a bit unfair, and I don't think that the theory provided in the paper addresses exactly that (you just prove convergence rates for an update based on an optimization algorithm)
Equations 7 and 8: What is $H_t$ used as a subindex?
201: "critierion" -> criterion
201: "[,] which recovers"
210: $X$ not in bold as in previous uses
215: "in the first step one": remove one?
216 and 217: it is not "bruteforcing" if what you are doing takes linear time; bruteforcing is solving an entire problem by exhaustion
222: "As observed Algorithm 1": please rewrite
251, 278, 304-305, 311, 346: these references should have been in parentheses rather than having the authors' names mentioned in the sentence without proper wording for it
256: "call[ed]"
269: "the[o]retical"
282: remove comma before "due"
285: remove "to"
Algorithm 2, line 1: remove first "for each layer" and remove comma
339: "guarentee" -> guarantee
340: "mode[l] pruning"?
341: "those assumption": either use "this" or use "assumptions"
342: "is" -> to be
Technical Quality: 3
Clarity: 2
Questions for Authors: 1) Can you please address the overlap with CBS?
2) Can you please frame your contribution over a broader range of recent work on network pruning, besides the four papers that you cited?
3) Can you please discuss the connection with sparse recovery in more detail, and also the relevance of this connection if having a quadratic objective function as in I-OBS leads to an optimization problem that can solved in closed-form?
4) Can you please benchmark your approach with at least magnitude pruning and CBS?
5) If possible, can you please provide results with higher amounts of sparsity?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors did a reasonable work discussing limitations and what should be done in future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed reviews, as well as the insightful comments and questions. We address your comments and questions below
**Q1**. CBS assumes the model gradient is zero for post-training pruning, while I-OBS does not, making I-OBS more general. We retain the gradient term because models may not be fully trained, and subsequent I-OBS iterations can increase gradient norms due to the sparsity constraint. Additionally, I-OBS is an iterative algorithm, in contrast to CBS's one-shot approach; CBS-S could be used as a projection step in our iterative process once the gradient term is taken into consideration.
The objectives for selecting the masks are also different. Specifically, in the IQP formulation of the CBS-S procedure for mask selection, the remaining weights and their further updates are ignored. This means that the updates of the unpruned weights are not considered during the mask selection procedure. Consequently, solving each subproblem (CBS-S and CBS-U) to optimality does not guarantee an optimal solution to CBS.
In the theoretical I-OBS algorithm, masks and weight updates are solved jointly, as stated in Lemma 2. Optimization in equation (8) handles both new weights and masks. Although the practical I-OBS algorithm first selects a mask and then updates weights (similar to CBS-U), it optimally finds the mask by solving an Integer Programming (IP) problem, unlike CBS-S's IQP. This makes I-OBS mask selection optimal but computationally challenging, so the practical version uses a top-k mask for efficiency, maintaining the same local convergence rate in strongly convex cases.
**Q2.** We will discuss more literature review over broader range of model pruning:
Our work follows the salience-based weight pruning approach, which evaluates the impact of removing weights on the model's loss or output. Among these, methods based on second-order information are most relevant, particularly those following the [OBS] framework. [L-OBS] uses a second-order approximation of the loss function, assuming a zero gradient, and introduces a layerwise strategy to approximate the Hessian. [WoodFisher] scales this idea with the empirical Fisher approximation of the Hessian, improving performance and considering non-zero gradients. However, WoodTaylor methods in [WoodFisher] focus on pruning one weight at a time, whereas our work extends this to multiple weights.
Other methods addressing multiple weight pruning include [CBS], which formulates mask selection as an integer programming problem and proposes two heuristics to approximate its solution. [OBC] tackles layerwise pruning with a quadratic problem and introduces a greedy heuristic to solve the layerwise problem efficiently. Similarly, [CHITA] formulates layerwise pruning as an $L\_0 L\_2$-regularized quadratic problem and proposes an IHT-like iterative algorithm with a line search strategy. [Net-trim] minimizes the $L\_1$ norm of weight matrices while ensuring similar activations in the pruned layer. While prior iterative algorithms like [CHITA] focus on specific problems, our methods apply to general loss functions beyond quadratic problems and offer convergence guarantees. Practically, within our iterative framework, using a cost-effective projection step (e.g., TopK or SparseGPT) yields competitive results compared to more complex one-shot solvers.
**Q3.** We consider sparse optimization problems, aiming to find the sparse solution $\theta_*$ for an objective function $f(\theta)$ with a $k^*$-sparse global optimum $\theta_*$. This includes the classic sparse recovery problem: recovering a $k$-sparse signal $\theta_*$ from a noisy observation $A \theta_* + \epsilon$ with a sensing matrix $A$.
I-OBS is a sparse optimization algorithm for sparse recovery. Besides finding $\theta_*$, it ensures each iteration's weights are $k$-sparse (with $k \ge k^*$), guaranteeing a $k$-sparse solution even if $\theta_*$ is not found. Such algorithms are known as proper learners in sparse recovery. Both the theoretical and practical versions of I-OBS are sparse optimization algorithms resembling sparse Newton's methods. Similar first-order algorithms are studied in [AC/DC], but I-OBS leverages second-order information to potentially speed up convergence (both I-OBS versions achieve super-exponential local convergence rates).
For quadratic objectives, consider two settings: strongly convex (Hessian is positive definite) and convex (Hessian is positive semi-definite but not positive definite).
Strongly convex: The theoretical I-OBS solves the problem in one step as the second-order approximation equals the function itself. The challenge is efficiently computing the optimal mask. The practical I-OBS doesn't converge in one step but shows super-exponential local convergence.
Convex: I-OBS cannot be directly applied due to the non-invertible Hessian, similar to issues in Newton's method for convex optimization. Adding a small L2 regularization can address invertibility but complicates obtaining a convergence rate, as the global optimum is no longer sparse. This is left for future work.
**Q4 and Q5.** Thank you for the suggestion. In Table 3 of the attached file in global rebuttal, we provide results for I-OBS applied to MobileNetV1 model used in STR, which is the same model studied in Table 4 of the CBS paper. We skip the depth-wise convolutions having shape $(C, 1, K, K)$ when we apply our pruning algorithm, as is standard. Starting with 60\% sparsity, our I-OBS pruner outperforms CBS method on MobileNetV1 by a large margin: 2\% for 60\% sparsity, 5\% for 70\% sparsity and 12\% for 80\% sparsity. For low sparsities (30\% to 50\%), the two methods are comparable, since the accuracy difference is less than 0.5\%.
---
Rebuttal Comment 1.1:
Title: Discussion gentle reminder
Comment: Dear reviewer,
As the discussion period will end soon, we wanted to respectfully ask if you could please provide feedback on our responses to your concerns, and specifically on the additional experimental results provided.
Best regards,\
The authors
---
Rebuttal Comment 1.2:
Title: Follow up
Comment: I would like to thank the authors for addressing my concerns and extending their computational evaluation. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their detailed feedback. We briefly summarize our responses here for clarity:
1. One general question was regarding the positioning of our work relative to prior work on pruning.
In this context, our work tries to provide the first rigorous analysis of approaches inspired by the Optimal Brain Surgeon paper of [1], achieving quadratic convergence rates in the case of an idealized algorithm for choosing masks. Surprisingly, we are also able to provide similar rates for a practical algorithm based on greedy TopK mask selection (Lemma 3).
In addition, our work directly connects to recent work on pruning: as discussed in Section 3.4, methods such as WoodFisher/WoodTaylor [2] and OBC [3] are special cases of our iterative framework. Generally, our iterative approach can be seen as complementary to methods investigating better one-shot Hessian approximations, such as CBS [4], as they can be directly plugged into our algorithm’s projection/selection steps.
To fully address this point, we have provided detailed comparisons with additional recent work suggested by the reviewers, such as CBS and CHITA [5], in the individual responses, and will definitely expand this discussion and the citations in the next revision of our work.
2. Another general question regarded expanding the experimental comparisons. To address this, we provide additional experiments on Llama2-7B and Llama3-8B models, again providing significant improvements relative to the baselines (specifically, SparseGPT[6]). Moreover, we show that this improvement also holds on the MMLU LLM task, a subset of the suggested LLM-KICK benchmark. (We plan to further expand this comparison in the next revision.)
3. We provide the extra experiment results in the attached PDF file of this global rebuttal.
4. Beyond these two common points, we have provided answers to each individual reviewer's concerns.
We hope that our responses address the reviewers’ questions, and would be happy to continue the discussion during the rest of the rebuttal period.
[1] LeCun, Yann, John Denker, and Sara Solla. "Optimal brain damage." Advances in neural information processing systems 2 (1989).
[2] Singh, Sidak Pal, and Dan Alistarh. "Woodfisher: Efficient second-order approximation for neural network compression." Neurips 2020
[3] Frantar, Elias, and Dan Alistarh. "Optimal brain compression: A framework for accurate post-training quantization and pruning." Neurips 2022
[4] Yu, X., Serra, T., Ramalingam, S., & Zhe, S.. “The combinatorial brain surgeon: pruning weights that cancel one another in neural networks.” ICML 2022
[5] Benbaki, Riade, et al. "Fast as chita: Neural network pruning with combinatorial optimization." ICML 2023
[6] Frantar, Elias, and Dan Alistarh. "Sparsegpt: Massive language models can be accurately pruned in one-shot."ICML 2023
Pdf: /pdf/07638911d98e423fd0091b77afa1c2fc35f2f917.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Adapting Diffusion Models for Improved Prompt Compliance and Controllable Image Synthesis | Accept (poster) | Summary: This paper proposes a new family of diffusion models, FG-DM, for better prompt compliance. Unlike a traditional diffusion model, the FG-DM models the joint distribution of images and conditioning variables, such as semantic, sketch, depth or normal maps, via a factor graph. These extra factors largely boost prompt compliance compared to using text only. Users can also easily edit the target image by modifying intermediate conditions. This work shows better results compared to vanilla diffusion.
Strengths: By decomposing the text into different factors such as segmentation maps, depth maps, etc., and using these factors as conditions for generation, this method achieves better prompt compliance. Additionally, the introduction of intermediate factors makes editing easier.
Weaknesses: Since the intermediate factors are generated sequentially, this method needs more time for generation. The novelty of the method is limited.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1) How much more time do we need for the conditional generation chains? (for example, how much more time do we need when we use one or two conditions compared to original SD?)
2) If we use an off-the-shelf condition generator e.g. a text-to-depth map generator, and we fed the generated depth map to controlnet for generation, what is the advantage of our method?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive comments and useful feedback on the paper.
Q1. How much more time do we need for the conditional generation chains? (for example, how much more time do we need when we use one or two conditions compared to original SD?)
Regarding time, we make several considerations. First, the fact that the conditions can be synthesized at lower resolution makes their synthesis much faster than vanilla SD. Typically SD requires 3+ seconds per image, while each condition typically requires less than 1 second, so the increase in overall synthesis time is small. This is shown in Table 4 of the main paper, where generating an image with SD requires 3.25 seconds, while generating an image with one condition using FG-DM takes 4.45 seconds for 10 timesteps and 4.81 seconds for 20 timesteps. For two conditions, these numbers become 4.9/5.26 seconds for t=10/t=20 timesteps. Second, a comparison to vanilla SD is somewhat unfair, because the whole point is to increase the prompt compliance of vanilla SD. A fairer comparison is to popular methods for achieving this goal, such as A-E, which the table shows to require 36 seconds per image! Third, while the implementation of SBPC with the FG-DM achieves good results for single image synthesis (N = 1), the bottom half of the table shows that its true potential is unlocked by larger batch sizes: both the SD and FG-DM implementations of SBPC achieve much higher average object recall in this setting. However, for SD, sampling a batch of $N = 10$ high resolution images requires 23s per prompt, while the FG-DM is 4x faster (when using 10 DDIM steps for the segmentation mask and the same 20 steps for image synthesis) and achieves the same object recall.
Q2. If we use an off-the-shelf condition generator e.g. a text-to-depth map generator, and we fed the generated depth map to controlnet for generation, what is the advantage of our method?
Table 8 and Table 13 in the Appendix compare to the sequential implementation trained with segmentation condition, where the models are trained independently and used as the reviewer suggests. They show that joint training and joint inference of image and factor synthesis improves the quality of the generated images as compared to sequential operation.
Q3. The novelty of this method is limited.
Regarding novelty, we would note the following facts.
- First, the paper is the first to propose Sampling based Prompt Compliance (SBPC), which is shown to be much more effective than the existing Inference based prompt compliance (IBPC) approach. We show that IBPC methods are slower and underperform the proposed implementation of SBPC with the FG-DM.
- Second, we do introduce the attention distillation loss, which is shown to make a difference, both in the quantitative ablations of Table 2 of the main paper and the qualitative examples that we now show in the rebuttal pdf file (also see reply to Reviewer cyg3).
- Third, the concept of the FG-DM is novel and could have many other applications beyond improved prompt compliance. For example, its modularity enables more effective continual learning schemes, where factors are added or updated for specific conditions without the need to fully retrain the model. These types of benefits are not easy to measure/quantify without devoting a complete paper to the formulation of the continual learning problem, introducing experimental protocols, baselines, etc. We intend to consider the issue in future work.
- Finally, the paper shows some results that we consider somewhat surprising (and thus "novel"), e.g. 1) that SD can be prompted, without even changing the auto encoder, to produce segmentation maps or other conditions and that this even maintains the generalization ability of SD, or 2) that, as is now shown in the rebuttal pdf, model inversion and editing methods work as well for conditions as for image synthesis (see reply to Reviewer cyg3). Overall, we believe that all these contributions make the paper quite novel, and the FG-DM an interesting new direction for further research by the text-to-image synthesis community.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. You have addressed most of my concerns. | Summary: This paper proposes a method for sequentially generating images using a frozen SD. Starting from a text prompt, the process iteratively generates several visual conditions and images, with each step depending on the previous ones. The model uses the VAE in SD to encode and decode visual conditions and employs a condition-specific adapter to introduce information into the SD for generating these conditions. This approach makes the image generation process more controllable, flexible, and easier to edit.
Strengths: 1. The iterative generation method using a stable diffusion model is impressive. It provides strong control over the output while maintaining flexibility, aided by classifier-free guidance training.
2. The new method allows for flexible editing by modifying visual conditions and letting the model generate realistic content based on these changes.
Weaknesses: 1. The newly introduced iterative process adds an extra loop to each generation step.
2. Some components and experimental details are not well-explained.
Technical Quality: 4
Clarity: 2
Questions for Authors: 1. The assumption behind attention distillation suggests that word tokens and visual regions remain constant, even if the visual regions are non-semantic, like poses. Why does this happen? It seems odd that the attention score of a general word like "person" can help generate the poses of a person.
2. Why do you use independent noise for different factors?
3. The experimental details are unclear. Does the model run with a single visual condition for each result, or does it generate all conditions sequentially? It would help to clearly present the generation process for each experiment, including Tables 1-5 and Figures 5-7.
4. How do you implement null conditions for different variables?
5. You claim that after choosing the best segmentation factor, the image synthesis only needs to run once. Why is this? Does the segmentation factor determine the quality of the generated images, or is your goal just to ensure the object appears in the generated image?
Confidence: 4
Soundness: 4
Presentation: 2
Contribution: 3
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive comments and useful feedback on the paper.
Q1. The newly introduced iterative process adds an extra loop to each generation step.
Although the proposed FG-DM adds an extra loop in the generation process, in Table 4 we show that using lower resolution and fewer timesteps for conditional generation has no impact on overall FG-DM performance. Hence, the increase in inference time over vanilla SD is small (4.45 secs for FG-DM vs 3.25 secs for vanilla SD). However, this is not a fair comparison since the object recall of the FG-DM (67.8) is much higher than that of SD (59.8). Note that the whole point is to improve prompt compliance. For reference, the popular Attend-Excite method [4] has a much lower improvement in recall (63.6) for a much larger inference time (36 secs). On the other hand, the table shows that the FG-DM is 4 times faster than SD under a fair comparison, where the two methods have equal recall. Please see the reply to Reviewer t2kD for a more detailed discussion.
Q2. Some components and experimental details are not well-explained.
Good point about the missing details. Tables 1-5 and Figures 5-7 use a single visual condition for image generation. The results presented in Figure 2 and Figure 19 follow the chain segmentation--pose--image (image conditioned on both). In general, the FG-DM can be trained with a single condition or multiple conditions. However, the complexity (training, inference, and model size) increases with the number of conditions. Hence, the best model configuration is a trade-off between meeting application requirements and complexity. The critical condition for improved prompt compliance is segmentation, since it allows the measurement of object recall. It is also important for editing, since it allows moving and resizing objects. In the rebuttal pdf we show that it can even be combined with editing methods like LEDITS++ to enable text-based object editing operations. The other conditions are important for some image editing applications, e.g. changing object depths, people's poses, or sketching, and can also contribute image quality improvements as shown in Figures 2-3. Overall, the FG-DM is a very flexible framework, allowing users to mix and match different factors so as to best meet application needs. We will add all details to the final revision.
Q3. It seems odd that the attention score of a general word like "person" can help generate the poses of a person.
The reviewer has a good point that it does not make a lot of sense to use attention distillation for pose. We do not use it for the pose factor and only use it for other conditions when these are the only conditioning factors. Note that the pose factor is conditioned by the segmentation, which is itself subject to the attention distillation loss. For multi-condition FG-DMs, the distillation loss is only required for the synthesis of the first condition, which is conditioned by text alone (see Figure 4 of the main paper). For the subsequent factors, which are already conditioned by a visual condition (e.g. pose conditioned by segmentation), the attention loss is not needed. We will clarify this in the revised version.
Q4. Why do you use independent noise for different factors?
The question about independent noise is an interesting one. Independent noise is a natural assumption (after all, the noise is independent even across steps of the same diffusion model factor) and enables a simple training and inference process. It is also not clear what a natural model for the dependencies between the noise of a segmentation mask and an image would be, for example. Maybe one could consider the use of a common noise in all diffusion chains. This would be trivial to implement in the forward chain (just sample noise once per step instead of per step and per factor) but less trivial for the reverse denoising step, as some regularization would be needed to guarantee that all the chains produce the same noise. Maintaining the consistency of the noise across factors in the denoising process would likely be difficult. So, while we do not rule out that dependent noise may be an interesting possibility, we do not see an obvious way to implement it and it is not clear that it would be beneficial. We leave this question as a topic for future work.
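The contrast above can be made concrete with a minimal sketch of one forward noising step for two factors (e.g. a segmentation latent and an image latent). This is an illustrative toy, not the authors' code: the function names, the two-factor setup, and the single-scalar schedule `alpha` are assumptions for exposition.

```python
import numpy as np

def forward_step_independent(z_seg, z_img, alpha, rng):
    """One forward diffusion step with independent noise per factor,
    the assumption used in the paper (sketch)."""
    eps_seg = rng.standard_normal(z_seg.shape)
    eps_img = rng.standard_normal(z_img.shape)
    z_seg = np.sqrt(alpha) * z_seg + np.sqrt(1 - alpha) * eps_seg
    z_img = np.sqrt(alpha) * z_img + np.sqrt(1 - alpha) * eps_img
    return z_seg, z_img

def forward_step_shared(z_seg, z_img, alpha, rng):
    """Hypothetical variant: one noise sample shared by all factors.
    Trivial in the forward chain (sample once per step), but the reverse
    chain would need extra regularization to keep the denoisers'
    noise predictions consistent across factors."""
    eps = rng.standard_normal(z_seg.shape)  # assumes equal latent shapes
    z_seg = np.sqrt(alpha) * z_seg + np.sqrt(1 - alpha) * eps
    z_img = np.sqrt(alpha) * z_img + np.sqrt(1 - alpha) * eps
    return z_seg, z_img
```

As the rebuttal notes, the shared variant is easy only in the forward direction; nothing in this sketch addresses the harder reverse-chain consistency problem.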
Q5. How do you implement null conditions for different variables?
We use a null condition to support classifier free guidance during inference when generating conditions, even for the single condition FG-DMs. The null condition is implemented by using an image filled with zero values as the conditional input for the final factor of FG-DM (ControlNet). For each visual condition model (seg, depth, sketch, normal), we use the empty prompt ("") as the null condition while training the models.
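The described null conditions admit a short sketch: a zero-filled image as the null visual condition, an empty string as the null text prompt, and the standard classifier-free guidance combination at inference. This is a hedged illustration of the setup as we read it, not the paper's implementation; the function names are ours.

```python
import numpy as np

NULL_PROMPT = ""  # empty text prompt used as the null condition during training

def make_null_condition(cond_image):
    """Null visual condition for classifier-free guidance: an image of
    the same shape as the conditioning input, filled with zeros."""
    return np.zeros_like(cond_image)

def cfg_combine(eps_uncond, eps_cond, guidance_scale):
    """Standard classifier-free guidance combination of the noise
    predictions under the null and the real condition (generic CFG,
    not specific to the FG-DM)."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```

With `guidance_scale = 1` this reduces to the conditional prediction; larger scales push the sample further toward the conditioned output.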
Q6. is your goal just to ensure the object appears in the generated image?
The image synthesis only needs to run once because we are only attempting to improve prompt compliance. As the reviewer notes, the idea is to sample segmentations until we can ensure that the objects appear in the image. We verify the object recall with the segmentation factor for the results presented in Tables 3 and 4 and Figure 7. It is true that we could also sample multiple images given the segmentation, but this would not improve prompt compliance, only possibly image quality given the segmentation. We do not do this because 1) the existing models already produce high quality images (it is not clear that the extra computation would be justified), 2) the existing metrics of image quality do not support an evaluation of image quality fine-grained enough to even know if this would make a big difference in terms of human judgements of quality for the resulting images, and 3) it can be done for any diffusion model (there is no special advantage of the FG-DM here).
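The sampling-based scheme described above can be sketched as a loop: draw several cheap low-resolution segmentation candidates, score each by object recall against the prompt's objects, keep the best, then run the more expensive image-synthesis factor a single time. The helpers `sample_segmentation`, `object_recall`, and `synthesize_image` are placeholders standing in for the FG-DM's factors and metric; this is a sketch of the logic, not the actual pipeline.

```python
def sbpc_generate(prompt, objects, sample_segmentation, object_recall,
                  synthesize_image, n_samples=10):
    """Sampling-Based Prompt Compliance (sketch): pick the candidate
    segmentation with the highest object recall, then synthesize the
    image once, conditioned on that segmentation."""
    best_seg, best_recall = None, -1.0
    for _ in range(n_samples):
        seg = sample_segmentation(prompt)
        recall = object_recall(seg, objects)
        if recall > best_recall:
            best_seg, best_recall = seg, recall
        if best_recall == 1.0:  # all requested objects present; stop early
            break
    return synthesize_image(prompt, best_seg), best_seg
```

Because the candidates are sampled at low resolution with few DDIM steps, the loop stays cheap relative to the single full-resolution image synthesis at the end.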
[4] Chefer, H., et. al.: Attend-and-excite: Attention-based semantic guidance for text-to-image diffusion models. SIGGRAPH (2023)
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: Thank you for your response. Most of my concerns have been addressed. | Summary: The paper proposes unified generation framework of simultaneously generate image, segmap. depthmap, etc.
Strengths: The idea of simultaneous generation of images with various conditioning maps is very interesting.
Weaknesses: 1. Although the method shows good performance in image editing for generated images, it does not include any discussion of whether the framework can be applied to real image editing. Can it be applied to inverted real images? Can the framework still predict the segmentation map and keypoints of given real images? If that is possible, there would be great impact and future usage.
If it is still applicable to real images, then please show some results which are similar to ControlNet.
2. The manuscript does not contain any qualitative comparison of ablation study. Please show some visual results on ablation study.
3. Please show the generated image results using handcrafted conditions, not the edited condition from generated conditions. For example, give the hand-drawn sketches (user-identified one) to the model and show the generated outputs.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive comments and useful feedback on the paper.
Q1. Can it be applied to inverted real images? Can the framework still predict the segmentation map and keypoints of given real images? If it is still applicable to real images, then please show some results which are similar to ControlNet.
The editing question is quite interesting and motivated us to consider aspects that we had not thought about. Since the framework is based on Stable-Diffusion, it can be applied for inversion. Figure 2 of the rebuttal pdf provides an answer to our understanding of the question "Can it be applied to inverted real images? Can the framework still predict the segmentation map and keypoints of given real images?"
We have experimented with editing of both real images and their segmentation masks. The top of Figure 2 refers to inversion of the segmentation mask. We use an off-the-shelf OpenSEED [2] model to extract the segmentation map of a real image (shown on the bottom left of the figure) and apply the FG-DM segmentation factor model for inversion and editing using LEDITS++ [3], a recent method for text based image editing. We apply LEDITS++ to the segmentation factor to 1) replace the mask of the woman by that of a chimp (third image of the top row) and 2) to delete the mask of the umbrella (fifth image). New images (fourth and sixth) are then generated by the image synthesis factor conditioned on the edited segmentation masks. We have found that the inversion and editing of segmentation masks is quite robust. The synthesized masks usually reflect the desired edits. However, because the final image synthesis is only conditioned on these masks, the synthesized image does not maintain the background of the original image. The synthesized image is a replica of the original image at the semantic level (similar objects and layout) but not at the pixel level. From our experiments, this method has high robustness and quality for semantic-level editing. It can also be used for pixel-level editing using copy and paste as discussed below.
We next investigated pixel-level inversion and editing, which is harder. The bottom part of Figure 2 shows the comparison of LEDITS++ editing with inversion by SD and by the image synthesis factor of the FG-DM. For the latter, we apply inversion to the ControlNet image generation factor using the real image and the segmentation mask extracted from it. Then we perform the LEDITS++ edit using the edited mask from the top part of Figure 2 (inverted with the FG-DM segmentation factor) to produce the edited image as shown in columns 4 and 5. This pixel-level inversion and editing tends to maintain the background of the original image but is much less robust than mask-level editing in terms of editing quality. This can be seen from the images in columns 2 and 3, which show the inversion using SD, which fails to produce a realistic chimp and turns the woman into a stone sculpture. The FG-DM produces more meaningful edits, which still have room for improvement, as shown in columns 4 and 5. The last column of the bottom part of Figure 2 shows an added advantage of the FG-DM, where the chimp generated in the top portion can be pasted into the original image due to the availability of the segmentation mask. In this example the pasting is rough around the object edges since we have made no attempt to beautify it. We believe that this copy and paste technique can produce high quality images with a bit more work. Nevertheless, these examples show that real images can be edited directly or used as semantic guidance to generate new images following the specified semantic layouts. The pixel-level inversion can also be improved with more work, but not within the time frame of the rebuttal. It will require some additional technical contributions and we leave it, as well as the optimization of the copy and paste technique, as a topic for a subsequent paper on the use of FG-DMs for text-based image editing.
Q2. The manuscript does not contain any qualitative comparison of ablation study. Please show some visual results on ablation study.
Great point about ablations. Figure 1 of the rebuttal pdf shows some qualitative results of the ablation for the impact of the attention distillation loss. There is a clear qualitative benefit in introducing this loss. Without it, the model generates less accurate masks, leading to an unrealistic pizza making depiction/ cart-person relationship/ zebra pair from left to right. This confirms the quantitative ablation showing the benefits of the attention distillation loss in Table 2 of the main paper but provides a stronger illustration of the advantages of the loss, which tends to produce more "plausible" scenes. Such plausibility is difficult to measure with quantitative metrics. For example, the CLIP score is not sensitive to the fact that the cart and the person are not interacting in a normal way, or that the pizza making activity is unrealistic.
Q3. For example, give the hand-drawn sketches (user-identified one) to the model and show the generated outputs.
We note that if hand-drawn sketches are provided the model reduces to a standard conditional DM. In this case, it inherits all the capabilities of the ControlNet, or whatever DM is used to implement the final factor. Figure 3 of the rebuttal pdf shows some qualitative results on using hand-drawn sketches to synthesize images. These are as good as the ones produced with the ControlNet.
[2] Hao Zhang, Feng Li, Xueyan Zou, Shilong Liu, Chunyuan Li, Jianfeng Gao, Jianwei Yang, Lei Zhang. A Simple Framework for Open-Vocabulary Segmentation and Detection, ICCV 2023
[3] Manuel Brack, Felix Friedrich, Katharina Kornmeier, Linoy Tsaban, Patrick Schramowski, Kristian Kersting, Apolinário Passos. LEDITS++: Limitless Image Editing using Text-to-Image Models, CVPR 2024
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed rebuttal. Most of my concerns have been addressed.
Just one little concern remaining is that there are so many concurrent and previous methods which try similar tasks.
I leave the final decision to the meta-reviewer whether this manuscript can go beyond the borderline of acceptance in this extremely competitive field. | Summary: This paper introduces Factor Graph Diffusion Models (FG-DMs) to address limitations in current generative models. FG-DMs model the joint distribution of images and conditioning variables using a factor graph decomposition, offering advantages like efficient sampling for better prompt compliance, fine-grained editing, and explainability. The method's effectiveness is validated on several datasets.
Strengths: This paper introduces a method to model the joint distribution of images and conditioning variables, enhancing prompt compliance and control over image synthesis.
The Attention Distillation Loss is proposed to enhance the consistency and quality of generated images and conditions.
Weaknesses: The proposed method seems promising; however, it's very important to compare with current SOTA works to justify its effectiveness. The author only put Stable Diffusion in Table 5, which is not fair. Please include more recent works, especially since 2023, such as StyleGAN-T: Unlocking the Power of GANs for Fast Large-Scale Text-to-Image Synthesis, and Make a Scene.
Technical Quality: 3
Clarity: 3
Questions for Authors: What's the training cost of your model? Is there a comparison with other methods?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive comments and useful feedback on the paper.
Q1. The proposed method seems promising; however, it's very important to compare with current SOTA works to justify its effectiveness. The author only put Stable Diffusion in Table 5, which is not fair. Please include more recent works, especially since 2023, such as StyleGAN-T: Unlocking the Power of GANs for Fast Large-Scale Text-to-Image Synthesis, and Make a Scene.
Unfortunately, the StyleGAN-T work mentioned by the reviewer only has open-source code; the trained checkpoints are not available. This makes it impossible for us to compare, since we are an academic group and do not have the resources to train the model from scratch. Make-a-Scene has neither code nor a released checkpoint, which prevents us from performing quantitative comparisons on the COCO validation dataset. Instead, we compared the qualitative results of the FG-DM with Make-a-Scene and SpaText in Figure 13 of the Appendix (figures taken from the respective papers). Note that the proposed FG-DM synthesizes both segmentation maps and images, while the results of these methods were obtained with manual sketching of the segmentation, which is cumbersome and requires some skill. Nevertheless, the FG-DM generates high quality images that adhere well to the prompts as compared to these prior works.
In any case, the reviewer has a good point in requesting quantitative comparisons to more recent methods. For this, we compare with the recent 4M model [1], an autoregressive model trained from scratch on both discriminative and generative tasks. In this case, we use the largest model (4M-XL) released by the authors. Table 1 in the rebuttal pdf presents the updated comparison. Figure 1 in the rebuttal pdf shows a qualitative comparison between the FG-DM and 4M. It can be seen that 4M generates images of weaker quality (distorted hands, missing person's head, deformed zebra bodies) as compared to the FG-DM with/without the attention distillation loss.
Q2. What's the training cost of your model? Is there a comparison with other methods?
Regarding training time, one of the significant benefits of the proposed FG-DM is the short training time, since it only requires training adapters that are added to the pretrained SD model. Unfortunately, detailed comparisons are hard to perform since, as mentioned above, we do not even have the resources needed to train many of the models that have appeared in the literature. For the FG-DM factors, we simply train the adapters for 100 epochs; please see Section A.5.2 in the Appendix of the original paper for a more detailed discussion. The training time per factor is about 1.5 days on 2 A100 GPUs, or 72 A100 GPU hours. Finetuning the ControlNet takes about 2 additional days on 2 A100 GPUs. We find it quite interesting that these simple adaptations work quite well even for the synthesis of conditions, which the SD model was not originally trained to synthesize. For certain conditions the training even converges faster, e.g. for depth maps it only takes 10 epochs and we do not observe any improvement after that.
For reference, training the original SD model requires about 150,000 A100 GPU hours, which is $\approx 2,000$ times larger. The FG-DM is also much more efficient than 4M, which requires 1.5 days training on 64 A100 GPUs for their smallest model (4M-B) and 8 days on 128 A100 GPUs for their largest model (4M-XL) which produces lower quality images as discussed above.
[1] Mizrahi, D., Bachmann, R., Kar, O.F., Yeo, T., Gao, M., Dehghan, A., Zamir, A.: 4M: Massively multimodal masked modeling. In: Thirty-seventh Conference on Neural Information Processing Systems (2023)
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed rebuttal. Most of my concerns are properly addressed. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their time and comments. The rebuttals for individual reviews are posted below each review. Here we summarize the responses.
- In the rebuttal pdf, we added the qualitative results of ablation study with and without attention distillation loss (Figure 1). We also added quantitative and qualitative comparisons to the recent 4M-XL model (Figure 1 and Table 1). We show additional qualitative results using hand-drawn sketches (Figure 3).
- We show that FG-DM is also applicable for inversion and editing of real images and their extracted segmentation maps in Figure 2 of the rebuttal pdf.
- We clarify concerns regarding novelty, computational cost and the inference time for FG-DM.
- We apologize for missing the details on the experiments and clarified them in the rebuttal. We will add the details in the revised version.
Pdf: /pdf/cb1a3c83580dfe59164cdb4476704a312bbe166e.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
The Impact of Initialization on LoRA Finetuning Dynamics | Accept (poster) | Summary: The paper investigates the impact on training dynamics of two initialization schemes for LoRA. For this purpose the authors investigate the asymptotic behaviour of activations and weights for LoRA adapters. The authors find that init[A], where A is initialized randomly and B is initialized with zeros, leads to more efficient learning dynamics but also to some numerical instabilities. This initialization method is the one most commonly used for LoRA-style methods. Further, the authors found that init[B] leads to suboptimal learning dynamics but is less prone to numerical instabilities. Finally, init[A] performs better with larger learning rates than init[B], which is supported by experiments on NLU as well as NLG finetuning tasks.
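The two schemes compared in the paper can be sketched in a few lines. This toy is our own illustration, not the paper's code: the function name and the Kaiming-style scaling factors are assumptions; the essential point is only which factor is random and which is zero, so that BA = 0 and finetuning starts from the pretrained weights.

```python
import numpy as np

def lora_init(n_in, n_out, rank, scheme, rng):
    """Initialize LoRA factors B (n_out x rank) and A (rank x n_in).
    init[A]: A random (Kaiming-style Gaussian), B zero -- the common scheme.
    init[B]: B random, A zero.
    Either way the product BA is zero at the start of finetuning."""
    if scheme == "init[A]":
        A = rng.standard_normal((rank, n_in)) / np.sqrt(n_in)
        B = np.zeros((n_out, rank))
    elif scheme == "init[B]":
        A = np.zeros((rank, n_in))
        B = rng.standard_normal((n_out, rank)) / np.sqrt(rank)
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    return B, A
```

Both schemes leave the adapted layer unchanged at initialization; the paper's point is that they nonetheless induce different finetuning dynamics in the large-width limit.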
Strengths: The paper is well motivated and presents theoretical analysis accompanied by empirical experiments.
Claims are supported by theoretical analysis as well as relevant experiments.
Furthermore experiment hyperparameters as well as hardware requirements are well documented.
Weaknesses: **Novelty**
Most of the theoretical analysis of the LoRA finetuning dynamics in this paper has already been presented in [1].
The authors should have made it clearer how their work differs from the analysis done in [1], especially since most formulas and notation are identical.
Further, both of the methods evaluated in this paper have also been investigated in [1].
The paper would greatly benefit from a clear explanation of how the analysis in this work differs from results published in [1].
Finally, it is not clear if the contribution of individual LoRA layers to feature learning can be derived from a setting where only a single LoRA layer is trainable while all others are frozen (as claimed in Section 3.1).
**Experimental results**
Since the main gist of the paper is to investigate the learning dynamics of LoRA, it would be good to also investigate different initialization schemes aside from init[A] and init[B] prevalent in the literature, e.g. Gaussian init [2], Kaiming init [3], principal components [4], etc.
In [1] only different learning rates for A and B have been investigated.
**Significance of results**
While the main finding that init[A] has a better optimal performance is interesting, this initialization scheme is already the common initialization scheme used by LoRA.
Therefore, the impact of the main findings is marginal.
[1] Hayou et al., LoRA+: Efficient low rank adaptation of large models., ICML 2024
[2] Hu et al., LoRA: Low-Rank Adaptation of Large Language Models, ICLR 2022
[3] He et al., & Sun, J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification., ICCV 2015
[4] Meng et al., PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models., arXiv 2024
Technical Quality: 3
Clarity: 2
Questions for Authors: - Did the authors investigate other commonly used initialization schemes?
- Will the authors provide code that can be used to reproduce the results in the experiments section?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have properly addressed limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's feedback, but we respectfully disagree with several points and believe there are significant misunderstandings in their assessment. We address these below:
1) **Initialization schemes**: The reviewer suggests investigating "different initialization schemes aside from init[A] and init[B]," citing Gaussian, Kaiming, and principal component initializations. We believe this is a fundamental misunderstanding of our work. Init[A] and Init[B] in our setup already use Kaiming (Gaussian) initialization. The distinction lies in whether A or B is set to zero to ensure $BA=0$ at the start. More generally, our method covers (as stated in Footnote 2 in page 3) general random distributions (not only Gaussian).
2) **Connection to prior work**: The reviewer states, "In [1] only different learning rates for A and B have been investigated." This is precisely our point - [1] focused on learning rates, while our work addresses the unexplored impact of initialization schemes in vanilla LoRA. While our paper uses similar tools to those used in [1], to study the learning dynamics in the large width limit, our results are *orthogonal* to those of [1]. In this paper, we study the impact of the initialization scheme in **vanilla** LoRA as introduced in [2] (same learning rate for A and B), while in [1], the authors study the impact of setting different learning rates for A and B. Moreover, note that the tools used in our paper and also in [1] are not new and are based on the theory of infinite-width networks. For instance, the same machinery was used in the Tensor Programs series (see e.g. [10]). We decided to use the $\gamma$ notation introduced in [1] because it simplifies the message for practitioners. However, as we explained, our results are orthogonal to those of [1]. We will add this discussion to the revised version of the paper.
3) **Contribution**: The reviewer claims our findings have "marginal" impact because init[A] is already commonly used. We respectfully disagree with this argument for the following reasons:
a) It ignores the scientific value of providing the first rigorous explanation for an empirical practice.
b) It overlooks the importance of validating existing methods, which is crucial in scientific research.
c) It fails to recognize that our work prevents potential future misapplications of init[B], which could lead to suboptimal results.
d) it judges our paper as a 'method' paper where the main contribution is the introduction of a novel method. This is not the case. Our paper studies theoretically and empirically an existing method, and shows which initialization is better.
4) **Scope**: Multiple layer analysis presents challenges. With two or more layers, changes in LoRA features $Z_B = BAz$ are not only due to updates of $A$ and $B$ but also changes in $z$ via previous LoRA layers. This requires a more refined, layerwise definition of efficient feature learning. While the fundamental analysis should remain similar, this adds unnecessary complexity to the setup. Importantly, our empirical results with multiple LoRA layers confirm that our findings hold in these more complex scenarios.
[1] Hayou et al., LoRA+: Efficient low rank adaptation of large models., ICML 2024
[2] Hu et al., LoRA: Low-Rank Adaptation of Large Language Models, ICLR 2022
[10] Feature Learning in Infinite-Width Neural Networks (2021). Yang and Hu.
We believe our work provides valuable, novel insights into LoRA dynamics, filling a gap in the literature. We are surprised by the low score assigned to our paper, and we hope this clarification addresses the misunderstandings raised above.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed response and the clarifications.
There was indeed a misunderstanding, I would recommend to make the fact that the analysis comprises general random distributions more explicit in the main text and not only mention it in a footnote.
Is it possible to extend this analysis to initialization schemes other than general random distributions, as in e.g. [1]?
I believe this would strengthen the paper.
Further, after carefully double-checking the contributions of [2], I agree that the paper presents a valid contribution, which is why I decided to raise my score. However, since I did not carefully check the math, I will also decrease my confidence.
[1] Meng, F., Wang, Z., and Zhang, M. PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models, 2024.
[2] S. Hayou, N. Ghosh, and B. Yu. LoRA+: Efficient Low Rank Adaptation of Large Models. 2024. | Summary: This paper investigates the impact of initialization schemes on the finetuning dynamics of Low Rank Adaptation (LoRA) in large language models. The authors compare two initialization approaches: Init[A], where A is initialized randomly and B to zero, versus Init[B], where B is initialized randomly and A to zero. Through theoretical analysis and empirical experiments, they demonstrate that Init[A] allows for larger learning rates and leads to more efficient feature learning compared to Init[B], albeit at the cost of some internal instability. The paper provides a rigorous theoretical framework based on large width limits and supports the findings with experiments on toy models and real-world language tasks.
Strengths: S1: The paper addresses an important, understudied, and usually hand-waved-away aspect of LoRA, namely the impact of initialization schemes. This is a relevant topic given the widespread use of LoRA for efficient adaptation of large language models.
S2: The theoretical analysis is rigorous and well-grounded, using asymptotic analysis in the large width limit to derive principled insights about the dynamics of LoRA finetuning under different initialization schemes.
S3: The empirical results on both toy models and real-world language tasks provide strong support for the theoretical findings, demonstrating the practical relevance of the initialization choice.
S4: The paper presents a clear trade-off between feature learning efficiency and internal stability, providing nuanced insights that go beyond simply recommending one initialization over the other.
S5: I would like to commend the authors for putting a great amount of work into both the main document and the appendix. Overall, not only does the paper read well and seem theoretically and experimentally sound, but the manuscript also includes an excellent degree of experimental detail, so it is likely that future work will be able to cleanly build on it.
Weaknesses: W1: While the theoretical analysis is sound, the paper could benefit from a more intuitive explanation of why Init[A] allows for larger learning rates and more efficient feature learning. This would make the insights more accessible to some practitioners.
W2: The experiments focus primarily on language models and NLP tasks. This is definitely acceptable and overall a very sound choice; however, including experiments from other domains (e.g., finetuning an image classification model, or another architecture related to a vision transformer) would strengthen the experimental section significantly and increase the overall impact of the paper in the broader NeurIPS community.
Technical Quality: 4
Clarity: 4
Questions for Authors: Q1: How sensitive are the results to the choice of LoRA rank? Do the theoretical predictions and empirical findings hold across different rank values?
Q2: The paper focuses on vanilla LoRA. How do the authors expect these initialization schemes to interact with variants like QLoRA or LoRA+? Would the trade-offs and recommendations change?
Q3: Given the internal instability observed with Init[A], are there any potential negative consequences for downstream task performance or generalization ability? How might this instability manifest in practice (and would there be a difference depending on input domain and/or other parameters of the model + problem space)?
Q4: The theoretical analysis seems to assume a single LoRA layer, for simplicity. How well do the insights generalize to more realistic scenarios with multiple LoRA layers throughout a network?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: L1: The theoretical analysis seems to rely on assumptions about the asymptotic behavior in the large width limit. The paper could benefit from a more detailed discussion of how these assumptions may or may not hold in practical scenarios with finite-width networks.
L2: The empirical evaluation, while comprehensive, is limited to a subset of NLP tasks and model architectures. Expanding the evaluation to a broader range of tasks, model sizes, and architectures would strengthen the generalizability of the findings.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive and constructive comments. We address their main questions below:
1) **Sensitivity to LoRA rank**: Our results hold for different rank values. For LoRA rank $r$, we used two primary values: $r=8$ (for RoBERTa) and $r=64$ (for LLama). We also conducted limited experiments with $r=4$ for RoBERTa. However, we chose to allocate more compute resources to $r=8$ to increase the number of random seeds. We designed these experiments to maximize the amount of useful empirical results within our compute budget constraints. All our experiments were conducted using the LoRA+ codebase (available on GitHub) and can be easily reproduced. To replicate vanilla LoRA, one can set the lora_plus_ratio to 1 (ensuring the same learning rate for both A and B) and choose between Init[A] or Init[B] using the use_original_lora_init argument.
2) **Interaction with LoRA variants**: We appreciate this interesting question. While we haven't conducted a theoretical analysis of initialization's impact on advanced LoRA variants (e.g., QLoRA and LoRA+), we can provide some intuition: For QLoRA, we expect similar results since the only difference from LoRA is the quantization step. Our analysis should remain fundamentally the same.
For LoRA+, the situation is more complex as it involves choosing different learning rates for $A$ and $B$. While we can't definitively state the outcomes, our preliminary results suggest that the optimal ratio in LoRA+ is affected by the choice of initialization (init[A] vs init[B]).
3) **Instability of Init[A] in practice**: As specified in the paper, with Init[A], LoRA "internal" features ($Az$) grow at most as $\Theta(n^{1/2})$ with respect to width (embedding dim). This growth, although potentially problematic, is slow (in $n$) and should only become an issue for extremely large widths. In practical settings (e.g., $n \approx 10^3$ for LLaMA 3.1 405B), this is not significantly problematic. The constant in the $\Theta(\cdot)$ growth term depends on the model and downstream task.
4) **Generalization to multiple layers**: Multiple layer analysis presents challenges. With two or more layers, changes in LoRA features $Z_B = BAz$ are not only due to updates of $A$ and $B$ but also changes in $z$ via previous LoRA layers. This requires a more refined, layerwise definition of efficient feature learning. While the fundamental analysis should remain similar, this adds unnecessary complexity to the setup. Importantly, our empirical results with multiple LoRA layers confirm that our findings hold in these more complex scenarios. | Summary: The paper analyzes the impact of different initialization techniques for A and B matrices in LoRA adapters. Typically, either A or B matrix is initialized with zero while the other is initialized form a Gaussian distribution. This is done so that fine-tuning starts with LoRA adapters ($A \times B = 0$) having zero effect. However, the paper argues that it matters which of A and B initialize to 0. As per the results, $B = 0$ init allows higher optimal learning rates which leads to better overall performance. However, there is still instability during training which could be mitigated with a method like LoRA+.
Strengths: It's useful to know that the default initialization scheme in LoRA happens to be the better alternative. The experiments are convincing.
Weaknesses: Perhaps I am not the right person to review this paper, but I don't fully understand the impact of this paper. It shows that there are two ways of initializing AB matrices and the default one that we have been using until now is the better option. Additionally, it says that the default initialization is not good enough but doesn't propose anything new. Maybe the theory part is important, but I couldn't understand it fully in the limited time I had. Having said that, I still feel that the paper has something worthwhile to contribute to the community. So, I vote to accept it for now.
Technical Quality: 2
Clarity: 2
Questions for Authors: None.
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The limitations section in the paper is adequate.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive comments. We provide some details that explain the scope and contributions of our paper.
1) **Scope and contributions**: As correctly noted, in low-rank adaptation (LoRA), we generally aim to initialize the model such that $BA=0$. This naturally leads to two possible initialization schemes: either initialize $A$ randomly and $B$ to zero (init[A] in our paper), or vice versa (init[B]). Prior to our study, practitioners had no clear guidance on which scheme to use, often defaulting to the one implemented in their chosen package (e.g., init[A] in the PEFT package). Our paper provides the first definitive answer to this question: init[A] is generally superior to init[B] because it enables finetuning with larger learning rates, resulting in more efficient learning.
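In code, the two schemes amount to the following (a minimal sketch; the shapes and variance scalings are illustrative assumptions, not the exact scalings analyzed in the paper):

```python
import numpy as np

def lora_init(scheme, n_out, n_in, r, rng):
    """The two natural zero-product initializations for LoRA factors B @ A."""
    if scheme == "init[A]":   # A random, B zero -- e.g. the PEFT default
        A = rng.normal(0.0, n_in ** -0.5, size=(r, n_in))
        B = np.zeros((n_out, r))
    else:                     # init[B]: B random, A zero
        A = np.zeros((r, n_in))
        B = rng.normal(0.0, r ** -0.5, size=(n_out, r))
    return A, B
```

Either way the adapter starts with $BA = 0$, so finetuning begins from the pretrained model's outputs; the schemes differ only in which factor carries the initial randomness.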
2) **Issue at extreme width**: The reviewer correctly points out that both init[A] and init[B] become problematic in extreme scenarios where the width is exceptionally large (large enough that a value of order $n^{1/2}$ causes numerical instability). However, this is an extreme regime, and current state-of-the-art models have not yet reached this threshold.
We hope this clarifies the scope and significance of our paper. | Summary: Finetuning large language models has become a common technique among practitioners, however due to the memory footprint of practically viable LLMs, there is a dire need of techniques that allow memory-lightweight finetuning (PEFT). Among those techniques LoRA has gained immense popularity. In LoRA a dense matrix is modified via addition of a low-rank update consisting of matrices A and B. In order for this update to initially conserve the model’s outputs while preserving the ability to learn, exactly one of the matrices needs to be initialized to zeros.
“The Impact of Initialization on the Finetuning Dynamics in LoRA” proposes that the B matrix should be initialized with zeros. It hypothesizes that the reason for that is that this initialization (Init[A]) allows using greater learning rate without sacrificing stability.
The main points of the paper are supported by theoretical derivations stemming from the perspective of infinite-width networks. Furthermore, experimental evaluations serve as proof of the soundness of the theoretical approach and as practical justification of the method.
Strengths: - Mathematical support combined with experimental results
- Potentially significant due to how widespread LoRA adoption is and due to how easy the technique is to implement
- The paper is quite original - even though LoRA is a popular technique and it is a fundamental question whether the A or the B matrix should be initialized with zeros, this work seems to provide the first attempt at answering it.
Weaknesses: - There is virtually no discussion of why the training hyperparameters are set to those exact values, in particular with regard to the number of epochs and weight decay = 0. I suspect changing those HPs might mitigate the need to choose the initialization schema so carefully. Also the r parameter seems to be chosen arbitrarily.
- The gains provided by the method seem to be small and would be better visualized if they were put in the perspective of comparison with other methods such as differing r or even, if possible, full finetuning.
- The Toy Model setup is not explained clearly.
Technical Quality: 3
Clarity: 2
Questions for Authors: - There are other works that try to improve LoRA’s performance (LoRA+) or are inspired by it (VeRA, AdaLoRA, etc.). Do you believe your work could be used with those techniques?
- Do you feel that there is some simple, informal intuition suggesting why Init[A] outperforms Init[B]?
- AdamW seems to be among the most popular optimizers, however the paper focuses on rigorous derivation in case of SignSGD claiming that the result should extend to the typical choice of the Adam optimizer. Even though the authors claim they are using the AdamW optimizer in the experiments, they set weight decay = 0, effectively replacing AdamW with Adam. Would the method perform as well while using weight decay? Would it be justified theoretically the same way Adam is?
- Can you justify the fixed hyperparameters’ values?
- Can you explain the toy model setup more clearly? As I understand it now, the same model is being adapted and is used to create the dataset which does not seem to make sense.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: - I do not believe the authors have sufficiently explained the limitations of the fixed hyperparameters in the evaluations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback. We believe there are some misunderstandings, and we take this opportunity to address them.
1) **Connection to other LoRA variants**: We emphasize that we do not introduce a new method in this paper; rather, we study the finetuning dynamics of the original LoRA method [1] for two natural initializations: init[A] (set A to random and B to zero) and init[B] (set B to random and A to zero). The impact of initialization on other LoRA variants (e.g., LoRA+, VeRA, AdaLoRA) is an interesting question but outside the scope of this paper, as stated in our conclusion and limitations section. While we haven't conducted a theoretical analysis of initialization's impact on advanced LoRA variants (e.g., QLoRA, VeRA, LoRA+, AdaLoRA), we can provide some intuition: For QLoRA, we expect similar results since the only difference from LoRA is the quantization step. Our analysis should remain fundamentally the same. For LoRA+, the situation is more complex as it involves choosing different learning rates for A and B. While we can't definitively state the outcomes, our preliminary results suggest that the optimal ratio in LoRA+ is affected by the choice of initialization (init[A] vs init[B]). As the reviewer might guess from this intuition, the impact of initialization on advanced LoRA methods will depend on the method itself. We will add this discussion in the revised version.
2) **Intuition on why Init[A] outperforms init[B]**: Assume LoRA rank is $r$ and the embedding dimension is $n$. LoRA features are given by $BAz$ for input vector $z \in \mathbb{R}^n$ (LoRA input). With init[A], B is set to zero, allowing the magnitude of $Az$ to increase significantly without causing a blow-up in LoRA features $BAz$, as the small magnitude of B compensates for the growth in $Az$. We show in Thm 1 that the maximal learning rate (one that does not cause the features $BAz$ to explode) scales as $\Theta(n^{-1/2})$ in width. This is not possible with init[B], where B's magnitude is always $\Theta(1)$ due to random initialization with scaled variance, in which case the maximal learning rate scales as $\Theta(n^{-1})$ (Thm 2). We will add this discussion to the revised version.
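This intuition can be checked with a small signSGD simulation on a single LoRA layer (a sketch under illustrative assumptions: scalar output, all-ones input $z$, unit target; this is not the paper's exact setup or proof):

```python
import numpy as np

def peak_feature(init, n, r=4, steps=5, seed=0):
    """Run a few signSGD steps on loss 0.5*(B A z - 1)^2 at learning
    rate n**-0.5 and report the largest LoRA feature magnitude |BAz|."""
    rng = np.random.default_rng(seed)
    z = np.ones(n)
    if init == "A":                               # init[A]: A random, B zero
        A = rng.normal(0.0, n ** -0.5, size=(r, n))
        B = np.zeros((1, r))
    else:                                         # init[B]: B random, A zero
        A = np.zeros((r, n))
        B = rng.normal(0.0, 1.0, size=(1, r))
    lr = n ** -0.5                                # the Theta(n^{-1/2}) rate of Thm 1
    peak = 0.0
    for _ in range(steps):
        h = A @ z                                 # internal features Az
        out = (B @ h).item()                      # LoRA features BAz
        err = out - 1.0
        peak = max(peak, abs(out))
        gA = np.outer((B.T * err).ravel(), z)     # dL/dA
        gB = err * h.reshape(1, -1)               # dL/dB
        A -= lr * np.sign(gA)
        B -= lr * np.sign(gB)
    return peak
```

Comparing widths in this toy run, the peak $|BAz|$ under init[B] grows roughly like $n^{1/2}$, while under init[A] it stays essentially width-independent, consistent with the $\Theta(n^{-1/2})$ vs $\Theta(n^{-1})$ maximal-learning-rate gap described above.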
3) **AdamW vs Adam (impact of weight decay)**: We ran sweeps with weight decay $wd \in (0, 0.05, 0.1)$ (0.1 used in the original LoRA paper). This hyperparameter did not significantly impact the results for common $r$ values (8, 64), and our conclusion that Init[A] outperforms Init[B] holds for different $wd$ values. With $wd=0$, we reproduced the results of [1] within a statistically sound confidence interval (recall that in [1], the authors used wd=0.1). We expect $wd$'s impact to be more significant for large (though generally impractical) $r$ values due to overparameterization. We will add these details to the revised version.
4) **Fixed hyperparameters' values**: Our results are insensitive to $wd$ choices within the tested range. We used LoRA ranks $r=8$ (RoBERTa) and $r=64$ (LLama). For train batch size, we tested 4 and 16, selecting the best (4 for RoBERTa, 16 for LLama), aligning with Hu et al. (2021). This setup maximizes useful empirical results within our compute budget. While larger sweeps and more models would be ideal, our experiments confirm our theoretical results beyond a trivial statistical threshold.
5) **Toy model setup**: In statistics, the teacher-student model is a simplified framework for studying learning. It involves a "teacher" model generating data and a "student" model learning from it. Both models are of the same nature, making learning feasible. This setup allows analytical study of learning dynamics and generalization, providing insights into more complex machine learning scenarios. It's particularly useful for understanding phenomena like overfitting, learning curves, and the relationship between model complexity and sample size.
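A minimal instance of this teacher-student framework (ordinary linear regression rather than LoRA finetuning; purely illustrative, with dimensions and noise level chosen arbitrarily) looks like:

```python
import numpy as np

rng = np.random.default_rng(0)
n_dim, n_samples = 32, 2000

# "Teacher" model generates the data
w_teacher = rng.normal(size=n_dim)
X = rng.normal(size=(n_samples, n_dim))
y = X @ w_teacher + 0.01 * rng.normal(size=n_samples)

# "Student" of the same form learns from the teacher's data
w_student = np.zeros(n_dim)
lr = 0.5
for _ in range(100):
    grad = X.T @ (X @ w_student - y) / n_samples
    w_student -= lr * grad

recovery_error = np.linalg.norm(w_student - w_teacher)
```

Because both models share the same form, learning is feasible and the student's recovery error can be tracked analytically, which is what makes the setup useful for studying learning dynamics.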
[1] Hu et al. (2021). LoRA: Low-Rank Adaptation of Large Language Models
We hope these answers clarify any misunderstandings. We're happy to address further questions.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the replies. I raise my score to 5. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Online Estimation via Offline Estimation: An Information-Theoretic Framework | Accept (poster) | Summary: This work studies the possibility of converting offline estimation algorithms into online estimation algorithms using an information-theoretic approach. It introduces the Oracle-Efficient Online Estimation (OEOE) framework, where the learner interacts with a data stream indirectly through a sequence of offline estimators produced by a black-box algorithm. The main contributions are the characterization of the statistical and computational complexity of online estimation within this framework. It demonstrates that near-optimal online estimation error can be achieved via offline estimation oracles, though computational efficiency is generally not feasible except in specific cases like conditional density estimation. This framework is also applied to interactive decision-making problems, providing insights into the relative power of online and offline estimators.
Strengths: Here are some strengths of the paper:
1. This work introduces a novel framework, Oracle-Efficient Online Estimation (OEOE), which creatively combines existing ideas from offline and online estimation and thereby closes the gap between classical statistical estimation and contemporary online learning.
2. The paper provides theoretical analysis and proofs to support its claims. The results show a deep understanding of the complexities involved in converting offline estimation algorithms to online contexts.
3. By addressing the relative power of online and offline estimation methods, the paper studies a fundamental question in the field of statistical learning. The implications of this work extend to practical applications in interactive decision-making, such as contextual bandits and reinforcement learning.
Weaknesses: Here are some weaknesses of the paper:
1. While the paper provides a comprehensive theoretical analysis, it acknowledges the computational inefficiency of the proposed Oracle-Efficient Online Estimation (OEOE) framework in general cases. The authors could improve this aspect by exploring potential heuristic approaches or approximations that could offer practical computational advantages, even if they come with some trade-offs in theoretical guarantees.
2. The paper lacks empirical validation of the proposed framework. Including a few experimental results or simulations demonstrating the practical performance of the OEOE framework could significantly strengthen the paper. Even a small set of experiments showing the feasibility and comparative performance against existing online estimation methods would provide valuable insights.
3. Certain sections of the paper, particularly the detailed proofs and technical discussions, are dense and may be difficult for readers to follow. Improving the clarity of these sections by adding more intuitive explanations, step-by-step breakdowns of complex proofs, or illustrative examples could make the paper more accessible. This would help readers better understand the contributions.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Have you considered any specific real-world applications or practical scenarios where this framework could be directly applied or tested? How do you envision the implementation challenges being addressed in such cases?
2. The theoretical results are derived under certain assumptions, such as the metric-like structure of the loss function and the bounded offline estimation error. How sensitive are your results to these assumptions? Could you provide insights or potential extensions to your framework that could handle more relaxed or different sets of assumptions, thereby increasing the generalizability of your findings?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Weakness:
>> While the paper provides a comprehensive theoretical analysis, it acknowledges the computational inefficiency of the proposed Oracle-Efficient Online Estimation (OEOE) framework in general cases. The authors could improve this aspect by exploring potential heuristic approaches or approximations that could offer practical computational advantages, even if they come with some trade-offs in theoretical guarantees.
Indeed, to improve the computational picture, Section 4.2 considers the case of conditional density estimation where an efficient online density estimation algorithm is available. This result presents a tradeoff between computational efficiency and theoretical guarantees: OEOE with conditional density estimation can be made computationally efficient given an efficient online density estimation algorithm, but is then not statistically optimal in terms of minimax regret.
This result is an extension of the general reduction from OEOE to online learning in Appendix D. The tradeoff also extends to the classification and regression setups we consider.
>> The paper lacks empirical validation of the proposed framework. Including a few experimental results or simulations demonstrating the practical performance of the OEOE framework could significantly strengthen the paper. Even a small set of experiments showing the feasibility and comparative performance against existing online estimation methods would provide valuable insights.
This paper presents a purely theoretical result. We look forward to seeing experiments in future work.
>> Certain sections of the paper, particularly the detailed proofs and technical discussions, are dense and may be difficult for readers to follow. Improving the clarity of these sections by adding more intuitive explanations, step-by-step breakdowns of complex proofs, or illustrative examples could make the paper more accessible. This would help readers better understand the contributions.
Please see the general rebuttal for a detailed clarification of the theoretical insights.
Question:
>> Have you considered any specific real-world applications or practical scenarios where this framework could be directly applied or tested? How do you envision the implementation challenges being addressed in such cases?
Transferring a theoretical algorithm to real-world applications requires domain specification and engineering. This work provides guidance in algorithm design principles for this new setup rather than improving existing online learning algorithms. Together with [1], it shows that using offline regression outputs suffices for interactive decision-making. For any reinforcement learning task, the proposed algorithm involves: (1) Performing offline regression to approximate the RL model, (2) Creating a version space (trust regions) of the RL model, (3) Averaging the possible RL models to create an online regression output, and (4) Using the E2D algorithm in [1] to balance exploration and exploitation. This work provides guidance in steps (2) and (3).
[1] Foster, D. J., Kakade, S. M., Qian, J., and Rakhlin, A. The Statistical Complexity of Interactive Decision Making, 2021.
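For concreteness, steps (2) and (3) of the pipeline above might take the following shape (a sketch only: the trust-region rule and the radius beta are illustrative assumptions, not the paper's actual algorithm):

```python
import numpy as np

def online_from_offline(offline_estimates, beta):
    """Turn a stream of offline estimates into a single online estimate."""
    # Step (2): version space / trust region -- keep offline estimates
    # consistent with the most recent one up to radius beta (illustrative rule)
    latest = offline_estimates[-1]
    version_space = [f for f in offline_estimates
                     if np.linalg.norm(f - latest) <= beta]
    # Step (3): average the surviving models to form the online regression output
    return np.mean(version_space, axis=0)
```

The online output (here, an average over the version space) would then feed into an exploration-exploitation procedure such as E2D in step (4).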
>> The theoretical results are derived under certain assumptions, such as the metric-like structure of the loss function and the bounded offline estimation error. How sensitive are your results to these assumptions? Could you provide insights or potential extensions to your framework that could handle more relaxed or different sets of assumptions, thereby increasing the generalizability of your findings?
The metric-like losses are quite general, covering cases such as square loss, absolute loss, and KL divergence with a bounded density ratio. Assuming the offline estimation error is bounded is reasonable. The only restrictive assumption is knowing an upper bound for the offline estimation error for Algorithm 1. When this assumption is not met, Algorithm 2 in Appendix D, which has worse but still sublinear statistical regret, can be used.
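The "metric-like" structure can be illustrated numerically (the constant-2 relaxed triangle inequality below is our illustrative reading of "metric-like", not necessarily the paper's exact definition):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.uniform(-1.0, 1.0, size=(3, 1000))

# Absolute loss is a true metric: it satisfies the triangle inequality exactly.
assert np.all(np.abs(a - c) <= np.abs(a - b) + np.abs(b - c))

# Square loss is only "metric-like": it satisfies a relaxed triangle
# inequality with constant 2, since (x + y)^2 <= 2x^2 + 2y^2.
sq = lambda x, y: (x - y) ** 2
assert np.all(sq(a, c) <= 2 * (sq(a, b) + sq(b, c)) + 1e-12)
```

KL divergence with a bounded density ratio satisfies a similar relaxed inequality, which is what lets these otherwise different losses be treated uniformly.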
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer HaHn:
Can you please respond to the rebuttal as soon as possible? Your comments will be greatly appreciated. Many thanks,
AC
---
Rebuttal 2:
Title: Re:
Comment: I thank the authors for responding to my comments. I want to increase my score based on the response provided by the authors. | Summary: This paper proposes an algorithm that can convert offline estimation oracles into online estimation algorithms in a black-box fashion. The conversion is built within the OEOE framework, which manipulates the learner, offline oracle, and environment simultaneously. The authors also propose certain upper bounds and lower bounds for their conversion algorithm, which shows that it achieves both statistical and computational efficiencies. This paper also presents several additional results like the hardness of memoryless algorithms, enhancing the soundness of this paper.
Strengths: 1. The framework of OEOE is general enough to include many real-world problems.
2. The statistical complexity has its lower and upper bounds nearly matched.
3. The author shows that although there is no computational algorithm for OEOC, their algorithm is computationally efficient in some fine-grained special cases.
Weaknesses: 1. I understand that many works assume knowing the intrinsics of the data-generating process, but assuming that $\mathcal{K}$ is known would exclude some cases of statistical estimation, which might reduce the generality of OEOE.
2. Can you provide a quantified definition of "oracle-efficient"? Although it is heavily used in the paper, I could only find some rough explanations after searching through the text.
3. I have not checked the proofs of Theorem 3.2 and Theorem 4.1 carefully, so maybe I am wrong, but the hardness constructions in them seem a bit ill-conditioned. The worst case is searched over all offline oracles, so the adversary seems overpowered.
4. As stated in Theorem 4.1, there is no computationally efficient algorithm in OEOE. So could it be that the definition of this framework includes some unwanted cases that should be excluded? And does that mean that this framework needs some redefinition?
Technical Quality: 3
Clarity: 3
Questions for Authors: See the weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >> I understand that many works assume knowing the intrinsics of the data-generating process, but assuming that $\mathcal{K}$ is known would exclude some cases of statistical estimation, which might reduce the generality of OEOE.
The OEOE is concerned with transforming an offline guarantee into an online one. Even when the data-generating process is unknown, whenever the offline guarantee can be obtained, the OEOE can be applied to transfer it into an online guarantee.
>> Can you provide a quantified definition of “oracle-efficient”? Despite it has been heavily used in the paper, I can only find some rough explanations after searching through the text.
We define the term 'oracle-efficient' in lines 106-107: 'An algorithm is termed oracle-efficient if it attains low online estimation error (2) in the OEOE framework.' In other words, the algorithm is statistically efficient, given access to an offline oracle without any extra labeling information.
>> I have not checked the proof of theorem 3.2 and theorem 4.1 carefully, so maybe I am wrong, but the hardness construction in them seems a bit ill-conditioned. The worst case is searched through all offline oracles, the adversary seems overpowered.
Yes, the adversary can search through all offline oracles. However, it is not overpowered for two reasons: First, in the OEOE setup, the learner has no information on how the offline oracle output is generated. Second, the offline oracle must still satisfy the stringent offline guarantee.
>> As stated in Theorem 4.1, there is no computational algorithm in OEOE. So could it be that the definition of this framework includes some unwanted cases that should be excluded? And does that mean that this framework needs some redefinition?
It is indeed unfortunate that there is no computationally efficient algorithm in OEOE in general, which is interesting in its own right. To improve computational results, Section 4.2 considers cases for conditional density estimation where an efficient online density estimation algorithm is present. This result presents a tradeoff between computational efficiency and theoretical guarantees. The OEOE with conditional density estimation can be computationally efficient with an efficient online density estimation algorithm but is not optimal statistically in terms of minimax regret. This result extends the general reduction from OEOE to online learning in Appendix D. The tradeoff also applies to the classification and regression setups we consider. Refining the framework to more specific cases could ensure the existence of an efficient algorithm. We look forward to further developments in this area.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer 3FiW:
Can you please respond to the rebuttal as soon as possible? Your comments will be greatly appreciated. Many thanks,
AC
---
Rebuttal Comment 1.2:
Comment: Thanks for your reply. I happily maintain my current evaluation. | Summary: This paper studies the methods to convert offline estimation algorithms into online estimation algorithms in a black-box fashion. This work introduce a new protocol, Oracle-Efficient Online Estimation, which provides an information-theoretic abstraction of the role of online versus offline estimation.
Strengths: The problem of online estimation via a sequence of offline estimators is of great importance to the online decision making community.
Weaknesses: The organization of the paper requires significant improvement as it is currently hard to follow. The introduction section occupies almost half of the main text, leaving the main methodology and theoretical results insufficiently discussed and lacking in clear theoretical insights.
Technical Quality: 3
Clarity: 1
Questions for Authors: This work discusses the impossibility of memoryless oracle-efficient algorithms. I wonder what happens if the learner can select the online estimator at time $t$ as a function of the most recent $w$ offline estimators, where $w$ is the window size.
In addition, the authors present a fine-grained perspective on the computational complexity of oracle-efficient estimation for conditional density estimation. Are there any insights for the other two problems of binary classification and regression?
Confidence: 2
Soundness: 3
Presentation: 1
Contribution: 2
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Weakness:
>> The organization of the paper requires significant improvement as it is currently hard to follow. The introduction section occupies almost half of the main text, leaving the main methodology and theoretical results insufficiently discussed and lacking in clear theoretical insights.
Please see the general rebuttal for a detailed clarification of the theoretical insights.
Question:
>> This work discusses the impossibility of memoryless oracle-efficient algorithms. I wonder what happens if the learner can select the online estimator at time $t$ as a function of the most recent $w$ offline estimators, where $w$ is the window size.
Yes, Algorithm 2 in Appendix D presents a general reduction from OEOE to online learning, which only requires the learner to remember the most recent $N$ offline estimators, with tradeoffs in regret depending on $N$.
>> In addition, the authors present a fine-grained perspective on the computational complexity of oracle-efficient estimation for conditional density estimation. Are there any insights for the other two problems of binary classification and regression?
Yes, the technique for conditional density estimation extends from Algorithm 2 in Appendix D, which also applies to binary classification and regression. The conclusion remains similar: if efficient online algorithms exist for binary classification and regression, then efficient algorithms for the OEOE setting also exist.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer erpa:
Can you please respond to the rebuttal as soon as possible? Your comments will be greatly appreciated. Many thanks,
AC
---
Rebuttal Comment 1.2:
Comment: I appreciate the authors' detailed responses, and I have slightly increased my rating. | Summary: The paper considers an online estimation problem in which the learner does not have direct access to the past values (labels), but instead only oracle access to an offline estimator based on past covariate-value pairs. The main assumption is that the loss of these offline estimators is bounded. The paper addresses the question of whether this sequence of offline estimators can be converted into an online estimator, both in terms of statistical complexity and computational complexity. From the statistical aspect, it is shown that an efficient conversion is possible, yet only when the learner knows the entire sequence of past offline estimators (just storing the latest one cannot guarantee this). The proposed protocol can be considered a generalization of the halving algorithm.
From the computational aspect, a negative result is shown stating that polynomial run-time algorithms cannot achieve the optimal online statistical complexity; this is somewhat alleviated in the setting of density estimation, since there the learner can artificially produce values $y$ from the estimated density.
Strengths: 1. The setting is general, and appears to be related to a multitude of contemporary problems.
2. The paper addresses the problem from both statistical and computational perspectives.
3. The results are detailed, including different algorithms for finite and infinite classes (the latter in the appendix). The results are explained in comparison and in light of classic algorithms (like the halving algorithm). The scaling of the bounds and the dependency on each parameter is justified in detail.
Weaknesses: 1. The main proposed algorithm is somewhat brute-force and just keeps the hypotheses that agree with the data – in this case, covariate-estimator pairs.
2. It is not obvious that the OEOE model, in which both all past covariates and all past offline estimators are saved, actually captures aspects such as limited past data availability and compression of past observations and decisions. This is because storing an estimator may be much more storage-consuming than the value itself (e.g., as in binary classification). Generalizations based on finite past data seem challenging.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In line 33 it is claimed that statistical estimation is mainly about fixed design. Random design of the covariate-value is also commonly considered.
2. Theorem 3.3 is an impossibility result, showing that even a memoryless estimator is not efficient, even if it is improper. Doesn't this trivially imply the same result for the subset of proper learners? Why, then, is a different result needed?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors properly discuss limitations of their results throughout the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Weakness:
>> The main proposed algorithm is somewhat brute-force and just keeps the hypotheses that agree with the data – in this case, covariate-estimator pairs.
This is the optimal approach from an information-theoretical perspective, achieving the optimal minimax guarantee for our setup. We envision better algorithms under more assumptions on the model class in future works.
>> It is not obvious that the OEOE model, in which both all past covariates as well as all past offline estimators are saved, actually captures aspects such as limited past data availability and compression of past observations and decisions. This is because storing an estimator may be much more storage-consuming than the value itself (e.g., as in binary classification). Generalizations based on finite past data seems to be challenging.
If only the estimator at the current round is remembered without covariates, Theorem 3.3’s lower bound applies, making the algorithm inefficient. However, finding efficient methods to remember the version space without previous estimators allows the use of Algorithm 1. Additionally, Algorithm 2 in Appendix D can mitigate this issue by storing only the $N$ most recent estimators, albeit with increased regret.
Question:
>> In line 33 it is claimed that statistical estimation is mainly about fixed design. Random design of the covariate-value is also commonly considered.
While random design is common, the OEOE setup reveals covariates, making fixed design regression more appropriate.
>> Theorem 3.3 is an impossibility result, showing that even a memoryless estimator is not efficient, even if it is improper. Doesn't this trivially imply the same result for the subset of proper learners? Why, then, is a different result needed?
In our OEOE setup, the adversary controls the offline oracle, and the learner only has the offline guarantee. If the oracle is improper, the adversary is stronger, so a lower bound with an improper oracle does not imply one with a proper oracle.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications! | Rebuttal 1:
Rebuttal: ## General rebuttal:
### Theoretical insights for Theorem 3.1:
We would like to highlight the technical challenges and our contributions. Online learning has been extensively studied for decades. The most classical algorithm is exponentially weighted aggregation, which weights all candidate functions exponentially with respect to their loss but never eliminates any of them. This algorithm is inapplicable to our setup, as the label and loss are never revealed. The HALVING algorithm, which predicts based on the majority vote of the functions in the version space, is also not suitable because it is specified only for the binary case. Our algorithm generalizes both algorithms, a surprising innovation given decades of research in online learning.
When $\beta = 0$, offline estimators must reveal the correct labels of all past covariates, recovering the classical online learning setup and allowing immediate application of our algorithm. Version space averaging is a soft aggregation compared to the majority vote, making it a generalization of the HALVING algorithm. Exponential weight aggregates the whole function class, while version space averaging constrains the averaging space, making it more rigid. Our algorithm interpolates between the exponentially weighted aggregation and the HALVING algorithm. Moreover, the theorem is proven through a novel potential argument, which is of independent interest.
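To make the contrast above concrete, the two classical baselines and the version-space-averaging idea can be sketched in a few lines. This is a minimal toy illustration under assumed bookkeeping (binary predictions and cumulative losses per candidate function), not the paper's actual algorithm:

```python
import math

def exp_weights_predict(preds, losses, eta=1.0):
    # Exponentially weighted aggregation: soft average over ALL
    # functions, weighted by exp(-eta * cumulative loss); no function
    # is ever eliminated.
    ws = [math.exp(-eta * loss) for loss in losses]
    return sum(w * p for w, p in zip(ws, preds)) / sum(ws)

def halving_predict(preds, losses):
    # HALVING: keep only the mistake-free functions (the version
    # space) and output their majority vote (binary labels).
    alive = [p for p, loss in zip(preds, losses) if loss == 0]
    return int(sum(alive) >= len(alive) / 2)

def version_space_average(preds, losses, beta):
    # Version-space averaging: a soft average over the functions whose
    # cumulative loss stays within the offline guarantee beta. With
    # beta = 0 the surviving set matches HALVING's version space,
    # while averaging (instead of voting) resembles the soft
    # aggregation of exponential weights over a constrained set.
    alive = [p for p, loss in zip(preds, losses) if loss <= beta]
    return sum(alive) / len(alive)
```

For three functions predicting `[0, 1, 1]` with cumulative losses `[0, 0, 2]`, HALVING votes 1, while version-space averaging with `beta = 1` outputs the softer value 0.5.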
### Theoretical insights for Corollary D.1:
Nevertheless, we also have a general reduction from OEOE to online learning where the loss for the online learning is generated by averaging over the most recent $N$ estimators. This reduction relaxes the condition that we need to remember all the estimators and that we need to know an upper bound of the offline estimation error. By averaging over only the most recent estimators, our method simplifies the process and reduces memory requirements, making the approach more efficient. Additionally, this reduction allows for greater flexibility in handling the offline estimation error, as it no longer necessitates a strict upper bound.
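The windowed averaging idea can likewise be sketched compactly. The class below is an illustrative toy (name and interface are assumptions), not the authors' Algorithm 2:

```python
from collections import deque

class WindowedEstimatorAverage:
    """Toy sketch: predict by averaging the N most recent offline
    estimators, so only a bounded window of them must be stored."""

    def __init__(self, N):
        self.window = deque(maxlen=N)  # estimators older than N drop out

    def receive(self, estimator):
        # Called once per round with the latest offline estimator.
        self.window.append(estimator)

    def predict(self, x):
        # Average the stored estimators' predictions at x.
        return sum(f(x) for f in self.window) / len(self.window)
```

Storing only the $N$ most recent estimators trades memory for regret, mirroring the tradeoff described in the rebuttal.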
In conclusion, we believe our technical contributions are substantial, and our setup is likely to stimulate further research in this area. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SfPUEL: Shape from Polarization under Unknown Environment Light | Accept (poster) | Summary: This paper proposes SfPUEL, which estimates surface normals and material (metallic or dielectric) under unknown environmental light using a single polarization image. The proposed network integrates polarization information with photometric stereo priors using a photometric stereo feature extractor and a polarization feature extractor with DoLP cross-attention. By jointly estimating material segmentation and surface normals from the global context features, it helps predict surface normals and materials.
Strengths: * Synthetic and real polarization datasets for various materials and environmental maps.
* A novel network architecture consisting of:
1. Polarization and photometric stereo feature extraction
2. DoLP cross-attention block
3. Global context extractor
* Reasonable ablation study (Table 3) with sufficient SOTA results compared to various normal estimation methods (Figures 5, 6, 7, 8 and Tables 1, 2).
Weaknesses: * There are no comparison results with other material segmentation methods.
* The method does not clearly show high-frequency, spatially-varying material examples, such as dielectric surfaces with dense stripe patterns of metallic material. Each object is composed of a single material only.
* Closely related to this limitation (though beyond the scope of the paper), there is no consideration of appearance.
Technical Quality: 4
Clarity: 4
Questions for Authors: The paper is well-written, and the network architecture is described well in both the main and supplemental papers. The published code also helps to address any further questions.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The author addressed the following limitations:
1. Requires an object mask
2. Supports dielectric and metallic objects
I might add that this paper is also limited to normal and material estimation only. Given the normal and material components, future research could focus on estimating material appearance parameters.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the constructive reviews and questions. Below, we address the concerns raised by Reviewer bz5V.
> W1: Comparisons with other material segmentation methods.
Material segmentation in our paper is only for boosting the performance of normal estimation; we do not intend to make the accuracy of material segmentation one of our contributions. As our focus is normal estimation under unknown environment light, adding a comparison of material segmentation could blur our focus. We will tone down the corresponding description of material segmentation in the paper.
> W2: Results on objects with more complex materials
We provide additional experiments on newly captured real data, containing spatially varying albedo and material types. The results have been provided in Fig. **III** and Fig. **V** of the attached PDF file, showing that our method can handle complex spatially varying materials. These experiments will be added in the final version.
> Q1: Material appearance estimation & object mask input
This paper mainly focuses on SfPUEL, i.e., shape from polarization under the challenging **unknown environment light**. Jointly estimating normal, material, and appearance parameters is an intriguing topic but out of the scope of this work. Also, our method requires an object mask as input, but it is not difficult to obtain the mask with the help of the powerful SAM [Kirillov et al. 2023] nowadays.
**Reference**
[Kirillov et al. 2023] Segment anything. Kirillov et al. ICCV 2023.
---
Rebuttal Comment 1.1:
Comment: I really appreciate the author's additional results, which show realistic outcomes with complex materials on real datasets. There are no significant issues within the discussions to change the existing rating, so I will keep my current rating, recommended for acceptance. | Summary: This paper addresses Shape from Polarization under Unknown Environment Light (SfPUEL) from a single polarimetric image. Existing SfP methods have the ambiguity of surface normal caused by unknown illumination and materials and make some assumptions on reflection type or illumination. This paper introduces a novel SfP framework based on a transformer that considers the global context. This paper also proposes to combine SfP with pre-trained photometric stereo (PS) priors by using cross-attention based on DoLPs. For guidance to resolve the ambiguity by materials, the network outputs segmentation of dielectric and metallic materials along with surface normals. The experimental results show that the proposed method quantitatively and qualitatively outperforms the SOTA normal estimation methods using a single image or polarimetric image.
Strengths: + A novel transformer-based framework for SfP effectively constrains the surface normal by considering the global context and exploiting pre-trained PS priors with DoLP cross-attention.
+ This paper proposes a joint estimation of material segmentation and surface normals to resolve the ambiguity caused by materials.
+ The reconstructed normals are significantly better than the SOTA normal estimation methods.
Weaknesses: - Since photometric stereo inherently requires multiple images captured under different lighting conditions, the pre-trained PS model is not supposed to take polarimetric images under the same lighting conditions as the input. This would lead to unexpected behavior and make it unclear how generalizable the features extracted from the pre-trained PS model are.
- A polarized light source like a sunny sky affects the polarization of reflected light [20], especially for a specular dominant surface, such as a black dielectric surface and smooth metallic surface. Since the authors seem to create synthetic data from unpolarized environment maps, the proposed learning-based method cannot handle such cases.
Technical Quality: 3
Clarity: 4
Questions for Authors: - Since the proposed method only requires a single polarimetric image (and its mask), is it possible to capture real-world data more casually if GT is not acquired? If so, testing a wider range of real-world data can validate the generalizability of the proposed method qualitatively.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: - As mentioned in Weakness, the proposed method cannot handle a specular dominant surface under polarized illumination.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the insightful reviews and valuable questions. Below, we address the concerns raised by Reviewer 7r7M.
> W1: Photometric stereo methods take as input polarization images
As suggested, we further verify the photometric stereo network taking as input polarization images by qualitative and quantitative experiments, as shown in Fig. **I** and Table **I** of the attached PDF file. Please refer to *Further analysis of photometric stereo taking as input polarization images* in the global responses.
> W2: Using unpolarized light sources in dataset creation
Please refer to *Unpolarized environment light in training data* of the global responses for the detailed discussion.
> Q: Qualitative evaluation on more real-world data
Thanks for the suggestion. We provide an additional qualitative evaluation on four more objects to show the generalizability of our method. The results are shown in Fig. **V** of the attached PDF file. Our method also generates plausible normal maps on the additional four objects. We will add this qualitative evaluation to the supplementary material in the revised version.
---
Rebuttal Comment 1.1:
Comment: I appreciate your additional discussions and experimental results that show the effectiveness of the proposed method. I keep my rating and recommend the acceptance of this paper. | Summary: This paper tries to solve normal estimation tasks using a single image captured from a polarization camera under unknown environmental light. To handle unknown environmental light conditions, this paper adopted a deep learning-based strategy to take advantage of inductive bias from the training dataset. Compared with previous methods, this paper additionally utilizes a pre-trained photometric stereo network and material segmentation tasks. The network architecture generally follows the Transformer and is trained with the synthetic dataset created by the authors. The proposed method is evaluated on the six real-world objects and shows reasonably good results compared with the GT normal map. It also shows results comparable to the multi-view SfP method (PANDORA).
Strengths: The paper shows that utilizing the pre-trained photometric stereo network can help the single-shot estimation of normals from polarization images, which can be seen as a novelty of the paper. The paper also proposes some architectures that can be applied to polarization images, such as the DoLP cross-attention block. The authors also created synthetic and real datasets, which can help future research if they are released. Also, regardless of technical novelty, the quality of the results seems fairly good on both synthetic and real datasets.
Weaknesses: The proposed network seems to rely strongly on the photometric stereo network SDM-UniPS. It not only borrows its architecture but also adopts its network weights for extracting the photometric stereo features. Such dependency on a specific network can be treated as a weakness. Also, refer to the first question, which includes the question about SDM-UniPS.
It is unclear why the BSDF type is separated into only two: dielectric and metallic (conductor). The authors provided examples and short statements but didn't explain how they differ. This weakens the motivation for adding a material segmentation task with only two types.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. It is unclear how polarization images can be treated as the photometric stereo input. The only theoretical reason I found it on the L42-44, both SfP and PS estimate the result of the same pixel position, with measurement under different conditions. Yet, PS usually assumes the difference in lighting condition, and SfP assumes the difference in polarization direction. Are there any more specific analysis for this? Besides, will it work if we replace SDM-UniPS with other PS networks and follow a similar training strategy?
2. Related to the second weakness, what makes the difference (e.g., the complex-valued IOR), and how is the difference observed (in both RGB and polarization domain) between dielectric and conductor material? To make the paper more self-contained and concrete, an explanation about this seems necessary.
3. Also, the following questions are related to the practicability of the method.
- This method also estimates the material segmentation, but the results for real data’s material segmentation are missing. Are there any reasons for excluding this?
- Since the trained synthetic data consists of dielectric and conductor materials, can this method deal with diffuse objects? Also, some real-world objects cannot be classified as simply dielectric or conductor—for instance, multi-layered materials or coated materials. What happens if we use the proposed method on these objects? Such limitations should be clearly stated, and a single statement in the Limitations section does not seem enough.
- It seems the method assumes the environment light is unpolarized. Since there is also a polarized light source, is it considered in the training dataset?
4. Is there any plan to make the training dataset and full test dataset publicly available? Such effort to collect datasets will be helpful for future research.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No potential negative social impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the detailed and constructive suggestions. Below, we address the concerns raised by Reviewer 66Sa.
> Q1.1 Analysis of photometric stereo (PS) and shape-from-polarization (SfP)
Please refer to *Further analysis of photometric stereo taking as input polarization images* in the global responses for the detailed discussion.
> Q1.2 Can SDM-UniPS be replaced by other PS networks?
Yes, if the PS network could work under unknown environment light. More specifically, the network structure of SDM-UniPS can effectively extract global light features from images under the same viewpoint but varying irradiance, which is essential for handling the challenging shape estimation under unknown environment light. More generally, any effective feature extractor could replace SDM-UniPS. In this paper, we adopt the off-the-shelf SDM-UniPS network backbone and its pre-trained weights due to its effective global features for handling unknown environment light.
> Q2 Why the BSDF type is separated into dielectric and metallic? What is the difference between dielectric and metallic materials?
We separate the material type into dielectric and metallic because polarization properties are tightly correlated with these two material types. Polarimetric BRDFs [Baek et al. 2018; Ichikawa et al. 2023] are derived from the Fresnel equations, and Fresnel analysis is also the main technique for distinguishing dielectric from metallic materials. Early works [Tominaga and Yamamoto 2008] estimated dielectric/metallic types from polarization information, which inspired us to categorize materials into these two types.
The difference between dielectric and metallic materials can be analyzed via the polarimetric BRDF (pBRDF) [Baek et al. 2018]. The Fresnel terms $\textbf{F}^{\text{T}}_{\text{o}}(\theta_o;\eta)$ and $\textbf{F}^{\text{R}}(\theta_d;\eta)$ in the pBRDF depend on the refractive index (RI) $\eta$ [Baek et al. 2018]. $\eta$ is a real number for dielectric materials, while for metallic materials it is a complex number whose imaginary part is denoted as the extinction coefficient (EC) [Collett 2005].
As suggested, we provide visual comparisons between dielectric and metallic materials, as shown in Fig. **II** of the attached PDF file. Fig. **II** displays three synthetic spheres with (a) a dielectric surface with white diffuse albedo; (b) a dielectric surface with black diffuse albedo; (c) a metallic surface made of chromium. The three spheres have the same RI (in real number) and roughness and are rendered under the same illumination. The white dielectric sphere in Fig. **II(a)** differs from the metallic sphere in image appearance and angle of linear polarization (AoLP) distribution. The black dielectric sphere in Fig. **II(b)** has a similar reflective appearance and an AoLP map to the metallic sphere in Fig. **II(c)**, but the degree of linear polarization (DoLP) of the dielectric sphere is much higher than that of the metallic sphere. The differences in AoLP patterns and DoLP magnitudes can guide the dielectric/metallic material segmentation. This is also why we categorize materials as dielectric and metallic and introduce material segmentation to boost normal estimation.
We will add this discussion and the visual comparisons to the final version of our paper.
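The DoLP gap between dielectric and metallic surfaces can be reproduced directly from the Fresnel reflectances. The snippet below is an illustrative sketch (the refractive indices are assumed round values, roughly glass and chromium, not the rendering parameters used in the paper):

```python
import cmath
import math

def fresnel_dolp(eta, theta_i_deg):
    # DoLP of unpolarized light after specular reflection at incidence
    # angle theta_i: DoLP = |Rs - Rp| / (Rs + Rp), where Rs and Rp are
    # the s- and p-polarized Fresnel reflectances (incident medium: air).
    theta_i = math.radians(theta_i_deg)
    cos_i = math.cos(theta_i)
    sin_t = cmath.sin(theta_i) / eta      # Snell's law; complex for metals
    cos_t = cmath.sqrt(1 - sin_t ** 2)
    rs = (cos_i - eta * cos_t) / (cos_i + eta * cos_t)
    rp = (eta * cos_i - cos_t) / (eta * cos_i + cos_t)
    Rs, Rp = abs(rs) ** 2, abs(rp) ** 2
    return abs(Rs - Rp) / (Rs + Rp)

# Near Brewster's angle, a dielectric (real eta) reflects almost fully
# polarized light, while a metal (complex eta) stays weakly polarized.
dolp_dielectric = fresnel_dolp(1.5 + 0.0j, 60)   # close to 1
dolp_metallic = fresnel_dolp(3.1 + 3.3j, 60)     # well below 0.5
```

This mirrors the qualitative observation above: on smooth specular surfaces, DoLP magnitudes alone already separate the two material classes.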
> Q3.1 Including material segmentation results on real data
As suggested, we provide more material segmentation results on the real data, as shown in Fig. **III** of the attached PDF file. Our method produces plausible results on objects with dielectric surfaces, metallic surfaces, and surfaces with both material types, but we observe a failure case on the metallic kettle, where the material segmentation result is dielectric. We will add these material segmentation results in the final version.
> Q3.2 Applications on more complex materials (e.g., diffuse, multi-layered, or coated materials)
To the best of our knowledge, the multi-layered and coated materials haven’t been addressed in SfP tasks, which is still an open problem due to the complex reflectance. We respectfully think it is out of the scope of this paper because we mainly focus on SfPUEL, i.e., shape from polarization under unknown environment light.
For diffuse objects, we test our method on a rough stone turtle and a fabric pillow, as shown in Fig. **IV** of the attached PDF file. The DoLP values of the two objects are near zero, and the AoLP maps are noisy and less informative. The diffuse characteristics of rough sanded surfaces and fabric materials greatly mitigate the normal dependency on the polarization properties and degrade SfP performance, as stated by previous studies [Baek et al. 2020; Lyu et al. 2023]. As a result, our method fails to produce reliable normal maps on these two diffuse surfaces given invalid polarization cues. We will add this discussion in the final version of our paper.
> Q3.3 Polarized light sources
Please refer to *Unpolarized environment light in training data* in the global responses.
> Q4 Dataset release plan
We will release all the training & test datasets and the implementation code upon acceptance.
**References**
[Baek et al. 2018] Simultaneous Acquisition of Polarimetric SVBRDF and Normals. Baek et al. TOG, 2018.
[Baek et al. 2020] Image-Based Acquisition and Modeling of Polarimetric Reflectance. Baek et al. 2020. TOG, 2020.
[Collett 2005] Field Guide to Polarization. Collett 2005. SPIE.
[Ichikawa et al. 2023] Fresnel Microfacet BRDF: Unification of Polari-Radiometric Surface-Body Reflection. Ichikawa et al. CVPR, 2023.
[Ikehata 2022] Universal Photometric Stereo Network using Global Contexts. Satoshi Ikehata. CVPR, 2022.
[Lyu et al. 2023] Shape from Polarization with Distant Lighting Estimation. Lyu et al. TPAMI, 2023.
[Tominaga and Yamamoto 2008] Metal-dielectric object classification by polarization degree map. Tominaga and Yamamoto. ICPR, 2008.
---
Rebuttal Comment 1.1:
Title: Further comments?
Comment: We are looking forward to further discussions with the reviewers during the author-reviewer period. Thanks again for the reviewers' insightful comments and interest.
By the way, we also would like to further explain whether SDM-UniPS can be replaced by another PS network in our method. Most existing PS methods work under directional lights, as listed in the DiLiGenT10$^2$ benchmark [Ren et al. 2022], meaning that the image capture should be conducted in a darkroom. Considering SfP under unknown environment light, an alternative PS network to SDM-UniPS in our model should (1) work under unknown environment light and (2) effectively utilize variations between polarization images for normal estimation as verified in Fig. **I** and Table **I** of the attached PDF file. Currently, SDM-UniPS is the one satisfying the above requirements.
**Reference**
[Ren et al. 2022] DiLiGenT10$^2$: A Photometric Stereo Benchmark Dataset with Controlled Shape and Material Variation. Ren et al. CVPR 2022.
---
Rebuttal 2:
Comment: I appreciate the authors for the detailed rebuttal and for providing the extra results. My questions and concerns are properly answered, and there appear to be no significant technical problems. Thus, I will increase my ratings and recommend acceptance. | null | null | Rebuttal 1:
Rebuttal: We thank all the reviewers for their insightful and valuable comments. We are encouraged by reviewers’ positive comments: “This paper proposes a “novel” framework for shape-from-polarization” (Reviewers **7r7M** and **bz5V**); “the quality of the results seems fairly good” (Reviewer **66Sa**) and “significantly better than the SOTA normal estimation methods” (Reviewer **7r7M**); “reasonable” ablation study with “sufficient SOTA results” has been conducted (Reviewer **bz5V**).
Below, we address the common concerns raised by the reviewers. **The attached PDF file** contains experimental results suggested by reviewers:
- Qualitative and quantitative evaluation of SDM-UniPS taking as input different numbers of polarization images to further validate the effectiveness of a PS network using polarization images (Fig. **I** and Table **I**, as suggested by Reviewer **66Sa** and **7r7M**).
- Visual comparison of dielectric and metallic spheres on appearance and polarization properties to show the difference between two types of materials (Fig. **II**, as suggested by Reviewer **66Sa**).
- Material segmentation on the real-world data and more scenes (Fig. **III**, suggested by Reviewer **66Sa** and **bz5V**).
- Qualitative results of our method on diffuse objects for illustrating the method’s limitation (Fig. **IV**, suggested by Reviewer **66Sa**).
- More qualitative results of our method on real-world data (Fig. **V**, suggested by Reviewer **7r7M** and **bz5V**)
# Global Responses
## Further analysis of photometric stereo taking as input polarization images
We admit that photometric stereo (PS) and shape-from-polarization (SfP) methods are derived from different physical models. However, from the perspective of end-to-end network learning, both PS and SfP networks extract high-level features from multiple images with the same viewpoint but varying pixel radiance; the extracted features are decoded to predict the normal map.
To further validate that the pre-trained PS network could produce reasonable features by taking as input polarization images, we conduct the experiments to compare the normal predictions of SDM-UniPS fed with $n$ different polarization images ($n\in\\{1,2,3,4\\}$), as shown in Fig. **I** and Table **I** of the attached PDF file. As the number of input polarization images increases from 1 to 4, the mean angular error of normal prediction on the real dataset decreases from 19.46$^\circ$ to 15.73$^\circ$, which indicates that the PS network is generalizable to produce plausible predictions and intermediate features for the SfP task, given the variations between polarization images. We will add this experiment in the supplementary material.
## Unpolarized environment light in training data
We agree that environment light is mostly polarized in real-world scenes. To render large-scale polarization images under polarized environmental light as a training dataset, polarized environment maps are essential. However, large-scale polarized environment maps have not yet been collected. Therefore, we create the large-scale dataset under unpolarized environmental light. Nevertheless, experimental results of normal estimation on real-world data show that our method is robust against polarized environmental light, given that real-world environmental light can indeed be polarized. We will add this discussion in the revised version.
## Material segmentation in our method
The main purpose of material segmentation in our method is to boost the performance of normal estimation. Our method focuses on surface normal estimation under unknown environment light rather than material segmentation. We are inspired by the observation that different material types can lead to different polarimetric measurements, so we introduce material segmentation to improve the normal predictions, but we do not claim material segmentation accuracy as one of our contributions. We will revise the paper to reduce the emphasis on material segmentation as a contribution, to prevent potential misunderstandings by readers.
Pdf: /pdf/bad195399a5fe18179c3ec979287134b00a5cc0a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
OneRef: Unified One-tower Expression Grounding and Segmentation with Mask Referring Modeling | Accept (poster) | Summary: The authors propose a unified one-tower referring framework that introduces the MRefM paradigm to capture the referential relationships between vision and text. They demonstrate the effectiveness and generality of MRefM across three settings on REC, PG, and RES tasks, consistently achieving good results.
Strengths: The proposed method is structurally simpler than previous methods and achieves higher grounding and RES performance.
The ablation experiments are thorough, validating the effectiveness of the two Ref mask modeling strategies.
Weaknesses: Adapting BEiT, a robust general-purpose multimodal model, to downstream grounding tasks and achieving performance improvement does not seem to be a particularly interesting finding. Additionally, it is puzzling why introducing two types of relation scores in MIM and MLM would enhance the model's referring performance. Especially regarding the four types of masks in the Visual target-relation score, their effective mechanism is not intuitive and lacks explanation. Why did the authors decide to introduce referring information by predicting these four scores? Are there any reference works?
In Table 3, comparisons with some related works, such as UNINEXT [1] and HIPIE [2], are missing.
[1] Universal instance perception as object discovery and retrieval. CVPR 2023.
[2] Hierarchical Open-vocabulary Universal Image Segmentation. NeurIPS 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thoughtful feedback. Please find below our responses to the questions raised in the review.
Firstly, due to the character limit in the reply box, we have included the figure and experimental tables in the PDF file at the top of this page. Please click on the file to view the corresponding figure and tables.
> **Q1. Adapting BEiT, a robust general-purpose multimodal model, to downstream grounding tasks and achieving performance improvement does not seem to be a particularly interesting finding.**
We would like to highlight the value of our work from two perspectives.
- Firstly, although BEiT-3 is a general-purpose foundation model trained with the Mask Visual Language Modeling (MVLM) paradigm, it does not perform well on grounding and referring tasks. Our work is a pioneering attempt to implement referring tasks based on the BEiT-3 model. We propose the one-tower UniRef framework, which significantly improves fine-grained cross-modal perception compared to the original BEiT-3 model. It offers a new direction for cross-modal perception in the grounding field.
- Secondly, our aim is not only to transfer the BEiT-3 model to the grounding task, but also to propose a novel paradigm with certain generality. Specifically, while the existing MVLM is a general pre-training paradigm, it can only learn coarse-grained visual and linguistic knowledge. However, fine-grained cross-modal referring relations are common in our lives, and such a paradigm cannot model these referring relations; no previous work has explored them either. Therefore, our aim is to learn these subtle referring relations with the help of the general mask modeling paradigm. As shown in the experimental results of Table 2 and Table 3, our approach has also been proven effective for VQA and cross-modal retrieval tasks. Our work represents a novel attempt that we believe can provide valuable insights for future research.
> **Q2. Additionally, it is puzzling why introducing two types of relation scores in MIM and MLM would enhance the model's referring performance.**
We explain the intrinsic mechanism of the two relation scores from the perspective of MIM and MLM, respectively.
- In the current MIM paradigm, reconstruction is limited to relying solely on the visual features within the image. To enhance content reconstruction by leveraging cross-modal information as much as possible, our Referring MIM approach incorporates visual target-relation scores alongside visual modality content during reconstruction. This modeling approach presents increased difficulty, as it necessitates reliance on textual information for reconstructing two more complex visual branches. Consequently, our model achieves a more comprehensive understanding of both visual and textual information. In this way, the model can not only perceive the information of the image modality itself but also gain a more accurate understanding of the location and correlation of key object features in different regions.
- Similarly, the Referring MLM also aims to enhance the global comprehension and reasoning capabilities of the model for both visual and textual information. Specifically, existing MLM methods solely rely on contextual information within the text to reconstruct masked words. In addition to leveraging image modality information to restore masked words as much as possible, we guide the model in identifying specific words within the grounding text that require special attention during the referring process. The learning process of semantic target-relation score bears resemblance to knowledge distillation. In this way, the model can acquire a more comprehensive understanding of the referred text.
> **Q3. Especially regarding the four types of masks in the Visual target-relation score, their effective mechanism is not intuitive and lacks explanation. Why did the authors decide to introduce referring information by predicting these four scores? Are there any reference works?**
- The response to this issue is placed in the global rebuttal field. Please refer to Common Question 1 above on this page.
> **Q4. In Table 3, comparisons with some related works, such as UNINEXT [1] and HIPIE [2], are missing.**
As shown in Table R4, we compare our RES task results with UNINEXT [1] and HIPIE [2] under the oIoU metric, and we will include the results in Table 2 and Table 3 of the revised paper.
Besides, both works belong to the setting of multi-task, multi-dataset mixed training. Compared with these two works, our base model exceeds UNINEXT by 2.79% (testA), 5.15% (testA), and 2.87% (test) on the REC task for the RefCOCO/+/g datasets, respectively. On the RES task, our base model surpasses UNINEXT by 1.42% (val), 3.85% (val), and 2.51% (val), and surpasses HIPIE by 1.02% (val), 3.85% (val), and 2.75% (val) on the same three datasets. It should be noted that while our model only utilizes RefC's data for intermediate pre-training, both UNINEXT and HIPIE employ additional datasets as well as multi-task pre-training. Despite this distinction, our model already demonstrates superior performance compared to theirs.
---
Rebuttal 2:
Comment: We thank the reviewers for their efforts. As the discussion phase is nearing its end, we kindly remind the reviewer SMKQ to respond to our responses if it is convenient. We sincerely thank the reviewer SMKQ for the positive rating of our paper. Currently, no new questions have been raised by the reviewer, so we sincerely hope that our reply has successfully addressed his/her concerns. If not, we would sincerely appreciate any further comments that can improve the quality of our paper. We will incorporate all suggestions during this rebuttal to comprehensively revise our paper to achieve a higher standard! Lastly, we thank the reviewer SMKQ once again for the time and effort spent on reviewing our work! Thank you! | Summary: This paper proposes a Mask Referring Modeling (MRefM) paradigm and a unified, extremely concise grounding and referring segmentation framework named UniRef that no longer requires Transformer-based fusion or interaction structures or special grounding tokens. Masked referring modeling (MRefM) is proposed to model the referential relationship; it encompasses referring-aware mask image modeling and referring-aware mask language modeling.
Strengths: A mask-referring modeling paradigm is proposed to effectively model the referential relation between vision and language. They also propose a one-tower framework for grounding and referring segmentation in a unified modality-shared feature space. Experiments demonstrate the effectiveness of this method and its components.
Weaknesses: 1. Lack of discussion of related work [a]. I suggest discussing the difference between this paper's referring-aware mask language modeling and masked contrastive learning in [a].
2. Lack of computation costs analysis with other methods, including parameters/FLOPs/speed.
[a] VLT: Vision-Language Transformer and Query Generation for Referring Segmentation. In TPAMI 2023.
Technical Quality: 3
Clarity: 2
Questions for Authors: See weakness.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thoughtful feedback. Please find below our responses to the questions raised in the review.
> **Q1. Lack of discussion of related work [a]. I suggest discussing the difference between this paper's referring-aware mask language modeling and masked contrastive learning in [a].**
We will include a comparative discussion between our Referring-aware Mask Language Modeling (Referring MLM) approach and the Masked Contrastive Learning (MCL) approach proposed in VLT [a] in Sec. 2.2 and 3.3 of the revised paper. We briefly present the differences during this rebuttal stage below:
- Firstly, the MCL proposed in VLT [a] constructs contrastive learning for three different types (i.e., same image same object (SISO), same image different object (SIDO), different image (DI)) of referring samples in the training batch. The aim is to make the features with SISO sample type as close as possible and the features with SIDO sample type as far away as possible. At the same time, MCL randomly masks prominent words in the query text of SISO type in a probability-guided manner to construct new positive samples, thereby increasing sample discrimination and model generalization. MCL to some extent proves that text masking is effective in referring tasks.
- Secondly, our Referring MLM involves masking the query text using a referring-aware text masking strategy and reconstructing the linguistic content as well as the semantic target-relation score of the masked words. Our approach differs from MCL. Specifically, while both Referring MLM and MCL employ text masking techniques, our proposed Referring MLM requires not only text masking but also the reconstruction of textual content and the target-relation probability distribution of the masked text. Our proposed Referring MLM paradigm is a relatively more comprehensive approach to perform mask modeling.
> **Q2. Lack of computation costs analysis with other methods, including parameters/FLOPs/speed.**
We compare the energy efficiency of our model with several well-known state-of-the-art works on the REC task from various perspectives, including the number of parameters, computational complexity (FLOPs), inference speed (FPS), and test time (s). The results are presented in Table R1, which can be found in the rebuttal PDF file located at the top of this page.
In this paper, we highlight two significant advantages of our model architecture over other frameworks: (a) instead of using a Transformer to fuse visual and language features, we only employ a simple lightweight task head; (b) Our one-tower architecture eliminates the need for early interaction techniques in the backbone network, thereby reducing the computational complexity of the model.
As can be seen from Table R1, due to the simplified structure of our model, the number of parameters and the computational complexity are significantly lower than those of other well-known models. Specifically, our feature fusion and grounding head module only requires 1.7M parameters, while other methods use 20M, meaning we only have about 8.5% of their parameter count. Additionally, our computation is only 34.9% of Grounding-DINO's and 25.2% of MDETR's. Moreover, our inference speed is 10 times faster than Grounding-DINO and TransVG++ (the speed is also related to the image size used by the model). Despite these advantages, thanks to the modality-shared feature space, we outperform all these well-known works. We will include this experiment in the revised version.
---
Rebuttal 2:
Comment: We thank the reviewers for their efforts. As the discussion phase is nearing its end, we kindly remind the reviewer vP75 to respond to our responses if it is convenient. We sincerely thank the reviewer vP75 for the positive rating of our paper. Currently, no new questions have been raised by the reviewer, so we sincerely hope that our reply has successfully addressed his/her concerns. If not, we would sincerely appreciate any further comments that can improve the quality of our paper. We will incorporate all suggestions during this rebuttal to comprehensively revise our paper to achieve a higher standard! Lastly, we thank the reviewer vP75 once again for the time and effort spent on reviewing our work! Thank you! | Summary: This manuscript proposes UniRef, a framework aimed at unifying visual and linguistic feature spaces for referring expression comprehension and segmentation. The key idea presented is the Masked Referring Modeling (MRefM) paradigm, which includes referring-aware MIM and MLM.
This approach seeks to streamline the architecture by eliminating the need for separate modality-specific encoders and complex interaction modules, achieving sota performance on several datasets.
Strengths: 1. The introduction of the MRefM paradigm effectively captures the referential relationship between visual and linguistic features, contributing to the robustness and accuracy of the model.
2. The integration of visual and linguistic feature spaces into a unified framework simplifies the model architecture and potentially improves computational efficiency.
3. The authors provide extensive experimental results across five datasets, demonstrating the effectiveness of the proposed approach in outperforming existing methods.
Weaknesses: 1. The technical contribution of the proposed method appears to be insufficient, as it primarily adapts traditional Masked Autoencoders into a 'Referring MAE', despite the rich experimentation presented.
2. Section 3.2 is intricate and challenging to follow. I recommend a thorough proofreading and revision of this section to enhance its clarity and readability.
3. Could you explain the rationale behind designing the system to encompass four masks: x-, y-, w-, and h-masks?
4. The manuscript asserts that the approach is lightweight, raising questions about its real-time applicability. It would be helpful if the authors detailed the computational demands and processing speed of the method in practical scenarios.
5. (Minor) While the manuscript has discussed the limitations of the proposed approach, I am interested in understanding how MrefM performs in more general contexts. For instance, evaluating the backbone via linear probing (or its integration with LLMs) could demonstrate broader applicability. This should certainly be considered for future work.
Technical Quality: 3
Clarity: 2
Questions for Authors: See weakness.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thoughtful feedback. Please find below our responses to the questions raised in the review:
Firstly, due to the character limit in the reply box, we have included the figure and experimental tables in the PDF file at the top of this page. Please click on the file to view the corresponding figure and tables.
> **Q1. The technical contribution of the proposed method appears to be insufficient, as it primarily adapts traditional Masked Autoencoders into a 'Referring MAE', despite the rich experimentation presented.**
We would like to highlight our technical contributions from the following points, while explaining the benefits resulting from our approach.
- Firstly, to address the limitation of the previous MIM paradigm in capturing cross-modal referential relations, we propose the Referring-aware MIM paradigm, which enhances the backbone network's ability to comprehend fine-grained cross-modal information.
- Secondly, we propose the referring-aware dynamic image masking strategy that improves the previous random image masking strategy and effectively directs the model's attention to the referred region.
- Thirdly, we propose a referring-aware MLM paradigm that not only empowers the existing backbone model to reconstruct masked words based on contextual information but also enhances the model's attention and weighting towards crucial referring words.
- Fourthly, based on the proposed MRefM paradigm, we designed two lightweight task heads and introduced a remarkably simplified one-tower grounding and referring segmentation framework, thereby obviating the necessity for complex cross-modal interaction techniques and cumbersome fusion codec modules. This approach offers a new solution to the fields of grounding.
The existing methods, such as MAE, reconstruct the image based on its context information and can only learn the uni-modal representation. With our proposed method, we are able to learn more general cross-modal representations and thus enhance the model's global reasoning ability.
> **Q2. Section 3.2 is intricate and challenging to follow. I recommend a thorough proofreading and revision of this section to enhance its clarity and readability.**
After receiving this comment, we conducted a thorough proofreading of Sec. 3.2 and identified some challenging aspects of its writing. Below are several points that may cause confusion; we will update them in the revised version.
- 1. Paragraph 1 in Sec. 3.2, the explanation of Referring MIM's motivation is insufficient.
Explanation: This issue will be explained in Q3 - point 1.
- 2. Paragraph 2 in Sec. 3.2, the effectiveness mechanism of the four x-, y-, w-, and h-masks needs to be further explained.
Explanation: This issue will be explained in Q3 - point 2. To facilitate explanation, we further draw Figure R1 based on Figure 2 of the main text, which also will be included in Section 3.2 of the revised version.
- 3. Paragraph 4 in Sec. 3.2, this part lacks a global description of the execution logic of the 'Referring-aware dynamic image masking strategy'.
Explanation: We have revised the writing logic of paragraph 4.
We hope the above-mentioned points meet the reviewer's requirements.
>**Q3.Could you explain the rationale behind designing the system to encompass four masks: x-, y-, w-, and h-masks?**
- The response to this issue is placed in the global rebuttal field. Please refer to Common Question 1 above on this page.
>**Q4. It would be helpful if the authors detailed the computational demands and processing speed of the method in practical scenarios.**
We analyze the computational-efficiency gains of our method in Tab. R1. As can be seen from the table, due to the simplified structure of our model, the number of parameters and the computational complexity are significantly lower than those of other well-known models. Specifically, our modality fusion and grounding head module only requires 1.7M parameters, while other methods use 20M, meaning we only have about 8.5% of their parameter count. Additionally, our computation is only 34.9% of Grounding-DINO's and 25.2% of MDETR's. Moreover, our inference speed is 10 times faster than Grounding-DINO and TransVG++. Despite the reduction in computational requirements, we already outperform all of these well-known works.
>**Q5.(Minor) While the manuscript has discussed the limitations of the proposed approach, I am interested in understanding how MrefM performs in more general contexts.**
To verify the generality of MRefM, as shown in Tab. R2 and Tab. R3, we follow the experimental test framework of BEiT-3 and conducted VQA fine-tuning experiments (Tab. 2), as well as cross-modal retrieval experiments (Tab. 3) on the MS COCO and Flickr30K datasets.
Our proposed MRefM is a fine-grained multi-modal pre-training paradigm that significantly enhances cross-modal tasks involving logical reasoning and referring. As shown in the tables, our MRefM pre-training also leads to considerable improvements in both VQA and retrieval tasks, demonstrating the effectiveness of our MRefM in cross-modal representation learning. Additionally, integrating MRefM into LLMs is an interesting direction, and we will attempt corresponding studies in future work.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. After reviewing other reviews and responses, I have decided to increase my rating to 6.
I hope you can carefully integrate the rebuttals into the revised version and thoroughly proofread the entire manuscript.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer a3fR for the positive feedback and the increased rating of our paper! We would like to express our deep gratitude for the reviewer's valuable comments provided in this review, and we will incorporate all suggestions during this rebuttal to comprehensively revise our paper to achieve a higher standard and quality. Lastly, we thank the reviewer once again for the time and effort spent on reviewing our work! Thank you! | null | null | Rebuttal 1:
Rebuttal: Dear reviewers, area chairs, senior area chairs, and program chairs,
We sincerely thank you for the valuable and thoughtful comments. It is a pleasure that this work has been recognized by the three reviewers, including "the MRefM paradigm effectively captures the referential relationship", "a simple yet effective architecture", "the experiments are thorough", etc. The common concerns among the three reviewers are: (a) the mechanism behind the effectiveness of the visual target-relation score is not clearly explained; (b) a lack of computational-efficiency comparison experiments; (c) the need to include several relevant references. To address these concerns, we have carefully provided detailed point-by-point explanations within each reviewer's response box and have also included the relevant experiments, with the results presented in the rebuttal PDF file. We hope this manuscript, which incorporates our great efforts, will be better appreciated. Furthermore, the manuscript has been carefully revised according to the reviewers' suggestions.
The following are our detailed responses. We greatly appreciate the constructive suggestions, which significantly help improve the quality of our paper.
Best regards,
The authors.
------
Due to the character limit in the response field, we address the first commonly asked question below.
> **Common Question 1: The mechanism for the effectiveness of the visual target-relation score is not clearly explained.**
>
> * **(Reviewer a3fR)** Could you explain the rationale behind designing the system to encompass four masks: x-, y-, w-, and h-masks?
> * **(Reviewer SMKQ)** Regarding the four types of masks in the visual target-relation score, their effective mechanism is not intuitive and lacks explanation. Why did the authors decide to introduce referring information by predicting these four scores? Are there any reference works?
In order to provide a clearer explanation, we would like to address the rationale behind the visual target-relation score (i.e., the four x-, y-, w-, and h-masks) designed in Sec. 3.2 of the main text from two perspectives. Specifically:
- **Point 1**: The purpose of designing the Referring MIM algorithm.
In the existing MIM paradigm, reconstruction is limited to relying solely on the visual features within the image. To enhance content reconstruction by leveraging cross-modal information as much as possible, our Referring MIM approach incorporates visual target-relation scores alongside visual modality content during reconstruction. This modeling approach presents increased difficulty, as it necessitates reliance on textual information for reconstructing the two visual branches. Consequently, our model achieves a more comprehensive understanding of both visual and textual information. In this way, the model can not only perceive the information of the image modality itself but also gain a more accurate understanding of the location and correlation of key object features in different regions.
- **Point 2:** How and why the visual target-relation score (i.e., the x-, y-, w-, and h-masks) works?
To facilitate the explanation, we provide a clearer illustration of the four masks in Fig. R1 within the rebuttal PDF file. As mentioned in Sec. 3.2 of the paper, this score represents the spatial distance between the current patch region and the referred region; it enables implicit deployment of grounding capability within each token of the model. When reconstructing the visual features and target-relation score of each local patch, the model actually needs a global and comprehensive understanding of the text modality and the visual information of the image. On this basis, the model needs to rely on the reconstructed visual features of the local patch to implicitly predict the specific location and size of the referred object, and then accurately predict the visual target-relation score. Finally, Referring MIM enhances the model's global, multimodal understanding of textual and visual information, and thus learns more general visual representations, which yield better generalization ability when deployed to downstream referring tasks.
The proposed Referring MIM is our own design, mainly intended to address the limitations of MAE/BEiT, and we have not found a similar method in existing work. However, the rationale of our method can be found in some classic computer vision papers, such as the YOLO series [1], which predicts the location, size, confidence, and category of the object box corresponding to each grid cell based on a global understanding of the image. The paper [1] also confirmed that an object detection model obtained in this way has stronger generalization ability than other detectors when transferred to detection tasks that differ greatly from the training data.
[1] Redmon, Joseph, et al. "You only look once: Unified, real-time object detection." CVPR. 2016.
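As a purely hypothetical illustration of the idea above (not the paper's actual formulation in Sec. 3.2), the per-patch relation signal could be read as follows: every patch carries normalized x/y offsets toward the referred box together with the box's width/height, so predicting these four scores forces each token to implicitly localize the referred region. All names and the exact formula below are our own guesses for exposition.

```python
import numpy as np

def patch_relation_scores(patch_centers, box):
    """Hypothetical per-patch x-, y-, w-, h-relation scores.

    patch_centers: (N, 2) patch centers in [0, 1] image coordinates.
    box: referred region as (cx, cy, w, h), also normalized.
    This is only a guess at one plausible formulation to convey the
    intuition; it is NOT the equation used in the paper.
    """
    cx, cy, w, h = box
    dx = patch_centers[:, 0] - cx            # x-relation: horizontal offset
    dy = patch_centers[:, 1] - cy            # y-relation: vertical offset
    ws = np.full(len(patch_centers), w)      # w-relation: referred width
    hs = np.full(len(patch_centers), h)      # h-relation: referred height
    return np.stack([dx, dy, ws, hs], axis=1)
```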
Pdf: /pdf/7bc1fca360516969720d01d95b075139d807e4ec.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Association of Objects May Engender Stereotypes: Mitigating Association-Engendered Stereotypes in Text-to-Image Generation | Accept (spotlight) | Summary: The authors observe that some stereotypes only appear when an association of objects is required in T2I and propose to address association-engendered stereotypes for the first time. They use a pre-trained Text-Image-Text CLIP to map a prompt to its potential stereotypes and generate sensitive constraints using a Sensitive Transformer. The sensitive constraints are then used with the input prompt to affect T2I generation to be fairer. Besides, a new metric to evaluate association-engendered stereotypes in T2I results is proposed.
Strengths: - The authors make direct observations of association-engendered stereotypes in T2I results. From these observations, the motivation to address association-engendered stereotypes is validated.
- The idea of learning mappings from the prompts to their corresponding stereotypes is interesting and the overall framework is a novel design.
- The newly proposed evaluation metric for association-engendered stereotypes, with extensive proof and discussion, is valuable.
- Extensive experiments are conducted with structural evaluations, proving the effectiveness of the method.
Weaknesses: - I am wondering why the TIT CLIP can learn the mapping from prompts to stereotypes by maximizing the cosine similarity of three pairs. It is intuitively understandable that the original 2-dimensional CLIP uses contrastive learning to associate visual appearance and textual description of a single image. However, in this case, the stereotypes are something unrevealed in a single image but spotted by a statistical distribution over a set of images. The validity of pulling <prompt, stereotype> pair closer should be further explained. Besides, it would help the reader to understand the concept better if the authors could provide some examples of the stereotype descriptions used for training the TIT CLIP.
- Technical details of training the Sensitive Transformer is missing. Is it trained together with the diffusion model with the distribution alignment guidance or what is the training objective of it?
- How does the Sensitive Transformer handle engendered stereotypes across multiple attributes? Is it achieved implicitly or explicitly? Please provide further explanation.
- How to get the probability distribution for evaluation is unclear. Does it involve using pre-trained sensitive attribute classifiers like in previous works? If so, I am wondering how accurate the classifiers are when there are multiple objects in a given image.
- It would be beneficial to provide ablation studies to showcase the effectiveness of each proposed component. For example, I am curious about how a simple MLP would work in mapping the prompts to their stereotypes, compared to the TIT CLIP.
- It is visually hard to tell that the race bias is reduced in Figure 3(b), leading to concerns that the method cannot effectively mitigate biases that are strongly bonded to certain races.
- Examples after stereotype mitigation in G.2 show violation to prompt requirement "female", leading to concerns that the method might overlook the prompt input.
- One can hardly tell which person is "poor" from some examples in Figure 12. It raises concerns about how to distinguish harmful stereotypes (like relating poverty to certain races) from desirable visual concepts (like relating poverty to affordable clothes) during generations.
- The authors do not provide a discussion on the limitation of the method and its societal impact although claim to do so in the checklist.
- The stereotype mitigation target is even distribution. How can the model adapt to certain circumstances where not all attribute classes are correct? For example, the model should not generate Asian figures when asked for German soldiers in WW2.
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses for major questions.
Other minor issues include:
- L41: I think the explanation in brackets is unrelated to its previous statement. Stereotype mitigation on a single object is different from stereotype mitigation on a single sensitive attribute.
- Why are there [SA] indications in some prompt templates? If the SA is given in the prompt, the evaluation of stereotype mitigation across different SA classes is rather trivial.
- How are the sensitive constraints incorporated into the prompt input? Maybe the authors could consider redrawing Figure 2 (3) for a clearer illustration.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors fail to provide a discussion on limitations and potential societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weaknesses (the first part)**
- **W1 (TIT CLIP mapping issue):** In fact, the association-engendered stereotype in T2I can only manifest in the generated images. Therefore, we need to use images as a medium to connect the relationship between prompts and stereotypes. In our task, we consider each image's stereotype to focus on the description of the object and its sensitive attributes. Thus, each image includes two descriptions: the original 2D CLIP description of the image's meaning and the stereotype-focused description that concentrates on the object and its sensitive attributes. **As shown in Figure 1 of `supplementary_images.pdf`**, although the stereotypes in the paper are derived from probabilistic statistics, we annotate the stereotypes of the images through probabilistic descriptions rather than using the probability model for prompt-image-stereotype association mapping. Finally, in Figure 1 of **`supplementary_images.pdf`**, we provide examples of stereotype descriptions.
> Training data type: <prompt, image, stereotype description>
- **W2 (technical details):** Our Sensitive Transformer is based on Transformer[1]. As shown in Equation (5) of the paper, $\text{Sensitive Matrix}(V) = \text{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V$, we changed the Transformer's task from translation to generating sensitive constraints.
**The Sensitive Transformer is trained together with the diffusion model under distribution alignment guidance**. The training objective of the Sensitive Transformer is to generate sensitive constraints that align with the given prompts. Additionally, to generate the optimal sensitive constraints for the prompt, we optimized the generated sensitive constraints. As shown in Equation (6), $\sigma^* = \mathop{\arg\min}\limits_{\sigma \subseteq S_{\mathcal{Y}}} \sup |\sigma(p_x^{v(s)}) - p_x^{u(s)}|$, the optimization objective is to minimize the total variation distance between $\sigma(p_x^{v(s)})$ and $p_x^{u(s)}$.
> [1] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 2017, 30.
- **W3 (the issue of handling multiple attributes):** **As shown in Figure 2 of `supplementary_images.pdf`**, the Sensitive Transformer generates sensitive constraints for each object in the original prompt using the prompt-image-stereotype embedding obtained from TIT CLIP. These sensitive attribute values are generated in the form of a dictionary. If the original prompt already specifies certain sensitive attributes, they will not appear in the sensitive constraints dictionary. **By simultaneously constraining multiple sensitive attributes, we suppress the association-engendered stereotypes**, achieving this in an **implicit** manner.
- **W4 (the evaluation issue):** As mentioned in lines 219-221 of the paper, we provide a brief description of the evaluation of SDTV values presented in Table 2 and Table 3. The detailed calculation method for SDTV values, including single and multiple objects and sensitive attributes, is provided in Appendix B. Given the potential inaccuracy of classifiers, we evaluate the SDTV values of these images using mathematical statistical methods rather than relying on classifiers.
- **W5 (the MLP suggestion):** Initially, we had the same idea as the reviewer: to use MLP to map prompts to their stereotypes. However, **as shown in Figure 1 of `supplementary_images.pdf`**, given our training data, if we skip the images and only use prompts and stereotype descriptions, we encounter the problem where the same prompt corresponds to multiple different stereotype descriptions, as illustrated in the table below.
| prompts | stereotype description |
| -------------------------------- | ------------------------------------------------------------ |
| a photo of a doctor and a nurse. | doctor, nurse, male, female,...\[doctor always male, ...\][M-O & M-SA] |
| a photo of a doctor and a nurse. | doctor, nurse, female, female,...\[doctor always male, ...\][M-O & M-SA] |
| a photo of a doctor and a nurse. | doctor, nurse, male, male,...\[doctor always male, ...\][M-O & M-SA] |
| ... | ... |
This problem causes the MLP's loss to remain constant, preventing it from learning effective features. By encoding the image into the embedding, we can address this issue while preserving the semantic relationship between the prompt and the image, which is the main advantage of TIT CLIP over MLP.
- **W6 (about Figure 3):** In Figure 3(b), the left image of the terrorist predominantly displays Middle Eastern facial features in terms of the sensitive dimension of race. After mitigation, the Middle Eastern features are covered by a mask, and the generated image then focuses on the terrorist's inherent features rather than displaying obvious racial characteristics.
---
**`Note:`** Due to the word limit imposed on each rebuttal by NeurIPS, we have included our responses to **W7-W10** and the replies to the "**Questions**" in the **Author Rebuttal** cell. We **kindly request the Reviewer L6vF to switch to the Author Rebuttal cell to view the second part of our detailed responses**. We sincerely appreciate your review.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their carefully prepared rebuttal. It addresses most of my concerns. I have raised my rating to 5. | Summary: This paper presents a novel framework (MAS) to address biases in text-to-image (T2I) models. Traditional methods focus on individual object stereotypes but fail to tackle stereotypes arising from object associations. MAS models the stereotype issue as a probability distribution alignment problem, utilizing a Text-Image-Text CLIP (TIT CLIP) and a Sensitive Transformer to align generated image distributions with stereotype-free distributions. The paper also introduces the Stereotype-Distribution-Total-Variation (SDTV) metric for better stereotype evaluation. Experiments show MAS effectively mitigates both single and association-engendered stereotypes.
Strengths: 1. The paper addresses a relatively unexplored area in stereotype mitigation for T2I models, focusing on the association of multiple objects rather than individual objects.
2. The paper is well-structured, with a clear presentation of the problem, methodology, and results. The introduction provides a solid context for the need to address association-engendered stereotypes, and the explanation of the MAS framework is easy to follow.
3. The paper extends to improving the societal and ethical aspects of AI-generated content, making it a valuable contribution to the field.
Weaknesses: 1. The proposed framework adds considerable complexity to the existing T2I pipeline, which might pose challenges for practical implementation and integration into existing systems, especially for developers with limited resources or technical expertise.
2. It is advisable to provide more details on the computational overhead introduced by the Sensitive Transformer within the MAS framework, especially in terms of real-world application performance and scalability for large-scale T2I generation tasks.
3. I wonder if the MAS framework can be adapted to other generative models beyond text-to-image, such as text-to-video or text-to-audio models.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see the weaknesses.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Please see the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weaknesses**
- **W1 (the complexity issue):** As shown in Section 4.2 of the paper, we conducted experiments to evaluate the impact of MAS on the computational load of the T2I diffusion model. Table 6 demonstrates that MAS effectively mitigates stereotypes while maintaining image generation efficiency and quality.
- **W2 (the computational overhead issue):** The time complexity of the Sensitive Transformer is consistent with that of the Transformer. For an input batch size of $b$, sequence length $N$, and an $l$ -layer Transformer model, the computational complexity is $O(l(bNd^2 + N^2d))$, where $d$ represents the dimension of the word embeddings.
We conducted MAS tests on our local server. As reported in Table 6 of the paper, the performance is similar to the original T2I model, with a difference of about $20$ seconds over a generation task of $100$ batches with a batch size of $10$. This results in an average efficiency decrease of $0.02$ seconds per image, which is negligible for practical generation tasks. Regarding the scalability of MAS, we applied it to mainstream T2I models for stereotype mitigation. Table 2 demonstrates the effectiveness of our MAS across different stable diffusion pipelines. Additionally, our MAS implements modular integration across various pipelines, allowing stereotype mitigation by simply embedding our MAS module into the original T2I workflow.
- **W3 (the adaptability issue):** Currently, our approach has only been tested for effectiveness in Text-to-Image generation. Though the concept of our approach looks feasible for Text-to-Video and Text-to-Audio generation, honestly speaking, we cannot give a concrete answer about its adaptability without further deep exploration.
Overall, we will carefully revise our paper based on these valuable comments.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thank you for addressing my concerns. I decide to increase the Rating to 6. | Summary: This paper presented the first step to mitigate association-engendered stereotypes in Text-to-Image (T2I) diffusion models. A probability distribution alignment problem was first formulated, and then a probability distribution model was constructed for non-association-engendered and association-engendered stereotypes. This paper further presented a MAS framework, which consists of the Text-Image-Text CLIP (TIT CLIP) and Sensitive Transformer. Comprehensive experiments demonstrated that the MAS framework is an effective mitigation approach for association-engendered stereotypes in T2I.
Strengths: + A novel and important research problem that tackles the stereotypes engendered by the association of multiple objects, which was ignored by previous work. This research adds critical insights into the stereotype mitigation area.
+ A neat framework MAS was proposed to mitigate such association-engendered stereotypes. MAS innovatively models the stereotype problem as a probability distribution alignment problem, which means aligning the stereotype probability distribution of the generated image with the stereotype-free distribution.
+ Two effective components, TIT CLIP and Sensitive Transformer, were proposed to enable MAS. The framework MAS learns the mapping of prompts, images, and stereotypes via the TIT CLIP and constructs sensitive constraints via the Sensitive Transformer to guide the T2I diffusion model in generating stereotype-free images by embedding these sensitive constraints into the T2I diffusion process.
+ A novel metric, Stereotype-Distribution-Total-Variation (SDTV), was introduced to evaluate association-engendered stereotypes accurately due to the insufficiency of existing metrics.
+ Extensive experiments were conducted, supported by the 13 Appendix pages.
Weaknesses: - In line 167, for the Algorithm 1, what is the output? Please clarify.
- More output examples of the proposed framework MAS and baseline methods should be shown to illustrate the mitigation effects.
- While the paper provides an Appendix for detailed experimental settings, some implementation details may be needed in Section 4.
Technical Quality: 3
Clarity: 4
Questions for Authors: Q1: In line 167, for the Algorithm 1, what is the output? Please clarify.
Q2: More output examples of the proposed framework MAS and baseline methods should be shown to illustrate the mitigation effects.
Q3: In Appendix B2 Figure 5(a), what does "extreme" mean in this context?
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: - Please clarify the "stereotype-free distribution", which is not very clear in this paper.
- While the paper provides an Appendix for detailed experimental settings, some implementation details may be needed in Section 4.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weaknesses**
- **W1 (the algorithm issue):** The output of Algorithm 1 is the embedding of prompt, image, and stereotype.
- **W2 (about the suggestions of output examples):** We sincerely appreciate your constructive suggestions. We will add examples of outputs from MAS and other baselines to the paper to demonstrate the mitigation effects alongside the results reported in Table 3.
- **W3 (the experimental settings):** We describe the main pipeline and key evaluation methods in Section 3. Due to the paper's page limit, we have to place the detailed experimental settings in the appendix.
**Questions**
- **Q1:** Please see **W1**.
- **Q2:** Please see **W2**.
- **Q3 (the definition of extreme): Extreme** refers to a scenario in a sensitive attribute dimension where the occurrence probability of a specific sensitive attribute value significantly exceeds the combined occurrence probability of other attribute values. We refer to this specific sensitive attribute value as an **extreme attribute value**. For instance, when using "*an image of a beautiful woman*" as a prompt and generating some images, considering the woman's race, 90% of the images may depict the white race, while other races only account for the remaining 10%.
**Clarifications**
- As mentioned in lines 132-136 of the paper, stereotype-free means that a T2I model should generate images with equal probability across different sensitive attribute values, avoiding significant disparities in probability distribution caused by the dependence of sensitive attributes on the object being generated. **Stereotype-free distribution** means that the T2I model generates images with equal probability for each sensitive attribute value, resulting in a **probability distribution of sensitive attributes** close to **a uniform distribution**.
- Please see **W3**.
Overall, we will carefully revise our paper based on these valuable comments.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed rebuttal and the effort you have put into addressing the concerns raised. Your responses are thoughtful and show a clear understanding of the issues at hand. Based on your rebuttal, I am pleased to inform you that I am increasing my score for your submission. | Summary: The paper proposes a framework to detect and mitigate stereotype association in Text-to-Image models. They conduct extensive experiments to demonstrate the usability of the framework.
Strengths: + The authors aim to assess "association-engendered" stereotypes in T2I models. They model the stereotype mitigation problem as a probability distribution alignment problem and propose MAS framework to mitigate the association-engendered stereotypes.
+ The authors conduct extensive experiments to demonstrate the effectiveness of the framework.
Weaknesses: - I strongly recommend renaming the acronym for the Text-Image-Text CLIP model. The current acronym 'TIT-CLIP' is inappropriate, particularly given the project's focus on detecting stereotypes in Text-to-Image models. A more suitable name that reflects the project's serious and sensitive nature would be advisable.
- The claim that the default representation of different identity groups (e.g., 'black' and 'white' people) is not stereotypical is false (Lines 8-10, Figure 1). Research has demonstrated that the default representations of identities are indeed stereotypical, and default representations of objects can also exhibit stereotypical characteristics [1, 2, 3, 4].
[1] Easily Accessible Text-to Image Generation Amplifies Demographic Stereotypes at Large Scale. Bianchi et al. 2023 \
[2] ViSAGe: A Global-Scale Analysis of Visual Stereotypes in Text-to-Image Generation, Jha et al., 2024 \
[3] Stable bias: Evaluating Societal Representations in Diffusion Models, Luccioni et al. 2023 \
[4] AI’s Regimes of Representation: A Community-Centered Study of Text-to-Image Models in South Asia. Qadri et al., 2023 \
Technical Quality: 3
Clarity: 3
Questions for Authors: - Since the default representation of the images can also be biased, did the authors study how much of the bias stems from the isolated representations of identity groups and objects vs the two combined?
- Can authors provide more details on the dataset used for training and evaluation and the grounding used for evaluating the presence of stereotypes before and after mitigation?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Missing limitations and discussion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weaknesses**
- **W1(the acronym issue):** We sincerely appreciate your constructive suggestions. The original acronym, Text-Image-Text CLIP, was chosen from an encoding perspective, focusing on the encoding of Text (prompt, stereotype description) and Image. We agree that this name may not be precise enough and does not fully reflect the project's theme. We will rename it to **Prompt-Image-Stereotype CLIP**. This new name better describes the objects that CLIP encodes and highlights the key aspects of the prompt and the stereotypes manifested in the generated images, making it more aligned with the project's focus than TIT-CLIP.
- **W2 (about the default representation stereotype): We did not deny that default representations of identity are stereotypes**; therefore, we use the words "may not" in lines 8-10. Our focus in lines 8-10 and Figure 1 is on the emergence of new and subtle stereotypes when the sensitive attribute of race is associated with the object of a house, rather than on the stereotypes inherent in the representations themselves.
**Questions**
- **Q1 (the default stereotype issue):** As stated in W2, the primary purpose of our paper is to demonstrate that the stereotype can be engendered by the association of two objects when these two objects individually are not stereotypical. **This association-engendered stereotype is not a simple inheritance or addition of individual default representational stereotypes but is based on the interaction between the two, engendering a new stereotype.**
- **Q2 (the dataset issue):** We have provided a detailed description of the experimental datasets in Table 8 of Appendix C.1 of the paper. Most of the data comes from publicly available datasets, which can be accessed via the references provided in the paper.
**Missing Limitations**
- In lines 328-334 of the paper, we already discussed the limitations of our method and the potential social impacts. We would extend that in the revised paper if that were not enough.
Overall, we will carefully revise our paper based on these valuable comments. | Rebuttal 1:
Rebuttal: **Dear Chairs and Reviewers,**
We kindly thank all the reviewers for their time and for providing valuable feedback on our work.
In response to the reviewers, we have added the **`supplementary_images.pdf`** file. This file contains annotations and explanations for the data used in our experiments and provides a more detailed description of the Sensitive Transformer.
Kind regards,
The authors
------
------
**Response to reviewer L6vF's questions (the second part):**
**Weaknesses**
- **W7 (about the Appendix G.2):** This issue arises because the "female" position in the prompts is towards the end. According to the stable diffusion encoding rules, the weight of "female" is reduced, leading to a loss of the word's features. We retested this issue, and as shown in Figure 4 of supplementary_images.pdf, adjusting the prompt weights resolves the problem. Additionally, regarding the reviewer's concerns about semantic preservation problems, we conducted semantic preservation experiments in Section 4.2 of the paper. Table 4 indicates that our MAS maintains a similar level of semantic preservation to the original T2I model.
- **W8 (about Figure 12):** In Figure 12, due to the training data of the diffusion model, the disparity between rich and poor individuals is not always very pronounced in some examples. However, an apparent phenomenon is that images of both poor and wealthy individuals predominantly depict Black and White men. This phenomenon may engender harmful stereotypes, as it suggests that being poor or wealthy is always associated with either the Black or White races. The stereotype-free T2I diffusion model should distribute the probabilities equally across all races for both poor and wealthy individuals. A wealthy person could be White, Black, Asian, etc.; similarly, a poor person should also be represented equally across all races, not just limited to White and Black men.
- **W9 (missing limitations):** In lines 328-334 of the paper, we already discussed the limitations of our method and the potential social impacts. We would extend that in the revised paper if that were not enough.
- **W10 (the mitigation target issue):** When the prompt explicitly includes specific sensitive attributes, MAS prioritizes those specified in the prompt. As we explained in **W3**, if the original prompts already specify certain sensitive attributes, they will not appear in the sensitive constraints dictionary. For example, when generating an image of a World War II German soldier, the weight of the "German" attribute in the prompts will be significantly higher than the sensitive constraints. This ensures the model does not produce unrealistic results, such as a German soldier with Asian features.
**Questions**
- **Q1 (the Line 41 issue):** The "single object" refers to Non-Association-Engendered stereotypes. As stated in Appendix A.2, Non-Association-Engendered stereotypes include single object and single/multiple attribute cases, and the parenthetical statement aligns with Non-Association-Engendered stereotypes. We apologize for any confusion our previous wording may have caused. We will revise the sentence as follows:
> Previous works on stereotype mitigation have been limited to a single object, referred to as *Non-Association-Engendered stereotypes* (e.g., only mitigating the stereotype problem in the occupation or gender dimension), and cannot effectively address stereotypes involving the association of multiple objects, referred to as *Association-Engendered Stereotypes*.
- **Q2 ([SA] issues in prompt word templates):** In some scenarios, the prompt already includes some sensitive attributes (SA). In such cases, we must ensure that the SA specified in the prompt remains semantically aligned while mitigating stereotypes for the unspecified SA. For example, in the prompt "a female nurse," the SA of "female" is already included. We need to maintain the "female" SA unchanged while mitigating stereotypes related to other sensitive attributes, such as the race and age of the female nurse. Therefore, when setting up the prompt template, it is crucial to consider cases where specific SA is already known.
- **Q3 (the sensitive constraints incorporated issue):** In the **`supplementary_images.pdf`**, we supplement the embedding process of the sensitive constraint in T2I, as shown in Figure 3. Using the `np.concatenate` function, we concatenate the original prompt embedding with the generated sensitive constraints embedding to form the input for the diffusion process.
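As a minimal sketch of this concatenation step (the shapes below are illustrative assumptions, not the dimensions actually used in the paper):

```python
import numpy as np

# Hypothetical shapes for illustration only: a 77-token prompt embedding
# and a shorter sensitive-constraint embedding, both with hidden size 768.
prompt_emb = np.random.randn(77, 768)
constraint_emb = np.random.randn(8, 768)

# Concatenate along the token axis, mirroring the np.concatenate step
# described above, to form the input to the diffusion process.
diffusion_input = np.concatenate([prompt_emb, constraint_emb], axis=0)
```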
**Missing Limitations**
- In lines 328-334 of the paper, we already discussed the limitations of our method and the potential social impacts. We would extend that in the revised paper if that were not enough.
Overall, we will carefully revise our paper based on these valuable comments.
Pdf: /pdf/a503fc57d9d0fb6238dd6777521f138bffe2b394.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
MiniCache: KV Cache Compression in Depth Dimension for Large Language Models | Accept (poster) | Summary: This paper proposes a KV cache compression method by merging keys and values of consecutive layers. Based on the empirical observation that keys and values of consecutive layers after the mid-depth layer have high cosine similarity, the authors propose a merging strategy using angular interpolation. Additionally, they emphasize the importance of retention tokens based on their empirical findings. Extensive experiments across multiple state-of-the-art models and various evaluation benchmarks demonstrate the effectiveness of the proposed compression method.
Strengths: - The proposed approach of merging keys and values can have a significant practical impact.
- The method has low computational overhead while providing meaningful efficiency improvements.
- The proposed approach is training-free.
- Experiments conducted on various models and benchmarks demonstrate the effectiveness of the proposed method.
- The writing is overall clear.
Weaknesses: - Some methodological design choices seem ad-hoc. For example, the method starts merging after the mid-layer and only merges two consecutive layers. I believe the paper could be technically improved by addressing these design choices.
- The authors compare the efficiency of the method after applying quantization (MiniCache 4-bit) to the FP16 baseline. This comparison overstates the method's effectiveness because the contribution of the paper does not involve quantization. The paper should measure efficiency improvement without using quantization. Currently, lines L22-24, L302-303, etc., can mislead readers.
- A comparison to early layer-exiting approaches seems more relevant than comparisons to KV quantization methods, which the current draft focuses on.
- e.g., "Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding", EMNLP, 2023
**minor comments**
- Figure 1, index -> layer index
- Line 140, Figure 1 (b) -> Figure 1 (a)
- For related works, it would be informative to include token compression methods:
- "Compressed Context Memory for Online Language Model Interaction", ICLR 2024
- "Recurrent memory transformer", NeurIPS 2022
Technical Quality: 2
Clarity: 2
Questions for Authors: - What will be the results when applying the merging function to more than two layers?
- How about applying the merging function based on cosine similarity? For instance, only apply merging when the cosine similarity is higher than a specified threshold. This might address the issues mentioned in the first weakness above.
- Could you compare the proposed method with early layer-exiting methods?
- In Eq (4), is the retention index defined per layer? If not, how do you measure d_i, d_{min}, and d_{max}?
- Could you provide an intuitive analysis of why some tokens have severely low cosine similarity in Figure 2 (b)? Are there any common characteristics of these tokens?
- Do you apply merging per attention head or to the entire key/value vectors?
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: In appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks to the reviewer for the valuable comments.
**Q1: Ad-hoc design choices (mid-layer merging, only two layers) need improvement.**
**A1:**
It is worth noting that MiniCache is the pioneering work exploring KV cache compression along the depth dimension (Lines 48-50). This insight has been highly recognized by Reviewer TyMG (*"...novel perspective of compressing across layers"*), Reviewer fj43 *(“The idea ... introduces a new perspective”)* and Reviewer ZDdv (*“This novel perspective, previously unexplored”*).
With this new motivation, the design of MiniCache is initially based on strong empirical evidence as shown in Figure 1(a), where KV cache states exhibit high similarity between the adjacent layers in the middle-to-deep portion of LLMs. Thus,
1. Merging after the mid-layer is **a reasonable and good starting point for our early exploration**, as it would ideally preserve most of the LLMs' performance.
2. More importantly, **we have already shown that MiniCache can perform cache-merging across more layers in Figure 4**, e.g. from shallow to deep layers. We have also discussed cross-layer merging beyond two layers using an advanced merging algorithm termed Spherical Cubic Interpolation in further work (Lines 339-342).
Furthermore, we agree that our current approach could be slightly simple as an early effort. However, at this stage, we have already conducted comprehensive experiments to demonstrate the effectiveness of our main idea of cross-layer KV cache merging. Notably, this has been **recognized by all reviewers.** Thus, we will leave more advanced design for future works, as also recognized by Reviewer fj43 ("could inspire further research in inter-layer redundancy exploitation.").
**Q2: Results of merging more than two layers?**
**A2:**
We have considered multiple-layer merging functions; however, in the current version of MiniCache, we employ SLERP (Spherical Linear Interpolation) as our merging function. This choice is due to SLERP's inherent ability to merge only two vectors.
To extend this capability to more than two layers, please refer to general response **Q3**.
As discussed in Q1, we will explore merging across more than two layers in future work.
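For reference, a generic SLERP sketch (not the authors' implementation; the parallel-vector fallback and the `eps` threshold are assumed details) makes the two-vector restriction concrete:

```python
import numpy as np

def slerp(v0, v1, t, eps=1e-8):
    """Spherical linear interpolation between two vectors.

    Interpolates along the great-circle arc between v0 and v1;
    t = 0.5 gives the angular midpoint used when merging two layers.
    """
    v0n = v0 / (np.linalg.norm(v0) + eps)
    v1n = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:  # nearly parallel: fall back to linear interpolation
        return (1 - t) * v0 + t * v1
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * v0 + (np.sin(t * theta) / s) * v1
```

Merging the KV states of two adjacent layers corresponds to calling `slerp` with an interpolation weight such as `t = 0.5`; because the formula takes exactly two endpoints, extending to three or more layers requires a different scheme, such as the spherical cubic interpolation mentioned in the paper.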
**Q3: Merge based on cosine similarity with a threshold?**
**A3:**
In our approach, we used angular distance as the threshold for merging the KV cache.
Mathematically, cosine similarity $\cos(\theta)$ can be converted to angular distance $d$ using the following relationship:
$d = \frac{1}{\pi}\arccos(\cos(\theta))$
Thus, merging based on angular distance is equivalent to merging based on cosine similarity.
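A minimal sketch of this conversion (generic code, not the authors' implementation; the `eps` guard is an assumed detail):

```python
import numpy as np

def angular_distance(v0, v1, eps=1e-8):
    """Normalized angular distance in [0, 1]: arccos(cosine similarity) / pi."""
    cos_sim = np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1) + eps)
    return np.arccos(np.clip(cos_sim, -1.0, 1.0)) / np.pi
```

Orthogonal vectors sit at distance 0.5, parallel vectors near 0, and opposite vectors near 1, so a threshold on angular distance picks out exactly the same pairs as an equivalent threshold on cosine similarity.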
**Q4: Compare with early layer-exiting methods?**
**A4:**
Both early-exiting and MiniCache aim to accelerate inference by addressing the depth dimension. Early-exiting is more effective for compute-bound operations, whereas MiniCache is advantageous for memory-bound operations. Furthermore, early-exiting methods typically require extensive training, whereas MiniCache is training-free, making it broadly applicable to a wide range of off-the-shelf models and scenarios.
In future work, we will explore combining these two paradigms. The relevant reference [C] will be included in the revised paper.
**Q5: Is the retention index defined per layer in Eq (4)? How to measure d_i, d_{min}, and d_{max}?**
**A5:**
In Eq. (4), the retention index is **shared for the paired KV states by position between adjacent layers**. Specifically, $d_i$ represents the angular distance between the paired tokens, with $d_{min}$ and $d_{max}$ denoting the minimum and maximum distances across all paired tokens between two adjacent layers, respectively. The rationale behind this approach is to retain paired tokens when they exhibit low similarity, which corresponds to a large distance $d_i$ exceeding the retention threshold.
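Based on the description above, a hedged sketch of the retention rule might look as follows (the min-max normalization detail and the threshold name `gamma` are assumptions for illustration):

```python
import numpy as np

def retention_mask(distances, gamma):
    """Retain token pairs whose normalized angular distance exceeds gamma.

    `distances` holds the angular distance d_i of each paired token between
    two adjacent layers; values are min-max normalized with d_min and d_max
    so the threshold gamma is comparable across layer pairs.
    """
    d = np.asarray(distances, dtype=float)
    d_min, d_max = d.min(), d.max()
    norm = (d - d_min) / (d_max - d_min + 1e-8)
    return norm > gamma  # True = keep unmerged (low cross-layer similarity)
```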
**Q6: Why do some tokens have low cosine similarity in Fig 2(b)? Are there any common characteristics?**
**A6:**
We conducted a thorough analysis and observed that the salient tokens often include the initial token and punctuation, as illustrated in Figure I and Table IX of the rebuttal PDF.
We have observed that the numerical values of these salient tokens are relatively large compared to other tokens. After passing through the attention layers, these salient tokens demonstrate significant numerical differences, as they receive strong attention despite not being semantically important. Similar phenomena have been observed in previous studies [D][E]. Consequently, the significant numerical differences between these tokens result in low similarity, rendering them unmergeable in our algorithm. We will include the analysis in the revised paper.
**Q7: Merging per attention head or entire key/value vectors?**
**A7:**
We apply compression to the entire key and value vectors.
**Q8: Efficiency comparison without quantization is needed to avoid misleading readers.**
**A8:**
Table VII in the rebuttal PDF demonstrates that MiniCache's cross-layer merging without quantization achieves almost lossless performance compared to the FP16 baseline, with a compression ratio of 1.53. In contrast, 2-bit quantization causes performance degradation. For instance, on the GSM8K dataset, performance drops from 0.159 to 0.127, but the compression ratio improves to 3.95.
Our MiniCache method, focusing on the depth dimension, can complement any quantization and existing KV cache compression methods. The results indicate that combining cross-layer merging with 4-bit quantization achieves the optimal balance between performance and efficiency.
To avoid confusion, we will revise the description as "On the ShareGPT dataset, LLaMA-2-7B with cross-layer merging achieves a compression ratio of 1.53. Additionally, since MiniCache is orthogonal to existing quantization techniques, it can achieve a compression ratio of up to 5.02x when combined with the 4-bit quantization technique."
**Note: Please refer to the reference list in the general response.**
---
Rebuttal 2:
Title: Follow-Up on Rebuttal
Comment: Dear Reviewer GYW9,
We would like to thank you for your valuable feedback to improve our work. We are wondering whether our response has addressed your questions and can improve your opinion of our work. We are committed to ensuring that our work meets the expectations of our esteemed reviewers.
Kindly let us know if you have any other concerns, and we will do our best to address them.
Best regards,
Authors of #1613
---
Rebuttal Comment 2.1:
Title: Thank you for the rebuttal
Comment: Thank you for your detailed responses to my questions. I agree that the paper presents interesting observations. However, the approach still feels ad-hoc. It's unfortunate that no consideration was given to which layers should be merged. My suggestion in Q3 was to decide which layers to merge based on specific criteria. The authors only merge consecutive layer pairs after the mid-layer (please correct me if I’ve misunderstood). Regarding Q8, I believe a 1.5x improvement is indeed very significant, and I appreciate the authors for sharing this result. On the condition that the exaggerated claims in L21, 81, 94, etc. (regarding the proposed method achieving a 5x improvement) are revised accordingly, I raise my score to weak accept.
---
Reply to Comment 2.1.1:
Title: Thank you for the response
Comment: Dear Reviewer GYW9,
Thank you for your valuable feedback and for raising the score. We are pleased that the new experiments and additional details have enhanced the clarity and positioning of our work.
We understand the reviewer's concern regarding the criteria for layer selection. We will incorporate your suggestion and include a more detailed trade-off analysis based on the experimental results shown in Figure 4. This will also explore the potential for a more dynamic layer selection strategy in our future work.
Best regards,
Authors of #1613 | Summary: The authors propose MiniCache, a KV cache compression method that efficiently compresses the Key-Value (KV) cache in large language models (LLMs) by leveraging the high similarity of KV states between adjacent layers in the middle-to-deep portion of LLMs. This compression is achieved by disentangling states into magnitude and direction components and interpolating directions while retaining distinct state pairs to minimize storage overhead. MiniCache is training-free, complements existing compression strategies, and demonstrates significant performance improvements, including up to a 5.02x compression ratio, a 5x enhancement in inference throughput, and a 41% reduction in memory footprint, all while maintaining near-lossless performance across various models and benchmarks.
Strengths: 1.[Novel Approach, Interesting Observation] The authors explore KV cache compression from a novel perspective by focusing on the depth dimension, which was previously unexplored. They identify a significant finding: KV cache states exhibit high similarity between adjacent layers in the middle-to-later stages of LLMs. This discovery is backed by a thorough analysis and visualization of layer-wise similarity statistics, leading to a well-motivated methodology for compressing the KV cache. This approach has substantial implications for the development of KV cache compression techniques.
2.[Sound Methodology] The authors propose a training-free strategy for cross-layer merging, formulating a reparametrization compression technique to enhance inference efficiency. This method carefully considers outliers, retaining these specific tokens and restoring them through a well-constructed strategy.
3.[Strong Experiments, Well Compatibility] Extensive experiments across multiple datasets with four different models demonstrate that the proposed MiniCache consistently performs effective compression without compromising performance, achieving a superior compression ratio of ~5× and outperforming existing methods. Furthermore, the proposed method is compatible with existing quantization strategies, highlighting its practical values.
4.The paper is well-written and easy to understand.
Weaknesses: 1.I am concerned about the computational overhead during the reparametrization and restoration stages. Does this compression strategy increase computational overhead?
2.The authors selected the middle layers for cross-layer merging compression. Have the authors considered any principled methods to further increase the compression ratio by compressing the shallow layers in future work?
Technical Quality: 4
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The authors address the limitations inherited from the SLERP method and propose future research directions to enhance the adaptability of the merge compression strategy.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks to the reviewer for the valuable comments.
**Q1 : Concern about computational overhead during reparametrization and restoration stages. Does this compression strategy increase computational overhead?**
**A1:**
Reparametrization-based compression involves computing magnitude and direction vectors, and applying the SLERP algorithm through simple matrix manipulation. Additionally, we compute pairwise distances to identify salient tokens, as detailed in Section 4.2.
For restoration, we utilize an in-place index-based token replacement operation, where the computational overhead is managed by the PyTorch index selection procedure, as mentioned in line 216.
As shown in Table V of the rebuttal PDF, the reparametrization process, including the computation of magnitude and direction, takes 0.093 ms (0.031 ms + 0.062 ms), and the restoration process, including the computation of the distance matrix and token replacement, takes 0.116 ms (0.061 ms + 0.055 ms). These times indicate a negligible computational overhead compared to the overall attention computation, which takes 9.756 ms. The running time is measured in milliseconds per attention layer with a batch size of 1 on LLaMA-2-7B.
**Q2: The authors selected the middle layers for cross-layer merging compression. Have the authors considered any principled methods to further increase the compression ratio by compressing the shallow layers in future work?**
**A2:**
Yes, we have considered and proposed methods to further improve the compression ratio.
Our observations indicate that certain shallow layers exhibit low similarities, suggesting the presence of differences and residual errors. To address this, we suggest employing a meta-learning-based approach to approximate these differences in the shallow layers. Once these approximations are made, we can bridge the gaps in the shallow layers, enabling their effective merging and compression. The possibility of merging shallow layers has been demonstrated in concurrent work [B], though it trains from scratch.
References:
[B] You only cache once: Decoder-decoder architectures for language models. Arxiv 2024.
---
Rebuttal Comment 1.1:
Comment: My concerns are fully addressed and I would like to keep my score. | Summary: The paper proposes a novel method called MiniCache for compressing the KV cache in large language models (LLMs) by merging cache states across layers. The authors argue that this approach significantly reduces the memory footprint and enhances inference throughput without significant performance loss. The paper includes evaluations on various models and benchmarks, demonstrating the effectiveness of MiniCache in terms of compression ratio and throughput improvement.
Strengths: 1. The idea of compressing the KV cache by merging states across layers introduces a new perspective on reducing memory usage during inference. This approach could inspire further research in inter-layer redundancy exploitation.
2. Memory consumption and inference speed are critical challenges in deploying LLMs. The proposed MiniCache method addresses these issues directly, offering a potentially valuable tool for practitioners working with resource-constrained environments.
3. The method does not require additional training, making it easy to integrate into existing inference pipelines without the need for extensive retraining or fine-tuning of models.
Weaknesses: **Major Weakness 1: Lack of Baseline Comparisons**
In Table 1, the authors only include quantization methods for performance comparison, neglecting other KV cache eviction methods such as those proposed in [1] and [2].
**Major Weakness 2: Lack of Evaluation on Instruction-following Benchmarks**
Given that instruction-tuned models are more generalizable for downstream applications, it is essential to evaluate how MiniCache performs on instruction-following benchmarks such as MT-Bench [3].
[1] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models, NeurIPS 2023
[2] Efficient Streaming Language Models with Attention Sinks, ICLR 2024
[3] Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena, NeurIPS 2023
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Beyond memory compression, what is the impact of MiniCache on latency?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: As mentioned above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks to the reviewer for the valuable comments.
**Q1: Lack of Baseline Comparisons. In Table 1, the authors only include quantization methods for performance comparison, neglecting other KV cache eviction methods such as those proposed in [14] and [15].**
**A1:** We have included the baseline comparison with H2O [14] in Appendix Section A, along with additional experimental results. We further add the Attention Sink [15] benchmark, as shown in Table III in the rebuttal PDF.
According to this table, MiniCache consistently outperforms H2O [14] and Attention Sink [15]. Additionally, MiniCache explores the KV Cache in a **novel depth perspective**, making it **orthogonal to existing quantization and sparsity techniques** for further improvements.
**Q2: Lack of Evaluation for Instruction-following Benchmarks. Given that instruction-tuned models are more generalizable for downstream applications, it is essential to evaluate how MiniCache performs on instruction-following benchmarks such as MT-Bench [A].**
**A2:**
We benchmark MiniCache on the MT-Bench dataset, which focuses on instruction-following tasks, as shown in Table VIII in the rebuttal PDF. The models employed for this benchmarking include LLaMA-2-7B-Chat, Mistral-7B-Instruct, and LLaMA-3-8B-Instruct. Our MiniCache-4bit exhibits only a slight performance reduction caused by 4-bit quantization. However, MiniCache achieves almost lossless performance via standalone cross-layer merging.
**Q3: Latency Benchmark. Beyond memory compression, what is the impact of MiniCache on latency?**
**A3:**
We benchmark the latency of LLaMA-2-7B on an NVIDIA A100 GPU using different sequence lengths ranging from 1024 to 4096 with a batch size of 16, as shown in Table IV in the rebuttal PDF. We compare it with H2O, which requires calculating full attention scores to estimate token importance. In contrast, MiniCache performs reparameterization-based merging and token restoration using simple matrix manipulations, resulting in more lightweight computations and lower latency. Specifically, when the sequence length is 4096, MiniCache shows a 36.83% reduction in latency compared to H2O.
References:
[A] Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena, NeurIPS 2023. | Summary: This paper introduces MiniCache, a novel approach to compressing the Key-Value (KV) cache in large language models (LLMs) to enhance inference efficiency. The KV cache is crucial in storing key-value states of previously generated tokens, significantly reducing redundant computations and lowering latency during autoregressive generation. However, as the sequence length increases, the KV cache's size also grows linearly, leading to substantial memory consumption. MiniCache addresses this issue by compressing the KV cache across layers from a depth perspective, leveraging the observation that KV cache states exhibit high similarity between adjacent layers in the middle-to-deep portions of LLMs. The proposed method involves disentangling the states into magnitude and direction components, interpolating the directions while preserving the lengths, and retaining highly distinct state pairs unmerged to minimize information loss. MiniCache is a training-free, general approach that complements existing KV cache compression strategies like quantization and sparsity.
The authors conducted comprehensive evaluations using various models, including LLaMA-2, LLaMA-3, Phi-3, Mistral, and Mixtral, across multiple benchmarks. The results demonstrated that MiniCache achieves superior compression ratios and high throughput, with LLaMA-2-7B showing a compression ratio of up to 5.02×, a 5× increase in inference throughput, and a 41% reduction in memory footprint compared to the FP16 full cache baseline, all while maintaining near-lossless performance. The paper highlights the potential of MiniCache to significantly reduce memory requirements and enhance the efficiency of LLM inference, making it a promising solution for applications requiring long context inputs and extensive sequence generation.
Strengths: The paper presents a substantive contribution to the field of efficient machine learning by introducing a novel, high-quality, and clearly explained method for KV cache compression in large language models. The significance of the work is underscored by its potential to improve the practicality and scalability of LLMs, addressing a critical bottleneck in their deployment. The originality of the approach, combined with comprehensive evaluations and clear exposition, makes this paper a valuable addition to the literature on efficient ML techniques.
**Originality**
- **Novel Compression Approach**: The paper introduces a unique method for KV cache compression in large language models by exploring the depth dimension, which is a previously overlooked area. This novel perspective of compressing across layers rather than within layers demonstrates a creative combination of existing ideas in a new, impactful way.
- **Reparameterization Strategy**: The method's use of reparameterization to disentangle state vectors into magnitude and direction components for interpolation is innovative. This approach preserves important information while effectively reducing memory usage.
**Quality**
- **Comprehensive Evaluation**: The authors provide a thorough evaluation of MiniCache across various models and benchmarks, including LLaMA-2, LLaMA-3, Phi-3, Mistral, and Mixtral. The extensive experiments validate the method's effectiveness and robustness.
- **Performance Metrics**: The results show significant improvements in compression ratios, inference throughput, and memory footprint reduction, with metrics such as a 5.02× compression ratio and a 41% reduction in memory usage while maintaining near-lossless performance. These metrics highlight the quality and practicality of the proposed solution.
**Clarity**
- **Detailed Exposition**: The paper is well-written, with a clear and detailed exposition of the methodology. Figures and tables are effectively used to illustrate key concepts, observations, and results, aiding in the understanding of the approach and its benefits.
- **Step-by-Step Explanation**: The authors provide a step-by-step explanation of the MiniCache method, from the initial observations to the final implementation, making the paper accessible even to those less familiar with the intricacies of KV cache compression.
**Significance**
- **Addressing a Critical Issue**: The paper addresses a significant challenge in the deployment of large language models – the growing memory consumption of KV caches with increasing sequence lengths. By reducing the memory footprint and improving inference efficiency, MiniCache has the potential to make LLMs more practical and scalable in real-world applications.
- **Broad Applicability**: The approach is general and training-free, making it applicable to a wide range of models and scenarios. This broad applicability enhances the significance of the work, as it can be integrated into existing systems with minimal modification.
Weaknesses: 1. **No Implementation Source Code Provided**: The paper does not include the implementation source code, which is a significant limitation. Releasing the source code would facilitate further research and enable other researchers to replicate and build upon the work. Providing the code upon publication would enhance the paper's impact and encourage broader adoption of the proposed method.
2. **Insufficient Justification for SLERP**: The introduction of Spherical Linear Interpolation (SLERP) in the paper feels abrupt and lacks sufficient justification. While SLERP is used for interpolating between vectors, the paper does not provide enough rationale for why this specific technique was chosen over other interpolation methods. More ablation studies should be conducted to demonstrate the effectiveness and necessity of using SLERP in this context. These studies could compare SLERP with alternative interpolation techniques to show its advantages and validate the authors' choice.
### Minor Writing Improvements
1. **Figure Clarity**:
- **Figure 1(a)**: The resolution of Figure 1(a) could be higher, or the figure could be replaced with a vector graphic to improve clarity and readability. Enhancing the visual quality would make the figure easier to understand and more professional.
2. **Typographical Corrections**:
- **Line 308**: The word "interpretation" should be corrected to "interpolation".
- **Line 328**: The phrase "A larger t" should be corrected to "A larger \(\gamma\)".
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. **Distance-Based Threshold for Retention (Line 226)**
Why did you choose a distance-based threshold instead of the merging ratio of overall tokens as the control for retention? Can you show the effect on accuracy and efficiency as the ratio of retention tokens varies, highlighting this trade-off?
2. **Ratio of Merged Tokens in Ablation Study**
In the second ablation study, can you show the ratio of merged tokens and the efficiency trade-offs for different settings?
3. **Implementation Source Code**
Can you include the implementation source code?
4. **Justification for SLERP**
Can you justify your choice of SLERP over other interpolation methods?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: There should be a section discussing the limitation of this work and its social impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks to the reviewer for the valuable comments.
**Q1: Distance-Based Threshold for Retention (Line 226)**. **Why did you choose a distance-based threshold instead of the merging ratio of overall tokens as the control for retention? Can you show the effect on accuracy and efficiency as the ratio of retention tokens varies, highlighting this trade-off?**
**A1:**
According to our Observation 2, token pairs with lower similarity scores are more critical for restoration, and different adjacent layers exhibit distinct patterns.
Our proposed **dynamic distance-based threshold** allows us to selectively retain salient tokens according to low similarity scores. Setting the retention ratio of the overall tokens as a hyperparameter is impractical because determining the optimal retention ratio for each layer manually is challenging due to the dynamic nature across different layers.
To illustrate the trade-off regarding accuracy and efficiency, we conducted experiments by varying the ratio of retention tokens, as shown in Table I in the rebuttal PDF. We observe that as the overall token retention ratio increases, accuracy improves up to a certain point before plateauing. Retaining 20% of the tokens is necessary to ensure all salient tokens are preserved without performance degradation. In contrast, using a **dynamic distance-based threshold**, as proposed in our paper, we only need to retain the top 5% of the salient tokens. Note that efficiency decreases as more tokens are retained. This demonstrates that our distance-based approach better balances performance and efficiency than the fixed retention ratio counterpart.
**Q2: Ratio of Merged Tokens in Ablation Study. In the second ablation study, can you show the ratio of merged tokens and the efficiency trade-offs for different settings?**
**A2:**
We are adding another column in terms of compression ratio to represent the efficiency trade-offs, as shown in Table II in the rebuttal PDF. The results suggest that setting γ to 0.05 achieves the best balance between performance and efficiency.
**Q3: No Implementation Source Code Provided. Can you include the implementation source code?**
**A3:**
We will definitely release our source code upon acceptance.
To ensure reproducibility, we have included comprehensive pseudocode in the Appendix, specifically in Algorithm 1 and Algorithm 2.
- **Algorithm 1:** This algorithm outlines the overall logic of the MiniCache Inference process, covering both the prefilling and decoding stages. It provides a step-by-step breakdown of the core processes involved in our approach.
- **Algorithm 2:** This algorithm details the compression and restoration processes. It elaborates on how the KV cache is compressed and subsequently restored, highlighting the technical intricacies and optimizations employed in our method.
We believe that the inclusion, along with the detailed comments, will significantly aid the community in grasping the logic and mechanics of our implementation.
**Q4: Justification for SLERP. Can you justify your choice of SLERP over other interpolation methods?**
**A4:**
1. Motivation for Using SLERP:
We conceptualize the KV Cache as activations, which consist of two factors: magnitude and direction in the spherical feature space. SLERP allows us to use a more compact form to represent the original KV Cache by merging states effectively. We can compute the overall magnitude vector and store it in a channel with a dimension equal to 1, significantly reducing the memory overhead, as shown in *section 4.2*.
2. Performance Metrics and Empirical Evidence:
In our initial experiments, we considered average merging as the preliminary method; however, the performance in terms of accuracy was not promising, as shown in Figure 2(a). Subsequently, we conducted experiments using maximum norm-preserving interpolation, but it demonstrated lower accuracy compared to SLERP, as detailed in Appendix Section C (Line 579). Ultimately, we selected SLERP [64] as our final solution due to its multiple advantages in key performance metrics, including compression ratio, accuracy, and computational efficiency.
To further substantiate our claim, we conducted additional ablation studies comparing SLERP with average merging and maximum norm-preserving interpolation. The results, as shown in Table VI in the rebuttal PDF, reaffirm that SLERP provides superior performance in information preservation.
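A generic SLERP sketch (standard formulation, not the paper's exact implementation) illustrates the magnitude/direction disentanglement described above; the linear magnitude interpolation is our simplifying assumption:

```python
import numpy as np

def slerp(v0, v1, t=0.5, eps=1e-8):
    """Spherical linear interpolation between two state vectors.
    Directions are interpolated along the great circle on the unit
    sphere; magnitudes are disentangled and handled separately."""
    n0, n1 = np.linalg.norm(v0), np.linalg.norm(v1)
    u0, u1 = v0 / (n0 + eps), v1 / (n1 + eps)
    omega = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))
    if omega < eps:  # (near-)parallel vectors: fall back to linear blend
        direction = (1 - t) * u0 + t * u1
    else:
        direction = (np.sin((1 - t) * omega) * u0
                     + np.sin(t * omega) * u1) / np.sin(omega)
    magnitude = (1 - t) * n0 + t * n1  # simplifying assumption
    return magnitude * direction
```

This makes the contrast with average merging concrete: the SLERP midpoint of two orthogonal unit vectors is still a unit vector, whereas their arithmetic mean has norm of only about 0.707, i.e. averaging shrinks magnitudes and loses information.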
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: I appreciate the authors' responses, and most of my concerns have been addressed. I will keep my evaluation for acceptance.
Rebuttal: ## Response to all reviewers
We sincerely thank all reviewers for their valuable comments.
All reviewers agree that:
**The Novel Approach:**
- "The paper introduces a unique method …. This novel perspective impacts the field in a new, impactful way." (Reviewer TyMG)
- "The idea ... introduces a new perspective ... . This approach could inspire further research ..."(Reviewer fj43)
- "This novel perspective, previously unexplored, identifies a significant finding ..." (Reviewer ZDdv)
**The Insightful Observation:**
- "They identify a significant finding ... A thorough analysis and visualization back this discovery... ." (Reviewer ZDdv)
- "The authors provide a step-by-step explanation ... , from the initial observations to the final implementation." (Reviewer TyMG)
**The Sound Methodology:**
- "The method is innovative, preserving important information while effectively reducing memory usage." (Reviewer TyMG)
- "The authors propose a training-free strategy ... . This method ... a well-constructed strategy" (Reviewer ZDdv)
- "The method does not require additional training, making it easy to integrate into existing inference pipelines ..." (Reviewer fj43)
**The Practical Impact:**
- "MiniCache offers a potentially valuable tool." (Reviewer fj43)
- "The proposed approach has a significant practical impact." (Reviewer GYW9)
- "The approach is general and training-free ... have broad applicability ..." (Reviewer TyMG)
**The Comprehensive Experiments:**
- "The authors provide a thorough evaluation across various models and benchmarks. The extensive experiments validate the method's effectiveness and robustness." (Reviewer TyMG)
- "Extensive experiments demonstrate that MiniCache consistently performs effective compression without compromising performance." (Reviewer fj43)
## General Response
**Q1: Comparative Analysis and Benchmarks (Reviewers GYW9 and fj43)**
- We include comparisons with other KV cache eviction methods (e.g., those proposed in H2O and Attention Sinks) to strengthen our evaluation, as shown in Table III in the rebuttal PDF.
- We evaluate our method on instruction-following benchmarks like MT-Bench to demonstrate its generalizability for downstream applications, as shown in Table VIII in the rebuttal PDF.
**Q2: Efficiency and Overhead (Reviewers GYW9 and fj43)**
- We address concerns regarding computational overhead by measuring the different stages introduced during the reparametrization-based merging and token restoration, as shown in Table V in the rebuttal PDF.
- We evaluate the latency of MiniCache and compare latency with a baseline method H2O, as shown in Table IV in the rebuttal PDF. We will include these results in the revised paper.
**Q3: Future work and direction (Reviewers GYW9 and ZDdv)**
- Advanced Merging Techniques: Techniques such as Spherical Cubic Interpolation [75], mentioned in the further work Section 7, allow for the interpolation of multiple vectors. This method enables the effective merging of more than two layers.
- Multiple-Round Based Merging Strategy: This strategy involves iteratively merging layers by first combining two layers, then merging the resultant layer with a third one. However, this approach is less efficient and may introduce additional computational complexity.
- Approximation-Based Methods: To merge the KV cache in shallow layers, we plan to develop a meta-learning method to approximate their differences and merging errors. As shown in Figure 1 (a), some shallow layers exhibit low similarity, indicating significant differences. By approximating these differences, we can effectively merge and compress the KV cache in shallow layers, thereby enhancing the overall compression ratio.
Additionally, we thank all minor writing improvements given by Reviewers TyMG and GYW9. We will take these suggestions into account and improve them in the final manuscript.
**Reference:**
[A] Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. NeurIPS 2023.
[B] You only cache once: Decoder-decoder architectures for language models. Arxiv 2024.
[C] Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding. EMNLP 2023
[D] Quantizable Transformers: Removing Outliers by Helping Attention Heads Do Nothing. NeurIPS 2023.
[E] Efficient Streaming Language Models with Attention Sinks. ICLR 2024.
Pdf: /pdf/4dc0f9ad7e73408c877997dd4f1a7c06801eaa4e.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Multi-scale Consistency for Robust 3D Registration via Hierarchical Sinkhorn Tree | Accept (poster) | Summary: This paper proposes a hierarchical Sinkhorn tree approach to extract correspondences that are consistent across multiple feature scales for the point cloud registration task. In addition, an overlap-aware module is proposed to better locate correspondences around the overlap regions. The proposed method is evaluated on two benchmarks, where it outperforms existing SOTA methods in terms of success rate and registration accuracy.
Strengths: 1. It applies the multi-scale consistency concept to the point cloud registration task. Previous criteria for extracting correspondences are based either on nearest distance in feature space or on geometric consistency in Euclidean space. Here, the former criterion is extended to the multi-scale level, and the experimental results validate its effectiveness.
2. It extends the Sinkhorn algorithm to the multi-scale level to model multi-scale consistency and proposes a BFS approach to optimize the assignment problem across scales.
Weaknesses: 1. The use of wrong terminology. Eq. 1 seems to describe how to perform feature matching across multiple scales, not an "inlier correspondence", which is defined as a match whose two points are close enough under the rigid transformation.
2. In Eq. 5, the variable $T_{ij}$ is not explained. Even though its definition can be found in the SuperGlue paper, where it is a binary assignment variable indicating whether a match is good or bad, it is never a bad thing to explain more.
3. For Sec. 3.2.4, it would be better to illustrate how the proposed HST is optimized in a BFS manner with a detailed figure or a piece of pseudocode.
Technical Quality: 3
Clarity: 3
Questions for Authors: I don’t have any question.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Multi-scale consistency is a quite strong constraint, which will lead to no matches or erroneous matches when the learned multi-scale features are less descriptive or the test data are extremely low-overlap cases. Moreover, the optimization of multi-scale consistency requires careful fine-tuning of hyperparameters, such as the number of levels and the grid size.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for your valuable comments and suggestions.
**Response to Weakness 1**: Thank you for pointing out this issue. We apologize for any misunderstanding caused by the term "inlier correspondence" in Eq. (1). It should be revised as "putative inlier correspondences". As you mentioned, the definition of inlier correspondence refers to points in two point clouds that are sufficiently close given a ground truth transformation. Actually our original intention in using 'inlier correspondence' here was to convey that, since the ground truth transformation is not provided beforehand, we often have to design a method to estimate the inlier correspondence set. A common approach is to do feature matching, as you mentioned. Therefore, in Eq. (1) we aim to describe how we could solve this problem from the Multi-scale Consistency (MSC) perspective, so the "inlier correspondence" here should be revised as "putative inlier correspondence", representing the inlier correspondence inferred by MSC. We will fix it to "putative inlier correspondence" in the revised version.
**Response to Weakness 2**: Thanks for your feedback regarding the lack of explanation of $T_{ij}$. Here $T_{ij}$ is the element from the assignment matrix $T \in R^{(|\mu_{ov}|+1)\times(|\nu_{ov}|+1)}$ of optimal transport problem, indicating the pairwise matching (overlap) score from source to target point cloud. The row and column sum of the matrix $T$ are respectively subject to the augmented marginal distributions $\mu_{ov}$ and $\nu_{ov}$ to ensure the allocation constraints. $T$ is the optimization variable in the calculation of the Overlap-aware Sinkhorn Distance and can be efficiently solved by the Sinkhorn algorithm. We will include the above explanation in the revised version.
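As a purely illustrative aside (not the paper's implementation), the entropy-regularized optimal transport problem behind such an assignment matrix $T$ can be solved with a few lines of alternating marginal rescalings; the cost matrix, marginals, and parameter values below are toy choices of ours:

```python
import numpy as np

def sinkhorn(cost, mu, nu, eps=0.5, n_iters=200):
    """Entropy-regularized optimal transport via Sinkhorn iterations.

    cost: (m, n) cost matrix; mu: (m,) row marginals; nu: (n,) column marginals.
    Returns a transport plan T whose rows sum to ~mu and columns to ~nu.
    """
    K = np.exp(-cost / eps)          # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iters):         # alternate row/column rescaling
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    v = nu / (K.T @ u)
    return u[:, None] * K * v[None, :]

# toy example: 3 source points, 4 target points, uniform marginals
rng = np.random.default_rng(0)
C = rng.random((3, 4))
mu = np.full(3, 1.0 / 3)
nu = np.full(4, 1.0 / 4)
T = sinkhorn(C, mu, nu)
```

The paper's overlap-aware variant additionally augments the marginals $\mu_{ov}$ and $\nu_{ov}$ (e.g., with a dustbin entry) and restricts the transport to predicted overlapping points, which this sketch omits.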
**Response to Weakness 3**: Thank you for your suggestions on better illustrating the optimization of HST. We will include a detailed diagram in the revised version that illustrates the forward computation process and the backward gradient flow to thoroughly demonstrate the optimization process of HST.
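For a concrete picture in the meantime, here is a minimal, hypothetical Python sketch of a BFS-style, level-by-level traversal over a pruned tree that aggregates per-node values; the actual HST evaluates overlap-aware Sinkhorn distances at each node rather than the placeholder `score` function used here:

```python
from collections import deque

def bfs_aggregate(root, children, score, max_depth):
    """Breadth-first traversal of a tree, level by level, summing a
    per-node score at each level. `children(node)` returns the kept
    (pruned) children of a node; one aggregate is returned per level."""
    level_scores = []
    frontier = deque([root])
    for _ in range(max_depth):
        if not frontier:
            break
        level_scores.append(sum(score(n) for n in frontier))
        nxt = deque()
        for node in frontier:
            nxt.extend(children(node))
        frontier = nxt
    return level_scores

# toy 3-level tree: 0 -> {1, 2}, 1 -> {3}
tree = {0: [1, 2], 1: [3], 2: [], 3: []}
scores = {0: 1.0, 1: 0.5, 2: 0.25, 3: 0.1}
levels = bfs_aggregate(0, tree.__getitem__, scores.__getitem__, max_depth=3)
# levels == [1.0, 0.75, 0.1]
```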
**Response to Limitations** : Thank you for your insightful feedback on our work!
We understand your concern that Multi-scale Consistency (MSC) may hinder the model's ability to learn multi-scale descriptive features because it requires the features of two point clouds to exhibit similarity across multiple scales. In fact, we also observe this issue in our experiments. Therefore, we introduce a layer-wise overlap-aware circle loss (please refer to lines 253-255 in the paper) to help maintain the descriptiveness of features at each layer. This ensures that the backbone does not learn overly uniform features while still preserving MSC.
As for the concern that MSC might fail in test data with extremely low overlap, we design a set of experiments targeting overlap ratios of less than 10% and compare the performance with GeoTransformer to validate the robustness of HST when facing extremely low overlap. Please note that the currently available preprocessed datasets, 3DMatch (overlap > 30%) and 3DLoMatch (10% < overlap < 30%), do not include samples with such extremely low overlap (overlap<10%). Therefore, we access the 3DMatch raw dataset and collect a set of point cloud pairs with overlap < 10%, which we refer to as 3DExtremeLoMatch (3DExMatch) dataset, containing a total of 1,343 samples. We first test the model pre-trained on 3DMatch directly on 3DExMatch, and the results can be found in the following table:
|Estimator|Method|RR(\%)|RRE($^\circ$)|RTE(m)|
|---|---|:---:|:---:|:---:|
|LGR|GeoTrans|29.8|3.83|0.110|
||HST|32.5|3.78|0.111|
|RANSAC-250|GeoTrans|28.8|4.56|0.123|
||HST|31.5|4.42|0.122|
|RANSAC-500|GeoTrans|30.5|4.25|0.121|
||HST|32.5|4.16|0.118|
|RANSAC-1000|GeoTrans|31.7|4.23|0.118|
||HST|33.6|4.03|0.114|
|RANSAC-2500|GeoTrans|31.9|4.09|0.116|
||HST|34.7|3.81|0.111|
|RANSAC-5000|GeoTrans|31.0|4.04|0.117|
||HST|34.4|3.87|0.114|
We then randomly divide the 3DExMatch into training, validation, and test sets with proportions of 60% (805 samples), 10% (134 samples), and 30% (404 samples), respectively. We fine-tune GeoTransformer and HST for 3 epochs on the training set, and then evaluate the models on the test set. The results can be found in the following table. Both results clearly demonstrate that HST maintains strong performance even under low overlap conditions, confirming the effectiveness of our method.
|Estimator|Method|RR(\%)|RRE($^\circ$)|RTE(m)|
|---|---|:---:|:---:|:---:|
|LGR|GeoTrans|53.8|3.73|0.107|
||HST|59.0|3.79|0.106|
|RANSAC-250|GeoTrans|47.0|4.39|0.121|
||HST|49.8|4.43|0.119|
|RANSAC-500|GeoTrans|49.9|4.55|0.113|
||HST|52.6|4.03|0.117|
|RANSAC-1000|GeoTrans|52.4|4.03|0.117|
||HST|57.5|4.13|0.115|
|RANSAC-2500|GeoTrans|55.1|3.89|0.116|
||HST|59.0|3.89|0.112|
|RANSAC-5000|GeoTrans|56.1|4.07|0.116|
||HST|60.3|3.93|0.111|
Regarding the potential need to adjust hyperparameters like the number of levels and grid size for optimization, we believe that the adjustment of the number of levels primarily involves balancing performance and latency. As shown in the ablation study in Table 4, increasing the number of levels enhances performance but also slightly increases inference time. If the application scenario requires more real-time processing, the number of levels can be reduced to accelerate the model without significantly sacrificing performance. Similar to other coarse-to-fine methods, such as GeoTransformer, hyperparameters like grid size also require fine-tuning based on the specific scenario and the source of the point clouds.
All the above discussions will be added in the revised version. We hope the above responses address your concerns.
---
Rebuttal Comment 1.1:
Comment: The rebuttal and additional experiments address most of my concerns. I will keep the original score.
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging our rebuttal and the additional experiments we conducted in response to your concerns. We will include the above discussions in the revised version. We appreciate your consideration and the time you took to reassess our work based on these updates.
Thanks again for your careful review and valuable comments to help us improve our submission. | Summary: This paper presents a method to enhance the performance of GeoTransformer by filtering outlier correspondences at the coarse level using a multi-scale, overlap-guided Sinkhorn algorithm. It introduces an overlap-aware Sinkhorn Distance designed to detect potential overlapping points, thereby enhancing the robustness of consistency calculations and reducing the complexity of solutions.
Strengths: 1. The paper introduces a coarse-level outlier removal method that can be integrated into the GeoTransformer and trained end-to-end with the correspondence search. This approach is intriguing to me, despite being incremental.
2. It proposes a method for modeling multi-scale consistency named HST, which characterizes the similarity of potential overlapping points in neighboring areas, layer by layer, and aggregates these into a multi-scale consistency framework.
3. The numerical results achieve state-of-the-art performance on both indoor and outdoor benchmarks.
Weaknesses: The method appears to be an incremental enhancement, adding a multi-scale local consistency module to the GeoTransformer, which increases both model and time complexity. The registration results still heavily depend on the coarse matching that generates initial correspondences via the GeoTransformer. If these initial correspondences are of poor quality, then performance suffers.
The paper lacks an ablation study on the parameter selection for the top-q predicted scores and the overlap points, which could provide deeper insights into the model's sensitivity to these parameters.
Additionally, it would be beneficial to explore the impact of replacing the Hierarchical Sinkhorn Tree with a direct outlier removal method such as MAC. Understanding the performance differences could highlight the advantages or disadvantages of the proposed method in handling outliers.
The overlap score-guided Sinkhorn algorithm for registration, which was also used in OCFNet [1], should be cited, even though the application method differs. Additionally, this method functions as an outlier removal technique, similar to approaches like FastMAC [2]. Therefore, many state-of-the-art outlier removal methods should be compared.
The authors claim that the proposed method mitigates the effects of low overlap and high noise as its main contribution. However, it appears unable to address the challenges faced by the GeoTransformer in cases of extremely low overlap during registration. It would be beneficial to know how the method performs when the overlap ratio between two point clouds is less than 10%.
[1] Mei, Guofeng, et al. "Overlap-guided coarse-to-fine correspondence prediction for point cloud registration." 2022 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2022.
[2] Zhang, Yifei, et al. "FastMAC: Stochastic Spectral Sampling of Correspondence Graph." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What is the performance if an outlier removal method such as MAC, FastMAC, or GC-RANSAC is used directly in place of the Hierarchical Sinkhorn Tree?
2. The authors claim that the proposed method mitigates the effects of low overlap and high noise as its main contribution. However, it appears unable to address the challenges faced by the GeoTransformer in cases of extremely low overlap during registration. It would be beneficial to know how the method performs when the overlap ratio between two point clouds is less than 10%.
3. Equation (2) appears to be incorrect and requires further explanation.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We first would like to thank the reviewer for giving us valuable comments.
**Response to the comments on our work (first paragraph of Weaknesses)**: Thank you for your insightful comments on our work. We understand your concern that our method builds upon the foundation laid by existing work. However, we would like to emphasize that modeling the multi-scale consistency (MSC) is non-trivial due to the challenges in ensuring both effectiveness and efficiency. To achieve this, we proposed a series of methods to improve performance while maintaining low complexity, such as a lightweight overlap prediction module, Overlap-aware SD with Overlap Filtering and Overlap-aware Marginals, pruned HST, and others. Our extensive experiments on both indoor and outdoor benchmarks have demonstrated their scene-agnostic effectiveness. HST shows significant improvement compared to SOTAs, maintaining robust performance even under low overlap (please refer to the 3DLoMatch results in Tab. 1 & 2, as well as our response to Question 2 below), while introducing only a slight, acceptable time overhead.
As for your concern that we might suffer from poor initial correspondences, we believe that by modeling MSC, our method improves the quality of the correspondence set and thus suffers less performance degradation compared to others. All current coarse-to-fine-based methods heavily depend on coarse matching, and if the coarse correspondences are poor, they will face performance loss. Please refer to our response to Weakness 2. Under extremely low overlap, GeoTransformer shows significant performance degradation, whereas HST suffers less.
**Response to Question 1**: Thanks for your suggestions about comparing HST with outlier rejection methods. We are also very interested in the comparison. For fairness, we replace HST directly with GC-RANSAC, MAC, and FastMAC. Results on both 3DMatch and 3DLoMatch can be found in the following table.
|3DM\|3DLM|RR(\%)|RRE($^\circ$)|RTE(m)|RR(\%)|RRE($^\circ$)|RTE(m)|time(s)|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|vanilla|91.5|1.91|0.068|74.0|2.95|0.090|0.260|
|GC-RANSAC|92.1|1.78|0.068|73.4|2.96|0.088|0.324|
|MAC|92.2|1.99|0.067|74.4|2.85|0.086|0.475|
|FastMAC|91.9|1.73|0.062|74.2|2.86|0.087|0.282|
|HST|93.2|1.70|0.059|77.3|2.71|0.084|0.296|
**Response to Question 2**: Thank you for your constructive suggestions to test the HST under extremely low overlap. We design a set of experiments targeting overlap ratios of less than 10% and compare with GeoTransformer. Please note that the currently available preprocessed datasets, 3DMatch (overlap>30%) and 3DLoMatch (10%<overlap<30%), both do not include samples with such extremely low overlap. Therefore, we access the 3DMatch raw dataset and collect a set of pairs with overlap<10%, which we refer to as 3DExtremeLoMatch (3DExMatch), containing a total of 1,343 samples. We first test the model pre-trained on 3DMatch directly on 3DExMatch, and the results can be found in the following table:
|Estimator|Method|RR(\%)|RRE($^\circ$)|RTE(m)|
|---|---|:---:|:---:|:---:|
|LGR|GeoTrans|29.8|3.83|0.110|
||HST|32.5|3.78|0.111|
|RANSAC-250|GeoTrans|28.8|4.56|0.123|
||HST|31.5|4.42|0.122|
|RANSAC-500|GeoTrans|30.5|4.25|0.121|
||HST|32.5|4.16|0.118|
|RANSAC-1000|GeoTrans|31.7|4.23|0.118|
||HST|33.6|4.03|0.114|
|RANSAC-2500|GeoTrans|31.9|4.09|0.116|
||HST|34.7|3.81|0.111|
|RANSAC-5000|GeoTrans|31.0|4.04|0.117|
||HST|34.4|3.87|0.114|
We then randomly divide the 3DExMatch into training, validation, and test sets with proportions of 60% (805 samples), 10% (134 samples), and 30% (404 samples), respectively. We fine-tune GeoTransformer and HST both for 3 epochs on the training set and then evaluate on the test set. The results are as follows:
|Estimator|Method|RR(\%)|RRE($^\circ$)|RTE(m)|
|---|---|:---:|:---:|:---:|
|LGR|GeoTrans|53.8|3.73|0.107|
||HST|59.0|3.79|0.106|
|RANSAC-250|GeoTrans|47.0|4.39|0.121|
||HST|49.8|4.43|0.119|
|RANSAC-500|GeoTrans|49.9|4.55|0.113|
||HST|52.6|4.03|0.117|
|RANSAC-1000|GeoTrans|52.4|4.03|0.117|
||HST|57.5|4.13|0.115|
|RANSAC-2500|GeoTrans|55.1|3.89|0.116|
||HST|59.0|3.89|0.112|
|RANSAC-5000|GeoTrans|56.1|4.07|0.116|
||HST|60.3|3.93|0.111|
**Response to Question 3**: Thank you for your feedback on Eq. (2). We apologize for the typo and for any misunderstandings it may have caused.
The purpose of Eq. (2) is to find the k-NN of any given point $x_i^{l}$ in its next layer to form the local patch. We provide the following explanations in hopes of resolving your confusion.
(1) We mistakenly used $argmax$ instead of $argmin$ operator in the formula. We should use the $argmin$ on $K$ to find the k nearest points to form the local patch.
(2) Our initial intention is to use a more mathematical $argmin$ operator to represent the k-NN search instead of directly using $kNN(\cdot)$. However, such an expression might be hard to understand and could lead to ambiguities, such as the definition of the set $S$ in the formula. Therefore, we take a compromise approach by replacing the $argmin$ with the $argtopk$ operator to make it more intuitive.
The following is the revised Eq. (2) and will be included in the revised version.
Given $\mathbf{x}_i^{(l)}$, its local patch $\mathbf{P}_i^{(l+1)}$ is defined as:
$$
\mathbf{P}_i^{(l+1)}=\operatorname{argtopk} _{\mathbf{x}_j^{(l+1)}\in\mathbf{X}^{(l+1)}}(-||\mathbf{x}_i^{(l)},\mathbf{x}_j^{(l+1)}||_2)
$$
**Lack of ablation on top-q**: We apologize for the lack of ablation on top-q for overlap points filtering. The performance of HST is not sensitive to the selection of q. The RR of varying q can be found in the following table.
|Top-q|5|10|15|20|25|30|35|40|45|50|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|3DMatch|91.2|92.4|92.6|92.9|92.5|92.7|93.1|92.5|91.5|91.8|
|3DLoMatch|75.4|76.5|76.8|77.2|77.6|77.1|77.3|77.3|76.8|76.4|
**Citing other overlap score-guided Sinkhorn method**: Thank you for your suggestion. We will cite OCFNet in our revised version.
---
Rebuttal Comment 1.1:
Title: Post-Rebuttal
Comment: I appreciate the authors' clarifications. My concerns have been addressed.
---
Reply to Comment 1.1.1:
Comment: Thank you for confirming that your concerns have been addressed. Your insightful comments have been instrumental in enhancing the clarity and quality of our work.
Thank you once again for your constructive feedback and positive evaluation. | Summary: This paper studies the problem of correspondence retrieval for point cloud registration. To this end, this paper proposes the Hierarchical Sinkhorn Tree, which is a pruned tree structure designed to hierarchically measure the local consistency of each coarse correspondences. To validate the proposed methods, the authors conducted experiments on both indoor and outdoor benchmarks.
Strengths: 1. This paper is well-written, well-organized, and easy to follow;
2. The idea of introducing tree structure into a coarse-to-fine mechanism is somehow interesting.
Weaknesses: 1. Although incorporating a tree structure into a coarse-to-fine mechanism is interesting, the idea sounds more like repeating the coarse-to-fine matching strategy used by CoFiNet and GeoTransformer. I suggest emphasizing this core contribution in the rebuttal phase;
2. In the main comparison (Tab. 1), the proposed method does not significantly outperform existing methods, which limits the value of the model.
3. As the proposed model leverages a tree structure to repeat the coarse-to-fine procedure, the efficiency (in terms of running speed) is a major concern. The authors should include related comparisons with state-of-the-art approaches.
Technical Quality: 2
Clarity: 3
Questions for Authors: See the weaknesses part.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Limitations have been discussed in the Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments and feedback. We hope that our responses can address your concerns.
**Response to Weakness 1**: Thank you for your valuable comments. First, we would like to emphasize that modeling multi-scale consistency (MSC) is non-trivial, even with the help of a coarse-to-fine strategy for modeling the neighborhood. We understand your concern that our method might simply be applying the coarse-to-fine procedure across multiple scales. However, our proposed method is not simply a repetition of the coarse-to-fine strategy, as merely repeating such an operation would result in an $O(n^l)$ complexity, where $l$ is the number of scales. This level of complexity is unacceptable, posing a major challenge in practical applications. Additionally, this naive approach could introduce a large number of non-overlapping noisy points, negatively impacting the model's robustness and performance, and could even degrade the quality of the correspondence set compared to the original.
To address these issues, we proposed a series of designs to improve the model's feasibility and performance, which distinguishes it from the mentioned models. First, we design a lightweight Patch Overlap Prediction module to efficiently predict overlap. Next, we propose Overlap-aware Sinkhorn Distance with Overlap Points Filtering and Overlap-aware Marginal Prior to effectively retain the most likely overlapping points based on the prediction, filtering out a large number of potential non-overlapping points. Finally, we apply the above methods with local exploration layer by layer to prune the K-ary tree rooted at the superpoint, thereby constructing the HST to model the MSC of correspondences. Our extensive experiments on both indoor and outdoor benchmarks have demonstrated the scene-agnostic effectiveness of our proposed method.
Finally, we would like to emphasize our key contributions as follows: 1) To the best of our knowledge, our work is the first to introduce MSC into point cloud registration tasks to mitigate the effects of low overlap and high noise. 2) We propose a method for modeling MSC called HST, which introduces a pruned tree structure to efficiently characterize the similarity of potential overlapping points in the vicinity areas layer by layer and aggregates them into MSC. 3) We introduce an overlap-aware Sinkhorn Distance with Overlap Points Filtering and Overlap-aware Marginal Prior to focus optimal transport processes only on potential overlapping points, significantly enhancing the robustness of consistency calculations while reducing solution complexity.
The above discussions will be included in the revised version.
**Response to Weakness 2**: Thank you for your comments regarding the comparative performance. As mentioned by other reviewers and shown in Tab 1, HST gains significant improvements over the state-of-the-art methods on both indoor and outdoor benchmarks. The improvements are even greater on 3DLoMatch, where point cloud pairs share fewer overlaps, suggesting that our model demonstrates better robustness and performance under low-overlap, high-noise scenarios. To validate this, we further design a set of experiments targeting overlap ratios of less than 10% and compare the performance with GeoTransformer. Please note that the currently available preprocessed datasets, 3DMatch (overlap > 30%) and 3DLoMatch (10% < overlap < 30%), do not include samples with such extremely low overlap (overlap<10%). Therefore, we access the 3DMatch raw dataset and collect a set of point cloud pairs with overlap < 10%, which we refer to as 3DExtremeLoMatch (3DExMatch) dataset, containing a total of 1,343 samples. We first test the model pre-trained on 3DMatch directly on 3DExMatch, and the results can be found in the following table:
|Estimator|Method|RR(\%)|RRE($^\circ$)|RTE(m)|
|---|---|:---:|:---:|:---:|
|LGR|GeoTrans|29.8|3.83|0.110|
||HST|32.5|3.78|0.111|
|RANSAC-250|GeoTrans|28.8|4.56|0.123|
||HST|31.5|4.42|0.122|
|RANSAC-500|GeoTrans|30.5|4.25|0.121|
||HST|32.5|4.16|0.118|
|RANSAC-1000|GeoTrans|31.7|4.23|0.118|
||HST|33.6|4.03|0.114|
|RANSAC-2500|GeoTrans|31.9|4.09|0.116|
||HST|34.7|3.81|0.111|
|RANSAC-5000|GeoTrans|31.0|4.04|0.117|
||HST|34.4|3.87|0.114|
We then randomly divide the 3DExMatch into training, validation, and test sets with proportions of 60% (805 samples), 10% (134 samples), and 30% (404 samples), respectively. We fine-tune GeoTransformer and HST both for 3 epochs on the training set, and then evaluate the models on the test set. The results can be found in the following table. Both results clearly demonstrate that HST can significantly outperform other methods even under low overlap conditions, further confirming the effectiveness of our method.
|Estimator|Method|RR(\%)|RRE($^\circ$)|RTE(m)|
|---|---|:---:|:---:|:---:|
|LGR|GeoTrans|53.8|3.73|0.107|
||HST|59.0|3.79|0.106|
|RANSAC-250|GeoTrans|47.0|4.39|0.121|
||HST|49.8|4.43|0.119|
|RANSAC-500|GeoTrans|49.9|4.55|0.113|
||HST|52.6|4.03|0.117|
|RANSAC-1000|GeoTrans|52.4|4.03|0.117|
||HST|57.5|4.13|0.115|
|RANSAC-2500|GeoTrans|55.1|3.89|0.116|
||HST|59.0|3.89|0.112|
|RANSAC-5000|GeoTrans|56.1|4.07|0.116|
||HST|60.3|3.93|0.111|
**Response to Weakness 3**: Thank you for your insightful feedback regarding the efficiency of our proposed method. We share your concern on this matter. In fact, we have compared the running time of our method with state-of-the-art approaches under different estimators in the manuscript. Please refer to the last column of Table 2. The results indicate that our method achieves significant performance improvements with only a slight, acceptable increase in latency due to the introduction of new modules. We will highlight this comparison more prominently in the revised version.
Thank you again for your valuable feedback. We hope our responses address your concerns.
---
Rebuttal Comment 1.1:
Comment: Thanks the authors for the response. I think at this moment, most of my concerns have been addressed. I am leaning towards the positive side.
---
Reply to Comment 1.1.1:
Comment: Thank you for your encouraging feedback and for acknowledging the revisions and responses we have provided. We are glad to hear that our explanations have addressed your concerns and would like to extend our sincere thanks for the upgrade in your evaluation score. We will add those revisions to the revised version.
Thank you once again for your thoughtful and constructive review. | Summary: This paper introduces the Hierarchical Sinkhorn Tree (HST) for reliable correspondence identification in point cloud registration. The core idea is to hierarchically evaluate the local consistency of each correspondence at multiple feature scales using Sinkhorn distance, thereby filtering out the locally dissimilar correspondences. Specifically, Local Exploration is first employed to extract local patches at each correspondence’s next decoder layer. Then, the overlap-aware Sinkhorn distance is used to evaluate the patch differences, filtering out the non-overlapping local patches. The filtered overlapping points are further utilized for additional local exploration and overlap Sinkhorn distance measurements. Finally, the consistency measures across all scales are aggregated for robust transformation estimation. Extensive experiments on public benchmark datasets verify the effectiveness of the proposed method.
Strengths: (1) The authors’ introduction of the concept of multi-scale consistency (MSC) for robust registration is both interesting and promising. Developing a Hierarchical Sinkhorn Tree to model MSC is also a novel approach that achieves significant performance gains.
(2) Overall, this manuscript is well-structured and clearly written. The authors have effectively organized their ideas, making the content easy to follow and understand.
(3) The authors provide sufficient comparisons and ablation studies to verify the effectiveness of the proposed mechanism.
Weaknesses: (1) Could you please explain Equation (2)? I understand that you aim to gather the k-nearest neighbor points from the next layer to form the local patch X_i^{(l)} . However, I find it difficult to grasp this idea from Equation (2). For instance, what does \|x_i^l, x_j^{l+1}\| represent? Do you mean the distance between x_i^l and x_j^{l+1} ? If so, how does applying the argmin operator on K yield the desired local patch? It’s confusing and not intuitive.
(2) Some relevant outlier rejection-focused registration methods should be cited, such as:
[1] Robust Outlier Rejection for 3D Registration with Variational Bayes. CVPR’2023
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We first would like to thank the reviewer for providing us valuable comments and suggestions.
**Response to Weakness 1**: Thank you for your question regarding Eq. (2) about the k-NN local exploration part. We apologize for the typo in Eq. (2) and for any misunderstandings it may have caused.
You are correct in your understanding of Eq. (2). Its purpose is to search for the k-NN of any given point $x_i^{l}$ from the $l$-th layer in the next layer to form the corresponding local patch. We provide the following explanations in hopes of resolving your confusion.
(1) The term $||x_i^l, x_j^{l+1} ||$ represents the Euclidean distance between the given $l$-th layer's point $x_i^l$ and the $(l+1)$-th layer's point $x_j^{l+1}$. So the operator $||\cdot||$ we used here is actually the L2-norm of coordinates' difference. We will add a subscript 2 to the $||\cdot||$ operator to avoid any potential misunderstandings, i.e., revise $|| x_i^l, x_j^{l+1} ||$ to $|| x_i^l, x_j^{l+1} ||_2$.
(2) We mistakenly used $argmax$ instead of $argmin$ operator in the formula. The set $K$ actually represents the collection of Euclidean distances between all points in $(l+1)$-th layer and the given point in $l$-th layer. We should use the $argmin$ operator on $K$ to search for the k nearest points to form the local patch.
(3) Our initial intention is to use a more mathematical $argmin$ operator to represent the k-NN search instead of directly using $kNN(\cdot)$. However, such an expression might be difficult to understand and could lead to ambiguities, such as the definition of the set $S$ in the formula. Therefore, we take a compromise approach by replacing the $argmin$ operator with the $argtopk$ operator to make it more intuitive and easier to understand.
The following is the revised formula and will be included in the revised version.
Given $\mathbf{x}_i^{(l)}$, its local patch $\mathbf{P}_i^{(l+1)}$ is defined as:
$$
\mathbf{P}_i^{(l+1)}=\operatorname{argtopk} _{\mathbf{x}_j^{(l+1)}\in\mathbf{X}^{(l+1)}}(-||\mathbf{x}_i^{(l)},\mathbf{x}_j^{(l+1)}||_2).
$$
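To make the revised formula concrete, here is a small NumPy sketch (illustrative only; function and variable names are ours, not the authors') of the argtopk/k-NN local patch extraction it describes:

```python
import numpy as np

def local_patch(x_i, X_next, k):
    """Local patch of x_i: the k points of the next layer X_next that are
    closest in L2 distance (i.e., the argtopk of the negated distances)."""
    d = np.linalg.norm(X_next - x_i, axis=1)   # Euclidean distances
    idx = np.argsort(d)[:k]                    # indices of the k smallest
    return X_next[idx]

x = np.array([0.0, 0.0, 0.0])
X_next = np.array([[1.0, 0, 0], [0, 2, 0], [0, 0, 3], [4, 0, 0], [0.5, 0, 0]])
patch = local_patch(x, X_next, k=2)
# nearest two points: (0.5, 0, 0) then (1, 0, 0)
```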
**Response to Weakness 2**: Thanks for your suggestion about citing relevant outlier rejection-focused registration methods. We will cite approaches like GC-RANSAC [1], MAC [2], VBReg [3], FastMAC [4], etc., and summarize them as fine-level outlier removal methods in our revised version.
[1] Barath, Daniel, and Jiří Matas. "Graph-cut RANSAC." Proceedings of the IEEE conference on computer vision and pattern recognition. 2018.
[2] Zhang, Xiyu, et al. "3D registration with maximal cliques." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[3] Jiang, Haobo, et al. "Robust outlier rejection for 3d registration with variational bayes." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023.
[4] Zhang, Yifei, et al. "FastMAC: Stochastic Spectral Sampling of Correspondence Graph." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. | Rebuttal 1:
Rebuttal: We first would like to thank all the reviewers for providing insightful comments and we are immensely grateful for your thorough feedback on our manuscript. It is encouraging that the reviews found
* Our paper is well-written
- "The authors have effectively organized their ideas, making the content easy to follow and understand." - Reviewer AXAW
- "This paper is well-written, well-organized, and easy to follow" - Reviewer Sd2B
* Our paper presents a novel and interesting idea
- "The authors’ introduction of the concept of multi-scale consistency (MSC) for robust registration is both interesting and promising. Developing a Hierarchical Sinkhorn Tree to model MSC is also a novel approach that achieves significant performance gains." - Reviewer AXAW
- "The idea of introducing tree structure into a coarse-to-fine mechanism is somehow interesting." - Reviewer Sd2B
- "This approach is intriguing to me" - Reviewer WyrU
* Experiments show superior performance and verify the effectiveness of the proposed method
- "The authors provide sufficient comparisons and ablation studies to verify the effectiveness of the proposed mechanism." - Reviewer AXAW
- "The numerical results achieve state-of-the-art performance on both indoor and outdoor benchmarks." - Reviewer WyrU
- "The proposed method is evaluated on two benchmarks, and the results show superior performance over existing SOTA methods in terms of success rate and registration accuracy." - Reviewer hZ2Q
---
We have carefully read all the comments and responded to them in detail. All of those will be addressed in the final version.
We summarize the main concerns of the reviews with the corresponding response as follows.
1. **About the performance under extremely low overlap.**
To test the robustness of HST when facing extremely low overlap, we design a set of experiments targeting overlap ratios of less than 10% and compare the performance with GeoTransformer using different estimators. However, we found that currently available preprocessed datasets, 3DMatch (overlap > 30%) and 3DLoMatch (10% < overlap < 30%), do not include samples with such extremely low overlap (overlap<10%). Consequently, we accessed the raw 3DMatch dataset and collected a new set of point cloud pairs with overlap < 10%, which we refer to as the 3DExtremeLoMatch (3DExMatch) dataset, comprising a total of 1,343 samples. We conducted two types of experiments: one where we directly evaluated the pre-trained model and another where we evaluated the model after fine-tuning on the partitioned 3DExMatch dataset. The empirical results demonstrate that our proposed HST outperforms GeoTransformer in both settings, particularly after fine-tuning. This indicates that, even under extremely low overlap conditions, our method does not fail due to a lack of matches or excessive mismatches. Instead, by incorporating MSC modeling, HST can form a higher-quality correspondence set, resulting in improved performance under severe conditions and less performance degradation compared to other methods.
2. **About the comparison with outlier rejection-based methods.**
We have conducted experiments comparing HST with outlier rejection-based methods to gain more insights into handling outliers. For a fair comparison, we replace HST directly with the state-of-the-art approaches GC-RANSAC, MAC, and FastMAC, and evaluate them on both 3DMatch and 3DLoMatch datasets. All three methods, along with HST, showed performance improvements compared to the vanilla GeoTransformer, with HST demonstrating the most significant enhancement. This suggests that when the quality of the correspondence set is high, the registration process benefits from outlier rejection methods. However, on 3DLoMatch, the improvements from these three methods were less pronounced, and GC-RANSAC even showed a potential negative impact. In contrast, HST maintained better robustness, highlighting its superior effectiveness over previous outlier rejection methods in scenarios with low overlap and high noise. This further validates the efficacy of our MSC modeling in handling outliers from coarse correspondences.
Thanks again for your efforts in the review. We appreciate all the valuable feedback that helped us to improve our submission. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Prototypical Hash Encoding for On-the-Fly Fine-Grained Category Discovery | Accept (poster) | Summary: This paper proposes a prototypical deep hashing framework to address the fine-grained on-the-fly category discovery problem. The proposed method includes two main loss functions: first, distance minimization between the encoded hash features to the category-representative hash coding after projection $\mathcal{H}_h$; second, enforcing minimal distance separation among hash encoding of different categories, which are enforced to be quantized to -1 and +1 for each dimension. The experimental results demonstrate the effectiveness of each loss function. The performance gains are considerable on all benchmarks compared with previous state-of-the-art baselines.
Strengths: - The performance gains of the proposed approach are considerable.
- The proposed method is robust to the varying coding length.
- The identified ‘sensitivity issue’ problem exists for on-the-fly category discovery.
Weaknesses: - Writing: It is informal to leave all the related works in the appendix, which will confuse the readers on the contribution of this work. Besides, some expressions are not clear.
- Novelty: The technical contribution is limited in that the prototypical learning [3] is a mature practice in category discovery and the author seems to adapt the deep hashing method in [4] to this problem.
- Motivation: The motivation of achieving the balance between instance discrimination and class discrimination, especially with prototypical learning in the category discovery field is not new [2,3]. However, this is accepted to some extent since on-the-fly category discovery is a new problem.
Technical Quality: 3
Clarity: 3
Questions for Authors: - It would be clearer if the author could mention the technical practice of prototype-based visualization examples, though it may be proposed by the previous work, ProtopFormer.
- In Table 3, how will the semi-supervised contrastive loss perform [1] in addition to the supcon and unsupcon baselines? These results matter to solidify the motivation for using the deep hashing framework.
[1] Generalized Category Discovery
[2] PromptCAL: Contrastive Affinity Learning via Auxiliary Prompts for Generalized Novel Category Discovery
[3] Parametric Classification for Generalized Category Discovery: A Baseline Study
[4] Deep hashing with minimal-distance-separated hash centers
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: As mentioned by the author, on-the-fly category discovery is a novel and challenging problem, therefore existing methods do not achieve satisfying performance on real-world tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer’s recognition of our method’s good performance and its robustness to varying coding lengths (sensitivity issue). We value the reviewer’s insightful comments and will incorporate these into our final revision.
**Q1: Writing**. Thanks for your constructive suggestions. We will **move the related works section to the main text**, and carefully check and improve the expressions in the revision.
**Q2: Technical contribution**.
We argue that our contribution mainly lies in identifying and effectively mitigating the high sensitivity issue existing in current hash-based OCD methods, which was acknowledged by Reviewers 6xA4 and Pa9u as well. We provide elaborated discussions as follows.
**1. Compared to prototype learning method (SimGCD [1])**, our PHE has two main advantages. **First**, unlike SimGCD that learns only one prototype for one category, our PHE generates multiple prototypes for one class, which is favorable for modeling intra-class variance of fine-grained categories. Empirically, we have supplemented experiments with the prototype learning method used in SimGCD to validate the superiority of our methods as shown in the table below.
| Dataset | | CUB | | | SCars | |
| ------------- | -------- | -------- | -------- | -------- | -------- | -------- |
| Method | All | Old | New | All | Old | New |
| SimGCD+PHE | 25.0 | 49.9 | 12.6 | 21.9 | 38.5 | 13.9 |
| SimGCD-MC+PHE | 34.1 | **60.6** | 20.8 | 30.3 | **65.9** | 13.0 |
| PHE (ours) | **36.4** | 55.8 | **27.0** | **31.3** | 61.9 | **16.8** |
Specifically, due to lack of unlabeled data, we removed the $L_{cls}^u$ and the prototypes corresponding to new categories in SimGCD. In the table, "SimGCD+PHE" indicates that we used SimGCD for prototype learning while also mapping the prototypes learned by SimGCD to hash centers for category encoding. This approach yielded very poor results on both datasets. "SimGCD-MC+PHE" refers to the use of manually obtained centers that satisfy the Gilbert-Varshamov bound and features from the SimGCD projection head for category encoding. Compared to mapping SimGCD’s prototypes to hash centers, the SimGCD-MC+PHE variant shows an average improvement of 8.8% on two datasets, demonstrating that the prototypes learned by SimGCD are not suitable for category encoding in fine-grained OCD scenarios.
**Second**, unlike the prototypes in SimGCD, which are classifier weights, the learned prototypes in PHE can be explicitly visualized, providing an additional perspective for interpreting the model's behavior, as illustrated in Fig. 3 of the main text.
**2. Compared with deep hash method (MDSH[2]),** our PHE has two main differences: **First**, unlike MDSH that pre-calculates hash centers and fixes these centers during training, our hash centers are derived by mapping from category prototypes and then are updated by end-to-end optimization, which better preserves the relationships between fine-grained categories learned in the prototype feature space. **Second**, we design an additional Hamming ball-based inference tailor-made for OCD, which effectively mitigates the sensitivity issues associated with using hash codes. Furthermore, we have supplemented experiments and the results in the following table verify the superiority of our PHE. We also provide the results of more hash-based baselines in **Q1** of Reviewer **6xA4.**
| Datasets | | CUB | | | SCars | |
| --------- | -------- | -------- | -------- | -------- | -------- | -------- |
| Methods | All | Old | New | All | Old | New |
| MDSH[2] | 34.3 | **57.6** | 22.8 | 28.8 | 60.2 | 13.7 |
| PHE (ours) | **36.4** | 55.8 | **27.0** | **31.3** | **61.9** | **16.8** |
**Q3: Motivation of instance discrimination and class discrimination**
We agree that both OCD and GCD [3] need to learn good representations with “optimal balance between instance discrimination and class discrimination”. However, our primary motivation comes from the identification of the high sensitivity issue caused by applying hash codes in OCD tasks, which significantly hinders the effectiveness of the current hash-based OCD method. Motivated by this, we present the specific prototype design, loss implementation and Hamming-ball-based inference, to mitigate the high-sensitivity issue.
**Q4: Technical practice of prototype-based visualization examples**. The process mainly consists of three steps: 1) Input an image to obtain its feature representation, while also capturing the attention map during forward propagation. 2) Select the top-k most similar (activated) prototypes from all prototypes. 3) Visualize the original samples/images corresponding to these prototypes. We will include detailed examples and process descriptions in the revision.
**Q5: How will the semi-supervised contrastive loss (SSCL) perform?** Firstly, we'd like to clarify that in the OCD setting, only labeled data of known classes is available for model training, and unlabeled data appears only in an on-the-fly format during testing. Thus, the SSCL used in GCD [3] is incompatible with OCD. As for the experiments in Tab. 3, we did not in fact include experiments with "supcon and unsupcon baselines"; “Supcon Cls” represents the use of a classification method based on supervised contrastive learning for representation learning. According to the results, we find that although this variant performs well on seen categories, its generalization capability is inferior to that of our full prototype-based method.
[1] Parametric Classification for Generalized Category Discovery: A Baseline Study. ICCV 2023
[2] Deep Hashing with Minimal-Distance-Separated Hash Centers. CVPR 2023.
[3] Generalized Category Discovery. CVPR 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. Most of my questions are well addressed. Although the author clarified their contributions and differences compared with previous works, my concern about the technical novelty and contribution of this work has been partly addressed. After rebalancing its weaknesses and strengths, I decide to raise my score by one.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer dwc8,
We greatly appreciate your satisfaction with our responses, and very glad you increase the rating! We will add the above important discussions in the final manuscript and highlight them.
Thanks again for your valuable suggestions and comments. We enjoy communicating with you and appreciate your efforts! | Summary: This paper introduces a novel framework called Prototypical Hash Encoding (PHE) for On-the-fly Category Discovery (OCD), which aims to discover both known and unknown categories from streaming data using labeled data of known categories. PHE first learns many prototypes for each category and then maps the learned prototypes to hash codes to distinguish samples from known or novel categories with a threshold.
Strengths: 1. The paper is well-written and easy to follow.
2. The proposed method addresses the limitations of previous hash-based OCD models by reducing sensitivity and preserving discriminative information through prototypes.
3. The proposed method achieves significant improvements in accuracy across various datasets.
Weaknesses: 1. The proposed method is an improvement from the previous work [1], sharing the same core idea of utilizing hash codes for OCD, which limits the novelty of the paper.
2. Despite the improved performance, the reasons why prototypes can address the sensitivity issue are not analyzed in depth.
3. Compared with feature-level prototypes, the advantages of hash code-based category prototypes have not been demonstrated or experimentally verified.
[1] Ruoyi Du, Dongliang Chang, Kongming Liang, Timothy Hospedales, Yi-Zhe Song, and Zhanyu Ma. On-the-fly category discovery. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11691–11700, 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Compared with feature-level prototypes or prototypes after dimensionality reduction, what are the advantages of hash code-based prototypes?
2. How is the training efficiency of the model compared to SMILE?
3. How are hyperparameters selected without the validation set?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer’s recognition of our motivation to address the limitations of previous OCD models. We value the reviewer’s insightful comments and will incorporate these into our final revision.
**Q1: Sharing the same core idea of utilizing hash codes for OCD**. We indeed use hash codes as category descriptors, which are also widely used in image retrieval and deep hashing methods. However, we argue that our innovation lies in: 1. Identifying the high sensitivity issue when applying hash codes to the challenging fine-grained OCD task, where only data from known categories is available. 2. We have introduced a new OCD framework, PHE, that explicitly achieves inter-class separability and intra-class compactness, compared to the SOTA method SMILE. 3. Unlike SMILE, we designed an on-the-fly inference method based on the Hamming ball. Our PHE method effectively mitigates the high sensitivity problem, for example, surpassing SMILE by 15.5% in terms of all accuracy on the CUB dataset with hash code bits = 64. This point has also been acknowledged by Reviewers 6xA4 and Pa9u.
**Q2: The reasons why prototypes can address the sensitivity issue**.
Thanks for your insightful comments. We'd like to explain as follows.
1. In fact, rather than depending on the prototypes themselves, we mitigate the sensitivity issue by 1) constraining the hash centers (mapped from feature-level prototypes) to be at least a Hamming distance of $d_{max}$ apart based on the Gilbert-Varshamov bound; 2) representing a category using a Hamming ball of radius $\max(\lfloor \frac{d_{max}}{2} \rfloor, 1)$.
2. The role of prototypes is to: 1) achieve better representation learning; 2) map the category prototypes to category hash centers, unifying representation learning with hash encoding. The importance of prototype learning is evident from the variant without prototype learning loss $L_f$ in Tab.2 of the main paper. Removing the prototype learning loss $L_f$ results in an obvious accuracy drop on datasets in terms of all, old and new accuracy.
We will clarify this point more clearly in the introduction.
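The Hamming-ball inference described above can be sketched as a toy example. The center values, code length, and all function names below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def hamming_dist(a, b):
    # For +/-1 codes of length L, agreeing bits add +1 and disagreeing
    # bits add -1 to the dot product, so d = (L - a.b) / 2.
    return (a.shape[-1] - a @ b.T) / 2

# Toy hash centers for 3 known categories (code length 8), chosen so
# every pair is at least d_max = 4 apart, as the Gilbert-Varshamov
# bound argument requires.
centers = np.array([
    [ 1,  1,  1,  1,  1,  1,  1,  1],
    [-1, -1, -1, -1,  1,  1,  1,  1],
    [-1,  1, -1,  1, -1,  1, -1,  1],
])
d_max = 4
radius = max(d_max // 2, 1)  # Hamming-ball radius from the rebuttal

def assign(code):
    """Return the index of the matched known category, or -1 (new)."""
    d = hamming_dist(code, centers)  # distances to all centers
    i = int(np.argmin(d))
    return i if d[i] <= radius else -1

query_known = np.array([1, 1, 1, -1, 1, 1, 1, 1])    # 1 bit off center 0
query_new = np.array([1, -1, 1, -1, -1, -1, 1, -1])  # far from all centers
print(assign(query_known))  # -> 0 (inside the Hamming ball of center 0)
print(assign(query_new))    # -> -1 (flagged as a new category)
```

A query is thus matched on the fly, with no clustering step: it either falls inside exactly one category's Hamming ball or is declared new.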
**Q3: Feature-level prototypes vs. Hash code-based prototypes**
1. **Experimental results with a powerful feature-level prototype learning method.**
Firstly, we would like to further explain that in the OCD setting, test data appear one by one, and the OCD task requires the model to instantly assign a category to each test sample. Therefore, if only feature-level prototype learning is used, the method needs an additional online clustering approach to yield real-time category descriptors, as also acknowledged in the original OCD paper [1]. Thus, we adapt SimGCD [2] (an advanced prototype learning approach for GCD, also acknowledged by Reviewer dwc8) to the OCD setting and implement two variants: 1) SimGCD combined with the online clustering approach SLC from [1] to achieve instance-wise inference; 2) SimGCD with our hash prototype framework. The experimental results below show that introducing PHE significantly improves the all-class accuracy across two datasets. Meanwhile, our full method achieves better generalization, especially on new categories.
| Dataset | | CUB | | | Scars | |
| ----------------------------------------------- | -------- | -------- | -------- | -------- | -------- | -------- |
| Method | All | Old | New | All | Old | New |
| SimGCD [2] (only feature-level prototype learning) | 27.1 | 50.7 | 15.3 | 21.5 | 39.9 | 12.5 |
| SimGCD [2] + hash code-based prototype learning | 34.1 | **60.6** | 20.8 | 30.3 | **65.9** | 13.0 |
| PHE (ours) | **36.4** | 55.8 | **27.0** | **31.3** | 61.9 | **16.8** |
2. **Advantages of hash code-based features.** Hash code-based features can directly use the features’ signs (hash codes) as category descriptors, where features with the same category descriptor are considered to be of the same category. Therefore, compared to feature-level prototype learning, introducing a hash code-based design can directly optimize category descriptors, achieving higher accuracy in the OCD setting. According to the results in Table 1 of the main paper, where only feature-level prototype learning results (SLC) and hash code-based prototypes learning (our PHE) are compared, introducing hash code-based prototypes improved the average accuracy by 8.2% across eight datasets.
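The sign-based category descriptor described above can be illustrated with a short sketch; the feature values are made up purely for illustration:

```python
import numpy as np

# Made-up continuous features for four test samples (dim = 4).
feats = np.array([
    [ 0.9,  0.2, -0.4,  1.1],
    [ 0.7,  0.1, -0.9,  0.3],   # same sign pattern as sample 0
    [-0.5,  0.8,  0.2, -1.0],
    [-0.1,  0.6,  0.9, -0.2],   # same sign pattern as sample 2
])

codes = np.where(feats > 0, 1, -1)   # signs of the features = hash codes
descs = [tuple(c) for c in codes]    # hash code as a category descriptor

# Samples sharing an identical descriptor fall into the same category,
# with no online clustering step needed at inference time.
print(descs[0] == descs[1])  # -> True (same signs, same category)
print(descs[0] == descs[2])  # -> False (different category)
```

This is what makes the descriptor directly optimizable: training losses can shape the signs themselves rather than relying on post-hoc clustering of continuous features.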
**Q4: Training efficiency**. We provided a comparison of training times between our PHE and the SOTA method, SMILE, measured in minutes, as shown in the table below. To ensure fairness, all experiments were conducted on an NVIDIA RTX A6000 GPU. Both algorithms were trained for 200 epochs using mixed precision training. The dataloader parameters were consistent, with a batch size of 128 and num_workers set to 8. According to the table, our average training time across four datasets is less by 45.8 minutes compared to SMILE. This is primarily due to SMILE’s use of supervised contrastive learning with two views of samples for representation learning, which requires higher computational resources.
| Method | CUB | Scars | Food | Pets |
| --------- | ------ | ------ | ------ | ----- |
| SMILE | 127.70 | 177.54 | 819.93 | 80.37 |
| PHE (ours) | 100.22 | 161.37 | 691.39 | 69.48 |
**Q5: Hyperparameter selection.** To avoid over-tuning hyperparameters, we use a fixed set of hyperparameters, obtained based on CUB, to report accuracy for all datasets, without tuning them individually for each dataset.
If we have misunderstood any of your concerns, please let us know in the future comments.
[1] On-the-fly category discovery. CVPR 2023.
[2] Parametric Classification for Generalized Category Discovery: A Baseline Study. ICCV 2023
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply, my concerns and questions have been partly addressed. I will raise my score by one.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer ZQiT,
Thank you for your positive feedback and for increasing the rating! We will include the important discussions you mentioned in the final manuscript.
We truly enjoy our interactions and greatly appreciate all your efforts! | Summary: This paper addresses the On-the-fly Category Discovery (OCD) task, which involves utilizing existing category knowledge to recognize both known and unknown categories in new data streams in real-time. To tackle the high sensitivity and suboptimal feature representation issues of existing methods when dealing with fine-grained categories, the paper proposes an innovative Prototypical Hash Encoding (PHE) framework. This framework uses a Category-aware Prototype Generation (CPG) module to represent each fine-grained category with multiple prototypes and employs a probabilistic masking strategy to encourage the model to fully capture intra-class diversity. The Discriminative Category Encoding (DCE) module maps the generated category prototypes to low-dimensional hash centers, optimizing image hash features to ensure intra-class compactness and inter-class separation. Additionally, a center separation loss function is designed to maintain a minimum Hamming distance between different category hash centers. Experimental results on multiple datasets confirm the superiority of this method.
Strengths: The proposed PHE method demonstrates its superiority over existing state-of-the-art methods across multiple datasets. It effectively addresses the issues of large intra-class variance and small inter-class variance, while minimizing the information loss associated with dimensionality reduction. The paper employs visualization techniques to analyze the underlying mechanism by which PHE groups samples into known or unknown categories. Moreover, due to its optimization based on hash centers and inference process based on Hamming balls, the PHE method shows stable performance across different hash code lengths, effectively mitigating the "high sensitivity" problem. In contrast to the SMILE method, which exhibits significant accuracy degradation and instability with increasing hash code lengths, the PHE method maintains remarkable stability and consistency.
Weaknesses: Although the proposed Prototypical Hash Encoding (PHE) framework outperforms existing methods in terms of performance, further research is needed to improve its accuracy in recognizing unknown categories. Additionally, due to the need to compute multiple hash centers and perform complex distance calculations, the computational cost of PHE is relatively high, especially on large-scale datasets.
Technical Quality: 3
Clarity: 3
Questions for Authors: Apart from the issues mentioned in weakness part, there are some additional questions regarding implementation details:
1. The article mentions that to encourage the model to fully capture intra-class diversity, K categories are equally allocated to each class. I would like to know how significant the choice of K is on the model's performance, and if there are experiments demonstrating the impact of different K values on the model.
2. Similarly, when using the masking strategy, how does the variation in the value of θ affect the training and final performance of the model?
3. What about the performance of different splits of old and new class, further analysis is needed regarding the size of different splits as well as whether the correlation of old and new classes essentially influence the final performance etc. Some in-depth insights are expected.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: As mentioned in the appendix of the manuscript, although the proposed PHE framework outperforms existing methods in terms of performance, further research is needed to improve its accuracy in recognizing unknown categories. Additionally, the PHE method involves multiple hyperparameters that require careful tuning to achieve optimal performance, increasing the complexity of model debugging and optimization. Moreover, the weights of the component loss functions in the total loss function need to be appropriately set to balance the optimization objectives of different components, which may require extensive experiments to find the suitable weight combination.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer’s recognition of our motivation and experimental results in addressing the hash sensitivity issue. We value the reviewer’s insightful comments and will include these into our final revision.
**Q1: Improving accuracy in recognizing unknown categories.**
1. **Limited room for improvement due to the unique challenge of OCD.** Compared with GCD/NCD tasks, OCD tasks **do not** use unlabeled data that might include unknown-category data for model training, which leads to a larger challenge for OCD methods to recognize unknown categories. We discussed the limitation in the main paper. Compared to the SOTA method, SMILE, our PHE achieves better accuracy for unknown categories. On the CUB dataset, our method exceeds SMILE by 4.1%/16% on 12/16 bits, respectively. Thus, as recognized by Reviewer ZQiT and dwc8, such an improvement might be non-trivial for OCD.
2. **A possible solution in future work.** Due to the constraint of training data in OCD setting, we consider introducing additional knowledge from pre-trained Large Language Models (LLMs). Firstly, we can leverage LLMs to establish a bank of category attribute prototypes that are expected to be shared across both known and unknown categories. Then, during the on-the-fly prediction process, we plan to use LLM+VLM to match the attribute prototypes for unknown categories. Finally, by jointly considering the instance and attribute features, we hope that our PHE can generate more accurate predictions.
**Q2: Computational cost for large-scale datasets**.
We agree that the computational cost of PHE is relatively high; however, the computational cost mainly depends on the number of known categories, which is not directly related to dataset scale. Specifically, each known category has a hash center, thus hash centers are represented by a tensor $\mathbf h$ with a shape [num_class, bit]. Main calculations involve simple two-dimensional matrix multiplications, which include the dot product of features $\mathbf{b}$ with a shape [batch_size, bit] and hash centers, $\mathbf{b} * \mathbf{h}$, as well as the dot product operations in the calculation of Hamming distances $\mathbf{h} * \mathbf{h}^\text{T}$. Empirically, we have verified the analyses in the table below. It shows that although the Food dataset is larger in scale than CUB-200 and Scars-198, it contains only 100 categories. Therefore, the average training time per sample for the Food dataset is significantly lower than that for CUB and Scars.
| Dataset | CUB#200 | SCars#198 | Food#101 |
| -------------------------------- | ------- | --------- | -------- |
| number of training samples | 1.5k | 2.0k | 19.1k |
| training time / minute | 100.22 | 161.37 | 691.39 |
| training time per sample / second | 4.01 | 4.84 | 2.17 |
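The two matrix products described above can be sketched roughly as follows; the shapes and variable names are illustrative assumptions, not the authors' code:

```python
import numpy as np

batch_size, bit, num_class = 128, 12, 200  # e.g. CUB with 12-bit codes

# +/-1 hash features for a batch, and one hash center per known class.
b = np.where(np.random.randn(batch_size, bit) > 0, 1.0, -1.0)
h = np.where(np.random.randn(num_class, bit) > 0, 1.0, -1.0)

# The two 2-D matrix products the rebuttal refers to:
sim = b @ h.T            # [batch_size, num_class] feature-center similarity
gram = h @ h.T           # [num_class, num_class] for center separation
ham = (bit - gram) / 2   # pairwise Hamming distances between centers

# Both products scale with num_class and bit, not with dataset size.
print(sim.shape, ham.shape)  # -> (128, 200) (200, 200)
```

This makes the scaling argument concrete: per batch, the cost is O(batch_size × num_class × bit) plus O(num_class² × bit), neither of which grows with the number of training samples.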
**Q3: The impact of different K values on the model**. We regard the "K categories" in the review comment as the number of prototypes per category, $k$. We have detailed the impact of this hyperparameter in Fig. 4 and Sec. 3.4 of the main paper. Specifically, $k=1$ results in suboptimal performance, as a single prototype does not effectively represent the complexity of a category. Using a larger $k$ captures the nuances within a category, which is crucial for fine-grained categories. However, when $k>5$, the improvement is minimal, as validated on two datasets. Therefore, $k$ is not sensitive, and we choose a relatively large value of $k=10$ for all datasets to avoid over-tuning this hyperparameter.
**Q4: The impact of $\theta$ in the masking strategy.** We have added experiments on the two datasets and reported results in the table below. The masking strategy helps reduce redundancy when using multiple prototypes, thus facilitating prototype learning. A smaller $\theta$ value is found to be appropriate; within the range of (0, 0.2), the masking strategy can improve accuracy. When $\theta$ exceeds 0.2, suboptimal results occur on both datasets. We did not fine-tune this parameter for each dataset but instead set it at 0.1 across all datasets.
| Dataset | | CUB | | | SCars | |
| - | - | - | - | - | - | - |
| value of $\theta$ | All | Old | New | All | Old | New |
| 0| 35.5| 53.1| 26.6| 30.7| 60.3| 16.7|
| 0.05| 36.1| 53.6| **27.4** | **31.6** | **63.4** | 16.3|
| 0.1| **36.4** | **55.8** | 27.0| 31.3| 61.9| **16.8** |
| 0.15| 36.0 | 54.3 | 26.9 | 31.4 | 62.2 | 16.6 |
| 0.2| 35.3 | 52.7 | 26.6 | 30.1 | 59.8 | 15.7 |
**Q5: Results of different dataset splits.** Good suggestion! We added experiments using different proportions of old category selection on the CUB and SCars datasets below, with all accuracy reported and code bits=12. Based on the results, when the proportion of selected old categories is 75%/25%, our PHE outperforms SMILE by an average of 5.95%/1.0%. This indicates that our PHE is more capable of modeling the nuanced inter-category relationships in fine-grained OCD, as the number of categories increases. Results across all datasets and more code bits will be supplemented in the revision.
| Method | CUB-25% | CUB-50% | CUB-75% | Scars-25% | Scars-50% | Scars-75% |
| -------------- | --------------- | --------------- | --------------- | ----------------- | ----------------- | ---------------- |
| SMILE | 19.9 | 32.2 | 41.2 | 12.6 | 26.2 | 37.0 |
| PHE (ours) | **21.2** | **36.4** | **46.5** | **13.3** | **31.3** | **43.6** |
**Q6: Hyperparameter selection.** 1) To avoid over-tuning hyperparameters, we use a fixed set of hyperparameters for all datasets (obtained based on CUB) to report accuracy, without tuning them individually for each dataset. 2) The hyperparameters we use are relatively robust within a certain range; in the loss function, the weights are mainly set to balance the differing scales of the component losses.
---
Rebuttal 2:
Comment: The authors have partly addressed my concerns regarding the model's stability and split variation; I'm willing to raise my score by one.
---
Rebuttal 3:
Comment: Dear Reviewer Pa9u,
We greatly appreciate your satisfaction with our responses! We will include the important discussions mentioned above in the final manuscript and highlight them. We truly appreciate your efforts! | Summary: This paper focuses on On-the-fly Category Discovery (OCD), which is to determine novel categories during inference. OCD methods first compute the hash code of the image, this code then becomes a "cluster index", if not matching with existing code, then it is a novel category.
However, the problem with the previous OCD method (SMILE) is that it becomes highly sensitive when the code length increases (difficult to match a new category due to the high possible combinations of hash code).
Instead of contrastive learning in SMILE, this paper uses cross-entropy loss adapted with a minimal hash distance scheme as regularization to improve performance. Further, the paper also adds cross entropy loss in the feature space to avoid information loss when compressing features into hash code, the feature representation used is a prototype-based model (e.g., ProtoPFormer).
Overall, it enhances SMILE by using better loss functions and different representations.
Strengths: originality: the idea itself is clever and simple, proposing/reusing some modules to mitigate the sensitivity problem in SMILE; the Hamming ball-based inference utilizes the benefit of the GV bound.
quality: the proposed method is analyzed with many experiments.
clarity: the paper is well-written.
significance: somewhat significant, as OCD is more practical in open-world scenarios and may help improve retrieval systems.
the one insight I got from this paper is that a hash code alone becomes sensitive as the code length increases, so it cannot easily be used as a cluster index; we need to make the hash codes as compact as possible to reduce sensitivity (i.e., samples of the same class must share the same hash code, by guiding hash codes toward centers), although this has already been shown in multiple deep hashing papers such as CSQ [1], DPN [2], OrthoHash [3], and [4].
[1] Li Yuan et al. Central Similarity Quantization for Efficient Image and Video Retrieval. CVPR 2020.
[2] Lixin Fan et al. Deep Polarized Network for Supervised Learning of Accurate Binary Hashing Codes. IJCAI 2020.
[3] Jiun Tian Hoe et al. One Loss for All: Deep Hashing with a Single Cosine Similarity based Learning Objective. NeurIPS 2021
[4] Liangdao Wang et al. Deep Hashing with Minimal-Distance-Separated Hash Centers. CVPR 2023.
Weaknesses: Comparisons to deep hashing-based methods are missing. Many of the designed components are similar to those in deep hashing; for example, the minimal hash distance formulation is adapted directly from [1], and part of the loss objective is similar to cosine-similarity-based hashing methods like [2]. Another simple baseline is a ProtoPFormer + deep hashing method. From this paper, we do not know whether a naive application of a deep hashing-based method is sufficient, hence we cannot verify the novelty and effectiveness of the proposed method.
[1] Liangdao Wang et al. Deep Hashing with Minimal-Distance-Separated Hash Centers. CVPR 2023.
[2] Hoe et al. One Loss for All: Deep Hashing with a Single Cosine Similarity based Learning Objective. NeurIPS 2021.
Technical Quality: 4
Clarity: 4
Questions for Authors: In terms of the idea, I have no question, I believe the idea is simple and clever use of existing components. However, the lack of comparison to deep hashing methods makes me doubt the novelty and effectiveness of the proposed method/modules.
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 2
Limitations: The authors did mention limitations in the appendix, which I also agree with.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for acknowledging that our idea is clever and simple, and for the positive comments on the writing and experiments.
**Q1: More baseline with deep hash methods**.
Following this helpful suggestion, we have conducted additional comparative experiments with various deep hashing methods on the CUB and SCars datasets. The results are reported and summarized as follows.
| | CUB-12bit | | | CUB-16bit | | | CUB-32bit | | | SCars-12bit | | | SCars-16bit | | | SCars-32bit | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Methods | All | Old | New | All | Old | New | All | Old | New | All | Old | New | All | Old | New | All | Old | New |
| DPN[1] | 22.2 | 38.0 | 14.2 | 17.6 | 24.1 | 14.4 | 12.5 | 11.1 | 13.2 | 18.8 | 36.1 | 10.5 | 14.9 | 25.3 | 9.8 | 12.8 | 20.4 | 9.1 |
| OrthoHash[2] | 30.0 | 49.2 | 20.5 | 25.2 | 42.1 | 16.7 | 13.6 | 24.8 | 8.0 | 19.6 | 37.2 | 11.1 | 15.0 | 24.5 | 10.4 | 13.2 | 20.0 | 9.8 |
| CSQ[3] | - | - | - | 25.2 | 35.3 | 20.1 | 26.1 | 45.3 | 16.5 | - | - | - | 25.4 | 54.8 | 11.2 | 23.1 | 44.1 | 13.0 |
| CSQ-MC | - | - | - | 29.3 | 56.4 | 15.7 | 26.4 | 50.6 | 14.3 | - | - | - | 27.4 | 59.9 | 11.8 | 26.9 | 61.8 | 10.0 |
| MDSH[4] | 34.3 | **57.6** | 22.8 | 31.9 | 52.7 | 21.5 | 27.4 | 40.8 | 20.7 | 28.8 | 60.2 | 13.7 | 27.7 | 53.4 | 15.3 | 25.8 | 47.8 | 15.2 |
| MDSH+Hamming Ball | 35.1 | 55.0 | 25.3 | 35.2 | 53.0 | 26.2 | 35.5 | 47.8 | **29.3** | 29.8 | 56.1 | **17.1** | 29.9 | 57.2 | **16.5** | 29.5 | 56.2 | **16.6** |
| PHE (ours) | **36.4** | 55.8 | **27.0** | **37.6** | **57.4** | **27.6** | **38.5** | **59.9** | 27.8 | **31.3** | **61.9** | 16.8 | **31.8** | **65.4** | 15.6 | **31.5** | **64.0** | 15.8 |
*"MC" represents manually obtained centers that meet the GV bound. "-" means results were unavailable due to the requirements of the Hadamard matrix.
**Summarization of Experimental Results.**
1. **Comparison with methods[1-3] that generate suboptimal hash centers for modeling category information.** Unlike these methods[1-3], which are designed for retrieval tasks and mainly focus on instance-level discrimination, our PHE leverages the GV bound to constrain the learning of hash centers for category discovery. This approach allows the models to be flexible enough to preserve the rich category information contained in feature-level prototypes, while also generating discriminable hash category discriminators. As a result, PHE achieves better results.
2. **Comparison with [4], which optimizes only hash codes to align with pre-calculated hash centers.** Different from [4], which uses fixed hash centers, our hash centers are mapped from the prototypes of each category and then updated by end-to-end optimization, which can alter the relationships between categories learned in the CPG module, yielding better results compared to MDSH. For example, with code bits=12, our PHE surpasses MDSH by an average of 2.3% across the two datasets. Besides the difference in hash center design, our Hamming-ball-based inference design effectively mitigates the hash sensitivity issue. For instance, when code bits=16, MDSH+Hamming Ball exceeds MDSH by an average of 2.75% across the two datasets.
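As an illustration of the Hamming-ball idea (the function names, radius, and centers below are hypothetical, not the paper's actual implementation), inference can assign a query's hash code to a known center only when the code falls inside that center's Hamming ball, which tolerates a few flipped bits:

```python
def hamming(a, b):
    """Hamming distance between two binary codes."""
    return sum(x != y for x, y in zip(a, b))

def assign_category(code, centers, radius):
    """Assign `code` to the nearest known hash center if it lies inside
    that center's Hamming ball of the given radius; otherwise treat it
    as a potentially new category (returned as None). Illustrative only."""
    best = min(centers, key=lambda name: hamming(code, centers[name]))
    return best if hamming(code, centers[best]) <= radius else None

centers = {"cat": (1, 1, 0, 0), "dog": (0, 0, 1, 1)}
near_cat = assign_category((1, 0, 0, 0), centers, radius=1)   # one bit off "cat"
ambiguous = assign_category((1, 0, 1, 0), centers, radius=1)  # outside every ball
```

A single flipped bit no longer changes the predicted category, which is the sensitivity mitigation being measured above.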
**Q2: Comparison with OrthoHash [2] in terms of loss function**.
We agree that both PHE and OrthoHash utilize cosine similarity to constrain hash features, yet there are significant differences in our design motivations and specific implementations:
1. **Differences in design motivation**. OrthoHash focuses on instance retrieval and applies constraints only in the low-dimensional hash space, which tends to lose category-specific information (CSI) that is especially crucial for OCD tasks. In contrast, our hash feature constraint loss $L_f$ and prototype learning loss $L_p$ are applied in the hash space and feature space, respectively. The motivation is to capture CSI as richly as possible in the high-dimensional feature space. We empirically find that this design can effectively mitigate CSI loss when projecting to the hash space.
2. **Differences in implementation**. **a) Hash center generation**. Our hash centers are mapped from category prototypes, which preserves the relationships between categories learned in the prototype feature space. OrthoHash uses a matrix whose row vectors are mutually orthogonal as hash centers, which might hinder modeling the relationship between similar categories in OCD tasks. **b) Cosine similarity**. We compute cosine similarity between hash features and hash centers without sign quantization. In contrast, OrthoHash calculates cosine similarity between continuous hash features and binary hash centers. **c) Update of hash center**. Our centers can be optimized during the training process, while the centers in OrthoHash are fixed.
*Our PHE effectively unifies representation learning and category encoding, achieving better OCD accuracy than OrthoHash. As shown in the table of **Q1**, PHE surpasses OrthoHash by an average of 9% when code bits=12.*
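Point (b) above, computing cosine similarity between continuous hash features and continuous (unquantized) centers, can be sketched as follows (the function names and values are illustrative, not the paper's actual loss):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def center_alignment_loss(hash_feature, center):
    """1 - cosine similarity between a continuous hash feature and a
    continuous (unquantized) center: zero when the directions coincide."""
    return 1.0 - cosine(hash_feature, center)

aligned = center_alignment_loss([1.0, 2.0], [2.0, 4.0])     # ~0.0 (same direction)
orthogonal = center_alignment_loss([1.0, 0.0], [0.0, 1.0])  # 1.0 (unrelated directions)
```

Because no `sign` quantization is applied to the center, gradients flow through both the feature and the center, which is what allows the centers to keep updating end-to-end.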
[1] Deep Polarized Network for Supervised Learning of Accurate Binary Hashing Codes.
[2] One Loss for All: Deep Hashing with a Single Cosine Similarity based Learning Objective.
[3] Central Similarity Quantization for Efficient Image and Video Retrieval.
[4] Deep Hashing with Minimal-Distance-Separated Hash Centers.
---
Rebuttal Comment 1.1:
Title: recommendation
Comment: Thank you for your response. The experiments now look more complete; as long as the authors add them to the final revision, I have no further questions. I am recommending a weak accept (6).
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 6xA4,
We really appreciate your valuable comments! We will add the important discussions mentioned above in the final manuscript and highlight them.
Thank you once again for your efforts and contributions! | Rebuttal 1:
Rebuttal: We sincerely thank the ACs and reviewers for their considerable efforts in handling our paper.
We have appropriately addressed all concerns raised by the reviewers. These include providing more baselines with deep hashing methods (Reviewer #6xA4, #dwc8) and prototype learning methods (Reviewer #ZQiT, #dwc8), conducting additional ablation studies on hyper-parameters (Reviewer #Pa9u), explaining our choices of hyper-parameters (Reviewer #Pa9u, #ZQiT), comparing training efficiencies (Reviewer #ZQiT), and offering a clearer explanation of our contribution and motivation (Reviewer #ZQiT, #dwc8).
**Paper strengths acknowledged by reviewers:**
The motivation and idea are clear and simple (Reviewer #6xA4), the method is easy to follow (Reviewer #ZQiT), effectively mitigates the high sensitivity issues present in current On-the-fly Category Discovery models (Reviewer #6xA4, #Pa9u, #ZQiT, #dwc8), and demonstrates good performance (Reviewer #Pa9u, #ZQiT, #dwc8). The paper is well-written and easy to understand (Reviewer #6xA4, #ZQiT).
We hope the ACs and reviewers will fully consider the following factors when making the final decision: (1) a novel prototypical hash encoding framework for On-the-fly Category Discovery, which effectively mitigates the “high sensitivity issues” of current OCD methods and achieves significant performance improvements; (2) thanks to the interpretable prototype learning, a perspective for explaining the model's behavior when discovering categories; and (3) comprehensive responses to all the reviewers’ comments.
Please let us know if you have any additional questions or concerns. We are happy to provide further clarification.
Authors of Submission #524 | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Aligner-Encoders: Self-Attention Transformers Can Be Self-Transducers | Accept (spotlight) | Summary: This paper proposes a new ASR model that connects a self-aligned encoder with the light text-only recurrence of RNN-T. The proposed model can be trained with a label-wise cross-entropy loss, which is computationally more efficient than RNN-T training. The authors show the model's limitation for inference on long-form audio and give a special inference configuration to mitigate the issue. Experiments on LibriSpeech and larger-scale ASR datasets demonstrate performance close to other ASR models. The authors also show the audio-text alignment in the self-attention weights of a certain layer, which could be said to perform “self-transduction”.
Strengths: The paper proposes a new ASR model for better training and inference efficiency. The idea is similar to performing down-sampling on the encoder side, but more aggressive (down to token level). The strengths of the paper are:
1. The proposed model consumes way less GPU memory than RNN-T and achieves much lower latency than AED.
2. The analysis of the text-audio alignment behavior in self-aligned encoder.
3. A modification on Aligner without training to enable long-form audio decoding.
Weaknesses: The major concerns are the presentation and the limitation of the proposed model.
1. The explanation of the aligner modification for long-form audio is somewhat hard to follow. It may be better to include figures or equations in this paragraph.
2. As the claim of the paper is that the Aligner encoder achieves better efficiency than RNN-T and AED, it would be better to summarize the experimental numbers in Section 4.6 into a table for the reader's convenience. People might be interested in those strong numbers.
3. The proposed model may have limited use cases e.g. offline ASR because the encoder cannot be implemented for streaming with the current design.
4. A related work replaces the RNN-T loss with cross-entropy loss and reduces GPU memory usage (https://arxiv.org/pdf/2307.14132).
Technical Quality: 3
Clarity: 3
Questions for Authors: Several additional questions:
1. If I remember correctly, AED models achieve WER no worse than RNN-T on LibriSpeech in the literature, so I am wondering whether the authors looked into other reasons why AED in Table 3 is much worse than RNN-T (apart from the long-form problem)?
2. Any results comparing rotary positional embedding and relative positional embedding for the speech encoder on long-form audio? The authors mentioned that RNN-T used relative positional embedding; is that the reason why RNN-T is good at long-form audio?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The main limitation is the use case of the proposed method, namely, non-streaming applications. Even so, the method can be helpful for many different purposes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the close review of our work.
Thank you for the suggestion to revise the section on the long-form inference modification. In earlier drafts of the paper, this has been the most difficult part to write clearly. Space permitting, a figure or perhaps a pseudo-algorithm would be helpful, we will continue to revise to try to improve this section.
Good idea to highlight the computational efficiency / latency results in a table.
It is true we make no claims toward streaming capability. It is possible that our model is capable of performing streaming recognition like RNN-T models trained for this purpose (although perhaps not with latency as low). Or it might require adaptations similar to what has been done to make Whisper streaming-capable, for example: https://aclanthology.org/2023.ijcnlp-demo.3.pdf. Since our model decodes non-streaming with much lower latency than AED (owing to our small decoder), it is likely that such adaptations could produce much lower latency in our model. It would definitely be worthwhile to investigate streaming capabilities of models based on our Aligner Encoder. Since this potential limitation applies not only to our model but also to AED, and it could require significant further study and reporting, we hope that it does not diminish too much the significance/relevance of our submission, where we have included many other detailed analyses and ablations.
Yes, we have cited https://arxiv.org/pdf/2307.14132 in our manuscript: [34].
Another reviewer also asked about why AED could be worse than RNN-T on LibriSpeech. We believe this reversal happened with the introduction of the Conformer, as that paper shows RNN-T achieving better performance than transformer-AED. In researching this question we also discovered a reference that reports a conformer-AED on LibriSpeech (https://arxiv.org/abs/2210.00077), which is still not quite as good as RNN-T (so the SOTA we are comparing against remains the same). It is better than our conformer-AED result, but it also used more learnable parameters. Still, we should include it as a point of comparison and double-check all the settings. We welcome any other suggested references or explanations.
Good question about RoPE versus relative position encoding. In the RNN-T baselines we ran, the training run with relative positional embedding produced very slightly better results than with RoPE (e.g. 2.1 versus 2.2, 4.6 versus 4.7), so we reported the relative positional embedding results. However relative positional embedding is significantly slower to train than RoPE, so for the remaining LibriSpeech models (including ours) we used RoPE.
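For readers unfamiliar with the two schemes, here is a minimal sketch of the rotary (RoPE) idea only: each consecutive 2-D pair of features is rotated by a position-dependent angle (this sketches the general mechanism, not the exact implementation used in these experiments):

```python
import math

def rope(vec, pos, base=10000.0):
    """Rotate each consecutive 2-D pair of features by an angle that
    depends on the token position (the core rotary-embedding idea)."""
    out = []
    for i in range(0, len(vec), 2):
        theta = pos / (base ** (i / len(vec)))
        c, s = math.cos(theta), math.sin(theta)
        x, y = vec[i], vec[i + 1]
        out += [x * c - y * s, x * s + y * c]
    return out

unchanged = rope([1.0, 0.0], pos=0)  # position 0 applies no rotation
rotated = rope([3.0, 4.0], pos=5)    # a pure rotation, so the norm is preserved
```

Because the position enters only as a rotation applied to queries and keys, attention scores depend on relative offsets, while being cheaper to compute than explicit relative-position biases; this matches the training-speed difference noted above.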
---
Rebuttal Comment 1.1:
Title: Reply to the rebuttal
Comment: Thanks for the rebuttal. The paper can benefit from the revisions proposed by the authors. I am willing to raise the score to 7 (Accept).
In terms of AED vs. RNN-T, it would be great to clarify the settings and explain the gap. My impression is that RNN-T typically uses a streaming encoder for online purposes, while AED uses a non-streaming decoder for offline ASR, so AED would always be better in terms of WER. Maybe the authors used the same decoder; I didn't check the paper again. The authors could use several sentences to clarify the gap in the revised version of the paper.
---
Reply to Comment 1.1.1:
Title: AED vs RNN-T
Comment: Thank you! Indeed the AED vs RNN-T performance seems to be a valuable question, we're not sure we'll be able to answer it definitively, but will add more details and some discussion (please also see comments under reviewer JgvT). In our experiments we used the same encoder for both, including global attention--so no streaming, which as you mentioned might otherwise put RNN-T at a disadvantage. It might simply be that AED requires a larger decoder to do really well, where we only used a 4-layer, 18M parameter transformer (already much larger than the LSTM used for RNN-T, 3.5M parameters). | Summary: The paper introduces Aligner, which is to take the best parts from RNN-Transducer and AED (Attention Encoder-Decoder) models. The idea comes from the intuition that the transformer encoder with self-attention can already learns to align the input and the output -- which is explicitly modeled by previous approaches like applying techniques to ensure the monotonic alignment or doing dynamic programming to find the alignment. Aligner just simply train with cross-entropy loss, and the results surprisingly show that the encoder can internally learn to align the input and the output during the forward pass. Experimental results showed that Aligner can perform comparably with existing baselines, and show the computational efficiencies.
Strengths: - The motivation about the computational efficiencies of existing models and the idea of developing Aligner is very clear.
- The idea is derived from the observation about internal alignment of Transformers, which give meaningful insights to the readers.
- It successfully tackles the limitations of existing ASR models with simple and intuitive ways, and the results show that it works well.
- The paper includes a sufficient amount of evaluation results and shows the limitations as well (in 4.5.3), providing the reader with substantial insight into these models.
- I feel the writing is also very clear.
Weaknesses: - The idea is not yet confirmed in model fine-tuning setups; as the recent Whisper model [1] shows that large-scale pretraining can lead to performant ASR models, I hope this Aligner work is also applied to large-scale pretraining setups to show its effectiveness there. Note that I feel this is not a critical weakness of this paper.
- Honestly, I am not following the most recent state-of-the-art models for ASR systems, and not sure if there are some missing baselines that the authors should also compare with. I am willing to listen to other reviewers' opinions about this.
[1] Robust Speech Recognition via Large-Scale Weak Supervision, Radford et al, 2022
Technical Quality: 3
Clarity: 3
Questions for Authors: No questions for now.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for considering our work closely, your summary is accurate to what we intended.
It is a worthwhile question whether pre-training can be combined with our model. As it stands, our model seems to use several of the same layers in the conformer for 1) encoding and 2) alignment, whereas existing pre-training methods will only train encoding. It would be interesting for example to take an existing good encoder (e.g. from RNN-T) and see if it can be fine-tuned for a small number of steps to learn the alignment, rather than needing to start from scratch. We may attempt this experiment, thank you for the suggestion.
---
Rebuttal Comment 1.1:
Comment: Thank you, and I will keep my rating as the score is already 7. | Summary: This paper proposes a new speech recognition (or, more generally, sequence-to-sequence) architecture. This architecture performs an alignment process between input and output features via self-attention mechanisms in an encoder. The decoder network is a simplified version of the combination of the RNN-T or AED decoder, and its relationship is detailed in Section 2.2. The primary advantage of this method is reducing the computational cost by avoiding the dynamic programming to adjust the input and output length in RNN-T or cross-attention to consider all possible scores across the input frame and output token. The experiments show the comparable performance of the proposed method to RNN-T and AED, but it significantly reduces the computational costs.
Strengths: - Novel speech recognition architecture or, more generally, novel sequence-to-sequence architecture.
- Comparable performance to other SOTA speech recognition architecture (AED and RNN-T) while reducing the computational complexity.
- Interesting analysis of the alignment behaviors and detailed ablation studies
Weaknesses: - Weak reproducibility due to the use of non-public data for main ASR experiments and the lack of source code release. Note that I did not penalize this point in my initial judgment. But if this part is improved, I'll raise my score.
Technical Quality: 4
Clarity: 3
Questions for Authors: - Section 2.1: Do you need $U \leq T$? Can we apply this method for $T \leq U$? This would happen in the general sequence-to-sequence problem like MT or TTS.
- Section 4.2: Did you try it with the other encoder layer than the Conformer (e.g., vanilla transformer)? I'm curious because the alignment properties of this method might depend on the convolution operation in the Conformer.
- Section 4.5.2: I'm not sure how each attention head behaves. Can you discuss a bit more about how this behavior is different across the head?
- Section 4.5: Do you have some results on MT or AST?
Suggestions
- I recommend the authors emphasize the practical benefit of the computational complexity of this method in the abstract.
- Sections 4.5.3 and 4.5.5: These sections place their main experimental results in the appendix, which is not recommended. In this way, the paper breaks the page-limit rule and is unfair compared with other papers that put all main results in the main body. These sections should be rewritten to avoid relying on appendix results. Note that some supplemental use of the appendix is no problem (e.g., Table 5 is a good example: it is too detailed and may not be crucial for understanding the main idea of this paper, but it is essential for reproducibility, so it is appropriately located in an appendix section.).
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: - Since this is not based on a hard alignment approach (RNN-T and CTC are based on hard alignment), I'm curious about the hallucination issues often observed in AED or decoder-only architectures. For example, OpenAI's Whisper is based on AED (soft alignment). It has a serious hallucination issue (despite its outstanding performance), and I think this is a potential limitation of the soft alignment-based approaches in general. I want the authors to discuss this aspect. This method probably has an advantage over AED due to its shallow decoder architecture, but I'm not very sure.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful consideration of our work.
Important question about the requirement for U <= T, especially for machine-translation. One possible solution to extend to U>T would be to pad the input with some (fixed) number, P, of learnable input frames, which would provide the model the ability to write T+P output tokens. (This is similar to the "registers" for vision transformers: https://arxiv.org/abs/2309.16588.) Another possibility, for U >> T, would be to train the model to decode two (or more) tokens per embedding frame. We did not need these changes for any of our ASR experiments, but would like to include them as suggestions in a revised discussion section.
We only tried Conformer because it tends to perform better and was already in use for many pre-existing baselines, so we could expect the encoder quality to degrade without the convolutions (even if the alignment is still possible to perform solely with the transformer).
In Figure 2, we plotted attention weights for a single head. The patterns do look different for different heads in the early layers (this also happens in RNN-T), but we found them all to express the alignment as seen in Layers 14 & 15. So in Figure 3 we were able to average across all heads when showing the alignment in Layer 15, which is probably more accurate than relying on a single head.
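The head-averaging step, and reading off an alignment as the most-attended input frame per output position, can be sketched like this (toy attention weights and a hypothetical function name, not the paper's plotting code):

```python
def alignment_from_attention(head_weights):
    """Average per-head attention matrices (output position x input frame)
    and return the most-attended input frame for each output position."""
    n_heads = len(head_weights)
    n_out = len(head_weights[0])
    n_in = len(head_weights[0][0])
    avg = [[sum(h[i][j] for h in head_weights) / n_heads for j in range(n_in)]
           for i in range(n_out)]
    return [max(range(n_in), key=lambda j: avg[i][j]) for i in range(n_out)]

# Two toy heads that mostly agree on a monotonic alignment.
h1 = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.1, 0.2, 0.7]]
h2 = [[0.6, 0.3, 0.1], [0.2, 0.6, 0.2], [0.2, 0.1, 0.7]]
alignment = alignment_from_attention([h1, h2])  # [0, 1, 2]
```

Averaging smooths out per-head idiosyncrasies in the early layers, which is why the averaged picture is likely more reliable than any single head.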
Unfortunately we do not have expertise or infrastructure for MT or AST---we welcome any correspondence from future readers over future work!
Regarding sections 4.5.3 and 4.5.4 figures, we placed images for these ablations into the appendix so they could be printed as large as possible. On re-reading, we did not adequately describe the figure from 4.5.4 for the text to serve as a standalone result--will revise!
On hallucinations, after looking at many, many examples, we did not observe this issue in any of our models, and did not see abnormally high insertion errors. In other work, we have observed hallucinations when using a larger language model in AED-style, so it seems this is a characteristic of LLMs. It's possible that we have some small number of tokens hallucinated during silence only, which RNN-T models may also do.
It is unfortunate that we are unable to release some of the datasets and our actual code--our hope is that the result on LibriSpeech (public) is sufficient for reproduction, paired with the fact that our method is only a simplification over previous models. We are happy to correspond with anyone re-implementing in a public code repository.
---
Rebuttal Comment 1.1:
Title: Thanks for your explanation
Comment: The answers are valuable (especially for the head and hallucination discussions), but they are not related to the overall discussions. I already gave the accept score and I want to maintain it.
I'm looking forward to MT or AST experiments and the open-source implementation of this method. | Summary: A new simplified encoder-decoder model is presented without cross-attention. The decoder generates the labels auto-regressively as usual until end-of-sentence (EOS). In contrast to attention-based encoder-decoder (AED) models, the cross-attention is replaced by simply taking the corresponding frame from the encoder: in decoder step u, it takes frame u from the encoder output. Thus the output sequence can never be longer than the input sequence. The idea is that the encoder with self-attention can already realign the information as necessary to output it label by label. The remaining encoder output frames after EOS are ignored, but of course all intermediate encoder frames are used due to self-attention.
It's an interesting test to see whether the self-attention is enough to already learn this. And the answer is yes, it can learn this.
Experiments are performed on three speech recognition tasks:
* Librispeech with 960h train data
* Voice Search with 500kh train data
* YouTube videos with 670kh train data
In all cases, a Conformer encoder is used. Word pieces are used as output labels.
The self-attention weights are analyzed and it is observed that the realignment happens in layer 14. This is also verified in another way, by freezing the first N layers of the encoder, randomly resetting the other encoder parameters, and adding a RNNT on top of it and training that, and then generating the RNNT soft alignment. With N>14, the alignment looks like the identity mapping.
Strengths: Interesting idea and model.
The work shows that this simple idea seems to work, even though its performance stays behind the other existing models (CTC/RNNT/AED).
Interesting analysis on the attention weights and retraining the encoder partly and looking at the RNNT soft alignment.
Weaknesses: No source code to reproduce the results?
No references are given to Voice Search and YouTube data, so it's impossible to reproduce and verify the results.
Technical Quality: 3
Clarity: 3
Questions for Authors: How does the convergence rate compare to CTC, RNN-T, and AED? How does the alignment behave early in training?
Table 3, what is dev, is that dev-clean, dev-other, or both combined?
Table 3, it seems a bit weird to me that the RNN-T is so much better than the AED model. I would actually expect the opposite. For example:
* A Comparison of Sequence-to-Sequence Models for Speech Recognition,
https://www.isca-archive.org/interspeech_2017/prabhavalkar17_interspeech.html
* On the Comparison of Popular End-to-End Models for Large Scale Speech
Recognition, https://www.microsoft.com/en-us/research/uploads/prod/2020/11/template-5fa34dc776e7f.pdf
Both show that AED is better than RNN-T. And this is what I have seen on many other occasions as well. Was this expected to you? Why? Or if not, how do you explain it?
What happens when f_pred is just a feed-forward network without the recurrence (dependence on g_{i-1}), i.e. you would get a model with only the last label as context? For RNN-T, it has been shown that this performs as well as using the whole history. It would be interesting to see how it behaves for this model. (For an AED model, this is not really possible because the cross-attention mechanism needs it.)
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our work closely.
The convergence rates are similar among CTC, RNN-T, and our model (good checkpoints are between 100k-150k training steps on LibriSpeech). Interestingly, AED models sometimes did converge faster (as fast as 25k training steps for the best checkpoint). We did not investigate this further other than to run AED again with a lower learning rate in case that was too high, but we weren't able to find a better result. So in this regard our model is more similar to RNN-T.
In Table 3 Dev is Dev-Clean only, we will edit the column label.
This is an interesting question about AED versus RNN-T. A possible explanation is the use of the conformer encoder. "A Comparison of Sequence-to-Sequence Models for Speech Recognition" uses attention architectures that predate transformers. "On the Comparison of Popular End-to-End Models for Large Scale Speech Recognition" finds an improvement using transformer-AED over RNN-AED, but it only reports RNN-T in its Table 1. Our results more closely match those of the Transformer-AED and Conformer-Transducer given in the Conformer paper https://arxiv.org/abs/2005.08100 (Table 2), which also experiments on LibriSpeech. In searching around for this question, we found a paper we could cite which gives a score for Conformer-AED on LibriSpeech: https://arxiv.org/abs/2210.00077, which reports test-clean 2.16% and test-other 4.74%. This is better than our AED results (with conformer), although they use more learnable parameters: 148M versus our 128M. It is still not as good as our RNN-T (conformer), so it would not change the SOTA against which we are comparing. Still, it seems worth citing and including in the comparison, and investigating their other settings.
This is a very interesting question about running our model with limited recurrence in the decoder. It is true that RNN-T models can often perform well using a history of only the 2 most recent tokens. Time permitting we will attempt to launch this experiment, thank you for the suggestion.
Unfortunately our implementation cannot be disentangled from a code base which cannot be shared. We hope that the fact that our model is a simplification over previous models, and uses more standard deep learning components, will make it relatively easier to reproduce the results, and we have attempted to include extensive hyperparameters.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal.
On AED vs RNN-T: Do you have exactly the same hyper-parameters for the encoder in both cases? Or were they tuned individually?
I guess in the original Conformer paper, both the architecture and its hyper-parameters were optimized always using the same RNN-T sequence modelling on top. So this gives maybe an advantage for RNN-T over AED.
RNN-T and CTC don't need any further positional information in the encoder output, while AED needs some information for the cross attention to work properly, such that it can know where to attend next. The architecture can indirectly learn absolute positional information, e.g. via the convolutional padding, but you can imagine that this is maybe not optimal. So maybe a different frontend, or explicitly adding absolute positional encoding would greatly help the AED sequence model.
Just some thoughts on this. My intuition tells me that AED is still more powerful than RNN-T when this is taken into account. And/or when the architecture and/or frontend is tuned for AED.
But studying this is probably out-of-scope for this work. Maybe the only reasonable simple experiment you could do now is adding absolute positional encoding to the encoder.
---
Reply to Comment 1.1.1:
Title: AED vs RNNT
Comment: Good questions, on AED versus RNN-T, we used the same encoder architecture for each. In fact they both include an absolute positional encoding, added into the embedding after the initial 2-D convolution layers, prior to the first conformer layer (we need to add this to Table 5)--some early ablations showed this might not be critical but I think we didn't try again with all the other final settings to be sure. One possible difference is that the variational noise is applied to the LSTM and the text embedding variable in RNN-T, and this is helpful to get a last bit of performance improvement, whereas in AED we only applied it to the embedding variable, since the decoder is much bigger. Another difference is that in AED we needed label-smoothing, but this doesn't apply in RNN-T. In both models we gave the encoder global attention, so it can operate on the whole sequence (no streaming). It is a bit strange that for our AED to work on the longest test utterances, we needed to concatenate training examples (as we described), and we think it is generally known that AED struggles to generalize to longer lengths, but the other references we've found don't mention this. Separately from this, it's possible that to perform better AED simply needs a larger decoder, since we used only a 4-layer transformer, which already adds 18M parameters.
It's interesting that multiple reviewers have raised this question--seems worthwhile for us to add a short discussion about this. Thank you. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
DeepLag: Discovering Deep Lagrangian Dynamics for Intuitive Fluid Prediction | Accept (poster) | Summary: Real-world processes exhibit multi-scale spatio-temporal dynamics. Not all of these dynamics are accurately captured by Eulerian (i.e., field-based) modeling of scientific processes, and fine-grained patterns can sometimes only be modeled by Lagrangian paradigms. However, explicit Lagrangian-only modeling is costly, and hence this paper proposes a deep-learning-based surrogate model that jointly models the Eulerian and Lagrangian perspectives via a novel `EuLag' deep learning block. The authors demonstrate the performance of the proposed architecture on multiple fluid-dynamics tasks involving multi-phase flows.
Strengths: - The authors develop a well-motivated and novel deep learning model architecture to combine the Eulerian and Lagrangian computational modeling paradigms thereby leading to improved modeling capacity owing to modeling across a larger variety of scales.
- The proposed model demonstrates performance improvements compared to state-of-the-art models like the Fourier-Neural Operator and transformer based architectures (also a central feature in the proposed EuLag block) like the Galerkin Transformer. Overall, the experimental results are convincing.
Weaknesses: - More information regarding the experimental setup needs to be included in the main text of the paper. Currently, the main body (especially the experimental setup, dataset descriptions, and model description) cannot stand on its own without the appendix. At least a full description of the model architecture (i.e., upsampling, downsampling, and other critical operations) should appear in the main text.
- Comparison with a Lagrange-only model is necessary in addition to the current field-based prediction models for a more holistic experimental comparison.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Why have the authors not compared to a Lagrange-only model like that of [1] for fluid flow modeling? Since the proposed paradigm is an Euler-Lagrange paradigm, baselining only against models that operate in the Eulerian domain seems somewhat incomplete; for completeness, a comparison with one particle-based deep-learning model (like [1]) should be included.
## References
[1] Sanchez-Gonzalez, A., Godwin, J., Pfaff, T., Ying, R., Leskovec, J., and Battaglia, P. Learning to simulate complex physics with graph networks. In International Conference on Machine Learning, 2020 (pp. 8459-8468). PMLR.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Authors have addressed the potential limitations of their work sufficiently well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Special thanks to Reviewer Kve3 for the detailed review and insightful suggestions; your dedication to evaluating our work despite your busy schedule is greatly appreciated.
> **Weakness1:** About the detailed content selected to represent in the main body of the paper.
Thank you for your valuable feedback. We acknowledge that the current presentation may pose difficulties for readers seeking detailed technical insights, especially regarding the experimental setup, dataset descriptions, and model architecture. Due to space constraints, our initial focus was on presenting the core ideas and results, with specific implementation details relegated to the appendix. We will address this in our revisions by integrating a comprehensive description of the experimental setup, dataset characteristics, and detailed model architecture directly into the main body of the paper.
> **Weakness2:** Comparison with a Lagrange-only model.
Thank you for your comment. We appreciate your suggestion to include a comparison with a Lagrange-only model alongside existing field-based prediction models for a more comprehensive experimental evaluation. However, we have the following reasons for not doing so:
Firstly, it's important to note that existing benchmarks $^{[1,2]}$ and real-world observational datasets, such as the Ocean Current dataset by ECMWF, predominantly represent Eulerian perspectives due to their ease of collection, representation, and storage.
Secondly, in practical applications like ocean studies, precisely tracking nearly infinite particles is often impractical. Therefore, there is a critical need for deep learning models that can effectively and efficiently extract Lagrangian information from Eulerian data in an unsupervised manner.
Thirdly, adhering to established conventions and baseline settings allows for a fair and transparent comparison among different models. As outlined in $\underline{\text{Lines 77-78}}$ of our paper, our research is focused on Eulerian data for fluid prediction, aligning with practical applications and existing benchmarks.
Regarding the comparison with a Lagrange-only model, it's important to clarify that such models operate purely on Lagrangian data inputs and outputs. This does not align with the specific problem statement and datasets used in our study, making it challenging to conduct a fair comparison.
In conclusion, while we appreciate the suggestion, our experimental design focuses on evaluating field-based prediction models within the context of Eulerian data. This approach ensures consistency and relevance to practical applications and existing benchmarks.
------
[1] Takamoto et al., "PDEBench: An Extensive Benchmark for Scientific Machine Learning", NeurIPS D&B 2022
[2] Gupta and Brandstetter, "Towards Multi-spatiotemporal-scale Generalized PDE Modeling", 2022
> **Q1:** Comparison with one particle-based deep-learning model
Thank you for your inquiry. We appreciate your suggestion regarding comparing our approach with a Lagrange-Only model.
As outlined in our response to **Weakness2**, the decision not to include a comparison with a Lagrange-only model stems from several considerations. Primarily, our research focuses on leveraging both Eulerian and Lagrangian perspectives within a unified framework, termed the Euler-Lagrange paradigm, currently applied to Eulerian data. This dual perspective allows for a more comprehensive understanding and prediction of fluid dynamics, which is the primary contribution and focus of our study.
However, we acknowledge the merit of your suggestion to include comparisons with models that solely operate within the Lagrangian domain. In response to your suggestion, we have contemplated a dual approach where Eulerian information supplements purely Lagrangian particle data for prediction tasks. This idea for proving the effectiveness of the Euler-Lagrange paradigm on Lagrangian data, while relevant and insightful, falls beyond the scope of the current paper but will be considered for future work. We appreciate your valuable input and will duly note it for further exploration.
---
Rebuttal Comment 1.1:
Title: Response to Author Rebuttal by Reviewer Kve3
Comment: Thank you to the authors for their responses. After consideration of the responses, I continue to remain positive about the paper and will maintain my score. | Summary: The authors propose DeepLag as an approach to simulating Eulerian fluid dynamics, which makes use of Eulerian-Lagrangian co-design to improve performance. In particular, the idea is to transfer information back and forth between the Eulerian grid and initially randomly placed Lagrangian particles, which themselves exist on multiple but connected discretization scales. The intuition is that Lagrangian particles are better at capturing interactions with boundaries and, in general, tracking the location of material elements, e.g., smoke. Empirically, the gain in performance over existing models is consistent and around 10%. The limitations of the approach are its algorithmic complexity and hyperparameters.
Strengths: - The idea of having "attention" between the Eulerian grid and Lagrangian particles is interesting and seems to work well.
- The multi-scale approach makes a lot of sense to map fine details to the coarser resolution. Not sure whether this is only a strength though, as this means higher algorithmic complexity of the model.
- The approach by construction offers Lagrangian particle trajectories, which might be useful in e.g. ocean dynamics research
Weaknesses: - **Factual mistakes**: the authors make some very crude false statements here and there, and I'm questioning whether a person or LLM wrote sections of the manuscript, particularly the introduction.
- Lines 24-25: "curse of dimensionality" is NOT the reason for high computational cost in fluid dynamics! Please open the Wikipedia article on the curse of dimensionality and modify the sentence in the paper.
- Lines 40-42 state that "[Lagrangian approach] helps get around the CFL condition", which is absolutely wrong.
- **Related work taxonomy**: in section 2.2, a categorization of neural solvers is presented; however, this grouping of methods is new to me and, as much as I know, not what the community typically uses.
- The 1st group, "ODE-based Generative Models", refers to some rather old and outdated work,
- the 2nd group, "Neural PDE Solvers", actually only talks about Physics-Informed Neural Networks, and
- the 3rd group, "Neural Operators for PDE", basically lists all methods that I would call modern neural PDE solvers and what is typically referred to as neural operator learning (DeepONet, FNO, etc.) is mentioned along with GNNs and CNNs.
See PDE Bench [1] and PDE Arena [2] for a more modern categorization of common PDE learning approaches.
- **Ablations**: I'm missing an ablation on (a) the number of scales and (b) overall model size. Judging by Fig 7, in which the U-Net is 5x faster and also smaller, I wonder how a U-Net with similar (a) parameter count or (b) runtime would perform. I'm a bit worried that the claimed good performance might be just a bad hyperparameter choice of the baselines.
---
[1] Takamoto et al., "PDEBench: An Extensive Benchmark for Scientific Machine Learning", NeurIPS D&B 2022
[2] Gupta and Brandstetter, "Towards Multi-spatiotemporal-scale Generalized PDE Modeling", 2022
Technical Quality: 3
Clarity: 2
Questions for Authors: - Fig. 7: in this figure, it looks like the U-Net has fewer parameters than DeepLag, but in lines 345-346, as well as Table 10, you state the opposite. Can you explain that?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: - Thinking of Occam's Razor, it is questionable how many people would adopt the proposed approach, as it requires (a) a significantly more involved implementation effort than a typical U-Net, (b) tuning various hyperparameters like number of scales and number of particles per scale, and (c) is ~5x slower (Fig 7), while offering on average 10% performance improvement. Could the authors add some more hints on where their approach would be of practical interest? I would appreciate extending section 5 into, for example, one paragraph dedicated to the summary/strengths/applications, and one with the limitation/future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Special thanks to Reviewer PRfM for their detailed review and insightful suggestions.
> **strength2:** On the algorithmic complexity brought by the multi-scale design.
Please recall from $\underline{\text{Table 6 in Appendix A.1}}$ that the number of tracked particles decreases exponentially with the scale index, so adding coarser scales brings only a minor fraction of extra computation, and the overall complexity does not vary much. For quantitative results, please refer to **Weakness3** below.
> **Weakness1:** Justification for certain statements.
Thank you for carefully reviewing our paper, and sorry for the imprecise statements. However, rest assured that every word of this paper was written by ourselves, not an LLM.
- **Regarding 'curse of dimensionality'**, we acknowledge your concern. While the curse of dimensionality itself refers to specific challenges in high-dimensional data spaces and may not directly cause high computational costs in fluid dynamics, we understand that computational challenges in fluid dynamics primarily stem from the complex numerical methods required to solve PDEs over large and intricate grids. We will revise the text to better clarify this distinction.
- **Regarding the statement on the CFL condition.** We agree that the CFL condition is essential for ensuring numerical stability and accuracy in fluid dynamics simulations, regardless of the method employed. What we intended to say is that Lagrangian approaches for Eulerian fluid prediction, like the semi-Lagrangian method, offer more flexibility in time step sizes (e.g., adaptive time stepping) than Eulerian methods. We will remove this claim for scientific rigor.
> **Weakness2:** Reorganize the related works.
In the submitted version, we followed the survey$^{[1]}$, in which 'Neural Solvers' denotes Physics-Informed Neural Networks (PINNs) and 'Neural Operators' refers to models that map the input function space to the output function space, such as FNOs.
Following your suggestion, we will rename the categories to 'Classical ML methods', 'Physics-Informed Neural Networks (PINNs)', and 'Neural Operators'.
------
[1] Physics-Informed Machine Learning: A Survey on Problems, Methods and Applications, arXiv 2022
> **Weakness3:** More ablations.
As per your request, we conducted further ablations, yielding the following results:
| No. of scales | Relative L2 | Latent dim. | Relative L2 |
| - | - | - | - |
| 1 | 0.0789 | 16 | 0.0656 |
| 2 | 0.0658 | 32 | 0.0594 |
| 4 (original) | **0.0543** | 64 (original) | **0.0543** |
| 5 | 0.0554 | 128 | 0.0614 |
Regarding (a) the number of scales, **increasing the number of scales generally improves performance up to a point**, after which diminishing returns are observed. As for (b) overall model size, **increasing the latent dimension tends to improve performance**, but too many parameters can be unnecessary for the model, as seen in the ablations.
Additionally, in response to your concern about comparing U-Net with a similar parameter count or runtime to DeepLag, please refer to $\underline{\text{Appendix E}}$. **U-Net consistently exhibits significantly larger parameter counts across all datasets compared to DeepLag**, rendering such experiments unnecessary. Furthermore, experiments scaling U-Net to match DeepLag's runtime show:
| Model | No. of Parameters | GPU Memory | Running Time | Relative L2 |
| - | - | - | - | - |
| U-Net-scale | 336,609,604 | 13672M | 601s/epoch | NaN |
| DeepLag | 19,526,827 | 12112M | 845s/epoch | **0.0378** |
In conclusion, augmenting U-Net with additional convolutional layers and increasing the latent dimension shows that **too many parameters can overwhelm U-Net, indicating a scalability limitation.**
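As a side note for readers less familiar with the metric, the Relative L2 error reported in these tables is conventionally the normalized error $\|\hat{u}-u\|_2 / \|u\|_2$ over the predicted field; a minimal sketch (the exact reduction over batch and time steps used in the paper is an assumption):

```python
import numpy as np

def relative_l2(pred: np.ndarray, true: np.ndarray) -> float:
    """Relative L2 error: ||pred - true||_2 / ||true||_2 over the whole field."""
    return float(np.linalg.norm(pred - true) / np.linalg.norm(true))

# A prediction that is 10% too high everywhere has Relative L2 = 0.1.
field = np.ones((64, 64))
err = relative_l2(1.1 * field, field)
```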
> **Q1:** Elaboration on Figure 7 and Table 10.
Thank you for your inquiry regarding the discrepancy between $\underline{\text{Figure 7}}$ and $\underline{\text{Lines 345-346}}$, as well as $\underline{\text{Table 10}}$. Please note that $\underline{\text{Figure 7}}$ depicts **memory usage**, while $\underline{\text{Table 10}}$ and $\underline{\text{Lines 345-346}}$ refer to **parameter counts**.
Thus, the confusion arises from the difference in metrics presented. Specifically, U-Net is characterized by a larger number of parameters but a smaller memory footprint, as indicated in our findings. We acknowledge the need for clarity in distinguishing these metrics and will ensure that this distinction is clearly labeled in our revised manuscript.
> **Limitations:** About Occam's Razor, practical interest and writing advice.
Many thanks for your valuable suggestion. Firstly, we want to highlight several properties of our model that correspond to your mentioned question:
(a) Implementation effort: DeepLag can be implemented solely with standard PyTorch modules, so the effort is smaller than it may seem.
(b) Hyperparameters: the primary consideration is the total number of tracked particles at the finest scale. Ablations in $\underline{\text{Table 5 in Section 4.4}}$ demonstrate that performance generally improves with more particles, so this can be chosen according to available computational resources.
(c) Efficiency: admittedly, DeepLag is slower than U-Net. However, the performance improvement is valuable in applications such as wind-tunnel testing, where high precision is preferred over efficiency.
In addition, the learned particle trajectories can be visualized to help researchers understand fluid dynamics.
Following your suggestion, we will extend the Conclusion to discuss more about strengths/applications and limitations/future work. One promising direction is to speed up DeepLag with advanced attention mechanisms, such as FlashAttention.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the elaboration, new ablations, and improved content. My score accordingly increases by +1.
---
Reply to Comment 1.1.1:
Title: Thanks for Your Response and Raising the Score
Comment: Thank you for your valuable suggestions and for acknowledging our rebuttal. Your feedback has been instrumental in refining both the writing and experiments in our DeepLag paper. We greatly appreciate your recognition and support. | Summary: In this paper, the author presents a Lagrangian-Eulerian hybrid paradigm to address the complexities of fluid dynamics. Instead of relying only on Eulerian observations to predict future states, we introduce DeepLag, which uncovers hidden Lagrangian dynamics within the fluid by tracking the movements of adaptively sampled key particles. In experiments, DeepLag show better performance in three demanding fluid prediction tasks both in 2D and 3D, as well as simulated and real-world fluids.
Strengths: - The idea of integrating Lagrangian tracking into the deep model for assisting Eulerian based fluid prediction sounds novel.
- The LagToEu Attention and EuToLag Attention for exchanging information between the Eulerian and Lagrangian views are intuitive.
- The proposed method shows better performance on all three test cases compared to all baselines.
Weaknesses: - On Bounded Navier-Stokes, the 10-frame performance is significantly better than the 30-frame performance, while for the Ocean Current case the long-term rollout gives better results. What is the reason for this discrepancy? Is the proposed method better at short-term or long-term tasks?
- How does the proposed method handle the complex boundaries? e.g., how to impose different boundary conditions into the framework?
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes. The author discuss the limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank Reviewer PBwE for the detailed review and insightful suggestions.
> **Weakness1:** Explanation of the performance difference between datasets. "Just wondering what is the reason between this discrepancy? Is the proposed method better than in short-term or long-term tasks?"
Thank you for your thorough review of our paper. The observed performance differences between the Bounded Navier-Stokes and Ocean Current datasets stem from the diversity in dataset characteristics rather than any weaknesses in our model.
The Bounded Navier-Stokes dataset represents a smaller scale with irregular variations, whereas the Ocean Current dataset spans larger scales with periodicity and oceanic averages. Therefore, for ocean data, achieving accurate predictions of the average state can also yield excellent long-term results.
> **Weakness2:** About the boundary condition processing in the framework. "How does the proposed method handle the complex boundaries? e.g., how to impose different boundary conditions into the framework?"
Actually, DeepLag is very flexible in adapting to various complex boundary conditions. As depicted in $\underline{\text{Figure 9 in Appendix G}}$, whether the boundary is known or unknown, simple or intricate, the dynamic sampling module, which is optimized with the pointwise variance of vorticity, can recognize the boundary area, wavefront, wake, and far-field zone from the features contained in the input distribution. In addition, for datasets whose boundary is known (like the pillars in Bounded Navier-Stokes), we can concatenate the boundary mask as an extra channel of the model input to guide and enhance the performance of DeepLag and the baselines.
To verify DeepLag's ability to generalize to new domains, we ran a zero-shot test with the old model checkpoint on a newly generated Bounded N-S dataset with a different number, position, and size of obstacles. **DeepLag still achieves a ~7% improvement over the best baseline, U-Net, in relative L2 (DeepLag: 0.203, U-Net: 0.217). The visual comparison** ($\underline{\text{Figure 2 in Global Response PDF}}$) **between the two models further shows that DeepLag adaptively generalizes well to new domains and handles complex boundaries well**.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thanks for the authors' reply to my concerns. I don't have further questions and I remain positive about this paper.
---
Reply to Comment 1.1.1:
Title: Thanks for your response
Comment: Thank you for your prompt response—it has been very helpful to us. If there's anything further we can do to potentially improve your assessment of our paper, please don't hesitate to let us know. We would be more than happy to engage in further discussion. | Summary: The authors propose a novel neural network architecture in order to leverage the advantages of the eulerian and lagrangian formalisms for fluid prediction. The so called "EuLag Block" acts on an eulerian grid based representation of the fluid as well as on a lagrangian particle based representation and enables information flow between both representations through 2 attention blocks:
1. The "LagToEu Attention" block allows the network to pass information from the lagrangian particle based representation to the eulerian grid based representation
2. The "EuToLag Attention" block allows the network to pass information from the eulerian grid based representation to the lagrangian particle based representation
On top of these 2 attention blocks, a bilinear interpolation module is used to extract information about local dynamics from the eulerian representation, which is then combined with information about global dynamics from all particles to compute the update of the particle based representation. The distribution of the initial particle positions is generated by a learnable sampling module.
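For intuition, the bilinear interpolation step described above (reading the Eulerian feature grid at continuous Lagrangian particle positions) can be sketched as below; this is a generic NumPy illustration, not the authors' implementation, and the `bilinear_sample` helper is hypothetical:

```python
import numpy as np

def bilinear_sample(grid: np.ndarray, xs: np.ndarray, ys: np.ndarray) -> np.ndarray:
    """Sample a (H, W) feature grid at continuous (x, y) particle positions."""
    H, W = grid.shape
    x0 = np.clip(np.floor(xs).astype(int), 0, W - 2)  # lower-left grid corner
    y0 = np.clip(np.floor(ys).astype(int), 0, H - 2)
    tx, ty = xs - x0, ys - y0                          # fractional offsets
    return ((1 - ty) * (1 - tx) * grid[y0, x0]
            + (1 - ty) * tx * grid[y0, x0 + 1]
            + ty * (1 - tx) * grid[y0 + 1, x0]
            + ty * tx * grid[y0 + 1, x0 + 1])

# On a linear ramp f(x, y) = x, interpolation is exact at fractional positions.
ramp = np.tile(np.arange(8.0), (8, 1))
vals = bilinear_sample(ramp, np.array([2.5, 4.25]), np.array([3.0, 1.5]))
```

In a real model one would typically use `torch.nn.functional.grid_sample` for the batched, differentiable version of this operation.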
The authors evaluate the resulting DeepLag architecture on 3 different fluid prediction tasks and outperform several state of the art methods. Furthermore, ablation studies were performed in order to investigate the effect of different numbers of particles, both attention blocks as well as the learnable sampling module.
Strengths: - the qualitative results look convincing (especially the long term stability shown in the videos)
- fairly extensive quantitative comparison to other network architectures and ablation studies
- interesting idea to use LagToEu / EuToLag attention blocks in order to reduce the squared complexity of the attention mechanism wrt the domain size in the eulerian frame of reference to a linear complexity wrt the domain size and the number of lagrangian particles (especially if the number of particles is chosen to be relatively small).
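To make the complexity point concrete, a single-head cross-attention between N grid tokens and M particle tokens forms an N×M score matrix instead of the N×N matrix of full self-attention on the grid; a bare-bones sketch (learned projections omitted, purely illustrative):

```python
import numpy as np

def cross_attention(queries: np.ndarray, keys: np.ndarray, values: np.ndarray) -> np.ndarray:
    """Single-head attention; the score matrix is (N_q, N_kv), i.e. O(N_q * N_kv)."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)            # (N_q, N_kv)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over key axis
    return weights @ values                           # (N_q, d_v)

rng = np.random.default_rng(0)
grid_tokens = rng.normal(size=(64 * 64, 16))   # Eulerian grid tokens, N = 4096
particles = rng.normal(size=(32, 16))          # Lagrangian particle tokens, M = 32
# LagToEu direction: grid queries attend to particle keys/values,
# giving a 4096 x 32 score matrix instead of 4096 x 4096.
out = cross_attention(grid_tokens, particles, particles)
```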
Weaknesses: - Reference to "Accelerating Eulerian Fluid Simulation With Convolutional Networks" by Tompson et al is missing. They use a particle tracer to deal with the advection term in the lagrangian frame of reference and train a CNN to perform a "pressure projection step" in the eulerian frame. They showed impressive smoke simulations in 3D similar to the experiments shown in section 4.3. Thus, I'm not sure if I can fully agree with the claim in line 55-57 that this is the first deep fluid prediction model that explicitly combines Eulerian and Lagrangian frameworks.
- Regarding the efficiency analysis (Figure 7) it would be interesting to see how a U-Net would compare to DeepLag when upscaled up to a similar running time.
Technical Quality: 3
Clarity: 2
Questions for Authors: - How would the particle movements (shown in Figure 5 on the right) look for the Bounded Navier-Stokes dataset (Figure 4)? Would the particles still follow the velocity field, even though the velocity field is not considered as a field variable in this experiment but only the dye concentration?
- Could the particle movements be used to extract information about the velocity field in the Bounded Navier-Stokes dataset?
- Often in fluid dynamics (e.g. "Stable fluids" by Jos Stam or "Accelerating Eulerian Fluid Simulation With Convolutional Networks" by Tompson et al), solving the pressure field is done in the eulerian frame and requires a global solution. However, dealing with the advection term using a particle tracer requires only the local velocity field. Isn't the global attention for the particle updates a "slightly wasteful" overkill?
- How does the positional embedding look like?
- How well does your method generalize to new domains?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: - How would your method scale to larger domains? I'm wondering if the fully-connected layer in the dynamic sampling module could become a bottleneck for larger domain as it seems to scale quadratically with the domain size.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Sincerely thank Reviewer rCcn for the detailed review and insightful suggestions.
> **Weakness1:** On the difference between DeepLag and FluidNet [Tompson et al, ICML 2017].
Thanks for your recommendation on related work and rigorous questions. We will cite FluidNet in the revised paper. However, we have to say that FluidNet is distinct from DeepLag:
| | FluidNet | DeepLag |
| - | - | - |
| **Main Design** | Replaces Eulerian pressure projection with CNN module | Combines Eulerian and Lagrangian in Deep Learning |
| **Method for Lagrangian** | **Numerical method** | Learning **deep Lagrangian info for end-to-end modeling** |
The key design of FluidNet is replacing one step of the classical method with a learned surrogate CNN module for acceleration. However, from the Lagrangian perspective, it still employs the traditional numerical approach (the MacCormack method).
**In contrast, DeepLag represents an approach that integrates both Eulerian and Lagrangian within a pure deep learning framework**. This allows for more generalized modeling of Lagrangian dynamics across multiple elements, demonstrating its capability as a fully deep learning-based solution capable of end-to-end autonomous learning.
We will revise the claim you concern to **"explicitly combines Eulerian and Lagrangian in a deep learning framework"** for scientific rigor.
> **Weakness2:** Comparison between DeepLag and U-Net under similar running time.
| Model | No. of Parameters | GPU Memory | Running Time | Relative L2 |
| - | - | - | - | - |
| U-Net-scale | 336,609,604 | 13672M | 601s/epoch | NaN |
| DeepLag | 19,526,827 | 12112M | 845s/epoch | **0.0378** |
As per your request and following $\underline{\text{Figure 7}}$, we conducted experiments on the 3D Smoke dataset that scaled up U-Net to make a comparison between DeepLag and U-Net under similar running time. Concretely, we add more conv layers into the standard U-Net and increase the latent dimension. **The results show that too many parameters overwhelm U-Net, indicating that it has a shortcoming in scalability**.
> **Q1:** Visualization of the particle movements of the Bounded Navier-Stokes dataset.
Based on your request, we have visualized the particle movements on the Bounded Navier-Stokes dataset in $\underline{\text{Figure 1 in PDF of Global Response}}$. Given the dataset's complex and frequent motion patterns, we have plotted particle offsets between consecutive frames, which effectively reflect instantaneous particle movements.
As depicted, DeepLag can still learn intuitive and reasonable motion. There is a slight decrease in overall quality compared to Ocean Current, which may be due to the lack of standard physical quantities as input. Nevertheless, it is worth noting that DeepLag performs best on this dataset, which verifies the benefit of learned Lagrangian dynamics in improving prediction.
> **Q2:** Velocity field extraction from particle movements of the Bounded N-S dataset.
Extracting dense velocity fields from the particle perspective in the Bounded N-S dataset is challenging due to the sparse nature of particles, which is necessary to reduce computational complexity. However, **key pathways of complex motion regions can be identified through adaptive sampling, which already proves valuable for understanding fluid dynamics**.
> **Q3:** Is the EuToLag attention an overkill?
In theory, relying solely on local velocity fields for particle updates seems sufficient. However, in practical fluid dynamics, the exact velocity of fluid motion is often unknown, and due to the incompressibility of fluids, disturbances from distant regions can influence local dynamics. Therefore, incorporating a global attention mechanism ensures comprehensive modeling, addressing scenarios where distant influences play a role. Taking Ocean Current as an example, the test-set Relative L2 increases from 0.0250 to 0.0264 after removing the EuToLag attention (at the 20th training epoch).
Moreover, the EuToLag attention introduces minimal computational overhead ($O(n)$) and slightly increases training time (from 1030s/epoch to 1150s/epoch on Bounded N-S), while yielding noticeable performance improvements. Thus, we believe that EuToLag attention is not 'overkill'.
> **Q4:** About the positional embedding.
We concatenate two additional channels to the input (three for 3D Smoke), representing normalized (x, y) coordinates (or (x, y, z) for 3D Smoke).
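As a hedged illustration of the positional embedding described above, concatenating normalized coordinate channels to a (C, H, W) input might look like the following sketch (the normalization to [0, 1] is an assumption; the paper does not specify the exact range here):

```python
import numpy as np

def add_coord_channels(x: np.ndarray) -> np.ndarray:
    """Append normalized (x, y) coordinate channels to a (C, H, W) input."""
    _, H, W = x.shape
    ys, xs = np.meshgrid(np.linspace(0, 1, H), np.linspace(0, 1, W), indexing="ij")
    return np.concatenate([x, xs[None], ys[None]], axis=0)

inp = np.zeros((4, 128, 128))       # e.g. 4 physical-field channels
out = add_coord_channels(inp)       # -> (6, 128, 128): two extra coordinate channels
```

For the 3D Smoke case, a third normalized z-coordinate channel would be appended analogously.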
> **Q5:** Generalization on new domains.
To verify the generalizing ability of DeepLag, we ran a zero-shot test with the old model checkpoint on a newly generated Bounded N-S dataset which has a different number, position and size of obstacles.
DeepLag still achieves a ~7% improvement over the best baseline, U-Net, in relative L2 (DeepLag: 0.203, U-Net: 0.217). The visual comparison ($\underline{\text{Figure 2 in Global Response PDF}}$) between the two models further shows that DeepLag adaptively generalizes well to new domains.
> **Limitations:** On the scalability to larger domains of the fully-connected layer in the dynamic sampling module.
Sorry for the vague description. The "fully-connected layer" in $\underline{\text{Line 151}}$ **operates over the channel dimension rather than the spatial dimension.** Therefore, the quadratic complexity of a spatially fully-connected layer does not arise.
Further, we trained a new model on a 256*256 Bounded N-S dataset (4x larger domain) as requested, which still achieves a ~17% improvement over U-Net in relative L2 (DeepLag: 0.051, U-Net: 0.060). The comparison below also shows that the increase in time complexity is minor, underscoring the scalability of DeepLag.
| Resolution | GPU Memory | Running Time | Relative L2 |
| - | - | - | - |
| 128*128 | 5420M | ~1150s/epoch | 0.0543 |
| 256*256 | 13916M | ~1300s/epoch | 0.0514 | | Rebuttal 1:
Rebuttal: ## Global Response and Summary of Revisions
We sincerely thank all the reviewers for their insightful reviews and valuable comments, which are instructive for us to improve our paper further.
This paper proposes a **new deep Eulerian-Lagrangian hybrid paradigm for fluid prediction**, DeepLag, which benefits from the advantages of both perspectives. DeepLag is derived from the equivalence and complementarity of the two perspectives as a practically novel end-to-end deep learning framework, which **consistently boosts performance on three challenging benchmarks, covering both 2D and 3D fluids.** Detailed visualizations and model analyses are provided.
The reviewers generally held positive opinions of our paper, in that the proposed method is "**innovative**", "**effectively combines Eulerian and Lagrangian perspectives**", "**particularly valuable**", and "**versatile and robust**"; The model’s effectiveness is "**rigorously tested**" and demonstrated "**superior performance not only in standard scenarios but also in both short-term and long-term prediction tasks**".
The reviewers also raised insightful concerns and constructive questions. We made every effort to address all of them by providing detailed clarification, requested results and visualizations. Here is the summary of the major revisions:
- **Explain that hyperparameter choice for particle tracking does not pose challenges (Reviewer UohQ, PRfM):** Firstly, we point out that the only hyperparameter that needs tuning is the total number of tracked particles at the finest scale. Secondly, we recall the hyperparameter details in the appendix and the ablation results to explain that the hyperparameter choice depends more on the available computing resources than on blind searching.
- **Clarify the difference between DeepLag and FluidNet [ICML 2017] (Reviewer rCcn):** Firstly, we emphasize that FluidNet is mainly based on the numerical method, replacing Eulerian pressure projection with a CNN module, while DeepLag integrates both Eulerian and Lagrangian perspectives within a pure deep learning framework, learning deep Eulerian and Lagrangian information for end-to-end modeling. Secondly, we will revise the claim to "explicitly combines Eulerian and Lagrangian perspectives in a deep learning framework" for scientific rigor.
- **Experiments scaling up U-Net and comparing it with DeepLag under similar running time (Reviewer rCcn, PRfM):** Following the reviewers' request and the efficiency analysis in our paper, we conducted experiments on the 3D Smoke dataset, scaling up U-Net to compare it with DeepLag under similar running time. The results show that too many parameters overwhelm U-Net, indicating a shortcoming in its scalability.
- **Experiments on generalization to new domains and boundary conditions (Reviewer rCcn, PBwE):** Following the reviewers' concern, we ran a zero-shot test with the model checkpoint trained on the original dataset on a newly generated Bounded N-S dataset whose obstacles differ in number, position and size, to verify the generalization ability of DeepLag. Both quantitative and visual results show that DeepLag generalizes well to new domains and complex boundary conditions.
- **Correct writing issues (Reviewer UohQ, Kve3):** We sincerely thank the careful reviewers for their valuable feedback and explain that space constraints forced us to move some detailed content to the appendix. We promise to conduct comprehensive proofreading to ensure that all grammatical errors are resolved and to include a comprehensive description of the experimental setup, dataset, and model architecture in the main text.
The valuable suggestions from reviewers are very helpful for us to revise the paper to a better shape. All the above revisions will be included in the final paper. We'd be very happy to answer any further questions.
Looking forward to the reviewer's feedback.
#### **The mentioned materials are included in the following PDF file.**
- **Figure 1 (Reviewer rCcn)**: Visualization of the particle movements on the Bounded Navier-Stokes dataset.
- **Figure 2 (Reviewer rCcn, PBwE)**: The visual comparison of zero-shot inference on the new Bounded Navier-Stokes dataset between the best baseline, U-Net, and DeepLag.
Pdf: /pdf/82647c609221fa18dc8a1a1c26e259b943dccd80.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper introduces a novel approach to predicting fluid dynamics by integrating both Lagrangian and Eulerian paradigms. The model, named ‘DeepLag,’ utilizes transformer blocks to process and integrate information from Eulerian and Lagrangian perspectives. Initially, DeepLag predicts the future state of the Eulerian space. Subsequently, it uses this prediction to infer the Lagrangian movements of key tracked particles. This methodology allows the Lagrangian dynamics to inform and guide the evolution of the Eulerian predictions, enhancing the overall prediction accuracy.
DeepLag is evaluated against various baselines across three diverse datasets, including simulated bounded Navier-Stokes equations, real-world ocean currents, and 3D smoke dynamics. These experiments demonstrate the model’s effectiveness in handling both 2D and 3D fluid dynamics in simulated and real-world scenarios.
Strengths: - The DeepLag model introduces a novel framework that effectively combines Eulerian and Lagrangian perspectives, allowing for dynamic propagation in both spaces. This dual approach is innovative in fluid dynamics modeling, enhancing the prediction accuracy by leveraging the strengths of both paradigms.
- The model’s effectiveness is rigorously tested across three challenging datasets—bounded Navier-Stokes equations, real-world ocean currents, and 3D smoke dynamics. The experiments demonstrate superior performance not only in standard scenarios but also in both short-term and long-term prediction tasks, highlighting the model’s versatility and robustness.
- One of the standout features of DeepLag is its ability to provide interpretable results by showcasing individual particle trajectories via Lagrangian dynamics. This aspect is particularly valuable as it not only enhances the understanding of fluid movements but also aids in validating the model’s predictions through visual and traceable particle paths.
Weaknesses: - The model’s reliance on extensive hyperparameter tuning for particle tracking could pose challenges in terms of replicability and efficiency. This complexity might limit the accessibility of the model for practical applications without substantial computational resources.
- Additionally, the particle tracking mechanism struggles to maintain focus on particles near the domain borders. This limitation is particularly concerning in scenarios where significant dynamic changes occur near these borders, such as the presence of obstacles. The inability to track these dynamics could potentially lead to incomplete or inaccurate modeling of fluid behavior in such areas.
- The paper contains a few grammatical errors that, while minor, could detract from its overall professional presentation.
Technical Quality: 3
Clarity: 2
Questions for Authors: - How to interpret the difference between the left and right parts of Fig. 1?
- Is there supervision over the Lagrangian space, or does it only happen in the Euler space?
- Would interchanging the positions of the LagToEu and EuToLag blocks affect the model’s performance or learning dynamics? If so, how?
- Given that Table 5 shows only a minimal performance boost from the EuToLag attention, how should this be interpreted in the context of the model’s overall efficiency and effectiveness?
- In the Bounded Navier-Stokes experiments, are the positions of the cylinders consistent across all trials within the dataset, or do they vary?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Limitations have been addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks to Reviewer UohQ for the detailed review and suggestions.
> **Weakness1:** The model’s reliance on extensive hyperparameter tuning for particle tracking could pose challenges.
**The only hyperparameter that needs tuning is the total number of tracked particles at the finest scale.** Please recall from $\underline{\text{Table 6 in Appendix A.1}}$ that the number of tracked particles decreases exponentially with the scale index. The ablation in $\underline{\text{Table 5}}$ shows that performance consistently tends to improve as we simply increase this number (which means our DeepLag is scalable); therefore **the hyperparameter choice depends more on the available computing resources than on blind searching**.
> **Weakness2:** The particle tracking mechanism struggles to maintain focus on particles near the domain borders.
We respectfully point out that our model works well on boundary particles. $\underline{\text{Figure 9 in Appendix G}}$ shows that **DeepLag is apt to sample more points in the complex-dynamics area**, which is near and behind the column obstacles. Plus, the red boxes in $\underline{\text{Figure 10,14 in Appendix H and Figure 16 in Appendix I}}$ indicate that **DeepLag is good at capturing the fine details in the wake zone where turbulence and vortex form**.
> **Weakness3:** The paper contains a few grammatical errors.
Thank you for your thorough review of our paper and for bringing to our attention the grammatical errors that may detract from its professional presentation. We sincerely apologize for these inaccuracies and have already taken steps to correct them. Specifically, we will address errors such as the **article misusage** in $\underline{\text{Line 3 of paper}}$ ("an Eulerian perspective" corrected to "the Eulerian perspective") and the **preposition error** in $\underline{\text{Line 70 of paper}}$ ("tracking a certain particle with initial position $\mathbf{s}_0$ with its displacement $\mathbf{d} = \mathbf{d}(\mathbf{s}_0, \mathit{t})$" corrected to "tracking a particular particle from initial position $\mathbf{s}_0$ by its displacement $\mathbf{d} = \mathbf{d}(\mathbf{s}_0, \mathit{t})$"). These corrections are crucial for maintaining the professional standard of our presentation. Moving forward, we will conduct a comprehensive proofreading to ensure that all grammatical errors are identified and rectified.
> **Q1:** About the difference between the two parts of Fig. 1
As indicated in the figure caption, Fig. 1 presents two distinct visualizations. The left part depicts the trajectories of Lagrangian particles overlaid on the mean state observed from the Eulerian perspective. In contrast, the right part displays successive Eulerian frames with scattered positions of tracked particles. These visualizations align with the description provided in $\underline{\text{Lines 42-43 of paper}}$, where **the trajectories of fluid motion are more visibly represented through the dynamic Lagrangian view compared to the density variations observed in static Eulerian grids**. This underscores our motivation to incorporate and study Lagrangian dynamics.
> **Q2:** About the supervision over the Lagrangian space.
There is no supervision signal in the Lagrangian space for the following reasons. Firstly, the widely used large-scale and fine-grained data in hydromechanics are all Eulerian, since they are relatively easier to collect, represent and store. Secondly, it is impractical to precisely track an almost infinite number of particles in real-world applications such as ocean study, so **deep models are needed to extract Lagrangian information from Eulerian data in an unsupervised manner, effectively and efficiently**. Thirdly, following the convention and the setting of the baselines allows a fair and clear comparison.
> **Q3:** The result of interchanging the positions of the LagToEu and EuToLag blocks.
| Dataset / Model | Original | Swap EuToLag and LagToEu |
| - | - | - |
| Bounded N-S | 0.0543 | 0.0545 |
| 3D Smoke | 0.0378 | 0.0378 |
We conducted experiments as per your request, swapping the positions of the EuToLag and LagToEu blocks and validating their effects on both 2D (Bounded Navier-Stokes) and 3D (3D Smoke) datasets. The experimental results indicate that **there was minimal change in performance, suggesting that the flow of information between Eulerian and Lagrangian perspectives is bidirectional**. This insensitivity to the order in which information is transferred between the two perspectives underscores the robustness of our approach. These findings further support our statement in $\underline{\text{Line 73}}$ of the manuscript that "**Two perspectives are constitutionally equivalent**," which is substantiated both theoretically ($\underline{\text{Eq. (1) and (2) in Section 2.1}}$) and intuitively.
> **Q4:** Explanation on the ablation of the EuToLag attention.
The relatively small promotion of EuToLag attention can be interpreted as follows.
Firstly, **EuToLag attention itself adds minimal computational overhead ($O(n)$) and only marginally increases the training time (from 1030s/epoch to 1150s/epoch)**. The primary computational load lies in the LagToEu module, which processes dense Eulerian data using patch embedding and patch recovery. Therefore, EuToLag attention does not significantly impact efficiency.
Secondly, **in scenarios where precision demands capturing global information for effectiveness, integrating EuToLag attention remains justified**. In this paper, we mainly focus on effective Eulerian-Lagrangian collaborative modeling and efficiency optimization is a promising future work.
> **Q5:** Positions of the cylinders in the Bounded Navier-Stokes Benchmark.
The positions of the cylinders are fixed, but the initial condition varies in different samples, which can simulate a scenario like bridge pillars in a torrential river.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for providing additional experiments and clarifications in response to my comments. I appreciate these efforts and will maintain my score. | null | null | null | null | null | null |
DiP-GO: A Diffusion Pruner via Few-step Gradient Optimization | Accept (poster) | Summary: The paper introduces Diffusion Pruning via Few-step Gradient Optimization (DiP-GO), which is a new diffusion model pruning method. The method addresses the high computational cost of diffusion models' multi-step denoising process, which hinders their practical use. Traditional pruning methods require resource-intensive, inefficient retraining with large datasets. DiP-GO proposes intelligent, dynamic pruning without retraining.
The paper proposes a SuperNet using standard diffusion models with backup connections for similar features. These turn pruning into SubNet search, eliminating retraining.
Plugin pruner design optimizes pruning constraints and synthesis. The pruner finds an optimal SubNet by identifying redundant computation with carefully designed optimization losses.
A post-processing method ensures the pruned SubNet meets specific requirements, improving pruning efficiency and effectiveness.
Strengths: 1. DiP-GO efficiently predicts the importance scores of computational blocks, allowing for dynamic and intelligent pruning without the need for retraining the diffusion models.
2. The pruner network employs consistency and sparsity optimization losses to ensure generation quality while minimizing computational usage. This balance between accuracy and efficiency is crucial for effective model pruning.
3. The authors validated DiP-GO across different diffusion models, including Stable Diffusion series and DiTs, showing its versatility and robustness. The extensive experiments show that DiP-GO achieves great speedup (up to 4.4× on Stable Diffusion 1.5) without loss of accuracy, outperforming previous state-of-the-art methods.
Weaknesses: 1. The vast search space (due to the large number of blocks and timesteps) may pose challenges, as the method must efficiently navigate and optimize within this space to identify the optimal SubNet.
2. At higher pruning ratios, there is a risk of performance degradation where the quality of generated images might be compromised. This is particularly concerning for applications requiring high fidelity and detail. As the pruning ratio increases, some patterns in the image content may deviate from those in the original images. Although the main objects typically adhere to the textual conditions, subtle changes in background details can lead to noticeable artifacts.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How exactly are the additional backup connections in the SuperNet constructed? Are they predefined or learned during the training of the pruner network?
2. Can you provide detailed computational resources required to train the pruner network? How does this compare to the computational resources required for retraining the diffusion models in traditional pruning methods?
3. The threshold $\tau$ vary among different datasets or not?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude for the detailed and professional attention you have given to our work during the review process.
**Re: Weakness#1**
Yes, the vast search space does introduce challenges to learning an optimal SubNet. Therefore, we choose a gradient-based optimization method to tackle this challenge instead of traditional search methods, and we design the pruner network with specific optimization losses. Table 6 of the main submission shows that our method surpasses search methods in both accuracy and efficiency. Moreover, Tables 1, 2, 3 and 4 in the main submission show that our method obtains optimal SubNets in the search space, which benefits from the gradient-based optimization. The search space could be reduced by decreasing the number of pruning steps and pruning blocks, but this may hinder finding an optimal SubNet and sacrifice the accuracy of the pruning result.
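To give a feel for why continuous gradients scale better than discrete search over such a space, here is a toy sketch of gradient-based gate optimization: each block gets a sigmoid-relaxed gate trained against a stand-in quality term plus a sparsity penalty. The loss form, the `importance` weights, and all names are illustrative, not the paper's exact consistency/sparsity objective.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def learn_gates(importance, lam=0.5, lr=1.0, steps=200):
    """Toy gradient-based pruning-gate optimization.

    Minimizes  sum(importance * (1 - s)) + lam * sum(s)  over gate logits
    theta, where s = sigmoid(theta): the first term penalizes gating off
    'important' blocks, the second encourages sparsity.
    """
    theta = np.zeros_like(importance, dtype=float)
    for _ in range(steps):
        s = sigmoid(theta)
        # d/dtheta of the loss above: (lam - importance) * s * (1 - s)
        grad = (lam - importance) * s * (1.0 - s)
        theta -= lr * grad
    return sigmoid(theta)
```

After optimization, gates for blocks with importance above `lam` saturate near 1 and the rest near 0, without ever enumerating the exponentially many discrete SubNets.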
**Re: Weakness #2**
Yes, although our method can maintain the main object content of images under very high compression ratios, subtle changes in the background still occur. This is a common issue in model acceleration tasks, and it is essential to consider the accuracy-speed trade-off of the deployment scenario when using our method. We also show this phenomenon in Figure 3 of the main submission; our method maintains generation quality with a significant acceleration on SD1.5 with the 50-step sampler.
**Re: Question #1**
The additional backup connections are predefined in the dense SuperNet before training the pruner network. As shown in Figure 1b of the main submission, a block builds backup connections for its dependent blocks; each backup connection runs from the corresponding dependent block in the previous timestep to it. Based on the importance scores it predicts, the pruner network selects either the original connection or the backup connection, but not both simultaneously, during both training and testing.
**Re: Question #2**
The training cost of our method is not high. For example, for SD-1.5 with a 50-step sampler, our method requires approximately 2.3 hours of training on a single MI250 GPU, as shown in Table 6, which is significantly less than the cost of traditional pruning methods (10% to 20% of the pre-training cost) [1]; SD-1.5 itself takes about 150K GPU hours to pretrain [2]. Our method requires about 4 hours to train a pruner network for DiT with a 250-step sampler, which is also efficient.
**Re: Question #3**
For all datasets, the threshold τ is set to 0.2; it is related to the required pruning ratio. τ should be set smaller at higher pruning ratios and larger at lower ones.
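As a rough illustration, binarizing continuous gate scores with τ and reading off the resulting pruning ratio might look like the sketch below. The keep-if-above rule and the exact relationship between τ and the target ratio are assumptions here, not the paper's post-processing procedure.

```python
import numpy as np

def select_subnet(scores, tau=0.2):
    """Binarize predicted importance scores with threshold tau.

    Hypothetical sketch: blocks whose score reaches tau are kept; the rest
    are pruned (their backup connections would be used instead). Returns the
    boolean keep mask and the achieved pruning ratio.
    """
    keep = scores >= tau
    pruned_ratio = 1.0 - keep.mean()
    return keep, pruned_ratio
```

Because the achieved ratio depends on the score distribution, τ effectively acts as a knob that post-processing can tune until the SubNet meets the required pruning budget.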
[1] Structural Pruning for Diffusion Models. NeurIPS 2023.
[2] Stable-diffusion-v1-5 in hugging face.
---
Rebuttal Comment 1.1:
Title: Kindly Request for Your Feedback!
Comment: Dear **Reviewer JBdU**,
Thank you again for your valuable comments. We have tried our best to address your questions (see rebuttal PDF and above), and will carefully revise the manuscript by following suggestions from all reviewers. Please kindly let us know if you have any follow-up questions.
Your insights are crucial for enhancing the quality of our paper, and we would greatly appreciate your response to the issues we have discussed.
Thank you for your time and consideration.
---
Rebuttal 2:
Title: Hope to hear your response before discussion phase end
Comment: Dear **Reviewer JBdU**,
Thank you for reviewing our paper and providing thoughtful feedback. We have addressed your concerns in our rebuttal, which are summarized as follows:
1. **Vast Search Space Challenge:** We detailed how our gradient-based optimization method overcomes this challenge and outperforms existing methods, such as GA Search.
2. **Image Quality under Higher Pruning Ratios:** We provided a more detailed analysis of our method under high pruning ratios and clarified that considering the accuracy-speed trade-off is crucial for deployment scenarios when using our method.
3. **Clarification:** We explained the concept of backup connections in the SuperNet and provided more details about the threshold $\tau$.
4. **Computational Resources:** We detailed the computational resources required to train the pruner network, demonstrating that our method is significantly more efficient than traditional pruning methods.
Given these clarifications and the positive aspects of our work that you have acknowledged, we kindly request that you reconsider your evaluation. We believe our research contributes valuable insights to the field and addresses key challenges in diffusion model acceleration.
With approximately 15 hours remaining in the discussion phase, we hope our rebuttal has resolved your concerns. If so, we would appreciate if you could consider raising your score.
If you have any further questions or need additional information, please let us know. We are committed to providing any further clarification needed within the remaining time.
Thank you again for your time and feedback.
Best regards,
Authors | Summary: This paper proposes a novel differentiable pruner for diffusion models. The core of the approach involves transforming the model pruning process into a SubNet search process.
Strengths: 1. The main idea is to transfer the model pruning process into a SubNet search process, eliminating the need to retrain pretrained diffusion models.
2. Compared to traditional search methods, this differentiable approach is much more efficient.
3. Extensive experiments demonstrate the superiority of this method.
Weaknesses: 1. In Figure 2(a), the dimension of the prune queries is T \times N \times D . The meaning of dimension D is not discussed earlier in the paper. Could you clarify what D represents?
2. DiP-GO is tested with 50 steps for SD and 250 steps for DiT, while 20 or 25 steps are more common nowadays. How does your algorithm perform under these more typical step scenarios?
3. The paper only provides theoretical speedup measurements in terms of MACs or Speedup. What is the actual inference latency?
4. In Table 3, the faster sampler method is compared under pruned-0.75 with 70 steps. However, under pruned-0.6, the results for the fast sampler method are not shown. Could you explain this omission?
5. The usual step count for DPM-Solver is 20 or 25 [1,2,3]. Why did the author choose 50 steps in Table 4?
6. According to Table 4, DiP-GO does not seem to be friendly to LCM.
[1]. Cross-Attention Makes Inference Cumbersome in Text-to-Image Diffusion Models, arXiv.
[2]. PixArt-$\alpha$: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis, ICLR24.
[3]. Your Student is Better Than Expected: Adaptive Teacher-Student Collaboration for Text-Conditional Diffusion Models, arXiv.
Technical Quality: 2
Clarity: 2
Questions for Authors: Please refer to the weaknesses part.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes. The authors have addressed the limitations and social impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude for the detailed and professional attention you have given to our work during the review process.
**Re: Weakness #1**
$D$ is the embedding dimension of a learnable query in the pruner network, and the dimension of the prune queries is $T \times N \times D$ due to the $N \times T$ queries mentioned in the caption of Figure 2. We clarify its meaning and value in the experimental section on Lines 267-268 of the paper, but there is a typo there that writes $D$ as $d$. We will fix it in the revised version.
**Re: Weakness #2**
We tested our method on SD-1.5 with 50 steps to align with the configuration of DeepCache [1], the main SOTA method for comparison. Additionally, we tested our method on DiT with 250 steps, as specified in the official DiT paper [2]. Our method achieved significant acceleration with minimal loss on DiT with the 250-step sampler, demonstrating the effectiveness of our gradient-based optimization method. For models with fewer steps, it is easier to obtain the optimal SubNet that reduces redundant computation, thanks to the smaller search space. We also tested our method on SD-2.1 with a 25-step DPM-Solver sampler and on PixArt-$\alpha$ with a 20-step DPM-Solver sampler, achieving excellent pruning results, as shown in Table 1 and Table 2 below.
| Model | MACs | Speedup | CLIP Score |
|-----------------------------|------|---|------------|
| SD-2.1 with 25-step DPM | 19.02 T | 1.0$\times$ | 31.59 |
| SD-2.1 Pruned-0.5 (Ours) | 9.51 T |1.8$\times$ | 31.52 |
*Table 1:* Comparison with a 25-step DPM-Solver sampler for SD model. We evaluate the effectiveness of our methods on COCO2017 validation set.
| Model | MACs | Speedup | CLIP Score |
|-----------------------------|------|---|------------|
| PixArt-$\alpha$ with 20-step DPM | 85.65 T | 1.0$\times$ | 30.43 |
| PixArt-$\alpha$ Pruned-0.4 (Ours) | 51.39 T |1.6$\times$ | 30.41 |
*Table 2:* Comparison with a 20-step DPM-Solver sampler for diffusion transformer model. We evaluate the effectiveness of our methods on COCO2017 validation set.
**Re: Weakness #3**
The speedup mentioned in the paper is based on actual measurements, not theoretical measurements. Detailed descriptions of the environment and platform used for these measurements can be found on Line 272 of the main submission. The testing method adheres to the Deepcache method for fair comparison, and the actual inference latency is presented in Table 3 of the main submission. We will include further details about the measurement method in the revised version.
**Re: Weakness #4**
We apologize for missing this data. We have now included two additional few-step results in Table 3, and will include these updates in the revised version of the paper.
| Method | Pruning Type | MACs | FID-50K | Speedup |
| --- | --- | --- | --- | --- |
| DiT-XL/2*-250 steps | Baseline | 29.66T | 2.97 | 1.00 $\times$ |
| DiT-XL/2*-110 steps | Fast Sampler | 13.05T | 3.06 | 2.13 $\times$ |
| DiT-XL/2*-100 steps | Fast Sampler | 11.86T | 3.17 | 2.46 $\times$ |
| **Ours (DiT-XL/2\* w/ Pruned-0.6)** | Structured Pruning | 11.86T | **3.01** | **2.43 $\times$** |
*Table 3:* Comparison of fast sampler methods under pruned-0.6.
**Re: Weakness #5**
We first chose the 50-step DPM-Solver sampler for its higher generation quality with more sampling steps; a 50-step DPM sampler also appears in Table 2 of reference [3].
Further, we provide the pruning result on SD-2.1 with a 25-step sampler in Table 1 above. As shown in Table 1, our method can prune 50% of the computation with nearly no loss.
**Re: Weakness #6**
Yes, you are right; Line 298 in the paper explains this. Our method benefits from information redundancy in multi-step optimization processes, while the LCM model exhibits less feature redundancy across adjacent timesteps due to its efficiency. For better pruning results, we think it is necessary to train the original diffusion model.
[1] Deepcache: Accelerating Diffusion Models for Free, CVPR 2024.
[2] Scalable Diffusion Models with Transformers, ICCV 2023.
[3] Cross-Attention Makes Inference Cumbersome in Text-to-Image Diffusion Models, arXiv.
[4] PixArt-$\alpha$: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis, ICLR24.
---
Rebuttal Comment 1.1:
Title: Kindly Request for Your Feedback!
Comment: Dear **Reviewer A73E**,
We appreciate your thoughtful evaluation and the opportunity to clarify and expand upon key aspects of our work. Based on the detailed responses and additional data provided, which directly address the concerns raised:
1. We have added more clarification regarding the meaning of dimension $D$ and inference latency.
2. We conducted comparison experiments with SD-2.1 (25 steps) and PixArt-$\alpha$ (20 steps), demonstrating that our model performs effectively in these scenarios.
3. We included an additional comparison of faster samplers under pruned-0.6 in Table 2, further verifying that our method surpasses these faster samplers.
We hope our responses have addressed your questions satisfactorily. We believe that the additional explanations and data substantiate a higher impact in these areas, reflecting the rigor and potential impact of our work. If our explanations have resolved your concerns, we would be grateful if you could reconsider your rating.
Should you have any further questions regarding our responses, we would be happy to provide additional clarification.
---
Reply to Comment 1.1.1:
Title: Hope to hear your response before discussion phase end
Comment: Dear **Reviewer A73E**,
Thank you for your careful review of our paper. With approximately 15 hours remaining in the discussion phase, we sincerely hope that our rebuttal has addressed your concerns.
If our responses have clarified the issues you raised, we kindly request that you consider raising your score.
We believe our research makes a valuable contribution to the field and addresses important challenges in diffusion model acceleration.
We greatly value your feedback and have made every effort to provide thorough responses to each of your points. If you have any unresolved questions or require further clarification, please do not hesitate to let us know. We are committed to providing additional information within the remaining time.
Thank you once again for your valuable time and expert opinion. Your feedback is crucial for enhancing the quality of our research.
Best regards,
Authors
---
Rebuttal 2:
Title: Resending Response
Comment: We suspect that there might have been a system issue that caused our previous response to not be sent successfully. Therefore, we are resending our response and hope it hasn’t caused you any inconvenience. Below is the main content:
We would like to express our sincere gratitude for your detailed review.
**Re: Weakness #1**
$D$ is the embedding dimension of a learnable query in the pruner network, and the dimension of the prune queries is $T \times N \times D$ due to the $N \times T$ queries mentioned in the caption of Figure 2. We clarify its meaning and value in the experimental section on Lines 267-268 of the paper, but there is a typo there that writes $D$ as $d$. We will fix it in the revised version.
**Re: Weakness #2**
We tested our method on SD-1.5 with 50 steps to align with the configuration of DeepCache [1], the main SOTA method for comparison. Additionally, we tested our method on DiT with 250 steps, as specified in the official DiT paper [2]. Our method achieved significant acceleration with minimal loss on DiT with the 250-step sampler, demonstrating the effectiveness of our gradient-based optimization method. For models with fewer steps, it is easier to obtain the optimal SubNet that reduces redundant computation, thanks to the smaller search space. We also tested our method on SD-2.1 with a 25-step DPM-Solver sampler and on PixArt-$\alpha$ with a 20-step DPM-Solver sampler, achieving excellent pruning results, as shown in Table 1 and Table 2 below.
| Model | MACs | Speedup | CLIP Score |
|------------------------|--------|---------|------------|
| SD-2.1 with 25-step DPM | 19.02 T | 1.0× | 31.59 |
| SD-2.1 Pruned-0.5 (Ours) | 9.51 T | 1.8× | 31.52 |
*Table 1*: Comparison with a 25-step DPM-Solver sampler for SD model. We evaluate the effectiveness of our methods on COCO2017 validation set.
| Model | MACs | Speedup | CLIP Score |
|--------------------------------|--------|---------|------------|
| PixArt-$\alpha$ with 20-step DPM | 85.65 T | 1.0× | 30.43 |
| PixArt-$\alpha$ Pruned-0.4 (Ours) | 51.39 T | 1.6× | 30.41 |
*Table 2*: Comparison with a 20-step DPM-Solver sampler for diffusion transformer model. We evaluate the effectiveness of our methods on COCO2017 validation set.
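As a quick sanity check on the MACs figures above (a hypothetical helper, not code from the paper): under the naming convention where Pruned-$r$ removes a fraction $r$ of the compute, the remaining MACs are simply the baseline MACs scaled by $1 - r$, which matches both tables.

```python
def pruned_macs(baseline_macs, prune_ratio):
    """MACs remaining after pruning a `prune_ratio` fraction of the compute."""
    return baseline_macs * (1.0 - prune_ratio)

# SD-2.1 Pruned-0.5:       19.02 T -> 9.51 T
# PixArt-alpha Pruned-0.4: 85.65 T -> 51.39 T
```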
**Re: Weakness #3**
The speedup reported in the paper is based on actual measurements, not theoretical estimates. Detailed descriptions of the environment and platform used for these measurements can be found on Line 272 of the main submission. The testing method follows the DeepCache protocol for a fair comparison, and the actual inference latency is presented in Table 3 of the main submission. We will include further details about the measurement method in the revised version.
**Re: Weakness #4**
We apologize for missing this data. We have now included two additional few-step results in Table 3, and will include these updates in the revised version of the paper.
| Method | Pruning Type | MACs | FID-50K | Speedup |
|----------------------------------|--------------------|--------|---------|-----------|
| DiT-XL/2*-250 steps | Baseline | 29.66 T | 2.97 | 1.00× |
| DiT-XL/2*-110 steps | Fast Sampler | 13.05 T | 3.06 | 2.13× |
| DiT-XL/2*-100 steps | Fast Sampler | 11.86 T | 3.17 | 2.46× |
| Ours (DiT-XL/2* w/ Pruned-0.6) | Structured Pruning | 11.86 T | 3.01 | 2.43× |
*Table 3*: Comparison of fast sampler methods under pruned-0.6.
**Re: Weakness #5**
We first chose the 50-step DPM sampler because more sampling steps yield higher quality, and a 50-step DPM sampler is also used in Table 2 of the reference paper [3].
Further, we provide the pruning result on SD-2.1 with a 25-step sampler in Table 1 above. As shown there, our method can prune 50% of the computation with nearly no loss.
**Re: Weakness #6**
Yes, you are right; Line 298 in the paper explains this. Our method benefits from information redundancy in multi-step sampling processes, whereas the LCM model has less feature redundancy across adjacent timesteps due to its efficiency. For better pruning results, we think it is necessary to train the original diffusion model.
[1] Deepcache: Accelerating Diffusion Models for Free, CVPR 2024.
[2] Scalable Diffusion Models with Transformers, ICCV 2023.
[3] Cross-Attention Makes Inference Cumbersome in Text-to-Image Diffusion Models, arXiv.
[4] PixArt-$\alpha$: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis, ICLR24.
We believe that the additional explanations and data demonstrate the rigor and potential impact of our work. If our explanations have resolved your concerns, we would be grateful if you could reconsider your rating.
Once again, thank you for your valuable time and expert opinion.
Best regards, Authors | Summary: This paper proposes DiP-GO, a novel pruning method to address the computational challenge of diffusion models during inference. The key innovation lies in the creation of a SuperNet, which includes backup connections based on similar features across adjacent time steps, and a plugin pruner network optimized through gradient optimization to identify whether to keep or remove a computational block. This approach eliminates the need to retrain the diffusion model, significantly reducing computational costs. Extensive experiments on various diffusion models, including the Stable Diffusion series and DiTs, demonstrate that DiP-GO achieves substantial speedups without compromising accuracy.
Strengths: - The creation of a SuperNet and the reformulation of network pruning as a SubNet search, despite its simplicity, significantly improves performance in terms of efficiency and evaluation metrics such as FID and CLIP score.
- This method avoids retraining diffusion models, saving substantial computational resources and time.
- This method has been applied to both traditional U-Net based diffusion models and transformer-based DiTs, validating its effectiveness across different architectures.
Weaknesses: - As noted by the authors, memory overhead is a potential issue in this work. Although techniques like gradient checkpointing and half-precision floating-point representation are used to mitigate this, they might sacrifice the performance
Technical Quality: 3
Clarity: 4
Questions for Authors: - Feature similarity in fast samplers: This approach relies heavily on the similarity of features across adjacent time steps. For diffusion models with fast samplers, only a few steps are required, which might result in dissimilar features across adjacent timesteps. A more strategic demonstration of this similarity could make the statement more convincing.
- How do different choices of alpha in Equation 3 impact the overall performance?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: DiP-GO is limited by the need to train the pruner and by maintaining performance at extremely high pruning ratios. The authors address the pruner-training limitation by noting that the training time is still relatively short compared to retraining a model. For extremely high pruning ratios, it is possible that the pruned model is no longer over-parameterized. A theoretical analysis of the upper bound of the pruning ratio would be helpful.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the time and effort you invested in reviewing our paper. Your acknowledgment of the **significant improvements** in **efficiency and effectiveness** across **different architectures** is highly valued. We are also grateful for your recognition that our method **avoids retraining diffusion models**, thereby saving substantial computational resources and time.
We address the raised concerns as follows:
**Re Weaknesses #1:**
Gradient checkpointing and FP16 mixed precision are standard practices for training diffusion models, as recommended by Hugging Face. These techniques are crucial for long-timestep diffusion models (e.g., DiT, which requires 250 timesteps for inference). However, for models with fewer timesteps (e.g., SD-1.5 with 50 timesteps), these techniques are not necessary.
Additionally, we apply FP16 precision only to the diffusion model during the forward pass to reduce memory overhead, while keeping the pruner network and backward gradients in FP32 to maintain performance. Furthermore, additional acceleration techniques such as DeepSpeed [1] and ZeRO [2] can be employed to further reduce memory usage.
**Re Questions #1:**
Feature similarity across adjacent time steps in fast samplers has been confirmed in recent works; we have cited their conclusions in Lines 156-157 of the manuscript. We also analyzed the feature similarity between adjacent steps in the fast sampler. Specifically, we sampled 200 samples from the COCO2017 validation set and calculated the average cosine similarity between the features of the penultimate upsample block across all $T$ steps for two typical fast samplers, creating a $T \times T$ similarity matrix shown in Figure 2 (refer to the rebuttal PDF). The heat map in Figure 2 illustrates the high degree of similarity between features of consecutive time steps.
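A minimal sketch of this kind of analysis (illustrative only; the pure-Python implementation and variable shapes are our assumptions, not the authors' code): flatten the block's feature map at each of the $T$ steps and fill a $T \times T$ matrix with pairwise cosine similarities; high values just off the diagonal indicate redundancy between adjacent steps.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two flattened feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def similarity_matrix(step_features):
    """step_features: list of T flattened feature vectors, one per timestep.
    Returns the T x T matrix of pairwise cosine similarities."""
    T = len(step_features)
    return [[cosine_similarity(step_features[i], step_features[j])
             for j in range(T)] for i in range(T)]
```

In practice one would average such matrices over many prompts (the rebuttal uses 200 COCO2017 samples) before plotting the heat map.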
**Re Questions #2:**
We conducted an ablation study on α and present the results in Table 1 below: our method achieves the best performance when α = 1.0. We will include this additional comparison in the next version of the manuscript.
| α | 0.1 | 0.5 | 1.0 | 2.0 |
|-----|------|------|------|------|
| CLIP-Score | 29.77 | 29.93 | 30.29 | 30.17 |
*Table 1:* Comparison of different α values. Pruning experiments with 80% pruning ratio were conducted on COCO2017 validation using SD-1.5.
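Selecting the best α from a sweep like this is just an argmax over the measured scores (an illustrative snippet using the table's numbers, not the authors' code):

```python
# CLIP scores from the ablation table above, keyed by alpha.
results = {0.1: 29.77, 0.5: 29.93, 1.0: 30.29, 2.0: 30.17}

# Pick the alpha with the highest CLIP score.
best_alpha = max(results, key=results.get)  # -> 1.0
```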
[1] DeepSpeed: System Optimizations Enable Training Deep Learning Models with Over 100 Billion Parameters, KDD 2020, Tutorial.
[2] ZeRO: memory optimizations toward training trillion parameter models, SC 2020.
---
Rebuttal Comment 1.1:
Title: Kindly Request for Your Feedback!
Comment: Dear **Reviewer zGQU**,
Thank you again for your valuable comments. We have tried our best to address your questions (see rebuttal PDF and above), and will carefully revise the manuscript by following suggestions from all reviewers. Please kindly let us know if you have any follow-up questions.
Your insights are crucial for enhancing the quality of our paper, and we would greatly appreciate your response to the issues we have discussed.
Thank you for your time and consideration.
---
Reply to Comment 1.1.1:
Title: Hope to hear your response before discussion phase end
Comment: Dear **Reviewer zGQU**,
We sincerely appreciate the time and effort you have invested in reviewing our paper. Your recognition of the significant improvements in efficiency, evaluation metrics, and effectiveness across diverse architectures has been particularly encouraging.
We have carefully addressed the concerns you raised in our rebuttal, which can be summarized as follows:
1. **Memory Overhead:** We provided a more detailed analysis of the memory reduction strategies used in our work and proposed promising solutions for reducing memory overhead while maintaining performance.
2. **Feature Similarity Analysis:** We analyzed the feature similarity in faster samplers and demonstrated the high similarity of features across adjacent time steps in these samplers.
3. **More Ablation of $\alpha$:** We conducted additional ablation experiments of the hyperparameter $\alpha$.
Additionally, we will take your advice and address the theoretical work on the upper bound of the pruning ratio, including it in the final version of our paper.
With approximately 16 hours remaining in the discussion phase, we sincerely hope that our responses have provided the clarity needed and resolved your questions satisfactorily. If our explanations have successfully addressed your concerns, we would be grateful if you could reconsider your evaluation.
Specifically, we kindly request that you consider raising our score if you feel that:
1. Our rebuttal has adequately addressed your concerns.
2. The novelty and contributions of our work have been further clarified.
3. The value and potential impact of our research in the field of diffusion model acceleration have been demonstrated.
Your expert opinion is crucial in the evaluation process, and we truly value your input. If you have any remaining questions or require further information, please do not hesitate to let us know. We are more than willing to provide additional clarification.
Thank you once again for your time and consideration. We look forward to your response.
Best regards,
Authors | Summary: The paper introduces DiP-GO, a novel pruning method for diffusion models. Unlike the majority of existing methods that require extensive retraining with large datasets, DiP-GO employs (1) a SuperNet with backup connections, and (2) a plugin pruner network to identify redundant computations. The authors formulate a subnetwork searching problem which requires few-step gradient optimization instead of expensive retraining. Extensive experiments demonstrate DiP-GO’s effectiveness, outperforming state-of-the-art methods. The paper also explores compatibility with fast samplers and presents a fair amount of ablation studies.
Strengths: - The paper tackles a timely and practically-relevant problem supported by a fair amount of experiments. Diffusion model pruning is an area with limited prior research, making this work particularly valuable.
- The proposed method demonstrates superior performance compared to the baselines, and the paper provides a comprehensive review of relevant previous works.
- Overall, the paper is well-written and easy to follow.
Weaknesses: - I wonder why the pruner network takes $T \times N$ random learnable queries and prediction heads. If the goal is to leverage the similarity between feature representations from adjacent timesteps, rather than distant relationships like between timestep 0 and 1, a more intuitive design might involve using $2N$ learnable queries and a timestep embedding vector (e.g., say $t_{emb}$). The final score could then be averaged or normalized as there will be duplicated score outputs, e.g., $s_t$ can be calculated from outputs $(s_{t-1}, s_t)$ and $(s_t, s_{t+1})$. This approach could result in a smaller network size and faster training for the pruner network. Please correct me for any misunderstanding.
- Qualitative comparison with respect to baselines such as Diff-Pruning may help to improve the paper. Currently, there are only two figures (Figures 3 and 4).
- I wonder if there is any pattern of pruning ratio with respect to the time-steps. For instance, DiP-GO may aggressively prune blocks near $t=0$, or there may exist a repeating pruned block pattern across time-steps. This leads to the question of design of $\gamma$ in Equation (3). Did the authors ablate results concerning $\gamma$?
- As the proposed method offers 2-4X speedup, is this method better than naively skipping time-steps, say for every two time-steps?
Technical Quality: 3
Clarity: 3
Questions for Authors: - The covariance constant $\beta$ in Equation (2) should be defined.
- In Equation (3), does the flops ratio $\gamma$ in [0,1]?
- In line 250, how is the threshold for sparsity loss set to 0.2? Is this merely a hyperparameter?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The paper provided limitations in Appendix D.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are deeply thankful for the thorough and expert review of our work. Your acknowledgment of the **timely and practically-relevant problem** our research tackles is greatly appreciated. Your feedback highlights the **superior performance** and the **extensive experiments** that were conducted.
Below are our detailed responses to the weaknesses and questions raised:
**Re: Weakness #1**
We sincerely appreciate your valuable feedback. We believe your concerns primarily stem from two aspects:
1. Our pruner network uses learnable queries to predict the importance scores of $T \times N$ blocks, rather than directly predicting the similarity between adjacent steps, although this importance score is influenced by the similarity between time steps during training.
2. The idea of using a time embedding as an alternative to the $T$ dimension is a good design choice for the queries. We incorporated the time embedding as a condition, added it to the $N$ queries, and then conducted experiments with this pruner network. On SD-1.5, it achieved a **29.49** CLIP score at an 80% pruning rate, which is lower than our DiP-GO with a **30.29** CLIP score. This design helps reduce the number of parameters when $T$ is very large, requiring $N \times D + D_t \times D$ parameters (where $D_t$ is the time embedding dimension), while the queries in our method require $T \times N \times D$ parameters. However, since $T$ is usually a constant, this design does not result in a significantly smaller network size or faster training.
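To make the parameter comparison concrete (with made-up sizes; the actual $T$, $N$, $D$, $D_t$ values are not all given in the rebuttal):

```python
# Hypothetical dimensions for illustration only.
T, N = 50, 30      # timesteps, prunable blocks per step
D, D_t = 256, 128  # query embedding dim, time-embedding dim

ours_params = T * N * D       # T x N x D learnable queries (DiP-GO)
alt_params = N * D + D_t * D  # N queries + time-embedding projection

print(ours_params, alt_params)  # 384000 vs 40448
```

The alternative is roughly an order of magnitude smaller here, but since $T$ is fixed per model, both counts are tiny in absolute terms, which matches the rebuttal's point that the saving does not translate into meaningfully faster training.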
**Re: Weakness #2**
Thank you for the suggestion. We provide qualitative comparisons with the baseline DeepCache, since Diff-Pruning lacks pruning results for SD and DiT, as shown in Figure 3 (refer to the rebuttal PDF); we will add them to the revised version.
**Re: Weakness #3**
1. Our method exhibits a specific pattern of pruning ratios with respect to the timesteps. As shown in Figure 1 (refer to the rebuttal PDF), fewer blocks are pruned during the middle denoising stage (approximately between steps 65 and 150), as this is when the image content is rapidly being generated. Conversely, the pruning ratio in the latter stage is higher since the content has already taken shape.
2. Yes, we conducted an ablation study on $\gamma$. Without $\gamma$, pruning 80% on SD-1.5 resulted in a CLIP score of **29.50** (w/ $\gamma$: **30.29**).
**Re: Weakness #4**
We also tried simply skipping every $N$-th time step. When $N = 2$, the CLIP score on COCO was **19.74** (Ours: **30.29**), with much lower speedup and accuracy than our method. This indicates that simply skipping steps leads to significant performance loss, while our method of searching over pruner gates provides a more effective pruning strategy.
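For reference, the naive baseline being compared here can be sketched as a fixed execution schedule (our illustration, not the authors' code): run the full network only on every $N$-th step and reuse the previous step's output otherwise, in contrast to DiP-GO's learned per-block gates.

```python
def skip_schedule(total_steps, n=2):
    """True = run the full model at this step; False = reuse cached output.
    A fixed, content-independent schedule, unlike learned pruner gates."""
    return [t % n == 0 for t in range(total_steps)]

sched = skip_schedule(10, n=2)  # run on steps 0, 2, 4, 6, 8
```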
**Re: Question #1**
Thank you for pointing this out. We will refine the definition of the covariance constant β in the final version.
**Re: Question #2**
Yes. The FLOPs ratio $\gamma$ is in the range $[0, 1]$.
**Re: Question #3**
The sparsity threshold 0.2 is a hyperparameter used to prevent the network from overly optimizing the sparsity loss, which could lead to all-zero gates predictions during training.
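One plausible reading of this mechanism (a sketch under our own assumptions, not the paper's exact loss): hinge the sparsity penalty at the threshold, so gradient pressure toward sparsity vanishes once mean gate activity reaches 0.2 and the gates cannot collapse to all zeros.

```python
def thresholded_sparsity_loss(gate_probs, threshold=0.2):
    """Penalize mean gate activity only while it exceeds `threshold`;
    at or below the threshold the loss (and its gradient) is zero."""
    mean_activity = sum(gate_probs) / len(gate_probs)
    return max(0.0, mean_activity - threshold)
```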
---
Rebuttal Comment 1.1:
Title: Kindly Request for Your Feedback!
Comment: Dear **Reviewer 1Yei**,
Thank you again for your valuable comments. We have tried our best to address your questions (see rebuttal PDF and above), and will carefully revise the manuscript by following suggestions from all reviewers. Please kindly let us know if you have any follow-up questions.
Your insights are crucial for enhancing the quality of our paper, and we would greatly appreciate your response to the issues we have discussed.
Thank you for your time and consideration.
---
Reply to Comment 1.1.1:
Title: Hope to hear your response before discussion phase end
Comment: Dear **Reviewer 1Yei**,
We appreciate the time and effort you have invested in reviewing our paper. We have provided a comprehensive rebuttal addressing your concerns, which can be summarized as follows:
1. **Alternative Design:** We explained the motivation behind our learnable queries design and incorporated your advice by providing an additional comparison with time embedding conditioned queries. This supports the rationale of our design both theoretically and experimentally.
2. **More Qualitative Comparison:** We included a more detailed qualitative comparison with respect to baselines (see the rebuttal PDF), demonstrating that our method generates higher fidelity samples with better content consistency compared to the baselines.
3. **Pruning Patterns:** We presented the pruning gates of our learned pruner network and analyzed the pruning ratios with respect to the timesteps.
4. **Hyperparameters Ablation:** We conducted an additional ablation study on the parameter $\gamma$, demonstrating its effectiveness.
5. **Direct Skip Steps Comparison:** We evaluated a direct skip step design and found that our method achieves better performance.
6. **Clarity:** We provided additional details on the covariance constant $\beta$, FLOPs ratio $\gamma$, and the sparsity threshold used in our paper.
Given the comprehensive nature of our response, we kindly request that you review our rebuttal and provide your feedback. Your insights are crucial for a fair evaluation of our work, and we would greatly appreciate your response to the points we've addressed.
With approximately 16 hours remaining in the discussion phase, we sincerely hope our rebuttal has addressed your concerns. If our responses have clarified the issues you raised, we kindly ask you to consider revising your score.
Once again, thank you for your valuable time and expert opinion. Your feedback is crucial in improving the quality of our research.
Best regards
---
Rebuttal 2:
Title: Resending Response
Comment: We suspect that there might have been a system issue that caused our previous response to not be sent successfully. Therefore, we are resending our response and hope it hasn’t caused you any inconvenience. Below is the main content:
We are deeply thankful for the thorough and expert review of our work. Your acknowledgment of the **timely and practically-relevant problem** our research tackles is greatly appreciated. Your feedback highlights the **superior performance** and the **extensive experiments** that were conducted.
Below are our detailed responses to the weaknesses and questions raised:
**Re: Weakness #1**
We sincerely appreciate your valuable feedback. We believe your concerns primarily stem from two aspects:
1. Our pruner network uses learnable queries to predict the importance scores of $T\times N$ blocks, rather than directly predicting the similarity between adjacent steps, although this importance score is influenced by the similarity between time steps during training.
2. The idea of using a time embedding as an alternative to the $T$ dimension is a good design choice for the queries. We incorporated the time embedding as a condition, added it to the $N$ queries, and then conducted experiments with this pruner network. On SD-v1.5, it achieved a $29.49$ CLIP score at an 80% pruning rate, which is lower than our DiP-GO with a $30.29$ CLIP score. This design helps reduce the number of parameters when $T$ is very large, requiring $N\times D + D_t \times D$ parameters (where $D_t$ is the time embedding dimension), while the queries in our method require $T\times N \times D$ parameters. However, since $T$ is usually a constant, this design does not result in a significantly smaller network size or faster training.
**Re: Weakness #2**
Thank you for the suggestion. We provide qualitative comparisons with the baseline DeepCache, since Diff-Pruning lacks pruning results for SD and DiT, as shown in Figure 3 (refer to the rebuttal PDF); we will add them to the revised version.
**Re: Weakness #3**
1. Our method exhibits a specific pattern of pruning ratios with respect to the timesteps. As shown in Figure 1 (refer to the rebuttal PDF), fewer blocks are pruned during the middle denoising stage (approximately between steps 65 and 150), as this is when the image content is rapidly being generated. Conversely, the pruning ratio in the latter stage is higher since the content has already taken shape.
2. Yes, we conducted an ablation study on $\gamma$. Without $\gamma$, pruning 80% on SD-1.5 resulted in a CLIP score of **29.50** (w/ $\gamma$: **30.29**).
**Re: Weakness #4**
We also tried simply skipping every $N$-th time step. When $N = 2$, the CLIP score on COCO was **19.74** (Ours: **30.29**), with much lower speedup and accuracy than our method. This indicates that simply skipping steps leads to significant performance loss, while our method of searching over pruner gates provides a more effective pruning strategy.
**Re: Question #1**
Thank you for pointing this out. We will refine the definition of the covariance constant β in the final version.
**Re: Question #2**
Yes. The FLOPs ratio $\gamma$ is in the range $[0, 1]$.
**Re: Question #3**
The sparsity threshold 0.2 is a hyperparameter used to prevent the network from overly optimizing the sparsity loss, which could lead to all-zero gates predictions during training.
We appreciate the time and effort you have invested in reviewing our paper. We have provided a comprehensive rebuttal addressing your concerns, which can be summarized as follows:
1. **Alternative Design**: We explained the motivation behind our learnable queries design and incorporated your advice by providing an additional comparison with time embedding conditioned queries. This supports the rationale of our design both theoretically and experimentally.
2. **More Qualitative Comparison**: We included a more detailed qualitative comparison with respect to baselines (see the rebuttal PDF), demonstrating that our method generates higher fidelity samples with better content consistency compared to the baselines.
3. **Pruning Patterns**: We presented the pruning gates of our learned pruner network and analyzed the pruning ratios with respect to the timesteps.
4. **Hyperparameters Ablation**: We conducted an additional ablation study on the parameter $\gamma$, demonstrating its effectiveness.
5. **Direct Skip Steps Comparison**: We evaluated a direct skip step design and found that our method achieves better performance.
6. **Clarity**: We provided additional details on the covariance constant $\beta$, FLOPs ratio $\gamma$, and the sparsity threshold used in our paper.
With remaining time in the discussion phase, we sincerely hope our rebuttal has addressed your concerns. If our responses have clarified the issues you raised, we kindly request that you consider raising your score.
Once again, thank you for your valuable time and expert opinion.
Best regards, Authors | Rebuttal 1:
Rebuttal: Thank you to all the reviewers. We mainly upload the images needed for the rebuttal here. Detailed rebuttal responses and tables have already been sent to each reviewer separately.
Pdf: /pdf/389831ef85f63173ac12e581fe53c1b5ad68d533.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Beware of Road Markings: A New Adversarial Patch Attack to Monocular Depth Estimation | Accept (poster) | Summary: The paper demonstrates that various MDE models with different architectures, trained for autonomous driving, heavily rely on road regions when predicting depths for different obstacles. Based on this, it provides the Adversarial Road Marking (AdvRM) attack, which camouflages patches as ordinary road markings and deploys them on roads, thereby posing a continuous threat within the environment. Experimental results from both dataset simulations and real-world scenarios demonstrate that AdvRM is effective, stealthy, and robust against various MDE models.
Strengths: This paper is the first attempt to produce obstacle-independent adversarial patches and deploy them on roads.
The proposed method is efficient for both CNN and ViT-based models and demonstrates robust performance in physical world attacks.
The experiments are comprehensive. Good quantitative results are observed even in challenging physical world evaluation.
Weaknesses: For the robustness analysis, the authors should also evaluate performance under random noise and blurriness, which are common in real-world deployments.
The reliance on the knowledge of target models restricts the applicability of this approach.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments. Given that the issues you are concerned about, such as (a) the robustness of AdvRM against common image distortions and (b) the practicality of white-box scenarios in real life, are also of concern to other reviewers, we address them in our global response. | Summary: The paper proposes AdvRM, a novel patch attack against MDE models. It proposes introducing adversarial road markings with the attack objective of altering the inferred depth at the output of MDE models corresponding to image regions that contain obstacles in the scene. Unlike prior patch attacks on the MDE task, AdvRM proposes a patch that is physically detached from the target obstacles. The effectiveness of AdvRM is evaluated against a wide range of MDE models, and the numerical results show the ability of this attack model to compromise MDE models. Further experiments investigate the impact of robustness regularization during training, transferability across obstacles and MDE models, and effectiveness in the physical world, which leads to the conclusion that AdvRM is capable of real-world impact, and is reasonably transferable between obstacles. The main limitation of AdvRM is that it has a negligible model transferability.
Strengths: - The proposed attack is not dependent on the attacked obstacles or the number of attacked obstacles, facilitating its introduction into the scene processed by the victim MDE model. The attack itself is interesting; the weakness of the paper lies in the evaluation (as discussed under Weaknesses).
- The evaluation covers a variety of MDE models, including CNN and ViT based models. Moreover, further experiments in the paper and in the appendix evaluate important aspects like obstacle and model transferability, effectiveness in the physical world, and the effect of patch size, patch-obstacle distance, and patch style on attack effectiveness.
- The paper covers the key details of each component involved in the patch generation process.
- Informative visualizations are provided to ease the understanding of the proposed attack, its implementation, and its evaluation.
- Code is provided to aid reproducibility.
Weaknesses: - The paper does not consider any baselines. While AdvRM is different from obstacle-dependent attacks by design, evaluating the effectiveness of existing attacks applied on the obstacles in the images using the clean background (without the AdvRM attack) would provide an intuition on how AdvRM compares to existing attacks (beyond the qualitative benefits summarized in Table 1).
- There is no evaluation or discussion of potential defenses against AdvRM or of AdvRM’s robustness to existing defenses. Evaluating attack effectiveness against existing patch attack defenses would be very valuable. Even if there are no defense methods designed specifically for the MDE task, some existing patch defenses are applicable to any image input, regardless of the task. Moreover, simpler general techniques could be used (e.g., JPEG compression as done in previous work).
- The problem formulation in Section 3.2 omits relevant details. For example, the background and foreground in line 112 are not properly defined (how do they constitute x?), and the post-processing of the MDE model outputs mentioned in lines 114-115 is not described. Moreover, it is not stated what the feasible values for lambda in (2) are.
- The saliency-driven analysis in Section 4.2 is missing important details, namely alpha in line 150, which is neither introduced nor defined, and the term “designated region” in line 152, which is also not introduced.
- The methodology description also lacks some relevant information. There is no justification for the 1.14 lower bound on eta in (4), and there is no description of the valid range of values for sigma in (5); this is not addressed even in the appendix. Moreover, while the stealthiness loss is explained in the appendix, it is not specified what the feature extractor H is, or which layers are used to compute the corresponding elements of the stealthiness loss. Is H just the MDE victim model itself or some other generic feature extractor?
- The experiments in Section 6 contain details that should be clarified. The sentence “whose heights are 230 within x hat” in line 247 is unclear as to whether “230” refers to pixels and whether this size applies to all obstacles and patches, or whether it somehow refers to the placement of the boundaries instead. Crucially, it is not clear how the 100 background images and the obstacle images are combined into a training and test set. Are all background images used for both testing and training, or are there unseen background images during testing? Is each single obstacle synthesized into each of the 100 background images? Moreover, for the case of pedestrian obstacles, are they always used as multiple obstacles, or are there images with single pedestrians as well? In general, it is unclear which images constitute the test and train sets. The results in Figure 6 are not easy to interpret since the relative increment definition in line 276 involves the undefined quantity zeta_r^b. Additionally, line 291 states “we restore the real autonomous driving scenarios”. It is unclear which scenarios the authors mean to restore. The results also seem to consider a single scenario at different distances, but it is not clear why this is referred to as “the” real autonomous driving scenarios.
- The limitations discussion in Section 7 also lacks depth in terms of connecting AdvRM to existing approaches. Lines 301-302 suggest prior methods also rely on the specific MDE model to generate attacks. How they fare in terms of transferability, or whether they were evaluated with respect to transferability at all, is not discussed. This is problematic, because some existing attacks do transfer rather well.
- Other smaller issues, unclear terms, and lack of details further undermine the clarity of the paper:
-- “from other dimensions” in line 35.
-- Lines 36-38 could state more explicitly that they refer to the MDE context, since for other tasks, patch attacks that are not placed on the attacked objects/obstacles have been proposed.
-- Lines 72-73.
-- On line 85 “randomly” is not the right word, maybe without justification?
-- The term “insertion algorithms”, first introduced in lines 116-117, is later used interchangeably with “patch applicator”. Moreover, the term “applicator” itself is somewhat unusual.
-- What’s the meaning of “full-size” in line 120?
-- Lines 122-123 “represents arbitrary binary mask”.
-- The meaning of “hybrid datasets” in line 145 is unclear.
-- M_delta in Figure 3 should be M_o, moreover the caption of the figure is not clear (“steps of 1,2, and 3” and “the pathways… are employed to insert multiple obstacles into a single image” could be expressed more clearly).
-- “3D manner” (line 163).
-- “random transformers” (line 167).
-- “reduces the suspicion” (line 208).
-- Multiple parameter choices in the implementation are not justified, such as the values for lambda in (2), sigma in (5), the 1.14 lower bound on eta, and the choice of H to compute the stealthiness loss. This is not even addressed in the appendix.
-- A different color scale or zooming into obstacle regions would be helpful to visualize more clearly the saliency maps in Figure 2 and the predictive depth in Figures 5 and 6.
- Finally, certain elements do not seem to conform to the paper template, such as Tables 2 and 4, and Figure 4.
Technical Quality: 2
Clarity: 2
Questions for Authors: - Why is there no comparison to existing attacks in the evaluation? If it is not necessary or important, why is that?
- Why is there no evaluation of the effectiveness of existing patch attack defenses? Compared to the reported experiments, would it be less valuable to investigate whether the patch is effective against patch recovery methods, or stealthy against patch detection methods? If so, why?
- What is the precise description of the test and train data? Please refer to the comment above in weaknesses.
- What is the rationale behind the choices for sigma, alpha, lambda, and the 1.14 lower bound on eta?
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors point out limitations of the proposed approach in Section 7, although it would be good to include other attack models and potential defense schemes in this discussion.
The authors discuss the potential impact of the developed adversarial patch in the introduction and conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for providing the detailed and insightful comments! We will carefully consider them to improve our paper.
**[Comment 1] Experiments on baseline and defense.**
Following your suggestions, we conduct additional experiments to compare our AdvRM with the SOTA attack [1], **further confirming that obstacle-agnostic patches (AdvRM) outperform obstacle-dependent patches (prior attacks) even under defense.** Please refer to our global response to all reviewers for experimental results and analysis.
**[Comment 2] Missing details in Sections 3.2, 4.2, 5, and 6**
Thank you for pointing these out. We elaborate below and will refine the paper accordingly.
* Section 3.2
* Foreground refers to the obstacle in inputs while other areas are background.
* $x = b \otimes (1-M_o)+o^F \otimes M_o$.
* The post-processing operation is an inverse linear mapping function: $f(x) \leftarrow -(f(x) - \min(f(x))) + \max(f(x))$. It's only used for some models, e.g., Midas, to ensure a larger value in the depth map indicates a greater distance while preserving the relative relationships between values.
* $\lambda \ge 0$ ($\lambda = 0$ means no consideration of stealthiness).
* Section 4.2
* $\alpha=2.5$ when pixel values of both $x$ and $S$ are normalized into [0,1].
* "Designated region" refers to any area of interest in the depth map, which is denoted as $M$ in Line 139.
* Section 5
* **We set $\eta \ge 1.14$ based on our experimental setup (Line 247-248) to ensure that the prediction errors are sufficient to cause collisions.** Specifically, $\eta \ge 1.14$ means the model's predicted depth is off by at least 2.7 m when the ground truth is 12 m. Delayed braking caused by this error is enough to cause a minor collision at a vehicle speed of 50 km/h. We understand that concerns may arise about whether a smaller $\eta_0$ in (7) accurately reflects ARR. Table C shows that AdvRM indeed realizes a high ARR since we do not limit the attack cap.
**Table C: ARR values calculated with different $\eta_0$. The target model is Mono2 and $\eta \ge 1.14$ in training.**
|$\eta_0=1.14$|$\eta_0=1.2$|$\eta_0=1.3$|$\eta_0=1.4$|$\eta_0=1.5$|
|:---:|:---:|:---:|:---:|:---:|
|0.976|0.974|0.965|0.951|0.887|
* $\sigma \ge 0$ in (5).
* We followed the source code of [1] to build the extractor, which is a generic VGG19 pre-trained on ImageNet. We will detail the layers used for loss calculation in the revised version.
* Section 6
* The "230" in Lines 246-247 denotes the vertical position of the bottom edge of the obstacle, as well as the top edge of the patch, with the coordinate origin at the upper left corner of $\hat{x}$.
* **The test images are separate from the training images.** Specifically, we randomly split 150 obstacle images into a training set (120 images) and a testing set (30 images), and 100 background images into a training set (80 images) and a testing set (20 images). During each training iteration, we randomly select one background image and 1 or 3 obstacle images from the corresponding training sets. For testing, we combine each test background image with all obstacle images to create test inputs, including both single-obstacle and multiple-obstacle scenarios.
* We apologize for the typo in Line 276, $\xi^b_r$ should be $\xi^{wo}_r$.
* We apologize for the misleading phrasing regarding “restoring real autonomous driving scenarios” and will revise it. Due to the lack of necessary equipment, e.g., an autonomous vehicle, we used toy-based simulations instead. To ensure authenticity, we printed textures and lane lines on A4 paper to simulate roads and used a 1:50 scale for lane and vehicle dimensions (see Appendix D.2). Such physical-world simulations are common in previous studies; e.g., [5] also used a toy car to test attack effectiveness.
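The input composition ($x = b \otimes (1-M_o)+o^F \otimes M_o$) and the inverse linear post-processing clarified above can be sketched as follows (a minimal illustration with NumPy; the function names are hypothetical, not from the paper's code):

```python
import numpy as np

def composite(background, obstacle, mask):
    """Blend an obstacle into a background: x = b*(1-M_o) + o^F*M_o."""
    return background * (1 - mask) + obstacle * mask

def invert_depth(depth):
    """Inverse linear mapping so that larger values indicate greater
    distance while preserving relative order: f(x) <- -(f(x)-min)+max."""
    return -(depth - depth.min()) + depth.max()

# toy example: a 2x2 stand-in for a depth map
d = np.array([[1.0, 2.0], [3.0, 4.0]])
inv = invert_depth(d)
# the value range is preserved, only the ordering is reversed
assert inv.max() == d.max() and inv.min() == d.min()
```

Note that `invert_depth` maps the smallest value to the largest and vice versa, which matches the stated goal of keeping relative relationships intact.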
**[Comment 3] Discussion on the transferability of prior attacks**
**To the best of our knowledge, all existing attacks against MDE adopt the white-box setting and do not transfer well to unknown models.** Specifically, [15] discussed that its optimized perturbation does not transfer to unknown MDE models due to overfitting. We reproduce [1] and confirm its poor transferability, as shown in Table D. Other prior attacks [2-5] did not discuss transferability at all. In general, transferability in MDE tasks is still an open and challenging problem.
**Table D: Poor transferability of [1]. The known model is Mono2**
|Unknown Model|MRSR|
|:---:|:---:|
|ManDe| 0.022|
|DeHin| 0.107|
**[Comment 4] Other smaller issues**
* “from other dimensions” $\rightarrow$ “across various aspects”.
* We will revise Lines 36-38 to discuss our contribution within the MDE context.
* We will revise Lines 72-73 to highlight why we focus on patch attacks.
* We will remove “randomly” in Line 85.
* We will uniformly use the terms “patch insertion algorithm” and “obstacle insertion algorithm”.
* We will remove “full-size” in Line 120.
* We describe $M$ as an arbitrary binary mask for generality because it can be $M_o$, $M_\delta$, or masks of other areas of interest as shown by the blue boxes in Figure 2.
* “Hybrid datasets” means the training set is constructed by different datasets. For example, DeAny [9] is trained on 14 datasets.
* We will revise the inaccurate symbol in Figure 3, as well as its caption.
* “3D manner” $\rightarrow$ “realistic manner”.
* “random transformers” $\rightarrow$ “random image transformations”.
* “reduces the suspicion” $\rightarrow$ “enhances stealthiness”.
* We will explain all hyper-parameters and the feature extractor in the Appendix.
* We will redraw figures based on your suggestions to better visualize them.
* We will correct all template issues.
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: Thank you for addressing some of the issues raised. My comments are as follows:
[Comment 1]: The additional experiments comparing to [1] and evaluating AdvRM under different defenses provide valuable results that complement the paper. In addition, it would be good to comment on whether [1] could also adapt to Gaussian noise and blurring, and to JPEG compression, and more importantly, to report whether AdvRM-Noise,-Blur, and -JPEG are less effective in the undefended scenario.
[Comment 2]: Thank you for clarifying the missing details. The relationship between x, b, and f in Section 3.2, M_o and o^F would have to be introduced earlier, imho. As for the phrasing in Section 6.3, the main issue was referring to them as “the” real autonomous driving scenarios, which gives the impression that some specific real scenarios have been defined or selected beforehand and then simulated at the 1:50 scale, which doesn’t seem to be the case (the simulated physical scenarios are not directly based on a full-scale scenario defined or referenced earlier in the paper).
[Comment 3]: The reproduction of [1] to confirm transferability is not a problem specific to AdvRM, Table D is a valuable additional result. However, the table could be augmented with the results for AdvRM for a direct comparison, and moreover, since [1] found good transferability only by training on other models and testing on Mono2, it would be good to see if this holds in the reproduction and also if it holds for AdvRM.
[Comment 4]: The comment on the arbitrary mask description of M in 122-123 was mainly about the grammar, although mentioning that it could be M_o or M_delta could indeed further clarify the sentence. To clarify, the comment on the visualization of predictive depth refers to Figures 5 and 7 not Figures 5 and 6.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer HCq6
Comment: **[Comment 1]** Thanks for your valuable suggestion. **We further conduct experiments to show that (a) paper [1] can also adapt to image distortions via EoT techniques, and (b) AdvRM-Noise, -Blur, and -JPEG remain effective in the undefended scenario.** Tables E and F present the results, respectively. In Table E, we continue to apply the corresponding distortion to the inputs throughout the optimization. In Table F, we apply distortion processing to the input with a probability of 0.5 during optimization (denoted as “random choice”).
* Comparing Tables E and F with Table B in the global response, we observe that the robustness of both [1]-$\star$ and AdvRM-$\star$ ($\star \in$ {Noise, Blur, JPEG}) in defended scenarios can be enhanced.
* Table E also shows that the targeted EoT process in optimization may cause overfitting, reducing the attack effectiveness in the undefended scenario.
* Table F shows that the attack performance of AdvRM-$\star$ ($\star \in$ {Noise, Blur, JPEG}) in the defended and undefended scenarios can be better balanced by introducing random choice during optimization.
**Table E: MRSR values after applying targeted EoT operations**
|Defense | [1]-Noise | [1]-Blur | [1]-JPEG | AdvRM-Noise | AdvRM-Blur | AdvRM-JPEG |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|No defense |0.627 | 0.556 | 0.922| 0.701 | 0.642| 0.379|
|Noise| 0.460 | - | - | 1.171 | - | - |
|Blur | - |0.968 | - | -| 0.923| - |
|JPEG | - | - |1.027| -| - | 1.304 |
**Table F: MRSR values after applying targeted EoT operations and random choice**
|Defense | [1]-Noise | [1]-Blur | [1]-JPEG | AdvRM-Noise | AdvRM-Blur | AdvRM-JPEG |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|No defense| 0.969 | 0.863| 1.176| 1.020 | 1.109| 0.948|
|Noise| 0.415 | - | - | 0.918 | - | - |
|Blur | - |0.757| - | -| 0.794| - |
|JPEG | - | - |1.084| -| - | 1.127 |
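The “random choice” described above, i.e., applying a distortion to the input with probability 0.5 during optimization, can be sketched as follows (a hypothetical illustration, not the authors' code; `distortion` stands in for noise, blur, or JPEG compression):

```python
import random

def maybe_distort(x, distortion, p=0.5):
    """With probability p, pass the input through the distortion before
    the forward pass (EoT-style); otherwise keep the clean input.
    Mixing clean and distorted inputs during optimization balances
    attack performance in defended and undefended scenarios."""
    if random.random() < p:
        return distortion(x)
    return x

# toy example: a scalar stands in for an image; "blur" shifts it slightly
blur = lambda x: x + 0.01
random.seed(0)
outs = [maybe_distort(1.0, blur) for _ in range(1000)]
frac_distorted = sum(o != 1.0 for o in outs) / len(outs)
assert 0.4 < frac_distorted < 0.6  # distortion applied about half the time
```

Applying the distortion only part of the time keeps the optimized patch from overfitting to the distorted input distribution, which is consistent with the contrast between Tables E and F.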
**[Comment 2 and 4]** Thank you for your further explanations. We will carefully revise the manuscript based on your suggestions.
**[Comment 3]** We further conduct experiments to show that both AdvRM and [1] face the problem of limited transferability. Tables G and H show the transferability of AdvRM and [1], respectively. In these tables, the top row and first column represent the unknown and known models, respectively.
* **Mono2 as the known model**. In this case, neither [1] nor AdvRM can transfer to Mande and Dehin well.
* **Mono2 as the unknown model**. [1] shows better transferability in this case, which is consistent with the experimental results in [1].
* **Mande as the unknown model**. AdvRM shows better transferability than [1] in this case.
Both AdvRM and [1] have their strengths and weaknesses in this comparison, but overall, neither has achieved satisfactory transferability.
**Table G: Transferability of AdvRM between Mono2, Mande, and Dehin**
| | Mono2 | Mande | Dehin |
|:---:|:---:|:---:|:---:|
|Mono2 | - | 0.113 | 0.068 |
|Mande | 0.157 | - | 0.046 |
|Dehin | 0.126 | 0.235 | - |
**Table H: Transferability of [1] between Mono2, Mande, and Dehin**
| | Mono2 | Mande | Dehin |
|:---:|:---:|:---:|:---:|
|Mono2 | - | 0.022 | 0.107 |
|Mande | 0.337 | - | 0.265 |
|Dehin | 0.276 | 0.071 | - | | Summary: The paper describes a white box patch attack for monocular depth estimators (MDE) that is independent from an obstacle. The attack is placed on a fixed part of the environment, as an ordinary road mark, and not on the target obstacle itself. It is learned adversarially using a loss that distinguishes pixels of most impact on potential obstacles combined with a stealthiness objective. The approach is evaluated on simulated data and on a toy real scene.
Strengths: - To the best of my knowledge, the approach proposes a new attack objective on the depth map using a fixed physical road mark.
- Clearly written and justified.
- Experiments on a white box setting and various models and environments show that the attack is effective.
Weaknesses: - Although quite intuitive, I found the justification for putting the attack on the road rather weak. 1/ There may be many other features in the environment that contribute to the estimation of depth. 2/ Depth is the result of regression, and I do not understand why the gradient with respect to the input is a good explanatory feature for this type of function.
- It is also difficult to say whether the attack succeeds because of bad learning (overfitting, existence of spurious correlations) or because the attack is strong. The fact that the attack does not transfer to other models (Table 4) is a bad symptom about the origin of the attack.
- The attack is limited to the white box with no transfer to other models, which is a low evaluation standard compared to the current literature.
- Some additional experiments would generally help to justify the strength and validity of the attack:
- Test against defenses such as LGS [A], Segment and Complete [B], and JEDI [C].
- A comparison with target obstacle approaches [1-5] is welcome to validate the capacity of the obstacle-free target.
- A dataset other than KITTI to train and test the model.
[A] Naseer, M., Khan, S., & Porikli, F. (2019, January). Local gradients smoothing: Defense against localized adversarial attacks. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV) (pp. 1300-1307). IEEE.
[B] Liu, J., Levine, A., Lau, C. P., Chellappa, R., & Feizi, S. (2022). Segment and complete: Defending object detectors against adversarial patch attacks with robust patch detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 14973-14982).
[C] Tarchoun, B., Ben Khalifa, A., Mahjoub, M. A., Abu-Ghazaleh, N., & Alouani, I. (2023). Jedi: Entropy-based localization and removal of adversarial patches. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 4087-4095).
Technical Quality: 3
Clarity: 2
Questions for Authors: - The visualization of the results can be improved:
- Show the map of differences between depths predicted from images with and without patches.
- The saliency maps are difficult to see (Figure 2).
- See comments about evaluation in the weaknesses section
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Some technical limitations are addressed in section 7.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate all your constructive comments, which will help improve the paper. Below is our point-by-point response.
**[Comment 1] Motivation of putting the attack on the road**
We believe our motivation is reasonable. The intensity of gradient changes is commonly used to explain regression models, as illustrated in [F-I]. MDE, being a regression task, can reasonably utilize gradients as well. A steeper gradient indicates that changes in the corresponding input pixel have a greater impact on the predicted depth, making the prediction more sensitive to these pixels. Both the interpretability analysis (Figure 2) and the experimental results demonstrate that altering the pixels of road areas can heavily affect MDE models' predictions.
[F] M. Wicker, et al. "Robust explanation constraints for neural networks." ICLR, 2023
[G] T. Han, et al. "Which explanation should I choose? a function approximation perspective to characterizing post hoc explanations." NeurIPS, 2022
[H] H. Choi, et al. "Explainable Time-Series Prediction Using a Residual Network and Gradient-Based Methods." IEEE Access, 2022
[I] S. Seitz. "Gradient-based explanations for Gaussian Process regression and classification models." CoRR, 2022
**[Comment 2] Factors contributing to attack success and issues with low transferability**
**First, the success of the attack doesn't lie in bad learning.** All tested models used official pre-trained weights and performed well on benign inputs. Moreover, Midas [8] and DeAny [9] are trained on multiple datasets from different sources to reduce the risk of overfitting in model training.
**Second, the success lies in our carefully-designed strong attack.** To explain this, we clarify that (a) the road is a reliable depth feature (Appendix B) and the correlation between depth and roads is easy to learn, as different models trained on different datasets consistently exhibit road dependency, and (b) *road dependency shouldn't be regarded as a false correlation* because humans also use familiar reference objects to estimate distances [J]. Given the importance and universality of road dependency, AdvRM fully leverages it and alters road features by placing a patch on roads, thereby successfully misleading various MDE models.
The low transferability is due to the significant architectural differences among MDE models, causing the attack to overfit specific models, as shown in Section 6 of [15]. We will explore how to improve the transferability of AdvRM in the future.
[J] E. Mischenko, et al. "Examining the Role of Familiarity in the Perception of Depth." Vision (Basel), 2020
**[Comment 3] Restriction to white-box scenario limitation**
**We believe the white-box attack scenario is practical in the real world, and it is indeed the main assumption adopted in existing attacks targeting MDE.** Please refer to our global response to all reviewers for detailed explanations.
**[Comment 4] Additional experiments on baseline, defense, and other datasets.**
* **Experiments on baseline and defense.** Following your suggestions, we conduct additional experiments to compare our AdvRM with the SOTA attack [1], **further confirming that obstacle-agnostic patches (AdvRM) outperform obstacle-dependent patches (prior attacks) even under defense.** Please refer to our global response to all reviewers for experimental results and analysis.
* **Testing AdvRM on other datasets.** The results of DeAny in Table 2 show that **AdvRM is not limited to KITTI** because DeAny is trained on 14 datasets without KITTI and AdvRM can still alter its outputs.
**[Comment 5] Improvement of visualization**
We will redraw the figures for better visualization.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thank you for addressing some of the issues I raised. My comments on your response.
- **[comment 1]** I am still not convinced that a pixel-wise gradient can explain the complex behavior of an MDE algorithm that produces a dense field that expresses the global geometric content of the scene. The fact that the attack is on road markings makes sense from a physical point of view, but to deduce that such a road marking is the main feature used to infer depth is a huge logical step, even if the local gradient is high in that region. The fact that you carefully consider the geometry of the road when drawing the patch suggests that perspective cues are indeed crucial for depth estimation, a very intuitive and natural idea that you also point out in your comment 2. I have seen the complementary experiments you did in response to reviewer LDnF by placing the patch in other regions, but the way you do it breaks the natural geometry of the scene and is likely to be filtered out by the MDE algorithm as a non-informative feature: it would have been more interesting to place a patch on another object (building facades, car on the side) with the same perspective constraint and observe the difference.
- **[Comment 2]** Well, the fact that attacking Midas and DeAny, learned on multiple datasets, seems to be more difficult (as your numbers in Table 3 tell) contradicts your statement: it seems that good generalization does have an impact on the attack. Your psychophysical argument may be valid for natural networks, but we are in a completely different world here. Spurious correlation is a form of overfitting, and it is not proven that the networks you attack are free of it.
- **[Comment 3]** For my part, I do not believe that "the white-box attack scenario is practical in the real world", and the fact that the current literature is limited to it is not a scientific argument (academia is full of unrealistic problems!).
However, the fact that we have access to the full architecture and parameters can be used to analyze the meaningful phenomena in a more refined way: this is not exactly what you are suggesting.
- **[Comment 4]** Thank you for these experiments that show that your approach is effective with defenses.
I will update my rating to Weak Accept mainly for the complementary experiments on defended networks.
---
Rebuttal 2:
Title: Response to Reviewer YPF7
Comment: **[comment 1]** Thanks for your valuable suggestion. **We further conduct experiments to show that MDE models rely significantly more on roads than on other areas.**
* **Settings.** Following your suggestion, we place a patch on the *building facades* and *car on the side* with the same perspective constraint. (a) The building is separated from the vehicle by one lane. To compensate for the long-distance effect, we expand the size of the building patch to cover the entire building and even its neighboring buildings. Specifically, the building patch covers 3.1% to 3.7% of the total input pixels. (b) We place an SUV next to the current lane, and the patch is placed on the side of the car closer to the current lane. The car patch covers 0.9% to 1.2% of the total input pixels. (c) For fair comparisons, we reduce the height of our road patch to 35, and the smaller road patch covers about 1.1% of the total input pixels.
* **Results.** Table C shows that **our road patch still demonstrates clear superiority in altering depth estimation.**
* **The building patches are completely ineffective** although we have extended their size. This is because buildings are typically far from the lane, and the performance of patches will degrade with increasing distance between patches and obstacles, as demonstrated in our ablation study in this paper.
* **The perspective relationship on both sides does not seem to have much effect on the model** because the results of the car patch are close to that of the right patch we reported in Table C in our first-round response to Reviewer LDnF.
* The road patch is more effective in altering depth predictions than the building and car patches.
**Table C: MRSR values when we place patches in different locations with the same perspective constraint**
| Obstacle | Building | Car | Road (height = 35)| Road (height = 70)|
| :---: | :---: | :---: |:---: |:---: |
|PE|0.001|0.104|0.406| 2.431|
|CA|0.002|0.211|0.457| 1.868|
|RO|0.002|0.323|0.493| 3.157|
**[comment 2]** Thanks for the valuable suggestion. **Our experimental results demonstrate that AdvRM effectively exposes a new vulnerability in MDE models: changing only road features can significantly alter depth predictions.**
* Due to their enhanced training strategies, Midas and DeAny might learn more from other depth features than other models, reducing the influence of AdvRM on the final predictions.
* **While Midas and DeAny mitigate our attack, they do not completely eliminate the threat.** Specifically, as detailed in Table 2, the average prediction errors for Midas and DeAny are 3.756 meters and 5.964 meters, respectively. These errors are substantial enough to cause collisions when an obstacle is at 12 meters and the vehicle is traveling at 50 km/h. Thus, the road remains an important depth cue for these models.
The suggestion to delve deeper into the reasons behind attacks or defenses is valuable. We will incorporate it in our future works to provide a more comprehensive theoretical support for our research.
**[comment 3]** We agree with you that **the practicality of white-box methods is certainly limited, especially compared with black-box methods**.
* **Practicality of white-box methods.** We acknowledge the limitations of white-box methods. Therefore, in our previous response discussing the feasibility in real life, we added *a strict prerequisite: the target model must be extractable* through hacking techniques (complete extraction) or other methods (such as model distillation [K], a technique of approximate extraction). In this case, our attack is practical because there is a complete replica or a very close version of the target model. Although extracting models is quite challenging due to various defense mechanisms, it is not entirely impossible, as demonstrated by previous examples.
* **Extending to black-box scenarios**. Given that model extraction is very difficult, we also explained how our attacks could be extended to black-box scenarios. We plan to further verify its feasibility and address potential issues such as efficiency.
Moreover, as you noted, complete access allows for a more detailed analysis of significant phenomena. Our work offers new insights into security in MDE based on this complete access and successfully reveals vulnerabilities of MDE in autonomous driving. We hope this research will inspire future work on developing more practical attacks and more robust MDE models.
[K] N. Papernot, et al. "Practical black-box attacks against machine learning." ASIACCS, 2017
**[comment 4]** Thank you for recognizing the additional experiments on defense. | Summary: This paper introduces a novel adversarial attack called Adversarial Road Marking (AdvRM) on Monocular Depth Estimation (MDE) models, which are crucial for autonomous driving systems. Unlike previous attacks that rely on placing patches on specific obstacles, AdvRM deploys optimized patches on roads, disguised as ordinary road markings. This approach exploits the discovery that various MDE models heavily depend on road regions for depth prediction.
Key Contributions:
- Discovery of Road Dependency: The paper identifies that MDE models trained for autonomous driving rely significantly on road regions when predicting depths.
- AdvRM Attack: A new attack method that places adversarial patches on roads, allowing it to affect the depth predictions for any obstacle, thus broadening the attack's impact and persistence.
- Experimental Validation: Comprehensive experiments, including both simulated and real-world scenarios, demonstrate the effectiveness, stealthiness, and robustness of AdvRM across different MDE models.
The research underscores the need for enhanced security measures in MDE models to mitigate such adversarial attacks in autonomous driving systems.
Strengths: 1. **The paper is clearly written and easy to follow**:
- The authors provide a well-structured introduction that explains the importance of Monocular Depth Estimation (MDE) in autonomous driving systems and the limitations of current adversarial attacks.
- Figures such as Figure 1 effectively illustrate the difference between previous attacks and the proposed AdvRM, making it easier for readers to grasp the concept and significance of the new method.
- Detailed explanations of methodologies and experimental setups ensure that readers can replicate the study and understand the results.
2. **The proposed method is simple but effective**:
- The AdvRM approach leverages a straightforward yet innovative idea: placing adversarial patches on the road instead of on specific obstacles. This simplicity allows the attack to be more versatile and persistent.
- The use of saliency maps to identify road regions as optimal patch locations demonstrates a clever application of existing techniques to achieve significant impact.
- The method's simplicity is further evident in its robustness; by disguising patches as ordinary road markings, the attack remains stealthy and effective over time.
3. **According to the experimental results, the proposed method can be generalized to multiple datasets**:
- The experiments conducted on eight different MDE models, including CNN-based and ViT-based architectures, show that AdvRM is effective across a variety of models.
- The paper reports high Mean Relative Shift Ratio (MRSR) values across these models, indicating consistent performance in altering depth predictions.
- The use of both simulated dataset experiments and real-world physical tests demonstrates that AdvRM can be generalized beyond a single dataset, as shown by its successful application in different environments and with various obstacles.
These strengths illustrate the paper's contribution to advancing the understanding and mitigation of adversarial attacks in autonomous driving systems.
Weaknesses: 1. **Limited to white-box attack scenario**:
- The paper assumes that the attacker has full knowledge of the target MDE model, which is a white-box scenario. This assumption limits the practical applicability of the attack since real-world attackers may not have such detailed information about the models used by autonomous vehicles.
- For example, the authors state, "We assume that the attacker possesses comprehensive knowledge regarding the target MDE model," highlighting the dependency on white-box conditions, which might not always be feasible in real-world attacks.
2. **There is no obvious technical challenge**:
- The core idea of placing adversarial patches on roads is conceptually simple and does not introduce any groundbreaking technical innovations or challenges.
- It would be helpful to talk about why this problem cannot be solved by trivial methods, and compare between the trivial methods and the proposed method.
3. **The ablation studies cannot well support the insights in section 4**:
- The ablation studies provided in the paper do not fully support the insights regarding the road-dependent nature of MDE models discussed in section 4.
- The experiments focus on evaluating the effectiveness of AdvRM but lack detailed analysis to empirically validate the specific claim that road regions are crucial for depth prediction in various MDE models.
- It would be ideal to provide more quantitative analysis to further substantiate this claim.
These weaknesses highlight areas where the paper could be improved, such as addressing practical attack scenarios, discussing the technical challenges more clearly, and providing stronger empirical support for its claims.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: As the authors mention, the proposed method can only be used as a white-box attack. It would be great if future work could figure out how to apply this method in black-box attack settings.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback, which will greatly aid in refining the paper. Below is our point-by-point response.
**[Comment 1] Restriction to white-box scenarios**
**We believe the white-box attack scenario is practical in the real world, and it is indeed the main assumption adopted in existing attacks targeting MDE.** Please refer to our global response to all reviewers for detailed explanations.
**[Comment 2] Technical challenges in our simple but effective method**
Thank you for affirming the effectiveness of our attack. **Although transferring patches from obstacles to the environment seems simple, it faces new technical challenges that previous obstacle-dependent methods cannot solve.** These technical challenges are summarized below.
* **Making the patches outside obstacles effective.** Our new strategy requires the patch to be still effective in affecting the obstacle's depth when it is outside the obstacle. This is extremely challenging. Prior obstacle-dependent patches are only valid when they are attached to the obstacle, which cannot fulfill this requirement. To address this, (a) we first innovatively deploy a gradient-based explainability technique to MDE and observe the property of road-dependency common in MDE, thereby using roads as our patching region and ensuring our attack works for different MDE models (Table 2 and 3). (b) We then design a new adversarial loss function to ensure our road patch works for most obstacle pixels (ARR is close to 100% in most cases in Table 2), instead of only modifying the depth of partial pixels as in previous works [1,2].
* **Inserting patches realistically.** Our strategy requires an additional perspective transformation during image synthesis to simulate the real imaging effect of road markings in the frame. However, prior patches appear in camera frames in an orthographic projection manner, which cannot meet this requirement. To address this, we propose a new patch insertion algorithm that exploits the road to compute the necessary parameters required by the perspective transformation, ensuring a realistic patch insertion and attack effectiveness.
* **Making road patch obstacle-agnostic.** Our strategy requires the patch to be effective against any obstacle (one or more) that might appear in front of it. However, prior patches are obstacle-dependent and cannot transfer to other unknown obstacles, which cannot satisfy this requirement. To address this, we design a new obstacle insertion pipeline for patch optimization by constructing an obstacle pool and randomly sampling one or more from this pool each time, ensuring that the optimized patch is suitable for both single-obstacle and multi-obstacle scenarios (Figure 5) with high obstacle transferability (Table 3).
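To make the perspective-transformation step above concrete, here is a minimal sketch (not the authors' actual insertion algorithm) of how a 3×3 homography could be estimated from four road-plane point correspondences and then used to map patch coordinates into the camera frame; the function names and the direct linear solve are illustrative assumptions:

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve for the 3x3 perspective transform mapping four src points to
    four dst points (standard DLT formulation with h33 fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, pt):
    """Apply the homography to a 2D point (homogeneous divide included)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)
```

In a real pipeline, the four correspondences would come from the detected road region, and the resulting homography would warp every patch pixel (e.g., via a library routine such as OpenCV's `warpPerspective`) rather than single points.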
**[Comment 3] Need for quantitative evidence on road dependency**
Thanks for the great suggestion. **We further conduct ablation studies to show that MDE models rely significantly more on roads than on other nearby areas.** Specifically, we compare the attack performance of patches pasted onto different regions: (a) our patch on the road area, as shown in Figure 1; (b) a rectangular patch to the right of the obstacle, adjacent to but not occupying the center lane, like a roadside billboard; (c) a rectangular patch above the obstacle. The size of the right patch and top patch is $(p+20) \times 70$, where $p$ is the width of the center lane, ensuring their sizes are similar to our road patch. For fair comparisons, we randomly choose Mono2 [6] as the target model and keep other hyperparameters the same.
The experimental results are given in Table C. We observe that the top and right patches are less effective than our road patch in altering the predicted depth, **further supporting our insights regarding the road-dependent nature of MDE models.** We will add the results in the revision.
**Table C: MRSR values when we place patches in different locations**
| Obstacle | Top | Right |Road|
| :---: | :---: | :---: |:---: |
| PE | 0.272 | 0.214|**2.431**|
| CA | 0.411 |0.292 |**1.868**|
| RO | 0.513 | 0.433 |**3.157**| | Rebuttal 1:
Rebuttal: We appreciate all reviewers' constructive comments, which can help improve the paper. Due to the 6000-character limit of this response, we will address **two common concerns regarding (a) restrictions to the white-box scenario and (b) experiments on baseline and defense** here. We will provide detailed responses in a separate, dedicated reply for comments and concerns related to other aspects of the paper.
**[Comment 1] Restriction to white-box attack scenarios**
We believe **the white-box attack scenario is practical in the real world**. Commercial vehicles of the same brand and model implement the same MDE model. So it is easy for an attacker to extract the MDE model from a similar car, and then launch the white-box attack based on the extracted model. The feasibility of extracting MDE models from commercial cars has been demonstrated in practice: a hacker successfully recovered the MDE model from his Tesla vehicle and released a video demonstrating how the MDE model works [A]. Driven by this, *all existing attacks against MDE assume this white-box attack scenario [1-5,14,15], and we follow the same practice*.
Nevertheless, we still believe your suggestions regarding black-box attacks are very helpful and promising. *It is possible to extend AdvRM to black-box scenarios* by replacing its patch optimization method from BIM (a white-box approach) with black-box methods, such as NES [B], which update the patch relying only on the model's inputs and outputs. However, most black-box optimization methods still face efficiency issues since they often require at least hundreds or thousands of queries to estimate the necessary gradients, making them unsuitable for large-scale attacks. We will explore efficient black-box attacks targeting MDE in the future.
[A] "Hacker shows what Tesla Full Self-Driving’s vision depth perception neural net can see", 2021
[B] A. Ilyas, et al. "Black-box adversarial attacks with limited queries and information." ICML, 2018
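As an aside, the NES-style query-based gradient estimate [B] that such a black-box extension could substitute for BIM's exact gradients can be sketched in a few lines; the loss function `f` here is a placeholder for an attacker-defined adversarial objective computed from model outputs, and the hyperparameters are illustrative:

```python
import numpy as np

def nes_gradient(f, x, sigma=0.001, n_samples=5000, rng=None):
    """Antithetic NES estimate of the gradient of f at x using only
    function queries: g ~ (1 / (2*sigma*n)) * sum_i [f(x + s*u_i) - f(x - s*u_i)] * u_i."""
    rng = rng if rng is not None else np.random.default_rng(0)
    g = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)          # random search direction
        g += (f(x + sigma * u) - f(x - sigma * u)) * u
    return g / (2 * sigma * n_samples)
```

Each estimate costs `2 * n_samples` model queries, which is exactly the efficiency issue noted above: replacing every exact gradient in an iterative patch optimizer with this estimator multiplies the query budget by thousands.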
**[Comment 2] Experiments on baseline and defense**
We conduct additional experiments to compare our AdvRM with the SOTA attack [1], further confirming that obstacle-agnostic patches (AdvRM) outperform obstacle-dependent patches (prior attacks) across different configurations. All these results will be added in the revision.
* **Effectiveness in both single-obstacle (one car) and multiple-obstacle (three pedestrians) scenarios.** Table A shows that AdvRM has higher MRSR and ARR than [1] in both scenarios. The reasons are that (a) [1] mainly affects the depth of the patch region while AdvRM affects all obstacle pixels, and (b) the patch in [1] only works for the known obstacle due to its obstacle-dependency (Figure 1) while AdvRM is obstacle-agnostic.
**Table A: Comparison between AdvRM and [1]. All obstacles (the car and pedestrians) used in testing are unknown to AdvRM. For [1], the car and only one pedestrian are known in training, and the other two pedestrians are unknown.**
|Metric|[1] (One car)|AdvRM (One car)|[1] (Three pedestrians)|AdvRM (Three pedestrians)|
|:---:|:---:|:---:|:---:|:---:|
|MRSR| 1.019|**1.868**|0.136|**2.417**|
|ARR| 0.887|**0.969**|0.168|**0.958**|
* **Effectiveness against various defense techniques.** Table B shows that (a) AdvRM still works under [C] and [D] because the two defense techniques are designed for unnatural patches while our road patch is natural; (b) AdvRM performs much better than [1] when faced with [E] because [E] only considers patches on obstacles; and (c) we can enhance AdvRM's robustness to noise, blur, and compression by incorporating them into EoT.
**Table B: MRSR values under various defenses. The obstacle is a car. AdvRM-$\star$ ($\star \in${Noise, Blur, JPEG}) denotes the enhanced AdvRM by incorporating the corresponding distortion into EoT.**
|Defense|[1]|AdvRM| AdvRM-Noise|AdvRM-Blur|AdvRM-JPEG|
|:---:|:---:|:---:|:---:| :---:| :---:|
|[C]|0.550|**1.149**|-| -| -|
|[D]|0.887|**1.279**|-| -|-|
|[E]|0.022|**0.467**|-|-|-|
|Gaussian Noise|0.299|0.416| **1.171**| -|-|
|Gaussian Blurring|0.434|0.436| -| **0.923**|-|
|JPEG Compression|0.813|0.352| -|-|**1.304**|
[C] M. Naseer, et al. "Local gradients smoothing: Defense against localized adversarial attacks". WACV, 2019
[D] J. Liu, et al. "Segment and complete: Defending object detectors against adversarial patch attacks with robust patch detection." CVPR, 2022
[E] Cheng, Zhiyuan, et al. "Adversarial training of self-supervised monocular depth estimation against physical-world attacks." ICLR, 2023 | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SOI: Scaling Down Computational Complexity by Estimating Partial States of the Model | Accept (poster) | Summary: This paper presents a method called Scattered Online Inference (SOI) aimed at reducing the computational complexity of ANNs. By applying compression and extrapolation techniques, SOI caches partial states of CNNs, allowing it to skip full model recalculation at each inference. The proposed method is positioned as a solution for real-time systems, where energy efficiency and reduced latency are critical.
Strengths: 1. The research problem is important. The focus on real-time systems and applications in consumer electronics, where computational resources are limited, adds practical value to the research.
2. The extensive experiments on audio tasks indicate the effectiveness of the proposed method.
Weaknesses: The presentation of this work is not very good:
First, the proposed method is heavily based on STMC, but STMC is not well explained in this manuscript.
Second, there is no theoretical justification of how much the computational complexity will decrease, making it difficult for reviewers to assess the trade-off between accuracy and complexity.
Third, the Mathematical Formulation part is not well described. i) Fig. 3 is simply presented without any explanation or reference to it. ii) The process is explained with too many words and no clear formulae. iii) The notation is somewhat unusual.
Technical Quality: 3
Clarity: 1
Questions for Authors: See weaknesses.
Confidence: 3
Soundness: 3
Presentation: 1
Contribution: 2
Limitations: 1. The proposed method only focuses on CNNs. Extending it to Transformer architectures, which are increasingly popular, presents significant challenges.
2. This manuscript does not discuss the method's limitations, although the authors claim that the discussion is in sec. 1.2.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper. We appreciate the time you have taken to provide your insights and the opportunity to improve our work. We are pleased that you recognize the importance of the research problem, the practical value of focusing on real-time systems and consumer electronics, and the effectiveness of our extensive experiments on audio tasks.
> First, the proposed method is heavily based on STMC, but STMC is not well explained in this manuscript.
While we did not include an extensive background on STMC, we referenced the STMC paper in our manuscript. We could not provide an additional explanation of the STMC method due to the length requirements for the paper. We understand that the reviewers / readers may not have the time to fully engage with the paper and all relevant references. We will work on fitting a brief overview of STMC to address this concern in the revised manuscript.
> Second, there is no theoretical justification of how much the computational complexity will decrease, making it difficult for reviewers to assess the trade-off between accuracy and complexity.
Our experimental results clearly demonstrate significant computational savings, which serve as a practical validation of our method. While theoretical analysis is valuable, the provided empirical evidence should suffice to illustrate the effectiveness of SOI. To further support our experimental claim, we have included the peak memory usage and inference time (as a measure of latency) in the supplementary PDF added during the rebuttal. These results are presented in Table 9 and Figure 12, continuing the enumeration from the original paper. These results were achieved using the Intel Xeon Gold 6246R CPU with a clock speed of 3.40 GHz. We will incorporate these results into the revised manuscript to provide a more comprehensive evaluation of our algorithm's efficiency. Additionally, we will make the best effort to include theoretical insights in the revised paper to strengthen our case.
> Third, the Mathematical Formulation part is not well-described. i).Fig. 3 is just put there without any explanation and reference to it. ii). There are too many words but not clear formulae to explain the process. iii). The notation is a little bit strange.
We appreciate this feedback and will add a missing reference to Figure 3 in the text. This figure represents Equations 4-6 in graphical form and was added to help readers better understand our method. We will also extend the caption of this figure to better explain its significance and how it fits into the overall methodology.
The mathematical formulation is precise and follows standard conventions in the field. The notations are carefully chosen to align with the specific needs of our methodology. However, we will review this section for any ambiguities and provide a clearer explanation.
> The proposed method only focuses on CNNs. Extending it to Transformer architectures, which are increasingly popular, presents significant challenges.
The paper clearly states its focus on CNNs, which is a deliberate choice given their relevance in the targeted applications (real-time processing). Extending to Transformer architectures is beyond the current scope and should not detract from the contributions made within the context of CNNs. Please note that we are conducting ongoing research to validate the generalizability of SOI across more tasks and architectures, including non-CNNs. Future publications will include comprehensive results and analyses from these experiments, providing additional evidence of the method's broad applicability. By publishing our results now, we hope to encourage the research community to apply SOI to their specific use cases and share their findings. This collaborative approach will help identify any limitations and further refine the method.
> This manuscript does not discuss the method's limitation, although the authors claim that the discussion is in sec. 1.2.
Section 1.2 does discuss the limitations briefly in the first paragraph and through lines 96-98. We acknowledge that this brief discussion could be expanded. We will elaborate on it in the revised manuscript.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the explanation. Most of my concerns have been addressed. As I am not an expert in this field, I will cautiously maintain the positive score. | Summary: This paper introduces a novel method called Scattered Online Inference (SOI) aimed at reducing the computational cost of convolutional neural networks (CNNs) by reusing partial states from previous inferences. The method generalizes these states over longer time periods, balancing computational efficiency and model performance. SOI leverages techniques such as strided convolution, transposed convolution, and skip connections to achieve significant reductions in computational complexity without substantial loss of accuracy. The paper presents extensive experimental results demonstrating the effectiveness of SOI in two tasks: speech separation and acoustic scene classification. In the speech separation task, the method achieved a 64.4% reduction in computational cost with only a 9.8% decrease in SI-SNRi. In the acoustic scene classification task, a 50% reduction in computational cost was achieved without any drop in accuracy. The SOI method also showcases the ability to control the trade-off between model quality and computational cost, allowing for resource- and requirement-aware tuning.
Strengths:
The paper introduces the novel Scattered Online Inference (SOI) method, which reduces the computational complexity of convolutional neural networks (CNNs) by reusing partial states from previous inferences. This originality is evident in its creative combination of strided and transposed convolutions with new techniques for managing computational resources. The paper is methodologically sound, with thorough experimental evaluations showing significant computational cost reductions without substantial performance loss. The writing is clear and well-structured, making complex concepts accessible. The significance of this work lies in its applicability to resource-constrained environments and real-time applications, offering a valuable alternative to existing methods like STMC.
Weaknesses:
While the paper presents a novel approach, there are several areas where it could be improved:
1. **Early Layer Impact on SI-SNRi**: The paper notes a significant drop in SI-SNRi when the S-CC layer is introduced too early in the network. This suggests that the method may not be robust across different network configurations. The authors could improve this by exploring adaptive strategies for placing S-CC layers based on the specific characteristics of the task and dataset.
2. **Complexity of Implementation**: The SOI method involves multiple steps and different types of convolution operations, which may make implementation complex and potentially error-prone. Providing a more detailed implementation guide or open-sourcing the code could help alleviate this concern and improve reproducibility.
3. **Limited Scope of Evaluation**: The experiments focus primarily on speech separation and acoustic scene classification. While these are relevant tasks, the generalizability of SOI to other types of tasks and datasets is not thoroughly explored. Expanding the evaluation to include more diverse tasks and datasets could strengthen the paper’s claims about the broad applicability of SOI.
4. **Quantitative Analysis of Trade-offs**: The paper mentions the trade-off between model quality and computational cost but lacks a detailed quantitative analysis of this trade-off across different configurations. Including more comprehensive metrics and visualizations of these trade-offs could provide deeper insights and help practitioners make more informed decisions when applying SOI.
5. **Parameter Sensitivity**: The paper does not discuss the sensitivity of SOI’s performance to various hyperparameters, such as the number and position of S-CC and SS-CC layers. A sensitivity analysis could reveal potential limitations and guide users in tuning these parameters for optimal performance.
Addressing these weaknesses could significantly enhance the impact and usability of the proposed SOI method.
Technical Quality: 2
Clarity: 2
Questions for Authors: Please refer to weakness.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: No.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review of our paper. We appreciate your feedback and the opportunity to clarify and improve our work. We are pleased that you recognize the novelty and significance of the SOI method, as well as the thoroughness of our experimental evaluation and the clarity of our writing. Your acknowledgment of our method's potential impact on resource-constrained environments and real-time applications is greatly appreciated.
**Early Layer Impact on SI-SNRi**: The drop in SI-SNRi when the S-CC layer is introduced early in the network can be attributed to the following reasons:
- In our tests, the output of the S-CC layer is extrapolated by duplicating the output of this layer. After concatenation with the current data (as described in Equation 6), the network needs time to correctly merge these states. In the U-Net model, this concatenation happens in the decoder, meaning the sooner we introduce SOI, the less time the model has to recover.
- Early layers in convolutional neural networks are responsible for capturing fundamental features such as edges and textures. Introducing the S-CC layer too early can disrupt this essential feature extraction process, leading to a less informative representation being passed to subsequent layers. This disruption is more impactful because foundational features are crucial for the overall performance of the network.
- Later layers in the network generally capture more complex and higher-level features that benefit more from the memorization provided by the S-CC layer. Introducing S-CC at these stages allows the network to maintain a balance between computational efficiency and the preservation of important contextual information.
Regarding the robustness of the method across different network configurations, it is important to note that the observed performance drop does not imply that the method is fundamentally flawed but rather highlights the importance of carefully selecting the placement of the S-CC layer. We acknowledge that the placement of the S-CC layer is critical and have provided guidelines and empirical evidence in the manuscript to help practitioners make informed decisions about where to introduce the S-CC layer to achieve optimal performance. We would also like to note that the SOI's flexibility and adaptability allow for fine-tuning based on the specific characteristics of the task and the network.
We will include these points in the revised manuscript to provide clearer guidance on the effective use of the S-CC layer in different network configurations. We will also investigate adaptive strategies for placing the S-CC layer in our future work - thank you for the excellent suggestion! As you also noted, the implementation of SOI is quite complex already, so we fear that adding it right now might further complicate the paper.
**Complexity and Implementation**: We acknowledge the complexity of the SOI implementation and recognize that it might be challenging. We have attempted to counteract this by providing a detailed depiction of the newly introduced layers in Figure 1 and illustrating how inference will look in the simple predefined pattern of odd and even inference in Figure 2. We realize that several implementation details were omitted, so we will add an extensive implementation guide in the appendix of the revised manuscript. Thank you for pointing out this issue.
**Limited Scope of Evaluation**: In Appendix E, we provided an evaluation of our method on a video action recognition task using ResNets and MoviNets. We did not include this experiment in the main text because our lab focuses on audio data. Speech separation is actually one of the most challenging tasks for our method, as it involves dealing with fast-changing input signals.
The SOI method is designed to be integrated into a variety of convolutional neural network (CNN) architectures. Our experiments with different models (U-Net, GhostNet, and their variants) have demonstrated its effectiveness. We believe that other popular architectures, such as ResNet, VGG, and Inception, can also benefit from the SOI approach. The SOI method includes parameters that can be tuned based on the specific requirements of different tasks and architectures.
We are conducting ongoing research to validate the generalizability of SOI across more tasks and architectures, including non-CNNs. Future publications will include comprehensive results and analyses from these experiments, providing additional evidence of the method's broad applicability. By publishing our results now, we hope to encourage the research community to apply SOI to their specific use cases and share their findings. This collaborative approach will help identify any limitations and further refine the method.
**Quantitative Analysis of Trade-offs**: When composing the paper, we had to consider our limited resources and make editorial choices to present the method effectively. We decided to focus on a detailed description of the algorithm itself, allowing ML practitioners to replicate and extend our study. While we recognize the benefits of conducting and presenting a detailed analysis of the trade-offs, we decided to concentrate our efforts on a well-described algorithm and a summary of our results. The paper's description of the SOI algorithm and our brief summarization of the results will constitute a well-defined baseline for ML practitioners to start from.
**Parameter Sensitivity**: In the paper, we briefly present our findings on the dependency of SOI's performance on the number and placement of the novel layers in the section describing the results of our algorithm for the speech separation task. We provided some general conclusions on this issue. We agree that a thorough sensitivity analysis could provide important insights into hyperparameter optimization routines, and we encourage ML practitioners to experiment with these parameters.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their efforts in clarifying and revising their manuscript.
Now that I have a deeper understanding of this paper, I am willing to maintain my original positive score and provide a higher confidence rating.
I'm happy to discuss my opinion with other reviewers and area chairs. | Summary: The authors propose a novel method that reduces the computational cost of regular ANN models to achieve better online inference efficiency. The core technique named Scattered Online Inference (SOI) is able to reduce computational cost with partial predictions.
Strengths: - The proposed method is designed to avoid redundant computation in convolution layers.
- Brings little learning quality drop.
- Extensive experiments.
Weaknesses: - The organization of this paper makes it very difficult for readers to appreciate the true merit of this paper.
- It is unclear which part is reusing STMC and which part is uniquely proposed by SOI.
- No backgrounds are provided for STMC.
- To show that the proposed algorithm achieves higher efficiency, peak memory usage and latency should be provided.
Technical Quality: 3
Clarity: 2
Questions for Authors: - What are the removed frames? How to decide the removal?
- A better explanation of Figure 2 C&D could be provided in conjunction with section 2.2. Which frames are predicted in C&D?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Limitations are discussed in the paper. The proposed technique is mainly limited to time-series data.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback on our paper. We appreciate your constructive comments, which will help us improve the quality and clarity of our work. We are glad to hear that you found the method's design, minimal impact on learning quality, and extensive experiments to be strengths of our paper.
> What are the removed frames? How to decide the removal?
The removed frames are those cropped after a layer during training to ensure that the network operates in a causal manner. This ensures proper alignment of the frames during training and makes it possible to achieve the desired inference pattern during model inference. The cropping is essential to prevent an increase in latency. To retain causality within the network, it is crucial to remove all rightmost extrapolated frames, as illustrated in Figure 1C. The number of removed frames can be adjusted to introduce more predictive factors into the network, subsequently reducing latency, peak computational complexity, and peak memory usage.
> A better explanation of Figure 2 C&D could be provided in conjunction with section 2.2. Which frames are predicted in C&D?
These subfigures illustrate the inference patterns of the Partially Predictive (PP) SOI. In Figure 2C, the even inference of PP SOI updates all network states (both blue and orange), whereas in Figure 2D, the odd inference utilizes buffered partial states of the second layer (previously blue) from the previous inference.
The frames predicted are those that have been extrapolated (the white ones) based on previously observed frames. The extrapolated frames are a generalized output of the layer covering a larger number of input frames (a longer time span). Therefore, these frames represent a prediction of this general state for frames that the model has not yet seen. This approach allows the network to maintain continuity and causality without recalculating the entire sequence.
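The alternating full/partial inference pattern described above can be illustrated with a toy cache; this is a deliberate simplification (the class name and the list-of-frames representation are ours), whereas the real method operates on convolutional feature maps with stride-aware buffering:

```python
class PartialStateCache:
    """Toy sketch of the PP-SOI pattern: even inferences recompute and
    buffer the layer output; odd inferences reuse the buffered state and
    extrapolate it by duplicating the last frame."""
    def __init__(self, layer_fn):
        self.layer_fn = layer_fn
        self.state = None
        self.step = 0

    def __call__(self, frames):
        if self.step % 2 == 0 or self.state is None:
            self.state = self.layer_fn(frames)          # full update (even step)
            out = self.state
        else:
            out = self.state + [self.state[-1]]          # reuse cache, duplicate last frame
        self.step += 1
        return out
```

On odd steps the layer function is never invoked, which is where the computational saving comes from; the duplicated last frame stands in for the extrapolated partial state that downstream layers then merge with current data.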
> To show that the proposed algorithm achieves higher efficiency, peak memory usage and latency should be provided.
Thank you for your suggestion regarding the benefits of providing peak memory usage and latency to demonstrate the higher efficiency of our proposed algorithm.
In response, we have included the peak memory usage and inference time (as a measure of latency) in the supplementary PDF added during the rebuttal. These results are presented in Table 9 and Figure 12, continuing the enumeration from the original paper. These results were achieved using the Intel Xeon Gold 6246R CPU with a clock speed of 3.40 GHz. We will incorporate these results into the revised manuscript to provide a more comprehensive evaluation of our algorithm's efficiency.
We hope these additional results strengthen our claims and provide the necessary evidence to support the effectiveness of our method.
> It is unclear which part is reusing STMC and which part is uniquely proposed by SOI.
Although we were inspired by STMC, SOI is quite different. STMC is a method that removes redundant calculations in the online inference of CNNs. With SOI, we aimed to reduce computational complexity by not processing data through the whole model on each inference but still having the ability to update the output of a model on each incoming input frame. SOI offers different handling of strided convolution. Although STMC can handle strided convolution to decrease computational complexity, the authors stated in their paper that the usage of stride requires buffering multiple states, which unfortunately increases memory requirements exponentially (due to the increase in the number of shift registers).
> No backgrounds are provided for STMC.
While we did not include an extensive background on STMC, we referenced the STMC paper in our manuscript. We could not provide an additional explanation of the STMC method due to the length requirements for a paper. We understand that the reviewers / readers may not have the time to fully engage with the paper and all relevant references. We will work on fitting a brief overview of STMC to address this concern in the revised manuscript.
> The organization of this paper makes it very difficult for readers to appreciate the true merit of this paper.
The structure of our paper follows a standard format: introduction, method description, experiments description, results, and conclusion. We made every effort to ensure a natural flow of ideas and make sure that each section builds on the previous one.
---
Rebuttal Comment 1.1:
Comment: - Thanks for the clarification. The novelty of this method is now clear. However, I still find the visualization in Figure 2 hard to understand. The authors can consider giving better and more detailed visualization for readers in later versions.
- The provided peak memory usage and latency indeed show the benefit of this proposed method.
The score has been modified accordingly. | Summary: The authors introduce a novel method called Scattered Online Inference (SOI) aimed at reducing the computational cost of convolutional neural networks (CNNs). This method leverages the reuse of network partial states from previous inferences, thereby generalizing these states over extended periods. SOI enables extrapolation, particularly enhancing processing speed in deeper layers. By applying compression techniques, SOI generates more generalized inner partial states within the CNN, allowing the system to skip full model recalculations for each inference.
Their contributions include offering a distinct treatment of strided layers, resulting in a new and efficient inference pattern. This approach modifies the inference pattern of the network to skip the recalculation of certain layers according to a predetermined scheme. Their method significantly reduces the computational cost of the CNN model with only a negligible decrease in performance. These optimizations are achieved with minimal changes to the architecture, making the method suitable for tasks where reducing energy consumption or processing time is crucial.
Strengths: From my perspective, the authors have addressed each dimension of originality, quality, clarity, and significance.
1.) Their proposed SOI approach is inspired by and built upon the Short-Term Memory Convolution (STMC) approach.
2.) In order to support their approach, they have performed extensive experiments with different architectures (U-Net, GhostNet and their variants) on two benchmark datasets: the Interspeech 2020 dataset and the TAU Urban Acoustic Scene 2020 Mobile dataset, and also compared their results with a state-of-the-art model.
3.) They achieved a computational cost reduction of 50% without any drop in metrics in the acoustic scene classification task and a 64.4% reduction in computational cost for the speech separation task.
4.) Through extensive experiments, they showed the ability of SOI to control the trade-off between model’s quality and computational cost, allowing for resource- and requirement-aware tuning.
5.) They have shared all the information regarding experiments and each hyperparameter used that will be useful to reproduce the results.
Weaknesses: The limitation of the paper is discussed in one line in subsection 1.2, stating that it incurs a negligible decrease in the model's performance. From my perspective, it should have been more explanatory. For the speech separation task, there was a 9.8% reduction in metrics. I think this reduction is not negligible. I believe it would have been better if the authors had clarified or provided more supporting statements regarding the results.
Technical Quality: 3
Clarity: 3
Questions for Authors: I would also like to see different versions of the ResNet model, since they too have skip connections; how much does the computational cost get reduced using SOI?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Not clearly, but yes, up to a certain degree.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review! We appreciate your detailed feedback and the opportunity to address your comments and concerns. We are pleased to see that you recognize the originality, quality, clarity, and significance of our work. Your positive feedback on our approach, extensive experiments, and detailed presentation of results is highly appreciated.
> I would also like to see different versions of the ResNet model, since they too have skip connections; how much does the computational cost get reduced using SOI?
In our research, we have applied SOI to a 3D version of ResNet-10 and MoviNets for the video action recognition task. To see the results of our experiments, please refer to Appendix E. Nevertheless, we believe this is a very fitting suggestion. ResNet is a widely known and often-used architecture, and we recognize the value of extending our research to cover ResNet models as well. Unfortunately, we could not conduct additional experiments on such short notice. We plan to test SOI on various ResNet architectures and include the preliminary results in the revised manuscript.
> The limitation of paper is discussed in one line in subsection 1.2 that it incurs a neglible decrease in model's performance. From my perspective, it should have been more explanatory. For speech separation task, there was 9.8% of reduction in metrics. I think, this reduction is not negligible. I believe it would have been better if authors had clarified or mentioned more supporting statements regarding the results.
We understand your concerns regarding the 9.8% reduction in metrics for the speech separation task. In the context of our work, we find such a reduction to be negligible for the following reasons:
- Our primary focus is on real-time systems where computational resources are limited and latency is a critical factor. In these scenarios, achieving a significant reduction in computational cost while maintaining performance close to the original model is a considerable achievement. Therefore, the reduction in quality must always be considered in relation to the reduction in computational cost. The decrease in performance is often an acceptable trade-off for the substantial gains in efficiency and reduced latency. In our case, a 9.8% drop in quality is an acceptable price for a 64.4% reduction in computational cost.
- When comparing our results to the other state-of-the-art methods, the observed 9.8% performance reduction can be considered relatively small (e.g., see Figure 6 for our results with pruning). Many popular techniques that aim to reduce computational complexity or improve efficiency often degrade the performance to a much higher extent. Our method, on the other hand, manages to minimize the reduction in quality while providing substantial computational benefits.
- In practical applications, the difference in performance (9.8% reduction) might not significantly impact the overall user experience, especially when balanced against the improved efficiency and reduced resource consumption. For many real-time applications, the ability to deploy a model with lower computational requirements can be more critical than a slight reduction in performance metrics.
- The SOI method allows for flexible tuning between computational cost and model performance. This means that, depending on the specific requirements and constraints of the application, users can adjust the trade-off to either prioritize efficiency or performance. The 9.8% reduction observed in our experiments represents a particular balance point that can be adjusted if needed.
We will revise the subsection 1.2 to include this detailed explanation and to provide a clearer rationale for why we consider the decrease in performance to be negligible in the context of the broader benefits offered by our method.
Thank you once again for your detailed review and constructive suggestions. We are committed to improving our paper based on your feedback and look forward to presenting a stronger and more comprehensive manuscript.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for clarifying my concerns in detail and incorporating them into the manuscript in a detailed manner. | Rebuttal 1:
Rebuttal: In the attached PDF, we provided peak memory footprint and average inference time measurements for a single S-CC layer in our U-Net for the speech separation task. These results were achieved using the Intel Xeon Gold 6246R CPU with a clock speed of 3.40 GHz. The TFLite Model Benchmark Tool with C++ binary was adopted as the measurement tool. We will incorporate these results into the revised manuscript to provide a more comprehensive evaluation of our algorithm's efficiency.
We hope these additional results strengthen our claims and provide the necessary evidence to support the effectiveness of our method.
Pdf: /pdf/1d686cf8849024c67912ad21dcbf88604a16bf7b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Generalized Eigenvalue Problems with Generative Priors | Accept (poster) | Summary: This paper provides theoretical guarantees for the optimal solution of the generalised eigenvalue problem with generative priors. It also designs an algorithm to approximate the optimal solution within some guaranteed distance.
Strengths: The obtained results look new and rigorously proved.
Weaknesses: + Assumption 2.4 looks too strong and not very convincing. As shown in (12), this condition is even stronger than [72, Assumption 1]. At first sight, I don't see any pair $(\mathbf{E}, \mathbf{F})$ which satisfies this assumption except the trivial one $\mathbf{E}=0$ and $\mathbf{F}=0$. Please find at least one class of $(\mathbf{E}, \mathbf{F})$ which satisfies this assumption.
+ Similarly, it is not very interesting to only state the three conditions (21), (22), (23) in Theorem 3.3 without identifying (at least) one class of matrices $(\mathbf{A}, \mathbf{B})$ which satisfies these conditions. Please give some classes of $(\mathbf{A}, \mathbf{B})$ such that these conditions hold. It is more interesting if $\mathbf{B}$ is not the trivial one, i.e., $\mathbf{B}=\mathbf{I}_n$, as in the experiment, since you are considering the generalised eigenvalue problem.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please address my comments in the weakness. Additional notes:
+ Typo in (11): I think the constant should be $2C$ (not $\sqrt{2} C$).
+ Typo: The second and third terms in (34) and (35) should be $(\mathbf{\hat{u}}-\mathbf{w})^T \mathbf{\hat{B}}\mathbf{w}$ and $\mathbf{w}^T \mathbf{\hat{B}}(\mathbf{\hat{u}}-\mathbf{w})$.
+ Typo: The second term in (49) should be $-$.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: This is a theoretical research paper, hence the negative society impact of this work is not direct. The authors mention some technical limitations of this work though assumptions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your helpful feedback and comments. Our responses to the main concerns are given as follows.
(**Assumption 2.4 looks too strong and not very convincing. At the first sight, I don't see any pair $(\mathbf{E}, \mathbf{F})$ which satisfies this assumption except the trivial one $\mathbf{E} = \mathbf{0}$ and $\mathbf{F} = \mathbf{0}$. Please find at least one class of $(\mathbf{E}, \mathbf{F})$ which satisfies this assumption**) As stated in the paragraph preceding Assumption 2.4, similarly to the proof for the spiked covariance model in [45, Appendix B.1], it can be readily demonstrated that for the $(\hat{\mathbf{A}}, \hat{\mathbf{B}})$ constructed in Section 4.2, the corresponding perturbation matrices $\mathbf{E}$ and $\mathbf{F}$ satisfy Assumption 2.4 with high probability. More specifically, for Eq. (29) with $$\hat{\mathbf{A}} = \frac{1}{m}\sum_{i=1}^m(2\gamma_i\mathbf{v}^*+\mathbf{z}_i)(2\gamma_i\mathbf{v}^*+\mathbf{z}_i)^\top,$$
and $\hat{\mathbf{B}} =\frac{1}{m}\sum_{i=1}^m \mathbf{w}_i\mathbf{w}_i^\top$, we have $\mathbb{E}[\hat{\mathbf{A}}] = 4\mathbf{v}^*(\mathbf{v}^*)^\top + \mathbf{I}_n$ and $\mathbb{E}[\hat{\mathbf{B}}] = \mathbf{I}_n$;
where $\mathbf{v}^*$ is a unit vector; $\gamma_i \sim \mathcal{N}(0,1)$, $\mathbf{z}_i \sim \mathcal{N}(\mathbf{0},\mathbf{I}_n)$, $\mathbf{w}_i\sim \mathcal{N}(\mathbf{0},\mathbf{I}_n)$, and they are mutually independent. If setting $\mathbf{A} = \mathbb{E}[\hat{\mathbf{A}}]$ and $\mathbf{E} = \hat{\mathbf{A}} - \mathbf{A} \neq \mathbf{0}$, and setting $\mathbf{B} = \mathbb{E}[\hat{\mathbf{B}}]$ and $\mathbf{F} = \hat{\mathbf{B}} - \mathbf{B} \neq \mathbf{0}$, we obtain for **any** $\mathbf{s}_1 \in S_1$, $\mathbf{s}_2 \in S_2$,
$$\left|\mathbf{s}_1^\top \mathbf{E} \mathbf{s}_2\right|$$
$$ = \left|\frac{1}{m}\sum_{i=1}^m \left(\mathbf{s}_1^\top(2\gamma_i\mathbf{v}^*+\mathbf{z}_i) \cdot \mathbf{s}_2^\top(2\gamma_i\mathbf{v}^*+\mathbf{z}_i) - \mathbb{E}\left[\mathbf{s}_1^\top(2\gamma_i\mathbf{v}^*+\mathbf{z}_i) \cdot\mathbf{s}_2^\top(2\gamma_i\mathbf{v}^*+\mathbf{z}_i)\right]\right)\right|.$$
Since $\mathbf{s}_1^\top(2\gamma_i\mathbf{v}^*+\mathbf{z}_i) \mathbf{s}_2^\top(2\gamma_i\mathbf{v}^*+\mathbf{z}_i)$ are sub-exponential and independent (with the sub-exponential norm being upper bounded by $C\\|\mathbf{s}_1\\|_2\cdot \\|\mathbf{s}_2\\|_2$, where $C$ is an absolute constant), using the concentration inequality for the sum of independent sub-exponential random variables [75, Proposition 5.16], we obtain for any $u > 0$ satisfying $m = \Omega(u)$, the following holds with probability $1-e^{-\Omega(u)}$:
$$\left|\mathbf{s}_1^\top \mathbf{E} \mathbf{s}_2\right| \le C \\|\mathbf{s}_1\\|_2\cdot \\|\mathbf{s}_2\\|_2 \cdot\sqrt{\frac{u}{m}}.$$
Taking a union bound over **all** $\mathbf{s}_1 \in S_1$, $\mathbf{s}_2 \in S_2$, and setting $u = \log (|S_1|\cdot |S_2|)$, we obtain with probability $1-e^{-\Omega(\log (|S_1|\cdot |S_2|))}$ that the following holds for **all** $\mathbf{s}_1 \in S_1$, $\mathbf{s}_2 \in S_2$:
$$\left|\mathbf{s}_1^\top \mathbf{E} \mathbf{s}_2\right| \le C \\|\mathbf{s}_1\\|_2\cdot \\|\mathbf{s}_2\\|_2 \cdot \sqrt{\frac{\log (|S_1|\cdot |S_2|)}{m}}.$$
Please refer to [45, Appendix B.1] for more technical details. Similarly, for $\big|\mathbf{s}_1^\top \mathbf{F} \mathbf{s}_2\big|$, we can obtain the upper bound in Eq. (7).
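As a purely illustrative sanity check (not part of the rebuttal; the spike direction, dimensions, and sample size below are arbitrary choices of ours), one can verify numerically that for the spiked model above, the bilinear form $\big|\mathbf{s}_1^\top \mathbf{E} \mathbf{s}_2\big|$ for fixed unit vectors indeed shrinks at roughly the $\sqrt{1/m}$ rate:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 20000

# Spiked covariance samples from Eq. (29): each row is 2*gamma_i*v* + z_i
v = np.zeros(n); v[0] = 1.0                 # a unit spike direction v* (arbitrary)
gamma = rng.standard_normal(m)              # gamma_i ~ N(0, 1)
Z = rng.standard_normal((m, n))             # z_i ~ N(0, I_n)
A_samples = 2.0 * gamma[:, None] * v + Z

A_hat = A_samples.T @ A_samples / m         # empirical second-moment matrix
A = 4.0 * np.outer(v, v) + np.eye(n)        # E[A_hat] = 4 v* v*^T + I_n
E = A_hat - A                               # perturbation matrix E (nonzero)

# For fixed unit vectors s1, s2, |s1^T E s2| concentrates at the O(sqrt(1/m)) scale
s1 = rng.standard_normal(n); s1 /= np.linalg.norm(s1)
s2 = rng.standard_normal(n); s2 /= np.linalg.norm(s2)
print(abs(s1 @ E @ s2), np.sqrt(1.0 / m))
```

With large $m$, the printed bilinear form is on the order of $\sqrt{1/m}$, matching the sub-exponential concentration bound above for a single pair $(\mathbf{s}_1, \mathbf{s}_2)$ (the union bound over $S_1 \times S_2$ then adds the logarithmic factor).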
(**It is not very interesting to only state the three conditions (21), (22), (23) in Theorem 3.3 without identifying (at least) a class of matrices $(\mathbf{A}, \mathbf{B})$ which satisfies these conditions. It is more interesting if $\mathbf{B}$ is not the trivial one, i.e., $\mathbf{B}=\mathbf{I}_n$, as in the experiment since you are considering the generalised eigenvalue problem**) We discuss in Remark 3.4 when the three conditions are satisfied. Specifically, we demonstrate that when $\kappa(\mathbf{B}) = \lambda_{\max}(\mathbf{B})/\lambda_{\min}(\mathbf{B})$ is close to 1 (or more precisely, $\kappa(\mathbf{B})<c'$ for some constant $c'>1$; note that we do not make any attempt to refine the constant $c'$ here, and we believe that our theorem actually holds under a more relaxed condition), the three conditions can be satisfied. Since this work is primarily theoretical, we follow [8] to only conduct the experiments for the case where $\mathbf{B}=\mathbf{I}_n$. We have added experiments for the case where $\hat{\mathbf{B}}$ in Eq. (29) is modified so that $\mathbf{w}_i \sim \mathcal{N}(\mathbf{0}, \mathrm{Diag}(2,1,\ldots,1))$ (other settings of the experiments remain unchanged), and thus $\mathbf{B} = \mathbb{E}[\hat{\mathbf{B}}] = \mathrm{Diag}(2,1,\ldots,1)$ and $\kappa(\mathbf{B}) = 2$. The quantitative results are as follows:
| $m$ | Rifle20 | Rifle100 | PPower | PRFM |
| :---: | :---: | :---: | :---: | :---: |
| 100 | 0.17 $\pm$ 0.02 | 0.27 $\pm$ 0.01 | 0.75 $\pm$ 0.03 | 0.80 $\pm$ 0.02 |
| 200 | 0.28 $\pm$ 0.01 | 0.43 $\pm$ 0.01 | 0.78 $\pm$ 0.01 | 0.86 $\pm$ 0.02 |
| 300 | 0.32 $\pm$ 0.01 | 0.50 $\pm$ 0.01 | 0.79 $\pm$ 0.01 | 0.91 $\pm$ 0.01 |
From the above results, we observe that our PRFM method also performs well in this case. We will incorporate the corresponding numerical results in the revised version.
(**Typos**) Thanks for pointing out these typos. In the revised version, we will correct them and meticulously check the manuscript to prevent typos.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer 5DuU,
We have taken your initial feedback into meticulous consideration in our responses. Could you please check whether our responses have appropriately addressed your concerns? If so, could you please kindly consider increasing your initial score accordingly? Certainly, we are more than happy to answer your further questions.
Thank you for your valuable time and dedicated effort in reviewing our work!
Best Regards,
Authors | Summary: The paper studies generalized eigenvalue problems under generative priors. They show that under suitable conditions on the prior assumptions on the perturbation matrices, the optimal solution vector of the corresponding optimization problem attains the statistically optimal rate. Furthermore, they provide an algorithm that under assumptions on the signal strength, step size, and initialization vector, converges linearly to a statistically optimal solution vector. They further supplement their theoretical analysis with experiments on the MNIST dataset, while comparing with other approaches for solving generative generalized eigenvalue problems.
Strengths: - The paper generalizes analysis of PCA, Fisher's Discriminant Analysis, and Canonical Correlation Analysis with generative priors and presents a unified view.
- The optimization problem (6) and the corresponding Algorithm (1), seem to attain the statistically optimal rate under certain assumptions on the perturbation matrices, signal strength, step-sizes, and smoothness of the generative priors.
- The numerical experiments complement their theoretical results and show a clear empirical improvement over existing methods.
Weaknesses: - Regarding Computational Efficiency: As acknowledged by the authors, projection onto the prior set may not be efficiently achievable in general and approximations may be required. Theoretical analysis taking the approximation into account would be good to see. Furthermore, even with the current approximate projection based on gradient descent, I would like to see how the proposed algorithm compares in compute time to other methods such as PPower.
- Regarding Eq (22): It seems to me that the condition assumed here essentially implies a local convergence guarantee. In Line 252, the authors mention that if $\nu_{0}$ is close to 1, then the conditions (21), (22) are easy to verify. However, achieving $\nu_{0}$ close to 1 itself seems like a hard problem since it would imply that you already have a pretty good initialization to start with, which may be hard to find in high dimensions.
Technical Quality: 4
Clarity: 3
Questions for Authors: - Regarding practical examples/instances: Apart from the numerical experiments pointed out in the paper, are there other scenarios where assuming existence of such generative priors is reasonable/well-accepted?
- Regarding Assumption 2.4. : Outside the spiked covariance model, can the authors specify other covariance matrices where this assumption is satisfied?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The primary limitation seems to be assuming the existence of an exact projection step. The authors address this adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your recognition of this paper as well as the beneficial comments and questions. Our responses to the major concerns are as follows.
(**Theoretical analysis taking the approximation into account would be good to see**) Similar to previous works such as [45, 59, 65], we assume exact projection in the theoretical analysis. We agree that theoretical analysis taking the approximation into account is an intriguing direction and we leave it for future research.
(**Even with the current approximate projection based on gradient descent, I would like to see how the proposed algorithm compares in compute time to other methods such as PPower**) The projection step onto the range of the generative model occupies the majority of the running time of the algorithm. Consequently, the compute time of PRFM is approximately the same as that of PPower, with only a minor increase due to more matrix-vector multiplications in each iteration. In the following table, we present the running time (in seconds) for PRFM and PPower on MNIST with $\hat{\mathbf{A}}$ and $\hat{\mathbf{B}}$ generated from Eq. (29) (the number of iterations for both algorithms is set to 10; the running time is averaged over 10 test images and 10 restarts). Note that we run the algorithms given $\hat{\mathbf{A}}, \hat{\mathbf{B}} \in \mathbb{R}^{n\times n}$ (or $\mathbf{V} \in \mathbb{R}^{n\times n}$ for PPower), so the running time does not increase with $m$.
| $m$ | PPower | PRFM |
| :---: | :---: | :---: |
| 100 | 38.64 $\pm$ 0.09 | 39.15 $\pm$ 0.35 |
| 200 | 38.59 $\pm$ 0.09 | 39.25 $\pm$ 0.31 |
| 300 | 38.61 $\pm$ 0.15 | 39.03 $\pm$ 0.33 |
(**Achieving $\nu_0$ close to 1 itself seems like a hard problem since it would imply that you already have a pretty good initialization to start with, which may be hard to find in high dimensions**) Thanks for the comment. While this initialization condition might seem restrictive, it seems to be the mildest initialization condition that we can assume for the case of generative priors. The reason is as follows: For sparse priors, the initialization vector can be obtained via a convex program (see [72]). Regrettably, for a generative model, this idea no longer applies since, without additional assumptions, the problem cannot be relaxed to a convex optimization problem. We observe that similar initialization conditions have been imposed in previous works such as [31, 45]. Additionally, in practice, we do not enforce such an initialization condition to hold and we discover that simply setting the initial vector to be $[1,1,\ldots,1]^\top/\sqrt{n} \in \mathbb{R}^n$ in the numerical simulations performs well.
(**Apart from the numerical experiments pointed out in the paper, are there other scenarios where assuming the existence of such generative priors is reasonable/well-accepted**) From the successful applications of deep generative models in diverse fields, there are various scenarios where assuming the existence of such generative priors is reasonable and well-accepted. For instance, robust compressed sensing MRI with generative priors has been investigated in
Jalal, A., Arvinte, M., Daras, G., Price, E., Dimakis, A.G. and Tamir, J., 2021. Robust compressed sensing MRI with deep generative priors. Advances in Neural Information Processing Systems, 34, pp.14938-14954.
Additionally, an approach similar to the closely related PPower method has been applied to Interferometric Passive Radar Imaging in
Kazemi, S., Yonel, B. and Yazici, B., 2023. Interferometric Passive Radar Imaging With Deep Denoising Priors. IEEE Transactions on Aerospace and Electronic Systems, 60(1), pp.145-156.
For more real applications of deep generative models in inverse problems in imaging, please refer to the following popular survey paper [569 citations]:
Ongie, G., Jalal, A., Metzler, C.A., Baraniuk, R.G., Dimakis, A.G. and Willett, R., 2020. Deep learning techniques for inverse problems in imaging. IEEE Journal on Selected Areas in Information Theory, 1(1), pp.39-56.
(**Outside the spiked covariance model, can the authors specify other covariance matrices where Assumption 2.4 is satisfied**) Assumption 2.4 is also satisfied with high probability by the matrices corresponding to the phase retrieval model (see Eq. (30)). Additionally, similar to what is mentioned in [72], it is straightforward to verify that under generative priors, for various statistical models that can be formulated as a GEP such as CCA and FDA, Assumption 2.4 will be satisfied with high probability.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. I maintain my current score. | Summary: This submission proposes a way to solve Generalized Eigenproblems with a constraint on the eigenvectors to be in the range of some generative model. On top of a simple algorithm, the paper proposes bound on the optimal solution and some experiments on some toy data.
Strengths: Generalized eigenproblems are a cornerstone of many ML problems and their study could have a great impact.
The idea of constraining the solution with the output of a generative model (although a bit unclear) has a good potential.
Weaknesses: The presentation of the problem could be improved (for example, it is never said that Eq. 3 is a Rayleigh quotient), and Section 2.2 gathers folklore results that are not useful (in my opinion).
It is also difficult to follow how Algorithm 1 is derived from Section 3.
The numerical experiments are carried out on toy data, which makes it difficult to understand how the method could be used in practice.
Technical Quality: 2
Clarity: 2
Questions for Authors: I am not quite sure I understand how the generative prior (and the projection onto the range of a pre-trained model) is not redundant with the optimization problem itself. If the data are distributed according to some distribution, then the covariance of the data should be in this range. There is surely something that I missed and it was not clear from the application, but could you clarify this point?
In Theorem 3.3, besides being a technical condition, is there any impact of the condition $\gamma_1 + \gamma_2 < 2$? Is it a reasonable constraint?
In the main results, the projection $\mathcal{P}_G(\cdot)$ has to be exact. In the experiments, it is approximated. How does this approximation impact the algorithm and the numerical results?
The proposed algorithm is somewhat similar to a projected power method. How stable is it when $k$ grows? (The power method is known to behave poorly as $k$ grows.)
One possible solution would be to solve the problem as $\max_{\mathbf{U}} \mathrm{Tr}(\mathbf{U}^\top \mathbf{A} \mathbf{U})$ with $\mathbf{U}$ in a generalized Stiefel manifold (defined with $\mathbf{B}$), and the generative prior could be enforced with a regularization. This should prove more stable than the power-method-like approach.
Section C of the appendix is very interesting and should be included in the paper (although the argument about having a proof that is bigger than the competitors is clearly not a valid argument …)
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Since the paper involves the use of a generative model, it could have some negative implication for the society. A discussion on that topic should be included.
Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent', 'Ethics review needed: Data quality and representativeness', 'Ethics review needed: Safety and security', 'Ethics review needed: Discrimination, bias, and fairness']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your helpful feedback and suggestions. Our responses to the main concerns are given as follows.
(**The presentation of the problem could be improved**) Thank you for the comments. We will undertake revisions on the presentation, encompassing the following aspects: 1) Mention that Eq. (3) is a Rayleigh quotient; 2) Relocate Section 2.2 to the Appendix; 3) Incorporate more explanations for the derivation of Algorithm 1; 4) Include Section C of the Appendix in the main paper (and we will meticulously modify it to prevent overstated claims).
(**The numerical experiments are carried out on toy data and it makes it difficult to understand how it could be used in practice**) We have also conducted the experiments for the CelebA dataset (which comprises over 200,000 face images of celebrities and the image vector has a dimension of $n = 64 \times 64 \times 3 = 12288$) in the case where $\hat{\mathbf{A}}$ and $\hat{\mathbf{B}}$ are generated from Eq. (29), and the quantitative results (in terms of Cosine Similarity) are as follows.
| $m$ | PPower | PRFM |
| :---: | :---: | :---: |
| 200 | 0.70 $\pm$ 0.11 | 0.76 $\pm$ 0.06 |
| 1000 | 0.73 $\pm$ 0.07 | 0.81 $\pm$ 0.06 |
| 3000 | 0.78 $\pm$ 0.04 | 0.87 $\pm$ 0.01 |
We will incorporate the experimental results on the CelebA dataset in the revised version.
(**I am not quite sure to understand how the generative prior (and the projection onto the range of a pre-trained model) is not redundant with the optimization problem itself**) The generative prior is not redundant with the optimization problem. The rationale is as follows: We concentrate on the high-dimensional scenario where the number of samples $m$ is significantly smaller than the data dimension $n$. To attain reliable reconstruction in this context, it is necessary to impose a low-dimensional prior on the underlying signal and add the corresponding constraint (or regularization) to the optimization problem. A common selection for the data prior is the sparse prior (note that for the popular sparse PCA/GEP problem, the constraint corresponding to the sparse prior also needs to be imposed, as seen in Eq. (5)). Given the numerous successful applications of deep generative models, we are aware that generative priors can be far more potent in modeling the distribution of real data. Hence, we follow [4] that considers generative priors for the underlying signal, and impose the corresponding constraint into the optimization problem.
(**Is there any impact of the condition $\gamma_1 + \gamma_2 < 2$? Is it a reasonable constraint?**) The condition $\gamma_1 + \gamma_2 < 2$ can be expressed as $\eta (\lambda_1 -\lambda_2)\lambda_{\min}(\mathbf{B}) + \eta (\lambda_1 -\lambda_n)\lambda_{\max}(\mathbf{B}) < 2$, where $\lambda_1 > \lambda_2 \ge \ldots \ge \lambda_n$ are the generalized eigenvalues of $(\mathbf{A}, \mathbf{B})$. This is a reasonable constraint that is similar to the constraint $\eta \lambda_{\max}(\mathbf{B}) < 1/(1+c)$ in [72, Theorem 1] (where $c > 0$ is a constant specified in [72, Assumption 1]). More precisely, note that $\eta/\rho_{t-1}$ in [72, Algorithm 1] plays the same role as our $\eta$, and $\rho_{t-1} \approx \lambda_1$. Then, in our notation, the constraint in [72, Theorem 1] is approximately $\eta \lambda_1 \lambda_{\max}(\mathbf{B}) < 1/(1+c)$, whose impact is to impose an upper bound on the selection of the step size $\eta$.
(**How does the approximation of the projection step impact the algorithm and the numerical results?**) Since the projection step cannot be exact in the experiments, we adhere to previous works such as [45, 59, 65] and employ a gradient descent method along with the Adam optimizer to approximately carry out the projection step. From the experimental results, we observe that the approximation of the projection step proves effective as we are able to obtain reasonably good reconstructions when the number of samples $m$ is significantly smaller than the data dimension $n$.
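For intuition only, here is a minimal sketch of a projected power-type iteration of the kind discussed in this exchange. The exact projection onto the range of a generative model is replaced by an orthogonal projection onto a fixed low-dimensional subspace — a stand-in we introduce for illustration; this is not the paper's PRFM algorithm, and the dimensions and spiked model below are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, m = 60, 5, 2000

# Proxy for the generative range: a fixed k-dim subspace (NOT a real
# generative model -- an orthogonal-projection stand-in for P_G).
U, _ = np.linalg.qr(rng.standard_normal((n, k)))
def project(x):
    """Project onto the proxy 'range', then renormalize to the unit sphere."""
    y = U @ (U.T @ x)
    return y / np.linalg.norm(y)

# Ground-truth direction lies in the range; build a spiked pair (A_hat, B_hat)
v_star = project(rng.standard_normal(n))
gamma = rng.standard_normal(m)
Z = rng.standard_normal((m, n))
W = rng.standard_normal((m, n))
S = 2.0 * gamma[:, None] * v_star + Z
A_hat = S.T @ S / m                  # E[A_hat] = 4 v* v*^T + I_n
B_hat = W.T @ W / m                  # E[B_hat] = I_n

# Generic projected power-type iteration for the GEP (a sketch, not PRFM):
# x <- P( B_hat^{-1} A_hat x ), with normalization inside P.
x = project(np.ones(n))
for _ in range(30):
    x = project(np.linalg.solve(B_hat, A_hat @ x))

print(abs(x @ v_star))   # cosine similarity with the true direction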
(**How stable is it when $k$ grows ? (As it is known to behave poorly as $k$ grows). One possible solution could be to solve the problem as the $\max_{\mathbf{U}}\mathrm{Tr}(\mathbf{U}^\top\mathbf{A}\mathbf{U})$ with $\mathbf{U}$ in a generalized Stiefel manifold (defined with $\mathbf{B}$) and the generative prior could be enforced with a regularization**) In our experiments, we employ a pre-trained generative model with the latent dimension $k$ remaining fixed. For instance, for the generative model pre-trained for the MNIST dataset, $k = 20$, and for the generative model pre-trained for the CelebA dataset, $k = 100$. We do not modify the generative model (and the latent dimension) as pre-training is time-consuming. Nevertheless, we are grateful to the reviewer for highlighting this intriguing optimization problem to us and we concur that it constitutes a promising direction for further investigation.
(**Limitations and Ethics Review**) Similar to the series of works following [4], we only conduct the experiments and pre-train the generative models for the publicly accessible and widely utilized datasets MNIST and CelebA. We believe that there should be no negative implication for the society and no requirement for an ethics review.
---
Rebuttal 2:
Comment: I thank the authors for their answer, which covers some of my concerns, and I will raise my score.
About the limitations and ethics: although the authors only work on pre-trained models with classical datasets, they are tinkering with generative models. The implications for other generative models (on more sensitive data/applications) could be huge and it would deserve some discussion.
---
Rebuttal Comment 2.1:
Title: Responses to Reviewer PACz
Comment: Thank you for your responses and for raising the score. We will incorporate a discussion concerning the application of generative models to more sensitive data/applications in the revised version. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Preferential Normalizing Flows | Accept (poster) | Summary: This paper focuses on the problem of eliciting a complex multivariate probability density from an expert. Existing works mainly use simple distributions. This paper proposes to model the belief density with a flow-based model. To apply normalizing flows to this problem, we will need to address a few challenges. (1) We want to train flows with a small set of samples, which may result in collapsing and diverging probability mass. (2) The samples presented for the expert might be drawn from an independent and known distribution.
This paper proposes to use Bayesian inference to address the challenges of collapsing and diverging probability mass. That is, we are interested in representing the belief density $p_{*}(x)$ with a flow model. We then optimize the parameters of $p_{*}(x)$ with MAP estimation. The likelihood of samples, i.e., $p(D|f)$, can be computed with Proposition 3.5 of the paper. The prior is approximated by the probability of the k-wise winners, i.e., $p(D_{k}^{\succ})$.
The experiments on synthetic and real datasets show the effectiveness of the proposed method.
Strengths: 1. This paper applies normalizing flows to the problem of expert knowledge elicitation. I think this is a promising application of flow-based models. The idea is novel.
2. The proposed method is effective, and has theoretical guarantees.
Weaknesses: Some definitions and formulas are not very clear to me.
1. In Proposition 2.1, it is not very clear what the relationship is between W and the limit $\lim_{\beta \rightarrow 0} p(\mathcal{D}) = 1$.
2. The symbol $\mathcal{D}$ is used in two places, i.e., Proposition 2.1 and Section 4.1
3. In Assumption 3, what is $Exp(s)$?
4. In Proposition 3.5, is $s()$ a function or a constant?
5. In Eq. 4, is the numerator $\exp{(sf(x)\lambda(x))}$?
6. The symbol $\mathbb{x}$ is used in several places. Sometimes, it represents a random variable, sometimes, it represents a sample.
7. In Eq. 6, is $f_i = f(x_i)$?
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. I am not an expert in expert knowledge elicitation, so I may misunderstand something. Based on Eq. 1, $f$ is a continuous function. Based on Eq. 6, $p(f) \propto \prod \exp{f_i}$. Therefore, why do we need normalizing flows to represent $f$? That is, it seems that we can use an arbitrary neural network to represent $f$.
2. What’s the meaning of Theorem 3.6? I may have missed something, but I did not see any discussion or analysis of this theorem.
3. The paper mentions that one challenge of this task is that samples presented to the expert might be drawn from an independent and known distribution. How is this challenge addressed by the proposed method?
4. The experiments are actually not very convincing; to improve the experiments:
(1) Are there any existing methods that can be used as baselines?
(2) Do we have any quantitative metrics to evaluate the proposed method?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: In my opinion, the proposed method does not have potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > In Proposition 2.1, it is not very clear what is the relationship between W and the limit \\(\lim_{\beta \rightarrow 0} p(\mathcal{D}) = 1\\).
As the “noise level” \\(\beta\\) goes to zero, there is no noise in the RUM (i.e. \\(W=0\\) with high probability), which implies that the expert always chooses the \\(\mathbf{x}\\) that maximizes their utility / belief density, that is, \\(argmax_{\mathbf{x}} p_{\star}(\mathbf{x})\\). Only observations consistent with the argmax have non-zero probability, and hence the probability of the only possible data collection is one. Since the argmax is invariant to monotonic transformations, one can identify the target belief density at best only up to a monotonic transformation.
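The invariance claim is easy to check directly: any strictly increasing transformation of the belief density leaves the argmax unchanged, so noiseless choices reveal nothing beyond the mode. A toy sketch (our own illustration with a made-up density, not from the paper):

```python
import numpy as np

x = np.linspace(-3, 3, 601)
p = np.exp(-0.5 * (x - 1.0) ** 2)   # unnormalized belief density, mode at x = 1
q = np.log1p(p)                     # strictly increasing transform of p

# A noiseless (beta -> 0) expert always picks the highest-density candidate,
# so p and q are indistinguishable from such choices: they share the argmax.
assert np.argmax(p) == np.argmax(q)
assert abs(x[np.argmax(p)] - 1.0) < 1e-9
```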
> The symbol \\(\mathcal{D}\\) is used in two places, i.e., Proposition 2.1 and Section 4.1
We modified the notation in Proposition 2.1 from \\(\mathcal{D}\\) to \\(\mathcal{D}_{\textrm{pref}}\\).
> In Assumption 3, what is \\(Exp(s)\\)
Exponential distribution with a rate parameter s>0. We will clarify this in the final version.
> In Proposition 3.5, is \\(s()\\) a function or a constant?
It is a constant, the rate parameter of the exponential distribution. The equation reads as \\(s\\) times \\(f(\mathbf{x}) - f(\mathbf{x}_j)\\), the parenthesis being for the difference and not indicating a function.
> In Eq.4, Is the numerator \\(\exp{(sf(x)\lambda(x))}\\)?
No, the equation in the paper is correct, i.e. the integrand in the numerator is \\(\exp(sf(\mathbf{x}))\lambda(\mathbf{x})\\). Note that \\(f(\mathbf{x}) := \log p_{\star}(\mathbf{x})\\), whereas \\(\lambda(\mathbf{x})\\) is directly a density.
> The symbol \\(\mathbb{x}\\) is used in several places. Sometimes, it represents a random variable, sometimes, it represents a sample.
We used \\(X\\) for a random variable and \\(\mathbf{X}\\) for a set of samples (design matrix). To avoid confusion, we changed our notation so that the random variable is now denoted by \\(\mathbb{X}\\).
> In Eq. 6, is \\(f_i = f(x_i)\\)?
Yes. We will clarify this in the final version.
> ...Based on Eq. 1, the \\(f\\) is a continuous function. Based on Eq. 6, \\(p(f) \propto \prod \exp{f_i}\\). Therefore, why do we need normalizing flows to represent \\(f\\)...
The critical requirement is that the result has to be a density. More precisely, \\(\exp(f)\\) needs to be normalised over the argument \\(\mathbf{x}\\), i.e. \\(\int \exp(f(\mathbf{x}))d\mathbf{x} = 1\\). Normalising flows provide a natural tool for representing densities, whereas for an arbitrary network we would need explicit normalization that is not computationally feasible.
Further, Eq. (6) relates to the functional prior \\(p(f)\\), which defines a probability density over the *function values*.
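The normalisation point can be illustrated in one dimension (a toy sketch with an assumed affine transformation, not the paper's architecture): a density obtained via the change-of-variables formula integrates to one by construction, while the exponential of an arbitrary function generally does not.

```python
import numpy as np

def trapezoid(y, x):
    # Simple trapezoidal rule for numerical integration.
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

z = np.linspace(-8.0, 8.0, 4001)
base = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)   # standard normal base density

# Affine "flow" T(z) = a*z + b; change of variables gives p_X(T(z)) = p_Z(z)/|a|.
a, b = 1.7, 0.4
x = a * z + b
flow_density = base / abs(a)

# The exponential of an arbitrary function is positive but unnormalized.
arbitrary = np.exp(np.sin(x))

assert abs(trapezoid(flow_density, x) - 1.0) < 1e-3   # a valid density
assert trapezoid(arbitrary, x) > 2.0                  # far from integrating to 1
```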
> What’s the meaning of Theorem 3.6. I may have missed something, but I did not see any discussion or analysis on this theorem.
Theorem 3.6 provides the formula for the k-wise winner distribution in the limit as \\(k\\) (the number of alternatives) approaches infinity. We use the limit result to construct the functional prior, which helps to solve the collapsing and diverging mass problem. Even though we eventually use only small \\(k\\), we can use the limit distribution as a prior, because we can temper the finite-k distribution to resemble the limit (Figure 2).
> The paper mentions that one challenge of this task is samples presented for the expert might be drawn from an independent and known distribution. How this challenge is addressed by the proposed method?
The whole method is designed specifically to address this challenge.
If the samples were drawn from the target itself, then standard flow learning would be a sufficient solution, whereas learning the flow from preferential responses over samples drawn from any other distribution requires the full machinery we introduce. In other words, the challenge is addressed by the combination of the RUM model for preferential data and the interpretation of the k-wise distribution as a tilted version of the belief, as described in Section 3, as well as the functional prior presented in Section 4. Further, the new ablation study (Figure R1) provides some insights on how the problem becomes more challenging when the sampling distribution for the candidates is very different from the target density. Yet, the proposed method is able to infer the target density, although the estimate is not as accurate as it would be when the sampling distribution is closer to the target.
> Are there any existing methods that can be used as baselines?
The majority of prior elicitation works assume a fixed prior family (e.g. a Gaussian) and would not be fair baselines; they can be made arbitrarily bad by making a poor choice of the distribution. Furthermore, there are no specific methods that learn from preferential comparisons, and hence conducting such a comparison would require deriving the details for a new method anyway. While there are some methods that can estimate flexible densities (the GP-based methods we cited in the Introduction), they all require a completely different kind of input information and hence cannot be compared against in our setting. We learn from preferential comparisons, whereas e.g. Oakley \& O'Hagan (2007) learn from percentiles.
Even though there are no natural baselines, we hope that the additional experiments relating to sensitivity of the method in terms of the key parameters (\\(k\\), \\(n\\), \\(\lambda(x)\\)) address in part the same request.
> Do we have any quantitative metrics to evaluate the proposed method?
Since the problem is a density estimation problem (from preference data), a natural metric is the distance between the estimated density and the target density. The proposed method produces relatively low Wasserstein and MMTV distances between the estimated flow density and the target density in the experiments.
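For the one-dimensional case, the sample-based Wasserstein distance is available in standard tooling; the following generic sketch (not the paper's multivariate evaluation code) shows how a better density estimate yields a smaller distance to the target:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
target = rng.normal(0.0, 1.0, 5000)      # samples from the target density
close_fit = rng.normal(0.05, 1.0, 5000)  # samples from a good density estimate
poor_fit = rng.normal(2.0, 1.0, 5000)    # samples from a poor density estimate

# A better estimate yields a smaller distance to the target samples.
assert wasserstein_distance(target, close_fit) < wasserstein_distance(target, poor_fit)
```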
**References**
Oakley, J. E., \& O'Hagan, A. (2007). Uncertainty in prior elicitations: a nonparametric approach. Biometrika
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thank you for your explanation. Your answers address my questions related to the theory of the proposed method. But I still have a concern about the experiments. Since there are no baselines for comparison, it is hard to tell how good the proposed method is and what advantages it has, especially since I am not that familiar with expert knowledge elicitation.
I notice that other reviewers are positive about this paper. It would be appreciated if other reviewers could explain to me the completeness and thoroughness of the experiments. Thank you. | Summary: The paper proposes a method to learn so-called belief densities based on k-wise rankings or comparisons of alternatives. This belief density is learned by combining function-space Bayesian inference and normalizing flows, which allows for the learning of complex (multivariate) probability densities. The authors evaluate their method empirically by studying synthetic scenarios, a regression task, and a “real” use-case by querying an LLM and evaluating how close the resulting flow is to the ground truth density using three metrics.
Strengths: The paper addresses an interesting and very relevant problem, especially as human feedback has recently been used a lot in the training of LLMs. The paper has a strong theoretical contribution proposing a method how to learn the belief density using normalizing flows from rankings while preventing failure modes by using a functional prior. The paper is well written and the authors clearly disentangles their contribution from the work in related works.
Weaknesses: The experiments could have been more varied. At the moment all experiments use k = 5 and it would be interesting to see how the results vary with different values of k. Specifically, k = 2 is a very relevant real-life use case as this type of feedback is given a lot in e.g. chatbots. Similarly, it would be good to get some idea of how the flows converge to the ground truth density as a function of the number of preferential samples.
Minor points and typos:
Fig. 1 Can you make the numbers larger? They are hard to read even when zooming in and without zooming in it’s not clear that they are numbers
l. 78 “Diverging” -> “diverging”
l. 94 “Mollester” -> “Mosteller”
l. 201 “to Bayesian inference” -> “to use/do, etc. Bayesian inference”
l. 201 “given a preferential” -> “given preferential”
l. 405 citation ends abruptly
l. 458 “probality” -> “probability”
Technical Quality: 3
Clarity: 2
Questions for Authors: Is assumption 3 satisfied in the LLM experiment? If not, how does this affect the results?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The paper mentions fair limitations in the discussion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > ...At the moment all experiments use k = 5 and it would be interesting to see how the results vary with different values of k. Specifically, k = 2 is a very relevant real-life use case as this type of feedback is given a lot in e.g. chatbots. Similarly, it would be good to get some idea of how the flows converge to the ground truth density as a function of the number of preferential samples.
Thank you for the excellent suggestion. We re-ran both the Onemoon2D experiment with \\(k=2,3,5,10\\) and the LLM experiment with \\(k=2\\) to \\(k=5\\), as explained in the global response. The key result is that the method indeed works already with \\(k=2\\), though naturally not as well. We also studied the convergence as a function of \\(n\\), again describing the experiment and the results in the global response.
> Minor points and typos:
Thank you for pointing out these errors. We corrected the errors for the revised manuscript and increased the font size of Fig. 1.
> Is assumption 3 satisfied in the LLM experiment? If not, how does this affect the results?
The RUM model with this specific noise distribution serves as a theoretical model for the expert's choice, so it is highly unlikely that assumption 3 is satisfied in the LLM experiment. In fact, it most likely does not hold exactly for humans either. Hence this experiment shows that even under model misspecification, which can also be expected in real use-cases, we can get sensible results. We hypothesize that a potential violation of assumption 3 can have a similar effect as misspecification of the noise level, which can result in too high (or low) spread in the estimated flows (see e.g. 2nd col, 3rd row in Figure A.7).
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thank you for the additional experiments and the clarification regarding assumption 3. I will increase my score to 8. | Summary: The paper presents an approach for expressing an expert's belief density using a normalizing flow. Interestingly, the flow is trained solely on preferential questions (e.g., comparing and ranking) which follow a random utility model (RUM). The approach avoids several optimization issues that could occur when trying to model the more complex underlying data. The authors' method relies on an approach that defines a functional prior over the preference data. The authors demonstrate their method on both simulated and real-world data, showing a very promising method for representing the belief density of expert opinions.
Strengths: The authors overcome a unique challenge in modeling an expert's beliefs with flows, namely that the true data distribution of the expert's beliefs $p^*$ cannot be directly sampled from. Instead, they sample from preference data $D$ and choose a specific function-space prior for the flow in order to model the likelihoods of the k-wise comparisons. This unique solution results in a flow that properly places mass on the winner points following the underlying decision model of the data. The experimental results confirm their hypothesis, showing much better results as a result of their prior.
Additionally, the use of LLMs as experts in their experiments is clever. The approach is reasonable and replicable, while also demonstrating a more end-to-end application that would be of interest to the broader community.
Weaknesses: One primary advantage of normalizing flows is that they offer exact likelihood estimation. Because of the underlying data model for preference data (eq. 3), I suspect that the method here is actually optimizing a lower bound on the log likelihood. This may explain the optimization challenges and the need for fixing the prior's precision $s=1$ described on lines 228-230. Specifically, their objective may be similar to the Max surjective flow described in Nielsen et al. [2020, https://arxiv.org/abs/2007.02731]. In either case, the authors may want to clarify if their objective function is a lower bound or not, since adapting flows for non-bijective mappings is also a topic of interest to many.
Technical Quality: 3
Clarity: 4
Questions for Authors: The Appendix explores various settings of the parameter $s$ when the true value of $s$ varies, which shows that their choice to fix $s=1$ is reasonable. However, the authors also mention using $\lambda \propto 1$ to maintain tractability. The implications of this choice are unclear; did the authors run any validity checks to test the implications of their choice?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: In practice, collecting a training dataset of expert feedback is laborious, and often many experts are needed in order to collect enough data. The paper only considers the case of a single expert, with a flow model of that expert's belief density. The authors may wish to comment on future work in this area — specifically, if their approach could be adapted to a multi-expert setting, if a single flow model is still sufficient or if a flow mixture model is preferred.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >...Because of the underlying data model for preference data (eq. 3), I suspect that the method here is actually optimizing a lower bound on the log likelihood...
Specifically, their objective may be similar to the Max surjective flow described in Nielsen et al. [2020, https://arxiv.org/abs/2007.02731]...
There are some similarities between the Max surjection transformation and the preferential likelihood given by (noiseless) RUM, which is essentially an argmax operator. However, the main difference is that the Max surjection transformation is a transformation that can be applied to any level of a composable transformation (denoted by \\(T\\) in the manuscript, by \\(f\\) in (Nielsen et al., 2020)) with available likelihood contribution (Table 2, Nielsen et al., 2020), while the RUM model conditions on the whole \\(T\\) and computes a (non-additive) likelihood contribution for the whole preferential flow.
In SurVAE Flows, the likelihood contribution of a deterministic bijective flow is the likelihood of a datapoint \\(\mathbf{x}\\) given by the flow density \\(p\\) at that point, that is \\(p(\mathbf{x})\\) computed using the change of variable formula, which results in the Jacobian of \\(T^{-1}\\). In contrast, a preferential flow uses a probabilistic modelling approach by having an additional likelihood function \\(\mathcal{L}\\) which computes the likelihood contribution of (preference) data \\(\mathbf{x}\\) conditioned on the flow density \\(p\\), that is \\(\mathcal{L}(\mathbf{x} \mid p)\\). In a preferential flow, although the mapping \\(T\\) from a latent point \\(\mathbf{z}\\) to a datapoint \\(\mathbf{x}\\) is a deterministic bijective function, the likelihood contribution of preference data does not directly involve the Jacobian of \\(T^{-1}\\), unlike in SurVAE Flows (Algorithm 1, Nielsen et al., 2020).
Thus, the likelihood contribution is exact and the objective is not a lower bound. We extended the discussion section of the manuscript to discuss the relationship between SurVAE Flows and preferential flows.
> The Appendix explores various settings of the parameter \\(s\\) when the true value of \\(s\\) varies, which shows that their choice to fix \\(s=1\\) is reasonable. However, the authors almost mention using using \\(\lambda \propto 1\\) as well to maintain tractability. The implications of this choice are unclear, did the authors run any validity checks to test the implications of their choice?
We indeed use \\(\lambda \propto 1\\) for construction of the functional prior, but note that in all of the experiments we used as \\(\lambda(x)\\) a mixture of a bounded uniform and a Gaussian. That is, we already conducted the experiments under conditions where the choice does not hold.
To further investigate the effect of the true sampling distribution, we now conducted an additional ablation study, where the true \\(\lambda(x)\\) is varied. See the global response for description of the experiment and summary of the results confirming that the method works for a range of choices.
>...The paper only considers the case of a single expert, with a flow model to that experts belief density. The authors may wish to comment on future work in this area — specifically, if their approach could be adapted to a multi-expert setting, if a single flow model is still sufficient or if a flow mixture model is preferred.
This is a good idea and we will extend the Discussion to cover also multi-expert settings.
The most common methods for aggregating expertise of multiple experts are *behavioral aggregation* and *mathematical aggregation*. The former is based on experts discussing their opinions and making consensus judgments for which an aggregate distribution is fitted (O'Hagan, 2019). This is agnostic to the elicitation algorithm and our method with a single flow could readily be used to capture the consensus. For *mathematical aggregation* we need a *pooling rule* that combines the elicited densities into a single one (EFSA, 2014). We think that a mixture of flows, which is equivalent to a linear pooling over multiple elicited flow densities, with equal mixture weights, would be a good default option. Further, Bayesian pooling such as having a hierarchical model on the joint preferential data from multiple experts could be investigated.
**References**
EFSA, E. F. S. A. (2014). “Guidance on Expert Knowledge Elicitation in Food and Feed Safety Risk Assessment.” EFSA Journal, 12(6): 3734.\\
O’Hagan, A. (2019). Expert knowledge elicitation: subjective but scientific. The American Statistician, 73(sup1), 69-81.
---
Rebuttal Comment 1.1:
Comment: Thank you for clarifying these points, the additional ablation study, and further discussion. The extended discussion connecting preferential flows to other flows will be a nice contribution for the community. Additionally, confirming that the preferential flows you present have exact likelihood contributions is a strength and may lead to additional future applications. | null | null | Rebuttal 1:
Rebuttal: We are happy to see the reviewers both understood the paper well and perceived it positively. We thank the reviewers for the detailed constructive comments, and provide responses to specific comments and questions for each reviewer separately.
The rebuttal is accompanied by a pdf that reports results of new empirical experiments addressing comments from reviewers sR2g and FQE4, validating the sensitivity of the results in terms of the cardinality of the choice set \\(k\\), the number of comparisons \\(n\\), and the choice of the distribution \\(\lambda(x)\\) that the candidates are sampled from.
We report results for three new experiments:
1. Figure R1 studies the effect of \\(\lambda(x)\\), the unknown distribution from which the candidates to be compared are sampled from, complementing the experiment reported in Section 5.1 and confirming the method is robust for the choice. In the original experiment the candidates were sampled from a mixture distribution of uniform and Gaussian distribution centered on the mean of the target, with the mixture probability \\(w=1/3\\) for the Gaussian. Figure R1 reports the accuracy as a function of the choice of \\(w\\) for one of the data sets (Onemoon2D), so that \\(\lambda(x)\\) goes from uniform to a Gaussian,
and also includes an additional reference point where \\(\lambda(x)\\) equals the target. For all \\(w>0.5\\) we reach effectively the same accuracy as when sampling from the target itself, and even for the hardest case of a uniform \\(\lambda(x)\\) (with \\(w=0\\)) the distance is significantly smaller than the reference scale comparing the base distribution with the target.
2. Tables R1 and R2, as well as Figure R2, report results of an experiment studying the effect of \\(k\\) and \\(n\\). Again the results are reported for only one of the data sets (Onemoon2D), but we will add similar results for all of the data sets in the Appendix.
For fixed \\(n\\), we see that the accuracy naturally improves as a function of \\(k\\) (Table R1). The original manuscript used \\(k=5\\), but the new result reveals that we can learn the target already with \\(k=2\\) that is most convenient for a user, but naturally with somewhat lower accuracy. For fixed \\(k\\), increasing \\(n\\) generally improves the accuracy and already fairly small \\(n\\) is sufficient for learning a good estimate (Table R2). For very large \\(n\\) the accuracy can slightly deteriorate. We believe this is because of prior misspecification that encourages overestimation of the variation due to the fact that \\(k\\) is finite but in the prior it is assumed to be infinite. Figure R2 confirms that for \\(n=1000\\) the shape of the estimate is extremely close and the slightly worse Wasserstein distance is due to overestimating the width. For the final version, we extend the second paragraph of the Discussion to elaborate this aspect more.
3. Finally, Figure R3 shows that the LLM expert also works with \\(k=2\\). We replicated the original experiment conducted with \\(k=5\\) and report the estimates side-by-side, visually confirming we learn the essential characteristics of the distribution in both cases. The results are not identical and the case of \\(k=5\\) is likely more accurate (see e.g. the marginal distribution of the last feature), but there are no major qualitative differences between the two estimates.
Pdf: /pdf/06aac05576cb8722d7b55d7728e9fdc79d94844d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
UniGAD: Unifying Multi-level Graph Anomaly Detection | Accept (poster) | Summary: Traditional graph anomaly detection focuses on a single type of graph object (e.g., node, edge, graph). To address this, this paper introduces the first unified framework (UniGAD) for detecting anomalies at the node, edge, and graph levels jointly. The authors propose two core modules, MRQSampler and GraphStitch, to address the challenge of unifying multi-level task formats and unifying multi-level task training, respectively. Moreover, the authors theoretically prove that MRQSampler maximizes the accumulated spectral energy of subgraphs (i.e., the Rayleigh quotient) to preserve the most significant anomaly information. Extensive experiments demonstrate that UniGAD surpasses existing single-task GAD methods and graph prompt-based approaches for multiple tasks, offering robust zero-shot task transferability.
Strengths: Originality:
1. This paper focuses on the gap in multi-level graph neural networks at the node, edge, and graph levels, which is an important problem. In many scenarios/datasets, the labels of these different graph objects are often relevant, but this is always overlooked.
2. The problem formulation in Definition 2.1 is novel in graph learning, allowing labels at one or more levels, which makes the framework broadly applicable and provides strong zero-shot capabilities.
3. The authors designed a theoretically guaranteed graph sampling algorithm that uses the Maximum Rayleigh Quotient to sample the most anomalous subgraphs. This approach ensures sampling nodes that contain anomalous information, preventing the anomalous information from being smoothed out.
4. The authors implement multi-level task information transfer through a module called GraphStitch. Since multi-task learning in graph research remains rare, this network design is very interesting and promising.
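For context on point 3: the Rayleigh quotient of a signal x on a graph Laplacian L is x^T L x / x^T x, and larger values correspond to more high-frequency (anomaly-like) spectral energy. A minimal illustration on a hypothetical 4-node path graph (not the paper's sampler):

```python
import numpy as np

def rayleigh_quotient(L, x):
    return float(x @ L @ x) / float(x @ x)

# Laplacian L = D - A of a 4-node path graph 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

smooth = np.ones(4)                       # constant signal: zero spectral energy
spiky = np.array([1.0, -1.0, 1.0, -1.0])  # oscillating, anomaly-like signal

assert rayleigh_quotient(L, smooth) == 0.0
assert rayleigh_quotient(L, spiky) == 3.0  # high-frequency energy
```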
Quality:
The paper exhibits a high level of technical quality, with theoretical proofs and rigorous dynamic programming algorithms provided for the effectiveness of the MRQSampler in maximizing the accumulated spectral energy of subgraphs.
The paper proposes a robust framework that unifies multi-level tasks through a pre-trained GNN encoder, the MRQSampler, and the GraphStitch network.
The experiments look solid, covering 13 datasets (both single-graph and multi-graph) and 17 SOTA models, including node-level, edge-level, graph-level, and multi-task models.
Clarity:
The paper is well-written and clearly presents its contributions.
Significance:
The UniGAD framework further improves graph anomaly detection performance and has strong zero-shot capability. Besides, it also promotes the development of multi-task learning in the graph learning domain.
Weaknesses:
Overall, this paper is technically solid and novel. On page 8, the authors mention that other multi-task methods, like GraphPrompt and All-in-One, often run out of time (OOT) and perform redundant calculations. What is it that makes UniGAD avoid this? Is each node’s representation calculated only once? Please explain the difference in more detail.
Minor: in the Additional Experimental Results, should the captions of Tables 8 and 9 be F1-macro? Otherwise they repeat Tables 1 and 2.
Technical Quality: 4
Clarity: 4
Questions for Authors: Refer to weaknesses, please explain the difference in more detail and check the typos.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors addressed limitations and societal impacts in the conclusion and appendices.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your helpful comments! Below are our responses.
### **Q1: How UniGAD avoid Redundant calculation**
Prompt-based methods for handling multiple objects convert all objects into induced graphs during pre-processing. This means the number of induced graphs to be processed equals the total number of nodes, edges, and graphs, potentially reaching tens of millions in some datasets. Each induced graph contains numerous nodes, often appearing repeatedly across different induced graphs, leading to significant computational redundancy.
In contrast, UniGAD employs a more efficient approach. It first obtains the representation of each node through a pre-trained GNN encoder. Then, using a sampler and pooling architecture, it derives the embeddings for other levels of objects (edges/graphs). The framework optimizes the process by eliminating the need to generate and process numerous induced subgraphs, instead leveraging the original graph structure and minimizing repetitive calculations.
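The redundancy argument can be made concrete with a toy count (a hypothetical 6-node graph, not from the paper): prompt-style pre-processing visits each node once per induced subgraph it appears in, whereas a single encoder pass over the original graph visits each node once.

```python
from collections import deque

# Toy graph as adjacency lists (hypothetical, for illustration only).
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1], 4: [2, 5], 5: [4]}

def k_hop_nodes(seed, k):
    """Nodes within k hops of `seed` (the induced subgraph a prompt method builds)."""
    seen, frontier = {seed}, deque([(seed, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == k:
            continue
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, d + 1))
    return seen

# Prompt-style pre-processing touches each node once per induced graph ...
touched = sum(len(k_hop_nodes(v, 2)) for v in adj)
# ... whereas one GNN pass over the original graph touches each node once.
assert touched > len(adj)
```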
### **Q2: Minor Corrections**
Thank you for bringing this to our attention. The captions of Tables 8 and 9 should indeed be F1-macro. We will thoroughly examine the manuscript for any other minor inconsistencies.
---
Rebuttal Comment 1.1:
Title: Thanks for the reply
Comment: I have no further questions. | Summary: The paper introduces UniGAD, a unified framework for multi-level graph anomaly detection, capable of identifying anomalies at the node, edge, and graph levels. Key contributions include the Maximum Rayleigh Quotient Subgraph Sampler (MRQSampler), which optimizes subgraphs to maximize significant anomaly information, and the GraphStitch Network, which facilitates information sharing across different levels while maintaining effectiveness. Experiments on 13 datasets show that UniGAD outperforms existing methods and demonstrates strong zero-shot transferability. Overall, UniGAD leverages spectral properties and multi-task learning to achieve state-of-the-art performance in graph anomaly detection.
Strengths: 1. Innovative Unified Framework: UniGAD is the first framework to jointly detect anomalies at the node, edge, and graph levels, addressing a significant gap in current GAD research. The integration of multiple anomaly detection tasks into a single model enhances its versatility and applicability across various scenarios.
2. Advanced Methodologies: The Maximum Rayleigh Quotient Subgraph Sampler (MRQSampler) and the GraphStitch Network are key innovations. MRQSampler maximizes spectral energy to preserve critical anomaly information, while the GraphStitch Network facilitates information sharing across different levels, harmonizing conflicting training goals and maintaining the effectiveness of individual tasks.
3. Robust Performance and Transferability: UniGAD is comprehensively evaluated on 13 diverse datasets, consistently outperforming existing methods. It demonstrates robust zero-shot transferability, effectively transferring knowledge across different GAD tasks without prior exposure to specific anomalies, showcasing its robustness and generalizability.
4. Strong Theoretical and Practical Foundations: The approach is supported by solid theoretical foundations, with proofs provided for the optimal conditions of MRQSampler. Additionally, the dynamic programming algorithm ensures computational efficiency, making the method scalable to large datasets. The availability of the implementation code promotes transparency and reproducibility of the results, facilitating further research and application.
Weaknesses: 1. The paper modifies existing methods like All-in-One and GraphPrompt into multi-task versions for comparison. However, All-in-One already supports node, edge, and graph tasks, and GraphPrompt supports node and graph tasks. A more straightforward approach would have been to directly apply these methods to the relevant anomaly detection tasks without modifications to see their effectiveness.
2. The paper mentions that All-in-One and GraphPrompt often run out of time (OOT), but it is unclear whether this refers to the prompt tuning stage or the pre-training stage. Since both methods are known for their relatively fast prompt tuning, clarifying this distinction is crucial for understanding their performance limitations.
3. The paper lacks comparisons of temporal and spatial performance metrics. Including an analysis of the time and space efficiency of UniGAD compared to other methods would provide a more comprehensive evaluation of its practicality for large-scale and real-time applications.
Technical Quality: 4
Clarity: 4
Questions for Authors: Is it possible to provide All-in-One and GraphPrompt code for multi-task versions?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments on our work! Below are our responses.
### **Q1: Modification of All-in-One and GraphPrompt**
We would like to clarify that we did not alter the core methodologies of GraphPrompt and All-in-One. Our modifications were limited to the data preprocessing component to accommodate the simultaneous handling of multiple object types (node/edge or node/graph) within induced graphs. The original implementations of these methods support processing only one type of graph object at a time, either by handling one object type exclusively or by processing different types sequentially (i.e., one type for pre-training and another type for prompt tuning). They do not inherently support scenarios where multiple object types are input/output simultaneously.
**Code Access**: We have included the modified versions of All-in-One and GraphPrompt in our source code, which is accessible through the link provided in our manuscript for further reference and verification.
### **Q2: OOT issues for All-in-One and GraphPrompt**
The time limitation mentioned refers specifically to the pre-processing and prompt tuning stages. During the pre-training stage, both these methods and UniGAD use GraphMAE, and the time consumption is similarly short. The reasons for the significant time consumption of All-in-One and GraphPrompt are:
- Large number of induced graphs in pre-processing: The number of induced graphs equals the sum of node and edge counts rather than just the number of graphs. This results in tens of millions of induced k-hop subgraphs in some datasets, causing substantial pre-processing time.
- Large number of training samples in prompt tuning: Although both methods are known for their fast prompt tuning, this only holds for few-shot settings (e.g., 5 or 10 samples for tuning). However, we use the fully supervised setting and have many more training samples, which significantly increases the data required for tuning.
- Redundant computation: Induced graphs between neighboring nodes inevitably contain duplicated nodes. However, the prompt-based approach treats these duplicates as distinct nodes to be computed separately in different graphs, significantly increasing the computational load. In contrast, UniGAD employs a more efficient approach. It first obtains the representation of each node through a pre-trained GNN encoder. Then, using a sampler and pooling architecture, it derives the embeddings for other levels of objects (edges/graphs). The framework optimizes the process by eliminating the need to generate and process numerous induced subgraphs, instead leveraging the original graph structure and minimizing repetitive calculations.
To address the OOT issue, we employ faster GPUs, extended time limits to 2 days, and optimized the pre-processing code. The updated results are as follows:
||GraphPrompt(AUROC/AUPRC/F1-macro)|All-in-One|UniGAD(Ours)
|-|-|-|-|
|Amazon(Node)|50.01/6.62/40.93|56.11/1.02/48.67|**97.84/87.29/91.33**
|Amazon(Edge)|50.96/2.64/35.95|54.8/3.13/2.45|**92.18/42.01/73.59**
|Yelp(Node)|49.83/12.41/40.90|49.77/46.10/14.43|**86.23/61.00/74.57**
|Yelp(Edge)|49.56/13.63/42.94|49.13/13.49/46.29|**79.05/40.90/66.66**
|MNIST0(Node)|81.16/82.89/80.66|OOT|**99.99/99.99/99.99**
|MNIST0(Graph)|83.88/36.25/52.39|OOT|**99.61/97.92/95.54**
|T-Group(Node)|47.40/1.06/50.77|OOT|**96.19/31.31/68.69**
|T-Group(Graph)|50.81/2.36/49.78|OOT|**88.78/55.64/78.09**
We found that GraphPrompt can complete all datasets in our additional experiments, while All-in-One still fails to finish on the MNIST0/1 and T-Group datasets. This difference in performance can be attributed to their underlying mechanisms. Based on the All-in-One source code, the model must learn the token structure represented by pairwise relationships among tokens. It calculates the dot product between prompt tokens and input graph nodes to determine link establishment, which is computationally intensive. In contrast, GraphPrompt employs a simpler and faster approach, incorporating a learnable vector integrated into graph pooling through element-wise multiplication.
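For intuition, here is a minimal sketch of the two prompting mechanisms as characterized above (toy dimensions; illustrative code based on our reading of the two papers, not their actual source): GraphPrompt-style prompting folds a single learnable vector into graph pooling via element-wise multiplication, while All-in-One-style prompting computes dot products between every prompt token and every input node.

```python
# Sketch of the cost difference: O(d) extra work per graph for a
# GraphPrompt-style readout vs. O(N * k * d) pairwise dot products for an
# All-in-One-style token-node scoring (N nodes, k prompt tokens, dim d).

def graphprompt_readout(H, p):
    """Mean-pool node embeddings, then element-wise multiply by prompt p."""
    d = len(p)
    pooled = [sum(h[j] for h in H) / len(H) for j in range(d)]
    return [pooled[j] * p[j] for j in range(d)]

def all_in_one_scores(H, tokens):
    """Dot product of every (prompt token, node embedding) pair."""
    return [[sum(a * b for a, b in zip(t, h)) for h in H] for t in tokens]

H = [[1.0, 2.0], [3.0, 4.0]]          # two node embeddings
p = [0.5, 2.0]                        # learnable prompt vector
tokens = [[1.0, 0.0], [0.0, 1.0]]     # two prompt tokens
print(graphprompt_readout(H, p))      # [1.0, 6.0]
print(all_in_one_scores(H, tokens))   # [[1.0, 3.0], [2.0, 4.0]]
```

The pairwise token-node scoring (plus learning the token structure) is what makes the All-in-One-style path considerably more expensive on large graphs.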
### **Q3: Time and space efficiency of UniGAD**
Thank you for your suggestion. We have incorporated both time and space efficiency metrics into our evaluation, using the large-scale, real-world T-Group dataset (37,402 graphs, 93,367,082 edges, and 11,015,616 nodes). We used the same batch size for all models to ensure a fair comparison.
To provide a more straightforward comparison between single-task and multi-task baselines, we calculated the average, minimum, and maximum over combinations of single-task node-level and graph-level models, and compared these with multi-task models. The results, shown in Figure 1(a) of the supplementary PDF, indicate that in terms of execution time, our method is slower than the combination of the fastest single-level models but faster than the average combination.
Regarding peak memory usage, Figure 1(b) demonstrates that graph-level models consume significantly more memory than node-level models. Our method maintains memory consumption comparable to node-level models and substantially lower than both graph-level GAD models and prompt-based methods.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' reply. I will keep positive score.
---
Reply to Comment 1.1.1:
Title: Acknowledgement from the Authors
Comment: Thank you for your continued engagement and positive feedback on our work! We will carefully incorporate your comments into our manuscript. | Summary: This article presents an anomaly detection model, UniGAD, designed to be applicable across different levels including nodes, edges, and whole graphs. Leveraging the relationship between the Rayleigh quotient and anomaly degree, as described in Lemma 1, the authors have developed a novel subgraph sampling algorithm, MRQSampler. This algorithm recursively adds nodes that maximize the Rayleigh quotient, ensuring the subgraph contains the most anomalous information. Consequently, node-level and edge-level tasks are converted into graph-level tasks for unified processing. Additionally, the authors introduce the innovative GraphStitch Network, which jointly considers multi-level representations. Extensive experimental results substantiate the effectiveness of UniGAD.
Strengths: 1. The task of unifying different levels of anomaly detection on graphs is challenging and very novel.
2. Complete theoretical proofs explain the motivation of MRQSampler.
3. Comprehensive experiments demonstrate the effectiveness of UniGAD.
Weaknesses: 1. The graph signal $x$ in all proofs is treated as a vector, which implies that the node feature is a scalar. However, in practice, graph node features are often vectors. This discrepancy is not clearly addressed by the authors.
2. The MRQSampler appears to be a recursive algorithm rather than a dynamic programming (DP) algorithm, as suggested by Algorithm 1 in the appendix. This distinction impacts the efficiency of the proposed method.
3. UniGAD is a supervised algorithm, yet the baseline comparisons in the experiment include unsupervised algorithms (OCGIN,OCGTL). This raises concerns about the fairness and validity of the experimental comparisons.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How efficient is MRQSampler (execution time and memory requirements)? Can it be processed in parallel on GPUs?
2. Can UniGAD be unsupervised? I think this is more realistic (collecting abnormal samples during the training phase is difficult)
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: No limitations need to discuss
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **W1: The discrepancy of scalar node feature in proofs and vector node feature in practice.**
Thank you for highlighting this discrepancy. In theoretical derivations, we followed the established foundations of BWGNN and RQGNN, which consider single-dimensional features in their proofs. This simplification enhances mathematical tractability and facilitates clearer theoretical analysis. In fact, the primary focus of spectral graph theory and graph signal processing is on one-dimensional vectors.
However, we recognize that real-world scenarios typically involve multi-dimensional feature vectors. To address this, we developed two approaches:
- Pre-processing: Normalize all feature dimensions and then take the norm (1-norm in our case) to obtain a composite feature for each node, allowing us to identify the most anomalous nodes based on this comprehensive feature.
- Post-processing: Identify anomalous nodes in each feature dimension separately, and then take the union or combination of these node sets across all dimensions.
We implemented both methods in our code, and our early experiments showed similar results between the two approaches. For efficiency, we used the pre-processing approach in UniGAD. This method ensures that the computational complexity does not increase with the number of feature dimensions, making it more scalable for high-dimensional data. We will clarify this point in the manuscript to provide a more comprehensive understanding of the rationale behind our design choices.
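A minimal sketch of the pre-processing option described above (the z-score normalization choice and all names here are illustrative assumptions, not taken from the released code): normalize each feature dimension across nodes, then collapse each node's vector to a scalar via the 1-norm, yielding a one-dimensional graph signal compatible with the scalar-signal theory.

```python
# Sketch: multi-dimensional node features -> one composite scalar per node.
# Normalization choice (z-score) is an illustrative assumption.

def composite_signal(X):
    """X: list of per-node feature vectors -> list of scalar features."""
    n, d = len(X), len(X[0])
    cols = list(zip(*X))
    means = [sum(c) / n for c in cols]
    stds = [max((sum((v - m) ** 2 for v in c) / n) ** 0.5, 1e-12)
            for c, m in zip(cols, means)]
    # 1-norm of the z-scored feature vector per node
    return [sum(abs((x[j] - means[j]) / stds[j]) for j in range(d))
            for x in X]

# Toy data: the last node is an outlier in the first feature dimension.
X = [[1.0, 10.0], [2.0, 20.0], [3.0, 30.0], [100.0, 25.0]]
s = composite_signal(X)
print(max(range(len(s)), key=s.__getitem__))  # 3: the most anomalous node
```

Because the reduction to one dimension happens once in pre-processing, the downstream sampling cost does not grow with the feature dimensionality.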
### **W2. MRQSampler: DP or Recursive**
The MRQSampler algorithm utilizes principles of dynamic programming (DP), which may not be immediately apparent and thus requires clarification. The key distinction between DP and recursive or divide-and-conquer approaches is the use of memoization, i.e., storing the results of subproblems to avoid redundant computations in future calculations.
In the MRQSampler, as described in Algorithm 1 (lines 18-29), the process incorporates a bottom-up calculation where the maximum $\Delta$ values of sub-trees are computed starting from the leaves up to the root. It is important to note that sub-trees not selected in initial iterations at lower levels might still be reconsidered in subsequent iterations at higher levels. Therefore, the maximum $\Delta$ for these sub-trees, referred to as 'inferior candidates' in the algorithm, could be computed and stored during their first evaluation. These stored values are then reused in later computations, which is a hallmark of DP.
To better illustrate this process, we restructure Algorithm 1 into the following two stages:
- **Stage 1:** Compute and store the maximum $\Delta$ for each sub-tree recursively, starting from the leaf nodes and moving upwards to the root. This stage has a computational complexity of $\mathcal{O}(N \log N)$.
- **Stage 2:** Initiate from the root node and iteratively select the sub-tree with the highest $\Delta$ from the available candidates, incorporating it into the resultant sub-graph until the $\Delta$ of the selected sub-tree exceeds the current RQ. As new sub-trees are added to the resultant set, their child sub-trees are also added to the pool of candidates. The complexity of this stage is $\mathcal{O}(N)$. Without memoization, this stage would necessitate recalculations of the maximum $\Delta$ for these sub-trees.
The implementation of memoization in the DP paradigm to store and reuse the results of sub-tasks effectively reduces redundant computations and optimizes the overall complexity of the algorithm to $\mathcal{O}(N \log N)$. We will make the necessary revisions to the manuscript to better explain this algorithm.
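To illustrate the two-stage pattern, here is a heavily simplified sketch (NOT the actual MRQSampler: the subtree "gain" below is a stand-in for the true $\Delta$, and the stopping rule is a fixed threshold rather than a comparison against the current RQ). What it shows is the DP structure itself: stage 1 memoizes each subtree's value bottom-up, and stage 2 greedily expands from the root, reusing the stored values instead of recomputing them.

```python
# Simplified two-stage memoized tree selection. "gain" is a placeholder
# (total score of the subtree) for the Rayleigh-quotient-based delta.
import heapq

def stage1_memoize(children, score, root):
    """Bottom-up: gain[v] = total score of the subtree rooted at v (memoized)."""
    gain, order, stack = {}, [], [root]
    while stack:                      # record nodes parent-before-children
        v = stack.pop()
        order.append(v)
        stack.extend(children.get(v, []))
    for v in reversed(order):         # leaves first: children already done
        gain[v] = score[v] + sum(gain[c] for c in children.get(v, []))
    return gain

def stage2_select(children, gain, root, threshold):
    """Greedy: repeatedly take the candidate subtree with highest stored gain."""
    selected, heap = [], [(-gain[root], root)]
    while heap:
        g, v = heapq.heappop(heap)
        if -g < threshold:            # stop once the best candidate is too weak
            break
        selected.append(v)
        for c in children.get(v, []): # children become new candidates; their
            heapq.heappush(heap, (-gain[c], c))  # gains were stored in stage 1
    return selected

children = {0: [1, 2], 1: [3, 4], 2: [5]}
score = {0: 1, 1: 2, 2: -1, 3: 4, 4: -3, 5: 0.5}
gain = stage1_memoize(children, score, 0)
print(stage2_select(children, gain, 0, threshold=1.0))  # [0, 1, 3]
```

Without the stage-1 memoization, stage 2 would have to recompute each candidate subtree's value on demand, which is the redundancy the DP formulation removes.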
### **Q1. Time and memory efficiency for MRQSampler**
Following your suggestion, we have conducted a comprehensive evaluation of both time and space efficiency on the large-scale, real-world T-Group dataset. In Figure 1(a), we have highlighted the time consumption of the MRQSampler module separately from the other parts of UniGAD: it accounts for about 37% of the total time. The MRQSampler offers a key efficiency advantage: subgraphs are generated and recorded only once and can be reused across multiple trials without recalculation when tuning parameters. We also find that k-hop subgraph sampling takes considerable time in the prompt-based methods.
Moreover, the subgraph sampling process for different nodes can be parallelized, as each node's sampling is independent. This parallelization potential further enhances the scalability of our approach, particularly for large-scale graphs. We are actively working on optimizing MRQSampler for GPU acceleration but it presents some challenges. Currently, we offer a CPU-based parallel version of the code.
### **W3/Q2: Can UniGAD be Unsupervised?**
OCGIN and OCGTL are widely used unsupervised graph-level GAD baselines, and some supervised methods (e.g., GmapAD and RQGNN) have also been compared against them. We agree that such comparisons are not rigorous and will highlight this in our analysis.
UniGAD is also designed with label scarcity in mind. It focuses on scenarios with missing labels at different levels, exploring how labels from one level can compensate for another. While UniGAD requires labels, it exhibits zero-shot transferability---applying learned knowledge to unseen scenarios without requiring labels of a specific type.
Moreover, the MRQSampler can be considered as an independent module in our framework. It leverages the correlation between spectral domain Rayleigh quotients and anomaly degrees to identify the most anomalous subgraphs. Essentially, it functions as a graph algorithm that can theoretically identify the most anomalous node-centered subgraphs in an unsupervised manner. This characteristic makes the MRQSampler adaptable to other unsupervised methods beyond the scope of UniGAD. We view this as an area of independent interest and anticipate its potential applications in unsupervised GAD methods.
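For intuition on the quantity the sampler maximizes, here is a toy computation of the Rayleigh quotient $x^\top L x / x^\top x$ of a graph signal with respect to the unnormalized Laplacian $L = D - A$ (the graph and signals below are illustrative only). It uses the standard identity $x^\top L x = \sum_{(u,v) \in E} (x_u - x_v)^2$: smooth signals have low quotients, while signals that oscillate across edges (anomalous-looking, high-frequency) have high ones.

```python
# Toy Rayleigh quotient of a graph signal w.r.t. the unnormalized Laplacian.

def rayleigh_quotient(edges, x):
    # x^T L x = sum over edges of (x_u - x_v)^2 when L = D - A
    num = sum((x[u] - x[v]) ** 2 for u, v in edges)
    den = sum(v * v for v in x)
    return num / den

edges = [(0, 1), (1, 2), (2, 3)]       # a path graph on 4 nodes
smooth = [1.0, 1.0, 1.0, 1.0]          # constant signal: zero quotient
spiky = [1.0, -1.0, 1.0, -1.0]         # alternating signal: high quotient
print(rayleigh_quotient(edges, smooth))  # 0.0
print(rayleigh_quotient(edges, spiky))   # 3.0
```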
---
Rebuttal Comment 1.1:
Comment: The author's response basically convinced me and I will keep my positive score.
---
Reply to Comment 1.1.1:
Title: Acknowledgement from the Authors
Comment: Thank you for your insightful feedback and for recognizing our contributions. We greatly appreciate your engagement throughout the review process. We'll carefully incorporate your comments into our manuscript. | Summary: This paper presents a novel framework for detecting anomalies at node, edge, and graph levels within graph-structured data. The authors introduce the Maximum Rayleigh Quotient Subgraph Sampler (MRQSampler) to transform multi-level tasks into graph-level tasks by sampling subgraphs with high spectral energy, thus preserving significant anomaly information. Additionally, the GraphStitch Network integrates information across different levels and balances multi-level training objectives. Experimental results demonstrate the promising results on different GAD tasks.
Strengths: 1. The paper studies an interesting and underexplored problem that unifies multiple-level GAD tasks in a single framework. The proposed unification strategy is interesting. The paper is overall nicely presented and easy to follow. A comprehensive set of baseline methods are considered in the evaluation.
Weaknesses: 1. AUPR is a popular complementary metric to AUROC, commonly used by the majority of recent GAD papers. It is important for readers to understand the performance of the model with a focus on the anomaly class. Considering that the improvement on some datasets is marginal, discussing the model's performance under AUPR would be beneficial.
2. It seems that on the edge prediction task for some datasets, the baselines achieve the top performance. I am wondering how this method would perform on datasets with a very high degree, like the T-Finance dataset. In addition, it would be interesting to see how this method performs on large-scale datasets to understand its robustness better.
3. Some of the experiments are marked as OOM and OOT, especially for the prompt-based methods on larger graphs. While I understand these methods can be resource-intensive, a limit of 24GB max GPU RAM or 1 day wall time might be insufficient for adequately evaluating such methods.
Technical Quality: 2
Clarity: 2
Questions for Authors: Please refer to weakness.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer’s thorough and constructive feedback on our paper.
### **Q1: AUPRC as a complementary metric**
We acknowledge the importance of AUPRC as a complementary metric to AUROC and F1-macro, especially for anomaly detection with imbalanced labels. In light of your suggestion, we have now included AUPRC results in our evaluation. We show all results in supplementary PDF Tables 1-4.
Based on the results, we observe that UniGAD's performance under the AUPRC metric aligns closely with its AUROC and Macro-F1 scores. UniGAD achieves state-of-the-art performance across nearly all scenarios.
### **Q2: Results on T-Finance, large-scale datasets, and edge prediction**
We appreciate the reviewer's interest in high-degree and large-scale datasets. We've conducted additional experiments on the high-degree T-Finance dataset and highlighted our results on another large-scale dataset T-Group.
|Model|AUROC (Node)|AUROC (Edge)|F1-macro (Node)|F1-macro (Edge)|AUPRC (Node)|AUPRC (Edge)|
|-|-|-|-|-|-|-|
|GCN/GCNE|96.03|87.63|80.57|79.07|84.94|62.12|
|GIN/GINE|90.70|79.05|70.61|73.40|70.81|52.01|
|GraphSAGE/SAGEE|86.43|77.14|76.81|67.12|61.79|18.79|
|SGC/SGCE|78.16|83.01|62.63|68.76|19.62|33.47|
|GAT/GATE|74.21|83.91|56.69|65.75|30.35|54.13|
|BernNet/BernE|90.60|87.80|69.11|63.16|54.70|45.01|
|PNA/PNAE|92.37|86.19|70.52|57.45|67.65|43.70|
|AMNet/AME|68.17|92.27|27.69|70.88|23.07|68.14|
|BWGNN/BWE|93.58|69.01|74.31|64.51|74.73|30.22|
|UniGAD-GCN|93.93|93.75|84.92|84.08|75.30|69.90|
|UniGAD-BWGNN|**96.49**|**94.32**|**89.75**|**84.90**|**85.34**|**74.37**|
The above results on T-Finance indicate that UniGAD outperforms all baseline methods on both node-level and edge-level GAD tasks. For the two prompt-based multi-task approaches, GraphPrompt and All-in-One, the preprocessing phase proved excessively time-consuming and memory-intensive, failing to complete within a 2-day timeframe. The primary reason for this inefficiency is the need to generate a distinct induced graph for each edge, which dramatically increases computational demands and memory usage on the high-degree T-Finance dataset.
**Large-scale Dataset Performance Analysis:** We highlight our results on the T-Finance (39,357 nodes, 21,222,543 edges) and T-Group (37,402 graphs, 11,015,616 nodes, 93,367,082 edges) datasets as reported in the manuscript. Our method demonstrates superior performance in node-level, edge-level, and graph-level anomaly detection on these large-scale, real-world datasets. It surpasses two recent powerful GAD baselines: the node-level method BWGNN and the graph-level method RQGNN. This highlights the versatility and effectiveness of our method across different levels of objects, particularly in handling large-scale scenarios.
**Edge Prediction Results Analysis:** For edge prediction, UniGAD achieves the best performance on 4 out of 7 datasets for AUROC and 6 out of 7 datasets on F1-macro. While our method is designed for a multi-task setting, the performance on a single level might be slightly compromised to ensure the model performs well across all tasks.
### **Q3: OOT and OOM under limited resources**
We acknowledge the reviewer's concern about the OOM and OOT issues for some baselines on large graphs. We agree that the limit of 24GB GPU RAM and 1-day running time might be insufficient for these resource-intensive methods. To address this, we borrowed A800 80G GPUs for additional experiments during the rebuttal period and extended the time limit to 2 days.
For reference, the datasets having OOM and OOT issues are:
|Dataset|Graph_num|Edge_num|Node_num|
|-|-|-|-
|Amazon|1|8,847,096|11,944|
|Yelp|1|7,739,912|45,954|
|MNIST0|70,000|41,334,380|4,939,668|
|MNIST1|70,000|41,334,380|4,939,668|
|T-Group|37,402|93,367,082|11,015,616|
For prompt-based multi-task methods previously facing OOT issues, we employed faster GPUs, extended time limits, and optimized the pre-processing code. These improvements enabled GraphPrompt to complete experiments on all datasets except T-finance. However, All-in-One remained slower than GraphPrompt and failed to finish MNIST0/1 and T-Group within 2 days. This is because All-in-One requires learning token structures through pairwise relationships and calculating dot products between prompt tokens and input graph nodes, which is computationally intensive. In contrast, GraphPrompt simply incorporates a learnable vector into graph pooling via element-wise multiplication, enabling faster processing. The updated results are as follows:
||GraphPrompt(AUROC/AUPRC/F1-macro)|All-in-One|UniGAD(Ours)
|-|-|-|-|
|Amazon(Node)|50.01/6.62/40.93|56.11/1.02/48.67|**97.84/87.29/91.33**
|Amazon(Edge)|50.96/2.64/35.95|54.8/3.13/2.45|**92.18/42.01/73.59**
|Yelp(Node)|49.83/12.41/40.90|49.77/46.10/14.43|**86.23/61.00/74.57**
|Yelp(Edge)|49.56/13.63/42.94|49.13/13.49/46.29|**79.05/40.90/66.66**
|MNIST0(Node)|81.16/82.89/80.66|OOT|**99.99/99.99/99.99**
|MNIST0(Graph)|83.88/36.25/52.39|OOT|**99.61/97.92/95.54**
|T-Group(Node)|47.40/1.06/50.77|OOT|**96.19/31.31/68.69**
|T-Group(Graph)|50.81/2.36/49.78|OOT|**88.78/55.64/78.09**
Two graph-level anomaly detection methods, iGAD and GmapAD, initially encountered OOM issues. Using 80GB of GPU RAM, iGAD successfully ran on the MNIST0 and MNIST1 datasets. However, T-Group exceeded memory limits due to the large number of nodes per graph. We switched the processing to CPU, which was completed in 2 days. Conversely, GmapAD could operate within the 80GB memory limit on the A800 GPU but still timed out even with a 2-day limit. We discovered that the final SVM predictor in GmapAD becomes significantly slower with a large number of training samples.
||iGAD(AUROC/AUPRC/F1-macro)|UniGAD(Ours)
|-|-|-|
|MNIST0|98.93/94.79/87.73|**99.61/97.92/95.54**
|MNIST1|99.50/97.98/95.04|**99.98/98.60/97.60**
|T-Group|64.44/5.92/46.51|**88.78/55.64/78.09**
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for addressing my concerns. I will maintain my current rating for now. I look forward to discussions with the other reviewers and the area chair, and I am open to adjusting my rating if necessary after the discussion.
---
Reply to Comment 1.1.1:
Title: Acknowledgement from the Authors
Comment: Thank you for your continued engagement and thoughtful consideration of our work. We'll carefully incorporate your comments into our manuscript. We’re glad that our responses have addressed the concerns you raised. We’ll be available until the end of the rebuttal if you have any follow-up questions or further points of discussion. | Rebuttal 1:
Rebuttal: Dear Reviewers,
We are deeply grateful for your constructive and insightful feedback. We sincerely appreciate your recognition of our contributions to the field of graph anomaly detection. The reviewers have highlighted several key strengths of our work, including UniGAD's unique capability to unify node, edge, and graph-level tasks, the advanced methodologies introduced through MRQSampler and GraphStitch Network, robust empirical performance with strong zero-shot transferability, solid theoretical foundations, and clear presentation. These aspects collectively address a significant gap and promote multi-task learning in current GAD research. We have carefully considered each comment and provided detailed, point-by-point responses to address all the feedback received, further strengthening our manuscript.
The additional experiments carried out during the rebuttal period are summarized as follows:
- **AUPRC:** All results related to the additional AUPRC metric are presented in supplementary PDF Tables 1-4 and discussed in Q1 of the NHz6 rebuttal.
- **High-degree dataset T-Finance:** The new results for high-degree T-finance dataset are included in Q2 of the NHz6 rebuttal.
- **Time and space evaluation:** Experiments assessing time and space efficiency on the extensive T-Group dataset are depicted in Figure 1 of the supplementary PDF and are discussed in Q1 of ji27 and Q3 of ymWJ rebuttal.
- **OOM and OOT Issues:** The results under the extended time and space constraints to address out-of-memory and out-of-time issues are detailed in Q3 of NHz6 and Q2 of ymWJ.
**We attached a one-page PDF summarizing the additional experimental results.** For detailed discussion, please refer to our reviewer-specific feedback.
Pdf: /pdf/23fd5d53defe7f13c22947aa612de973f2ad076b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Learning Human-like Representations to Enable Learning Human Values | Accept (poster) | Summary: The paper explores how representational alignment between humans and AI agents affects the ability of AI systems to learn human values efficiently and safely. The authors propose that AI systems learning human-like representations can generalize human values better and ensure safer exploration during learning. They support their hypothesis through theoretical analysis and extensive experiments, including simulations and human value judgments in a reinforcement learning setting.
Strengths: I have a positive view of this work. The paper presents a novel approach by linking representational alignment with value alignment, addressing a significant challenge in AI safety. The experiments are thorough and well-structured, covering various aspects of human values and employing multiple machine learning models.
Weaknesses: 1. The theoretical analysis relies on strong assumptions, such as specific kernel functions and Gaussian process regression, which may limit the generalizability of the results. Discuss the impact of these assumptions on real-world applicability and consider additional theoretical or empirical validations to support the generalizability of the method.
2. Using the Spearman correlation coefficient as a measure of alignment might have its advantages. However, it is crucial to investigate whether other alignment metrics have been considered or tested. Conducting comparative experiments to evaluate the impact of different alignment metrics on the experimental results will help in understanding the robustness and reliability of the chosen metric and could potentially identify more effective ways to measure representational alignment.
3. Presentation
(a) I suggest that the authors highlight the best results in the tables and clarify in the caption whether each metric is better when higher (Mean Reward) or lower (Immoral Actions Taken). Additionally, they should note any significant p-value with an asterisk (*). (b) I also suggest that the authors change the colors of the GPT models and Embedding Models in Figure 3, as the current colors are difficult to distinguish at first glance.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Can the authors clarify the relationship with inverse reinforcement learning and imitation learning?
2. Although the authors have demonstrated in the experiments that the method in the paper is indeed effective, one point still puzzles me: in Figure 3, in the right panel (generalization phase), it is evident that with the increase of representational alignment, the mean reward increases and immoral actions taken decrease. The number of immoral actions taken decreases to near zero in the generalization phase at the point near 0.35, which is even better than the same level of representational alignment in the personalization phase. Can you explain the reason for this?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: 1. The findings might not apply to all machine learning models and architectures. Additional experiments with different models and architectures would strengthen the conclusions and demonstrate the broader applicability of the results.
2. The human value judgments used in the experiments were collected from a relatively homogenous group (English-speaking internet users from the US). This limits the generalizability of the findings to a more diverse population. Future studies should include a wider range of participants to ensure that the results are universally applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *> The theoretical analysis relies on strong assumptions, such as specific kernel functions and Gaussian process regression, which may limit the generalizability of the results. Discuss the impact of these assumptions on real-world applicability and consider additional theoretical or empirical validations to support the generalizability of the method.*
We appreciate the suggestion to further test the generalizability of our results beyond the kernel-based experiment. To address this, we have expanded the results from our work by including a more extensive, more realistic evaluation of large language models performing the text-based learning task, which we have outlined in the general response.
*> Using the Spearman correlation coefficient as a measure of alignment might have its advantages. However, it is crucial to investigate whether other alignment metrics have been considered or tested. Conducting comparative experiments to evaluate the impact of different alignment metrics on the experimental results will help in understanding the robustness and reliability of the chosen metric and could potentially identify more effective ways to measure representational alignment.*
While choosing the Spearman correlation coefficient, we did investigate other possible metrics and compared them. Below we list the metrics we considered and why we rejected them:
- Pearson correlation: Spearman correlation can capture monotonic non-linear relationships because it is rank-based, whereas Pearson correlation cannot. Individual similarity matrices may be on different scales or have different biases (e.g., tending towards higher or lower ratings), and Spearman correlation enables an equivalent comparison of these matrices regardless of these factors. Our theory section in the paper also supports this choice.
- Spearman correlation between all pairs of (personalization, generalization) actions, personalization actions only, or generalization actions only: This measure is sensitive to the specific choice of personalization vs generalization set, and does not accurately reflect the overall degree of representational alignment between two agents.
We appreciate the reviewer’s comment and will add a section to the appendix describing these alternative metrics.
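For concreteness, the alignment measure we use (Spearman correlation over the upper-triangular, off-diagonal entries of two similarity matrices) can be computed along the following lines. This is an illustrative sketch rather than our exact implementation; all function names here are ours:

```python
def ranks(values):
    # Average ranks (handles ties), 1-based.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean rank of the tied block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman rho = Pearson correlation computed on the ranks.
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

def representational_alignment(sim_a, sim_b):
    # Compare only upper-triangular, off-diagonal entries:
    # diagonals are constant, and the matrices are symmetric.
    n = len(sim_a)
    flat_a = [sim_a[i][j] for i in range(n) for j in range(i + 1, n)]
    flat_b = [sim_b[i][j] for i in range(n) for j in range(i + 1, n)]
    return spearman(flat_a, flat_b)
```

Because the measure is rank-based, any monotone rescaling of one similarity matrix leaves the alignment score unchanged, which is exactly the scale/bias invariance discussed above.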
*> Presentation (a) I suggest that the authors highlight the best results in the tables and clarify in the caption whether each metric is better when higher (Mean Reward) or lower (Immoral Actions Taken). Additionally, they should note any significant p-value with an asterisk (*). (b) I also suggest that the authors change the colors of the GPT models and Embedding Models in Figure 3, as the current colors are difficult to distinguish at first glance.*
We thank the reviewer for their helpful suggestions to improve clarity and will implement these changes in the final version of the paper.
*> Can the authors clarify the relationship with inverse reinforcement learning and imitation learning?*
Inverse reinforcement learning explicitly models the reward function of the demonstrator and seeks to infer it from their actions. Imitation learning uses the actions of the demonstrator in specific states and tries to learn that function directly. Both are different from our setting, in which the agent simply performs a reinforcement learning task and receives feedback on the actions it takes based on human values. The agent has no explicit representation of the reward function of the human or the actions they would take, but is trying to learn good actions in an “environment” created by a human’s values. We will clarify this distinction in the final version of the paper.
*> Although the authors have demonstrated in the experiments that the method in the paper is indeed effective, one point still puzzles me: in Figure 3, in the right panel (generalization phase), it is evident that with the increase of representational alignment, the mean reward increases and immoral actions taken decrease. The number of immoral actions taken decreases to near zero in the generalization phase at the point near 0.35, which is even better than the same level of representational alignment in the personalization phase. Can you explain the reason for this?*
In the experiment shown in Figure 3, during the personalization phase, all agents are in the process of learning to identify moral and immoral actions. This means that they will almost certainly take at least some immoral actions during their learning process, because they start with only their representation space and no other knowledge of the actions or their rewards. However, in the generalization phase, the agents have already gone through the personalization phase, during which they learn what actions are moral vs immoral. This results in the observation you have made, that the agents that have higher representational alignment are able to generalize their learnings exceptionally well to new, unseen actions (hence, near-zero immoral actions taken).
---
Rebuttal Comment 1.1:
Title: Thank you for the responses!
Comment: Thank you to the authors for their responses. Most of my questions have been addressed. After considering your responses and the feedback from other reviewers, I will maintain my evaluation.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response!
If you do have any remaining concerns, please let us know, and we'd be happy to try to address them!
---
Rebuttal 2:
Comment: *> The findings might not apply to all machine learning models and architectures. Additional experiments with different models and architectures would strengthen the conclusions and demonstrate the broader applicability of the results.*
We used three different kernel-based methods (kernel regression, support vector regression, and Gaussian process regression) to produce our results in the paper, and in the experiment using human morality judgments, we used similarity kernels obtained from a variety of embedding models via distance measures between embeddings as well as from multiple LLMs via prompting for similarity judgments. We have additionally further expanded the results from our work by including a more extensive evaluation of large language models performing the text-based learning task, which we have outlined in the general response. We appreciate the reviewer’s suggestion to demonstrate broader applicability of our results.
*> The human value judgments used in the experiments were collected from a relatively homogenous group (English-speaking internet users from the US). This limits the generalizability of the findings to a more diverse population. Future studies should include a wider range of participants to ensure that the results are universally applicable.*
We appreciate the reviewer’s suggestion and have called out this limitation of our work in the discussion section of our paper. We believe that future studies would benefit from a wider and more representative group of humans providing value judgments. | Summary: The authors test whether the human alignment of LLM representations is related to how well LLMs can learn personalised preferences. They collect preference ratings and similarity judgements from humans for various value-related stimuli. The authors use the ratings to construct a reinforcement learning problem for LLMs. Using both simulations and the human data collected, they show that LLMs whose representations’ kernel aligns with the kernel of the human similarity judgements gain more reward (i.e. cater better for human preferences) and select fewer unsafe actions. While the initial results presented are in the domain of morality, the authors show these results generalise to other value-based domains such as compassion and fairness.
Strengths: - The work focuses on the timely problem of personalising LLM behaviour in a safe manner.
- The benefit of representational alignment for safe and effective personalisation is shown in several domains and using several function approximators.
- Several LLMs that span both open-source and closed-source space are considered. It is nice to see that the two different ways of obtaining representations from open vs. closed source models yield similar results, which can be beneficial for guiding future work.
Weaknesses: - The hypothesis that increased representational alignment allows for better learning of human values is not actually tested in the paper. If I understand correctly, all the LLM experiments are done using a kernel derived from representations and some kernel-based function approximator. This is not how humans interact with LLMs. The paper can be greatly improved by running the exact same bandit experiment by prompting LLMs and obtaining their behaviour. If the LLMs that have more aligned representations with humans perform better in the behavioural version of the task, the proposed hypothesis would be supported. Otherwise there is no way to know if the current bandit results translate to behaviour. In fact, just the results of such a behavioural experiment alone would be sufficient to test this hypothesis. It is unclear to me what the benefit of the kernel-based reward learning approach is. Would it not suffice to correlate reward obtained from behaviour with the representational alignment?
- The results of the simulations are not informative. What is being shown is that if you corrupt the true generative kernel that goes into the function approximator (i.e. decrease representational alignment), the model performs worse. Using the type of corruption you employ, where some scores are randomly assigned, this is to be expected under any function approximation problem with the types of models you use. I think these findings can either go into the Appendix or can be removed, as they currently take up a lot of space and attention in the paper.
- The conclusion drawn from the control experiment is confusing. If you mismatch the kernel with the reward function the model needs to learn, the performance goes down during personalisation. However, Table 3 in the Appendix shows that a human kernel performs better than a length kernel when the rewards are defined over length during generalisation. I appreciate the authors discuss the reasons behind this. However, the findings are followed by “[…]human-like representations are not necessarily always helpful for all tasks. Conversely, a representation based on a reward function such as the length of an action description can help support safe exploration in that particular task, but not for learning human values.” It does seem like human representations are helpful in both tasks you defined, assuming the generalisation phase is more important than the personalisation phase.
- More minor suggestions to improve clarity:
- It is hard to distinguish the two green colours used in Figure 3. Your legends overlap with the y-axis labels.
- The sentence “The representational alignment of a particular agent is measured as the Spearman correlation between the upper triangular, off-diagonal entries of the corrupted and actual similarity matrix (because diagonal entries are all the same, and the similarity matrix is symmetric)” is repeated almost verbatim later in the text.
- Please consider making your figure captions more informative. Figure 3 and Figure 4 almost have identical captions. Also, for Figure 3, the caption only mentions embedding models but that is not just what is plotted. Some of the sentences in the main text, such as the one starting on line 280, can be moved to the caption for better flow.
Technical Quality: 3
Clarity: 2
Questions for Authors: - I’m not sure what “s.t. |a| = 10.” refers to in the pseudocode.
- I’m confused about how actions are sampled from the function approximators’ estimates. Do you do Thompson sampling from the estimates? Pseudocode suggests so. However, in the text, Thompson sampling is described completely separately
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: I think the authors raise some of the important limitations in the text. In fact, they highlight that a behavioural experiment with LLMs can be useful, but this is discussed with respect to faster convergence. I believe the paper would improve greatly if the weakness points I raised are addressed, especially the first one. Then, I would be happy to consider increasing my score!
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *> The hypothesis that increased representational alignment allows for better learning of human values is not actually tested in the paper. If I understand correctly, all the LLM experiments are done using a kernel derived from representations and some kernel-based function approximator. This is not how humans interact with LLMs. The paper can be greatly improved by running the exact same bandit experiment by prompting LLMs and obtaining their behaviour. If the LLMs that have more aligned representations with humans perform better in the behavioural version of the task, the proposed hypothesis would be supported. Otherwise there is no way to know if the current bandit results translate to behaviour. In fact, just the results of such a behavioural experiment alone would be sufficient to test this hypothesis. It is unclear to me what the benefit of the kernel-based reward learning approach is. Would it not suffice to correlate reward obtained from behaviour with the representational alignment?*
We appreciate the suggestion to provide a more realistic implementation of our kernel-based experiment using LLMs. To address this, we have expanded the results from our work by including a more extensive evaluation of large language models performing the text-based learning task, which we have outlined in the general response. These additional experiments provide some support to our hypothesis, though we believe that performing a full evaluation of current LLMs will require designing more challenging tasks as many existing models perform at ceiling on our current task.
*> The results of the simulations are not informative. What is being shown is that if you corrupt the true generative kernel that goes into the function approximator (i.e. decrease representational alignment), the model performs worse. Using the type of corruption you employ, where some scores are randomly assigned, this is to be expected under any function approximation problem with the types of models you use. I think these findings can either go into the Appendix or can be removed, as they currently take up a lot of space and attention in the paper.*
The goal of our simulations is to verify our theoretical results, which identify how (mis)alignment of representations transforms into performance reduction, not just the fact that it does (which is a straightforward result). The goal of this is to predict, in practice, how misalignment with human representations limits the ability to learn human values such as ethics. We show that there is a simple functional relationship between representational alignment and performance on the reinforcement learning task, consistent with the predictions of our theoretical account.
In addition, we perform a reversed form of the corruption experiment in the appendix, in section A.4, “Evolution of Alignment of Language Models with Humans”. In this case, we increase the amount of representational alignment between language model kernels and human similarity judgments via interpolation, and show that there is a similar relationship between representational alignment and performance on learning human values as that observed in the simulated experiments with corruption via randomization.
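The two manipulations described above, corrupting a kernel via randomization and increasing alignment via interpolation, could be sketched as follows. This is only an illustrative sketch; our paper's exact procedures and parameter choices may differ, and the function names are ours:

```python
import random

def interpolate_kernels(k_model, k_human, alpha):
    # K_mix = (1 - alpha) * K_model + alpha * K_human:
    # alpha = 0 leaves the model kernel unchanged, and
    # alpha = 1 replaces it entirely with the human kernel.
    n = len(k_model)
    return [[(1 - alpha) * k_model[i][j] + alpha * k_human[i][j]
             for j in range(n)] for i in range(n)]

def corrupt_kernel(k_true, fraction, rng=random):
    # Randomize a fraction of the off-diagonal entries (symmetrically),
    # decreasing alignment with the true kernel while keeping it a
    # valid symmetric similarity matrix with unit diagonal.
    n = len(k_true)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    k = [row[:] for row in k_true]
    for i, j in rng.sample(pairs, int(fraction * len(pairs))):
        k[i][j] = k[j][i] = rng.uniform(0.0, 1.0)
    return k
```

Sweeping `fraction` (or `alpha`) and measuring downstream reward then traces out the relationship between representational alignment and task performance.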
We appreciate the suggestion to utilize the space within the paper to further emphasize the later results, and will make edits to shorten the section accordingly.
*> The conclusion drawn from the control experiment is confusing. If you mismatch the kernel with the reward function the model needs to learn, the performance goes down during personalisation. However, Table 3 in the Appendix shows that a human kernel performs better than a length kernel when the rewards are defined over length during generalisation. I appreciate the authors discuss the reasons behind this. However, the findings are followed by “[…]human-like representations are not necessarily always helpful for all tasks. Conversely, a representation based on a reward function such as the length of an action description can help support safe exploration in that particular task, but not for learning human values.” It does seem like human representations are helpful in both tasks you defined, assuming the generalisation phase is more important than the personalisation phase.*
We appreciate the reviewer’s comment. As we mention in the text below the table: “We note that the human kernel performs far better than the length kernel in generalization, both in the morality and length reward case, and the length kernel performs quite poorly on generalization for both reward functions. We note that the length kernel's performance on generalization to the length task is slightly poorer than the human kernel. After running additional experiments, we confirmed that the length kernel is highly sensitive to the choice of personalization/generalization action sets and thus performed quite poorly on a few experiments, but typically still outperforms or is comparable to the human kernel on this task. “
We agree that this could be made more clear via the data presented, and we will update the table with data from multiple trials of the control experiment in the final version of the paper to better demonstrate this.
---
Rebuttal 2:
Comment: *> More minor suggestions to improve clarity:*
*- It is hard to distinguish the two green colours used in Figure 3. Your legends overlap with the y-axis labels.*
*- The sentence “The representational alignment of a particular agent is measured as the Spearman correlation between the upper triangular, off-diagonal entries of the corrupted and actual similarity matrix (because diagonal entries are all the same, and the similarity matrix is symmetric)” is repeated almost verbatim later in the text.*
*- Please consider making your figure captions more informative. Figure 3 and Figure 4 almost have identical captions. Also, for Figure 3, the caption only mentions embedding models but that is not just what is plotted. Some of the sentences in the main text, such as the one starting on line 280, can be moved to the caption for better flow.*
We thank the reviewer for their helpful suggestions to improve clarity and will implement these changes in the final version of the paper.
*> I’m not sure what “s.t. |a| = 10.” refers to in the pseudocode.*
From the algorithm description, “Randomly select $a \subset A$ s.t. $|a|=10$.” $A$ is the set of all 50 actions, and $a$ is the randomly selected set of 10 actions shown to the agent at a particular moment. $|a| = 10$ indicates that the number of allowable actions per loop is 10. We appreciate the question and have provided additional clarification on this point in the algorithm description.
*> I’m confused about how actions are sampled from the function approximators’ estimates. Do you do Thompson sampling from the estimates? Pseudocode suggests so. However, in the text, Thompson sampling is described completely separately*
The function approximators’ estimates provide some scalar prediction of the expected reward from each action. However, an agent relying entirely on this estimate (which is initially poor and uninformative) will not trade off between exploration and exploitation in its environment, and instead will continually take actions for which it received some nonzero reward in the beginning. We apply Thompson sampling to these scalar expected reward predictions for the agents in order to induce a smoother learning behavior, such that agents will continue to (probabilistically) explore new actions until they increase their level of certainty on which are the best actions to take. This is mentioned in one line of the pseudocode from Algorithm 1: “Choose a new action $x$ via Thompson sampling over agent's predicted rewards.” We appreciate the question and have placed additional emphasis on this point in the paper.
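To make the selection step concrete, here is a minimal sketch of Thompson sampling over per-action reward estimates, together with the random selection of the 10 allowable actions per round. We assume Gaussian reward posteriors purely for illustration; the names and details below are not our exact implementation:

```python
import random

def thompson_select(candidate_actions, predict):
    """Pick an action by drawing one plausible reward per candidate
    and taking the argmax, instead of greedily trusting the means.

    `predict` maps an action to (mean, std) of its estimated reward;
    uncertain actions get wide draws and so keep being explored.
    """
    best_action, best_sample = None, float("-inf")
    for action in candidate_actions:
        mean, std = predict(action)
        sample = random.gauss(mean, std)  # one draw from the belief
        if sample > best_sample:
            best_action, best_sample = action, sample
    return best_action

def run_round(all_actions, predict, subset_size=10):
    # "Randomly select a subset of A s.t. its size is 10":
    # only 10 of the 50 actions are available in a given round.
    shown = random.sample(all_actions, subset_size)
    return thompson_select(shown, predict)
```

As the agent's posterior standard deviations shrink with experience, the sampled rewards concentrate around the means and the policy smoothly shifts from exploration to exploitation.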
---
Rebuttal Comment 2.1:
Title: Author-Reviewer Discussion period ending
Comment: With the author-reviewer discussion period ending soon, we wanted to take this opportunity to thank you again for your suggestions and to check whether you had any remaining questions after our response above. We especially hope that you have a chance to review the additional experiments we ran based on your suggestions and described in the general response (official comment titled "Additional Experiments: Few-Shot Learning with LLMs"). We re-link the anonymized 1-page pdf from that general response here for convenience: https://drive.google.com/file/d/1lb0GMAbuMaiLmNwUkYpUuBZ47BjJEAFQ/view?usp=sharing
If we've addressed your concerns, we'd be grateful if you'd consider updating your score!
---
Rebuttal Comment 2.2:
Comment: I thank the authors for the clarifications and the additional analyses. Given the additional evidence linking the representational analyses to behavior, I am happy to raise my score to a 5.
The reason I’m not giving a higher score is the limited scope of the evidence (ceiling effects, weak correlations, and limited models). I agree with the authors that a harder task that can provide a better test in more realistic settings would be highly beneficial in the future. It would be particularly interesting if this could be a task that all LLMs agree to respond to.
---
Rebuttal 3:
Title: Additional experiments provided past rebuttal deadline
Comment: Hi Reviewer 7UgR,
~~The authors are not allowed to provide external links to additional results. See the guidelines:~~
~~"Can we include an anonymous link in the author rebuttal? No. Do not use links in any part of the response. The only exception is if the reviewers asked for code, in which case you can send an anonymized link to the AC in an Official Comment (make sure all linked files are anonymized)." https://neurips.cc/Conferences/2024/PaperInformation/NeurIPS-FAQ~~
~~I made an exception for these authors to provide their allowed rebuttal PDF as an external link because they had some technical issue uploading the PDF. But, they are not allowed to modify it past the rebuttal deadline of August 6. Therefore, please disregard the additional results provided on August 11 when adjusting your score.~~
Please disregard the above, the authors have clarified they did not modify the PDF
---
Rebuttal Comment 3.1:
Title: Pdf not modified since posting on Aug 7
Comment: Dear Area Chair,
We have not modified the pdf on August 11. This is the exact same link and pdf as posted in our general response. By clicking details on the Google doc link you can see that the pdf has **not been modified** since being created on Aug 7, which is when we posted the anonymized link. Please let us know if you'd still like us to remove it. | Summary: This paper looks at the importance of having human-like representations for learning human values. It does this for kernel methods specifically, allowing it to operate on the level of the covariance matrix implied by the representation, rather than the representation itself. The paper presents a number of results: 1) Theoretical analysis shows that misalignment in the covariance matrix between a “teacher” (the human) and a “student” (the AI system) can result in error, and that this error is most impacted by misalignment in the matrix measuring similarity between training and test data. 2) synthetic experiments in a multi-armed bandit setting confirm the theoretical analysis; they show that increasing misalignment between representations leads to decreased performance and an increase in the number of immoral actions taken. 3) Experiments with embeddings extracted from real language models again show this negative relation between representational alignment with humans and a kernel method’s ability to learn human values. A more fine-grained version of this experiment which looks at learning individual types of human values (fairness, morality, …) confirms the existence of this relation across nearly all types.
Strengths: In general this is a very solid paper which leaves little to be desired. It identifies an interesting and important question in ML: namely how important human-aligned representations are to learning human values. It contributes answers to this question from a variety of angles, including theoretical analysis and real-world empirical experiments. All experiments are to my judgement sound and report statistical significance. The writing is excellent and I appreciate the inclusion of a refresher on kernel methods in the appendix.
Weaknesses: The last part of the theoretical analysis in 3.1 looks at a setting with two training examples and one test example. I think this would have been stronger if it had also included the more general setting with $N$ training examples and $M$ test examples.
Technical Quality: 4
Clarity: 4
Questions for Authors: This work has focused on kernel methods. How much do these results now tell us about non-kernel methods? For example about estimators that can learn (near-) arbitrary functions of the given representations?
From line 182, you use variables $c_T^g$ and $c_S^g$. These have not been defined. Though one could guess from context what they mean, it would be better to introduce them properly.
As you point out in the discussion, human values vary significantly across cultural and individual levels. A reference to the Moral Machine Experiment [1], which specifically sought to quantify this variation, would help to furnish this point.
[1] Awad, E., Dsouza, S., Kim, R. *et al.* The Moral Machine experiment. *Nature* **563**, 59–64 (2018). https://doi.org/10.1038/s41586-018-0637-6
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Limitations such as the representativeness of the collected data and the recognition that human judgement is varied is properly addressed in the discussion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *> The last part of the theoretical analysis in 3.1 looks at a setting with two training examples and one test example. I think this would have been stronger if it had also included the more general setting with N training examples and M test examples.*
Thank you for the great suggestion! We have now extended the theory to the general case with n training examples and m test examples:
“We can extend this result to the case where there are $n$ training examples and $m$ test examples. Let $e_m, e_n$ be column vectors consisting of $m$ and $n$ ones, respectively. To allow us to find the analytical form of the prediction expression, suppose that the covariance between each pair of training examples is $c^p \neq 1$, that training examples are normalized to have variance $1$, and that the covariance between each pair of train and test examples is $c^g$. Then $K_T=(1-c^p)I + c^p e_n e_n^\top$ and $K^*_T=c^g e_n e_m^\top$. Applying the Sherman-Morrison formula and simplifying the resulting expression we get $K^{-1}_T=(1-c^p)^{-1}(I-\frac{c^p}{1+(n-1)c^p}e_n e_n^\top)$. Thus, the prediction is now $\hat{y}^g=K_T^{*\top}K_T^{-1}y^p=\frac{nc^g [1+(n-2)c^p]}{(1-c^p)[1+(n-1)c^p]}e_m e_n^\top y^p$. Misalignment in $K^*$, which can be represented by $|c^g_T-c^g_S|=\epsilon$, results in error $|\hat{y}^g-\tilde{y}^g|=|\epsilon d^p| y^p$, where $d^p$ is a function of $c^p$ but constant in $c^g$. Thus, error due to misalignment in $K^*$ grows linearly. Misalignment in $K$, which can be represented as $|c^p_T-c^p_S|=\epsilon$, results in error $|\hat{y}^g-\tilde{y}^g|=|(\frac{1}{1-c^p_T} - \frac{1}{1-c^p_T + \epsilon})d^g| y^p$, where $d^g$ is a function of $c^g$ but constant in $c^p$. Thus, error due to misalignment in $K$ ranges from $0$ to $|(1-c^p_T)^{-1}d^g| y^p$ and grows sublinearly with $\epsilon$. The resulting conclusions are therefore the same as in the special case of two training examples and one test example: the error grows monotonically as representational alignment decreases, and misalignment in $K^*$ has a larger effect on student performance than the same degree of misalignment in $K$ does.”
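As a sanity check on the closed-form inverse above, the following standalone snippet (not from the paper) builds $K_T=(1-c^p)I + c^p e_n e_n^\top$ for a small $n$ and verifies numerically that the Sherman-Morrison expression $K^{-1}_T=(1-c^p)^{-1}(I-\frac{c^p}{1+(n-1)c^p}e_n e_n^\top)$ really is its inverse:

```python
def ident(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def K(n, cp):
    # K_T = (1 - c^p) I + c^p e_n e_n^T: ones on the diagonal, c^p elsewhere.
    return [[1.0 if i == j else cp for j in range(n)] for i in range(n)]

def K_inv(n, cp):
    # Claimed closed form via Sherman-Morrison:
    # K_T^{-1} = (1 - c^p)^{-1} (I - c^p / (1 + (n-1) c^p) e_n e_n^T)
    s = cp / (1.0 + (n - 1) * cp)
    return [[((1.0 if i == j else 0.0) - s) / (1.0 - cp) for j in range(n)]
            for i in range(n)]
```

Multiplying `K(n, cp)` by `K_inv(n, cp)` recovers the identity matrix up to floating-point error for any $n \geq 2$ and $c^p \in (0, 1)$.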
*> This work has focused on kernel methods. How much do these results now tell us about non-kernel methods? For example about estimators that can learn (near-) arbitrary functions of the given representations?*
We focused on kernel methods to establish our theoretical results and validate these results empirically. We appreciate the suggestion to test the generalization of our experiment to more complex estimators, and ran the same experiment using a variety of LLMs in a few-shot learning setting; results are presented in the general response.
*> From line 182, you use variables cTg and cSg. These have not been defined. Though one could guess from context what they mean, it would be better to introduce them properly.*
Thank you, this is a great catch. We have now clarified that these refer to the teacher and student’s beliefs, respectively, about the covariance between the training and testing examples.
*> As you point out in the discussion, human values vary significantly across cultural and individual levels. A reference to the Moral Machine Experiment [1], which specifically sought to quantify this variation, would help to furnish this point.*
We thank the reviewer for the helpful suggestion of an additional reference and have added it to the paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response, and thank you for fixing the LaTeX formatting.
I have read the reviews of the other reviewers and the authors' responses. I agree with reviewer 7UgR that a direct evaluation of the morality of LLM's outputs versus their representational alignment would provide additional support for the central hypothesis of the paper, and would strongly encourage the authors to do such an experiment. However, conference papers have limited space for experiments, and in my view the experiments that are currently in the paper provide sufficient support for the hypothesis. Therefore, as concerns the current submission, I will maintain my score.
---
Reply to Comment 1.1.1:
Title: Additional experiments in general response
Comment: Thank you for responding!
Based on the suggestions from reviewer 7UgR we ended up running those additional experiments that directly evaluate morality both by having the LLMs play the bandit game, and by getting morality ratings directly from the LLMs.
We summarized the additional experiments in the general response (official comment titled "Additional Experiments: Few-Shot Learning with LLMs"). We re-link the anonymized 1-page pdf from that general response here for convenience: https://drive.google.com/file/d/1lb0GMAbuMaiLmNwUkYpUuBZ47BjJEAFQ/view?usp=sharing
We'd be grateful if you could check out the experiment description and results and let us know if this is what you were suggesting!
If yes, we'd of course also be really grateful if you consider updating your score.
---
Rebuttal 2:
Title: Fixed markdown LaTeX
Comment: We noticed after submitting the rebuttal that the LaTeX resulted in poor formatting, but rebuttals can no longer be edited so we are copying a fixed version here:
> *The last part of the theoretical analysis in 3.1 looks at a setting with two training examples and one test example. I think this would have been stronger if it had also included the more general setting with N training examples and M test examples.*
**Response**: Thank you for the great suggestion! We have now extended the theory to the general case with n training examples and m test examples:
“We can extend this result to the case where there are $n$ training examples and $m$ test examples. Let $e_m, e_n$ be column vectors consisting of $m$ and $n$ ones, respectively. To allow us to find the analytical form of the prediction expression, suppose that the covariance between each pair of training examples is $c^p \neq 1$, that training examples are normalized to have variance $1$, and that the covariance between each pair of train and test examples is $c^g$. Then $K_T=(1-c^p)I + c^p e_n e_n^\top$ and $K^\*_T=c^g e_m e_n^\top$. Applying the Sherman-Morrison formula and simplifying the resulting expression, we get $K^{-1}_T=(1-c^p)^{-1}(I-\frac{c^p}{1+(n-1)c^p}e_n e_n^\top)$. Thus, the prediction is now $\hat{y}^g=K_T^{\*\top}K_T^{-1}y^p=\frac{nc^g [1+(n-2)c^p]}{(1-c^p)[1+(n-1)c^p]}e_m e_n^\top y^p$. Misalignment in $K^\*$, which can be represented by $|c^g_T-c^g_S|=\epsilon$, results in error $|\hat{y}^g-\tilde{y}^g|=|\epsilon d^p| y^p$, where $d^p$ is a function of $c^p$ but constant in $c^g$. Thus, the error due to misalignment in $K^\*$ grows linearly. Misalignment in $K$, which can be represented as $|c^p_T-c^p_S|=\epsilon$, results in error $|\hat{y}^g-\tilde{y}^g|=|(\frac{1}{1-c^p_T} - \frac{1}{1-c^p_T + \epsilon})d^g| y^p$, where $d^g$ is a function of $c^g$ but constant in $c^p$. Thus, the error due to misalignment in $K$ ranges from $0$ to $|(1-c^p_T)^{-1}d^g| y^p$ and grows sublinearly with $\epsilon$. The resulting conclusions are therefore the same as in the special case of two training examples and one test example: the error grows monotonically as representational alignment decreases, and misalignment in $K^\*$ has a larger effect on student performance than the same degree of misalignment in $K$ does.”
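As a quick numerical sanity check of the Sherman-Morrison step above, the closed-form inverse of $K_T$ can be verified directly (a sketch with illustrative values of $n$ and $c^p$, not taken from the paper):

```python
import numpy as np

# K_T = (1 - c) I + c * e e^T for a training kernel with pairwise covariance c
n, c = 5, 0.3
e = np.ones((n, 1))
K = (1 - c) * np.eye(n) + c * (e @ e.T)

# Closed form from the Sherman-Morrison formula:
# K^{-1} = (1 - c)^{-1} (I - c / (1 + (n-1) c) * e e^T)
K_inv = (1 / (1 - c)) * (np.eye(n) - (c / (1 + (n - 1) * c)) * (e @ e.T))

# Multiplying them should recover the identity
err = np.max(np.abs(K @ K_inv - np.eye(n)))
```

The same check passes for any $n \geq 1$ and $c^p \neq 1$, since $1 - c + cn = 1 + (n-1)c$.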
---
Rebuttal 3:
Title: Disregard additional experiments provided past the rebuttal deadline
Comment: Authors: Technically, providing a link to outside results isn't allowed; you are allowed to provide a single rebuttal PDF within the time frame of the author rebuttal period. Although I initially allowed it due to your comment about technical difficulties, it is not valid to provide additional results via an external link beyond the deadline of the rebuttal period, since this isn't allowed for other authors. To be fair, **please revert the PDF at the link you provided to the version that you had at the end of the rebuttal period**. I would not want to have to disqualify your paper.
~~Reviewers: please disregard these additional results.~~
---
Rebuttal Comment 3.1:
Title: Pdf not modified since posting on Aug 7
Comment: Dear Area Chair,
We did not modify the pdf on August 11. This is the exact same link and pdf as posted in our general response. By clicking Details on the Google Drive link, you can see that the pdf has **not been modified** since being created on Aug 7, which is when we posted the anonymized link. Please let us know if you'd still like us to remove it.
---
Rebuttal 4:
Comment: > Based on the suggestions from reviewer 7UgR we ended up running those additional experiments that directly evaluate morality both by having the LLMs play the bandit game, and by getting morality ratings directly from the LLMs. We summarized the additional experiments in the general response (official comment titled "Additional Experiments: Few-Shot Learning with LLMs"). We re-link the anonymized 1-page pdf from that general response here for convenience: https://drive.google.com/file/d/1lb0GMAbuMaiLmNwUkYpUuBZ47BjJEAFQ/view?usp=sharing
Thank you for pointing this out to me. I apologize for having initially missed the additional direct evaluation of morality. Taking these results into account, I think the paper now provides solid support for the central hypothesis. I will raise my score to reflect this. | Summary: The paper addresses the challenge of ensuring that machine learning models learn to achieve explicit objectives without causing harm or violating human standards, which is crucial as these models operate in more open environments. They specifically focused on value alignment in LLMs, and note that this is challenging when models must align with user preferences, values, or morals after minimal interaction.
The authors propose that learning human-like representations, or representational alignment, can aid in quickly and safely learning human values. They design a reinforcement learning task involving morally-salient actions to explore this. They collected a dataset of human value and similarity judgments to simulate AI personalization settings, and conducted a human evaluation.
Strengths: 1) The paper introduces the concept of representational alignment as a means to achieve value alignment, which is a relatively unexplored area in AI research.
2) The authors design a specific reinforcement learning task and create a new dataset.
3) They collect human value and similarity judgment data
Weaknesses: 1) The experimental evaluation is not very extensive / the motivation for this is missing and it is hard to interpret the results. The results section must be expanded more thoroughly.
2) The reinforcement learning task and dataset used in the study is a bit narrow to generalize the findings across all possible real-world scenarios.
3) While the paper focuses on safe exploration, it does not deeply explore the ethical implications of the work / discussion should be expanded here.
4) The criteria and metrics used to evaluate the success of representational alignment in achieving value alignment could be more comprehensive. More detailed metrics would help in better assessing the effectiveness of the proposed approach.
Technical Quality: 2
Clarity: 1
Questions for Authors: 1) Why is it important for LLMs to learn human-like representations to learn human values? How can we validate that the LLM is doing this process?
2) How does this process work when there are several human values to consider?
Confidence: 2
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: Refer to weaknesses for more limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *> The experimental evaluation is not very extensive / the motivation for this is missing and it is hard to interpret the results. The results section must be expanded more thoroughly.*
Thank you for these suggestions. We will expand the results section in the final version of the paper. We have added additional text clarifying the motivation, namely:
We measure performance of agents in terms of mean reward (i.e. mean morality score), as well as number of immoral actions taken. We seek to develop learning agents who can both learn human values effectively (generalization ability) and perform their learning process in a safe, harmless manner (personalization and safe exploration), and these metrics help us to evaluate agents' performance with respect to both of these goals.
In addition, we have expanded the results by including a more extensive evaluation of large language models performing the text-based learning task, as outlined in the general response.
*> The reinforcement learning task and dataset used in the study is a bit narrow to generalize the findings across all possible real-world scenarios.*
Thank you for this observation. We incorporated a second experiment into the paper to address this concern, showing that our results hold across 10 different human value functions, which we believe improves the generalizability of our findings. Our focus on the reinforcement learning task was based on the heavy use of this kind of task in the value alignment literature (e.g. [1]), and in particular on a learning framework in which the agent's performing safe exploration is particularly important. Our intent wasn't to cover all possible real-world scenarios, but to use a task that has previously been used as a metric for value alignment to demonstrate the relevance of representational alignment in this setting. We would point out that previous work has found that representational alignment can be effective in facilitating learning in other few-shot settings [2], which, when combined with our results, suggests that this is a more general phenomenon. We have also performed additional experiments with LLMs via few-shot learning to help address this point; results are included in the general response.
[1] Nahian, M., Frazier, S., Harrison, B., Riedl, M. “Training Value-Aligned Reinforcement Learning Agents Using a Normative Prior,” 2021.
[2] Sucholutsky, I., Griffiths, T. “Alignment with human representations supports robust few-shot learning,” 2023.
*> While the paper focuses on safe exploration, it does not deeply explore the ethical implications of the work / discussion should be expanded here.*
We appreciate the reviewer’s suggestion and have expanded the Discussion and Limitations section to include the following:
This work could potentially introduce another dimension to consider when working towards building more ethical AI systems that are aligned with societal values. While we hope that our study will provide a new avenue for creating safe, moral, and aligned AI systems, we acknowledge that morality is a significantly more complex and multi-faceted concept than can be captured in a small number of ratings by English-speaking internet users. Our study is intended only to highlight the importance of aligning models' internal representations with the representations of their users. Our dataset should not be used as a benchmark for determining whether models are safe or moral.
*> The criteria and metrics used to evaluate the success of representational alignment in achieving value alignment could be more comprehensive. More detailed metrics would help in better assessing the effectiveness of the proposed approach.*
In our simulated experiments, we studied five different metrics related to safe exploration and value alignment: mean reward (mean "alignment"), number of "non-optimal" actions taken (i.e., the agent did not choose the most moral action available), number of immoral actions taken, iterations to convergence (i.e., the number of personalization iterations before the agent successfully learned the set of values), and the number of unique actions the agent had to take before it learned the values effectively. We showed in the simulations that all five metrics relate to the degree of representational alignment.
In our experiments using human data, we study two of these metrics, namely mean reward and number of immoral actions taken, in both the personalization and generalization phases (the other metrics no longer give meaningful information because we restrict the agent to a fixed number of iterations for personalization and generalization). Once again, we show that both metrics relate to the degree of representational alignment, in both personalization and generalization.
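For concreteness, a minimal sketch of how these two metrics could be computed from a logged trajectory (the variable names and the morality threshold below are our illustrative assumptions, not the paper's):

```python
import numpy as np

# Hypothetical per-step morality scores (rewards) observed during personalization
rewards = np.array([0.8, -0.2, 0.5, 0.9, -0.1, 0.7])

# Assumed convention: an action counts as "immoral" if its morality score is negative
immoral_threshold = 0.0

mean_reward = rewards.mean()                            # mean "alignment"
num_immoral = int((rewards < immoral_threshold).sum())  # immoral actions taken
```

Both quantities can then be compared against the agent's measured representational alignment.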
We are open to suggestions of other metrics that could be used, but our choice here was based on the kinds of measures that have been used to assess value alignment in the previous literature [1], [2]. A recent survey showed that the measures of representational alignment we adopted are widely used across cognitive science, neuroscience, and machine learning [2]. We appreciate the feedback and will provide a more detailed explanation behind our choice of metrics in the final paper.
[1] Hendrycks, D., Burns, C., Basart, S., Critch, A., Li, J., Song, D., Steinhardt, J. “Aligning AI with shared human values,” 2020.
[2] Sucholutsky, I., Muttenthaler, L., Weller, A., Peng, A., Bobu, A., Kim, B., Love, B., Grant, E., Groen, I., Achterberg, J., Tenenbaum, J., Collins, K., Hermann, K., Oktar, K., Greff, K., Hebart, M., Jacoby, N., Zhang, Q., Marjieh, R., Geirhos, R., Chen, S., Kornblith, S., Rane, S., Konkle, T., O'Connell, T., Unterthiner, T., Lampinen, A., Müller, K., Toneva, M., Griffiths, T. “Getting aligned on representational alignment,” 2023.
---
Rebuttal 2:
Comment: *> Why is it important for LLMs to learn human-like representations to learn human values? How can we validate that the LLM is doing this process?*
While it is certainly true that non-human representations may be better for some tasks, the task of learning human values and morals is intrinsically tied to learning things in a human way. We do acknowledge that cognitive biases may be present in some settings, and this would be a very interesting direction for future work. However, we chose to adapt action descriptions from the ETHICS dataset partly because these actions are simple and straightforward enough that the moral judgments on them were largely consistent. Modern models often do not learn human-aligned representations, and are misaligned across many domains.
We can validate that LLMs are learning more human-like representations by measuring the “closeness” of the pairwise similarity matrix to humans’ that we can collect given a set of stimuli, as we do in this work. In our paper, we work with pre-trained models and their (fixed) representations, and learning happens via few-shot prompting; however, the same method can be applied during the training process of language models to quantify how much their representations become more or less human-like over time.
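One common way to quantify this "closeness" is to correlate the upper triangles of the two pairwise similarity matrices (a sketch of one standard measure; the paper may use a different one):

```python
import numpy as np

def rsm_alignment(sim_model, sim_human):
    """Pearson correlation between the upper triangles of two
    pairwise similarity matrices over the same set of stimuli."""
    iu = np.triu_indices_from(sim_model, k=1)
    return np.corrcoef(sim_model[iu], sim_human[iu])[0, 1]

# Toy example: a model whose similarity judgments match the human matrix exactly
human = np.array([[1.0, 0.9, 0.1],
                  [0.9, 1.0, 0.2],
                  [0.1, 0.2, 1.0]])
score = rsm_alignment(human.copy(), human)
```

The same function can be evaluated at checkpoints during training to track how human-like a model's representations become over time.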
We appreciate the reviewer’s insightful question and will provide a more explicit explanation of this in our introduction.
*> How does this process work when there are several human values to consider?*
Our second experiment explores the case where there are multiple values that are learned using the same representation. In this setting, we show that representational alignment is beneficial across a set of 10 different human values. Our expectation is thus that alignment will be helpful in general, even though the specific relationships that are learned to capture different human values will differ.
As for aggregating multiple values to form pluralistic human value judgments, that extends beyond the scope of our work; however, this is an active research area as well. Some recent work that explores this area can be found in the following paper:
Sorensen, T., Jiang, L., Hwang, J., Levine, S., Pyatkin, V., West, P., Dziri, N., Lu, X., Rao, K., Bhagavatula, C., Sap, M., Tasioulas, J., Choi, Y. Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties, 2024.
---
Rebuttal 3:
Title: Author-Reviewer Discussion period ending
Comment: With the author-reviewer discussion period ending soon, we wanted to take this opportunity to thank you again for your suggestions and to check whether you had any remaining questions after our response above. We especially hope that you have a chance to review the additional experiments we ran based on reviewer suggestions and described in the general response (official comment titled "Additional Experiments: Few-Shot Learning with LLMs"). We re-link the anonymized 1-page pdf from that general response here for convenience: https://drive.google.com/file/d/1lb0GMAbuMaiLmNwUkYpUuBZ47BjJEAFQ/view?usp=sharing
If we've addressed your concerns, we'd be grateful if you'd consider updating your score! | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Exploring the Precise Dynamics of Single-Layer GAN Models: Leveraging Multi-Feature Discriminators for High-Dimensional Subspace Learning | Accept (poster) | Summary: This work introduces a simplified GAN framework to learn the subspace of a spiked covariance model. The authors are able to derive precise training dynamics given specific assumptions and show they correspond to numerical simulations. They also prove that convergence rate is often faster than previous work that used a further simplified discriminator (single vector vs. matrix parameter). Authors extend convergence results to the case where exact subspace dimension is not known a priori. Lastly MNIST results are conducted that show steady state feature alignment for this GAN method is higher than a standard method (Oja's method).
Strengths: **Clarity:** The authors introduce the spiked covariance model and their GAN method+ assumptions clearly
**Theoretical Results:** The authors use a variety of tools to characterize their system dynamics, and extend their results to differing dimensionalities of the generator and true distribution subspace
Weaknesses: **Clarity:** While most sections are well written, a few components made it more difficult to understand the key contributions of the paper
* The introduction doesn't clearly define single feature vs. multi-feature discriminator learning on lines 34/35
* I'd like to see more discussion of the interpretation of the main theorem (4.3), especially since the ODE dynamics are fairly complex
* Similarly in section 4.2 I'd like a brief discussion on how these microscopic dynamics are useful to analysis
* In Fig. 1 it is unclear what the difference between the blue and yellow lines is (same for red and green)
**Comparison to Previous Work**
* While GROUSE and Oja's method are introduced as alternative subspace methods, it appears as if only Oja's method is used as an experimental baseline
* I'd like to see one or two citations of other GAN stability analysis papers (e.g. something like Which Training Methods for GANs do actually Converge?) and their limitations as opposed to this works analysis
**Experiments:**
* Fig. 2 appears to show steady state feature alignment for GAN and Oja's method, but I would be interested in a comparison of the convergence rates as well.
**Significance:**
* Many assumptions seem hard to extend to the full GAN setting, for example A3 assumes an uncommon discriminator non-linearity and regularization function. A6 assumes the discriminator matrix parameter is orthonormal.
* Most of this work appears to be extending the results in [Wang et al. 2018] to a slightly more complex discriminator. This is probably my biggest concern, and would like to hear the authors opinion on this difference.
Technical Quality: 3
Clarity: 3
Questions for Authors: Is Fig. 2 meant to include GROUSE comparisons?
Does the generator need to be changed when moving from the spiked covariance dataset to the MNIST one?
Does Fig 2 suggest that making the discriminator learning rate as small as possible the best course of action for feature learning?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: While the authors state their assumptions explicitly, I would like to see further discussion on which assumptions could possibly be relaxed in the future.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the valuable feedback provided by the reviewer.
- *The introduction doesn't clearly ...*
We define single-feature to be a discriminator which can only learn one dimension of the subspace at a time. This is quite a slow process, especially when the number of features is high. We seek to show the benefits of using a multi-feature discriminator, which is a discriminator that can learn multiple (or all) dimensions at once, yielding a stronger discriminator. We will include a definition of these terms in the introduction.
- *I'd like to see more discussion ...*
We will include a more detailed explanation of the ODE. However, we include the key details here:
1) The key quantity that we care about is the macroscopic state $\mathbf{P}$, representing the similarity between the true and learned subspaces. We can see that it depends on the interaction between the discriminator and the true subspace, and on the interaction between the generator and discriminator. If either of those states is the zero matrix, training will not be possible. However, we find that random initialization is sufficient for escaping the fixed point around 0.
2) The discriminator-true subspace macroscopic state (i.e., $\mathbf{Q}$) depends on both its current overlap with the true subspace, as well as how the generator and discriminator interact.
3) All the states depend on the covariance matrices (either the true or fake covariances). As can be seen in Figure 1, the dimension with the largest covariance (shown in red) is the dimension that learns the features best. However, there is also a balance: using too high a value for the covariances means that the generator samples appear unlike the true samples, preventing training.
- *Similarly in section 4.2 ...*
We will include such a discussion. The main use of the microscopic dynamics is to derive the PDE, from which the ODE is derived using the relevant test functions (cosine similarity).
- *In Fig. 1 It is ...*
The red and green lines represent the two dimensions of the generator, while the blue and yellow lines are the two dimensions of the discriminator.
- *While GROUSE and Oja's method ...*
We primarily used Oja's method due to the asymptotic equivalence of Oja's method and GROUSE, and since the datasets used are of sufficiently high dimensionality.
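For reference, a minimal sketch of Oja's method for subspace estimation on spiked-covariance data (all parameters are illustrative, and the per-step QR re-orthonormalization is one common implementation choice, not necessarily the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, steps, lr = 20, 2, 3000, 0.01

# Spiked covariance data: x = sqrt(rho) * U z + noise, with true subspace U
U, _ = np.linalg.qr(rng.standard_normal((d, k)))
rho = 5.0

W, _ = np.linalg.qr(rng.standard_normal((d, k)))  # subspace estimate
for _ in range(steps):
    x = np.sqrt(rho) * U @ rng.standard_normal(k) + rng.standard_normal(d)
    W += lr * np.outer(x, x @ W)   # Oja update: W <- W + lr * x x^T W
    W, _ = np.linalg.qr(W)         # keep columns orthonormal

# Singular values of U^T W are the cosines of the principal angles,
# so values near 1 indicate the true subspace has been recovered
cosines = np.linalg.svd(U.T @ W, compute_uv=False)
```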
- *I'd like to see one or two ...*
We will be adding the following citations:
[1] Mescheder, L.M., Geiger, A., Nowozin, S. (2018). Which Training Methods for GANs do actually Converge?
[2] Fedus, W., Rosca, M., et al.. Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence At Every Step.
The first paper is focused on analyzing the different methods of GAN regularization and how those impact training. Specifically, they focus on gradient penalties.
The second paper suggests analyzing GAN training from the perspective of Nash Equilibria, and while focused on true datasets, does not provide a way of analyzing the training and its modes, instead seeking to understand how different choices such as a gradient penalty impact training.
If there are any additional papers that you believe will be important for us to contrast against, we would be happy to take your recommendations.
- *Fig. 2 appears to show ...*
For convergence rates, please see the attached PDF for a comparison of convergence rates on a real dataset (as measured by Grassmann Distance).
- *Many assumptions seem hard ...*
While we focus on the linear case, we find these results interesting and suggests future exploration into whether such a trend exists in the nonlinear case as well. Specifically, instead of the usual trend in training GANs of using a weak discriminator compared to the generator, it seems that it is possible to use a discriminator of similar strength.
While this is true, we chose these in order to provide a fair comparison between single-feature (sequential) learning (found in the original paper Wang et al. 2018) and multi-feature learning. Furthermore, there are ways to constrain the optimization of a matrix to be orthonormal (by modifying the gradients to live on the Grassmannian manifold).
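One such constrained update can be sketched as a gradient step projected onto the tangent space at $W$, followed by a QR retraction back onto the orthonormal matrices (an illustrative sketch, not necessarily the paper's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, lr = 10, 3, 0.1

W, _ = np.linalg.qr(rng.standard_normal((d, k)))  # orthonormal columns
grad = rng.standard_normal((d, k))                # stand-in for dL/dW

# Grassmannian-style tangent projection: remove the component of the
# gradient lying inside span(W), then retract with QR so W^T W = I holds.
riem_grad = grad - W @ (W.T @ grad)
W_new, _ = np.linalg.qr(W - lr * riem_grad)

ortho_err = np.max(np.abs(W_new.T @ W_new - np.eye(k)))
```

The retraction guarantees the orthonormality assumption (A6) is maintained at every step of training.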
- *Most of this work ...*
The importance of our work comes not just from using a more complex discriminator, but in showing (both theoretically and empirically) that contrary to the standard assumptions, it is in fact possible to train GAN models with more powerful discriminators. Furthermore, training speed is faster, and the model actually learns more of the true subspace compared to a weaker discriminator. Empirically, we can see from Figure 5 the key differences in the learned features, even when providing the sequential discriminator 5 times as much training time. While we focus on a linear discriminator and generator, we believe that these results are important and open up important avenues for future research into extending these results into practical models.
- *Is Fig 2. meant to include GROUSE comparisons?*
Figure 2 is using the derived ODEs for both the GAN and Oja’s method, since [3] shows that Oja’s method and GROUSE have the same asymptotic trajectory in high dimensions, and so the same ODEs. However, we will make this more clear.
[3] Chuang Wang, Yonina C. Eldar, and Yue M. Lu. Subspace estimation from incomplete observations: A high-dimensional analysis.
- *Does the generator need ...*
No, there is no change in the generator.
- *Does Fig 2 suggest ...*
For the situation in Figure 2, we use a warm initialization, avoiding the fixed point around 0. In such a case, if the goal is to attain the best learning of the subspace regardless of training time, smaller learning rates are better. In Figure 2, we consider the steady-state values after 5000 timesteps, to ensure that all the approaches have converged. However, in reality, training time is an important factor to consider, and so it will be important to balance these when picking learning rates.
---
Rebuttal 2:
Title: Acknowledgement of Rebuttal
Comment: I thank the authors for their detailed response. In particular, I appreciate the addition of experiments focusing on convergence rate, and clarification of grouse/oja's method baselines. Adding the author's response about the Figure 2 learning rates to the experimental discussion would also be useful.
I see the other reviewers were also concerned about novelty compared to the Wang et al. 2018 work. While I still think the multi-feature discriminator is a fairly incremental change over a single feature, I agree the authors' analysis for the case of unknown subspace dimension is enough to distinguish it.
Given the promised changes I am willing to increase my score by a point. I am interested to see reviewer 7DBz's thoughts on these changes as well. | Summary: This paper proposes to learn the subspace from the observations by the GAN model. Taking the one-layer GAN model as the starting point, this paper provides a theoretical analysis from the perspective of training dynamics. Specifically, from the technical side, the proposed method trains both the generator and discriminator by using adversarial loss and subspace regularization. Different from the original GAN dynamic training method, this paper further proposes an assumption that "the columns of the discriminator matrix W are orthonormal"
The proposed method is evaluated on both synthetic and real-world datasets.
Strengths: This paper provided a systematical analysis of GAN-based methods and conventional approaches, from both theoretical and empirical sides.
More simulation results are attached in the appendix to verify the effectiveness of the method.
Weaknesses: I have a big concern about the contribution of this paper.
1) The reason for using GAN for subspace learning, instead of other frameworks (such as VAE), is unclear. The advantages and drawbacks of this choice should be discussed. Please refer to the questions for the details.
2) This paper provides a theoretical analysis from the perspective of training dynamics (Wang et al. 2018). However, most equations and theoretical results are similar to those in (Wang et al. 2018), such as equations [1,5,6,7,8,9,10]. Only the assumption that "the columns of the discriminator matrix W are orthonormal" is newly added; moreover, the loss function in (Wang et al. 2018) had already constrained this.
3) Technically, some GAN-based empirical methods [RW1,RW2] tried to add subspace regularization into the GAN model. Some methods apply the disentanglement constraints to achieve the identification of the latent space, such as [RW3].
Technically, what is the advantage of the proposed subspace regularization $tr(H(W^T W))$?
4) In the experiments, the comparison with VAE-based methods [RQ1,RQ2], the GAN-based empirical methods [RW1,RW2] should also be involved.
[RW1] Liang, Jie, et al. "Sub-GAN: An unsupervised generative model via subspaces." Proceedings of the European Conference on Computer Vision (ECCV). 2018.
[RW2] Jiang, Hongxiang, et al. "Orthogonal Subspace Representation for Generative Adversarial Networks." IEEE Transactions on Neural Networks and Learning Systems (2024).
[RW3] Xie, Shaoan, et al. "Multi-domain image generation and translation with identifiability guarantees." The Eleventh International Conference on Learning Representations. 2023.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1) What is the unique advantage of the GAN model for subspace learning? Why not use the VAE-based framework, such as iVAE [RQ1] and PCL[RQ2]. In my understanding, subspace learning is a representation learning method, VAE-based methods explicitly model a mapping from observation to latent variables (encoder), which may be more efficient to use in subspace learning.
2) What is the detailed difference between this paper and (Wang et al. 2018) on the theoretical side?
3) What is the advantage of the proposed subspace regularization $tr(H(W^T W))$ over other regularization methods [RW1,RW2]?
[RQ1] Khemakhem, Ilyes, et al. "Variational autoencoders and nonlinear ica: A unifying framework." International conference on artificial intelligence and statistics. PMLR, 2020.
[RQ2] Hyvarinen, Aapo, and Hiroshi Morioka. "Nonlinear ICA of temporally dependent stationary sources." Artificial Intelligence and Statistics. PMLR, 2017.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: No clear negative societal impact needs to be listed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: While we appreciate the reviewer’s comments, we respectfully disagree with his/her assessment of the level of novelty. We refer the reviewer to our joint statement to all reviewers to clarify the novelty of our paper.
- *This paper provides a theoretical analysis from the perspective of training dynamics (Wang et al. 2018). However, most equations and theoretical results are similar as (Wang et al. 2018), such as equations [1,5,6,7,8,9 10]. Only the assumption that "the columns of the discriminator matrix W are orthonormal" is newly added. However, given that the loss function in (Wang et al. 2018) had already constrained this.*
Regarding the similarity to the previous approach, our loss function and design were intentionally and carefully chosen to be similar to (Wang et al. 2018), in order to provide a fair comparison between the two approaches. Specifically, our main goal was to understand how multi-feature (stronger) discriminators compare to single-feature (weaker) discriminators, with all else being equal. Note that this is not a trivial extension; furthermore, by introducing the uplifting method, we are the first to show that such an analysis provides novel insights into the training of GAN models with real datasets. The prior analysis was focused only on synthetic datasets and only allows for sequential learning (one feature at a time), and as our experimental results have shown, it is not straightforward to apply the previous analysis to real datasets. As such, we believe that our contributions are significant and have the potential to open new research directions toward a better characterization of GAN training dynamics.
- [Q1:] *What is the unique advantage of the GAN model for subspace learning? Why not use the VAE-based framework, such as iVAE [RQ1] and PCL[RQ2]. In my understanding, subspace learning is a representation learning method, VAE-based methods explicitly model a mapping from observation to latent variables (encoder), which may be more efficient to use in subspace learning.*
The purpose of our experiments is to show that the insights gained from the theoretical analysis are actually transferable to real situations. Furthermore, we focus on the linear case, while both [RW1] and [RW2] are in general non-linear models with significantly more advanced setups. VAE models are outside the scope of our work, as our goal is to understand how GAN methods fit into subspace learning in general. Moreover, while there is limited work on this type of analysis for VAE models, such as the paper [1] cited in our paper, it is focused on other issues such as posterior collapse, and the ODEs provided (Appendix A of [1]) are far too complicated for any similar analysis to be performed.
[1] Ichikawa, Y., \\& Hukushima, K. (2023). Learning Dynamics in Linear VAE: Posterior Collapse Threshold, Superfluous Latent Space Pitfalls, and Speedup with KL Annealing. ArXiv, abs/2310.15440.
While it is possible to view VAE models from a subspace learning approach, our key goal is to understand how GANs compare to other traditional subspace learning methods. Additionally, the VAE models you have mentioned are non-linear models relying on MLPs, and so do not provide a fair comparison.
- [Q2:] *What is the detailed difference between this paper and (Wang et al. 2018) on the theoretical side?*
The following are the key differences between the papers on the theoretical side:
- We focus on the advantages of a multi-feature discriminator compared to a single-feature discriminator. This includes understanding the increases in speed that comes with a multi-feature discriminator, as well as the increases in overall learning of the true subspace, measured in terms of Grassmann distance between the true and learned subspaces.
- We introduce a method for extending the analysis to cases where the dimensionality of the true subspace is not known. This method allows for analyzing more realistic situations and real-world scenarios, where such information is not known and not learnable.
- We show how GAN models can be viewed as a new type of subspace learning algorithm, which we can call the data-driven approaches, compared to the algebraic (Oja's) and geometric (GROUSE) methods discussed in our paper. We use the characterization of training dynamics to provide a systematic comparison of the different approaches' performance.
We refer the reviewer to our joint statement to all reviewers for a more detailed explanation of these key points.
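For readers unfamiliar with the metric mentioned above, the Grassmann distance between the true and learned subspaces can be sketched in a few lines. This is the standard principal-angle construction, not necessarily the exact implementation used in the paper:

```python
import numpy as np

def grassmann_distance(U, V):
    """Geodesic Grassmann distance between span(U) and span(V).

    U and V are (n, k) matrices whose columns form orthonormal bases.
    The principal angles between the subspaces are the arccosines of
    the singular values of U^T V; the distance is the 2-norm of the
    vector of principal angles.
    """
    s = np.linalg.svd(U.T @ V, compute_uv=False)
    angles = np.arccos(np.clip(s, -1.0, 1.0))
    return np.linalg.norm(angles)

# Identical subspaces are at distance 0; orthogonal 1-D subspaces
# in R^2 are at distance pi / 2.
U = np.array([[1.0], [0.0]])
V = np.array([[0.0], [1.0]])
```

A distance of zero means the learned subspace coincides with the true one, which is how convergence is measured in the comparisons above.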
- [Q3:] *What is the advantage of the proposed subspace regularization $tr(H(W^T W))$ over other regularization methods [RW1,RW2] .*
The regularization term $tr(H(W^T W))$ is a simple way of enforcing that the discriminator matrix $W$ remains orthonormal during training. With a sufficiently high $\lambda$ value, this term forces the matrix to be orthonormal, which is required by our assumptions. However, the term is not meant as a conventional regularizer; its purpose is only to enforce orthonormality.
Note that orthonormality is useful in subspace learning because it lets each dimension capture as much distinct information as possible, without overlap (which would mean redundant information).
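As a concrete (hypothetical) illustration of such an orthonormality-enforcing term: since the paper's $H$ is not reproduced here, the sketch below uses the common stand-in penalty $\lambda\,\|W^\top W - I\|_F^2$, which vanishes exactly when the columns of $W$ are orthonormal:

```python
import numpy as np

def orthonormality_penalty(W, lam=1.0):
    """Soft orthonormality penalty lam * ||W^T W - I||_F^2.

    A stand-in for the tr(H(W^T W)) term discussed above: it is zero
    exactly when the columns of W are orthonormal and grows as they
    drift, so a sufficiently large lam pushes W back toward an
    orthonormal basis during gradient descent.
    """
    k = W.shape[1]
    G = W.T @ W - np.eye(k)
    return lam * float(np.sum(G * G))

W = np.eye(4)[:, :2]  # orthonormal columns -> zero penalty
```

Because the penalty is soft, it only encourages (rather than guarantees) orthonormality, matching the caveat above that the term does not explicitly force the constraint.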
[RW2] proposes a 3-stage process to improve regularization of the learned subspace. The goal is not simply for an orthonormal basis of the subspace, but also for a more explainable and interpretable latent space, which is not our goal or focus in this work. Furthermore, we do not need to use multiple stages of training.
[RW1] uses clustering to identify certain subspaces, and predicts which subspace the data belongs to. We focus on learning a single subspace.
Overall, the mentioned papers do not allow for a fair comparison with our approach or our choice of subspace regularization.
---
Rebuttal Comment 1.1:
Comment: Thank you for taking the time to carefully consider my concerns. I apologize for the delayed response.
I agree with Reviewer N4Fr that the technical extension of the multi-feature discriminator appears incremental. There are studies that support the use of multiple discriminators to enhance GAN models, such as [1] and [2]. Additionally, employing different discriminators for extra guidance has been explored in works like [3].
Additionally, I recognize that the theoretical subspace analysis introduces a new point over Wang et al. 2018. My concern here lies in the distinct difference this approach has compared to VAE-based frameworks. The authors argue that VAE-based methods focus on non-linear models, which they suggest is an unfair comparison. However, I believe that the linear case is simply a special instance of non-linear functions. Analyzing non-linear settings is indeed more challenging, and single-layer or linear cases may not fully reflect real-world applications.
Minor suggestion: If all equations are identical to those in previous work, it would be beneficial to remove them and simply provide a reference. This could help draw more attention to the unique contributions of the paper.
Since these concerns remain open, I will maintain my current score.
[1] Choi, Jinyoung, and Bohyung Han. "Mcl-gan: Generative adversarial networks with multiple specialized discriminators." Advances in Neural Information Processing Systems 35 (2022): 29597-29609.
[2] Cai, Zhipeng, et al. "Generative adversarial networks: A survey toward private and secure applications." ACM Computing Surveys (CSUR) 54.6 (2021): 1-38.
[3] Ma, Cheng, et al. "Structure-preserving super resolution with gradient guidance." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the response, but we respectfully disagree with the assessment that the submission lacks novelty.
The reviewer argues that the technical extension of the multi-feature discriminator is incremental and lists multiple works that propose using multiple discriminators in practice. In contrast, our work introduces a new perspective on the single-layer GAN model by leveraging a powerful multi-feature discriminator on the subspace learning problem. There is a distinction between multi-feature discriminators and multiple discriminators, and we believe there may be a misunderstanding about what a multi-feature discriminator is, which we would like to clarify. The former is a single discriminator that learns multiple features of the true subspace at once; the latter uses multiple discriminators, each of which could be single-feature or multi-feature, with each discriminator focusing on a different part of the learning task. The listed works have nothing in common with our multi-feature discriminator approach.
We stated our perspective on VAE models before. While it is possible to view them from a subspace learning point of view, our goal is to understand how the training dynamics of GANs compare to those of other traditional subspace learning methods. We do not simply want to compare empirical results on specific datasets, but also to provide a theoretical comparison using the derived ODEs characterizing training dynamics. As mentioned in our previous response, the only such ODE existing for any VAE model, found in (Ichikawa et al. 2023), is far too complicated to realistically compare with Oja's method and the GAN models.
Finally, we would like to restate our contributions as follows:
1. We frame GANs as a new type of subspace learning algorithm, and systematically compare that to traditional methods like Oja’s and GROUSE based on training dynamics.
2. We demonstrate the speed and accuracy benefits of a multi-feature discriminator over a single-feature one, particularly in learning the true subspace.
3. We introduce an uplifting method to analyze cases where the true subspace's dimensionality is unknown, making the analysis more applicable to real-world scenarios.
Overall, we believe our work aligns with the high standards of NeurIPS, and we hope the reviewer will reconsider the novelty and significance of our contributions in this light. | Summary: This paper focuses on the training dynamics of gradient-based learning algorithms, converting them into a continuous-time stochastic process characterized by an Ordinary Differential Equation (ODE). Empirical evidence demonstrates the correctness of the proposed method.
Strengths: S1: This paper focuses on the training dynamics of gradient-based learning algorithms, converting them into a continuous-time stochastic process characterized by an Ordinary Differential Equation (ODE). Empirical evidence demonstrates the correctness of the proposed method. This is the first work that connects ODEs with the training dynamics of GANs, which is very inspiring.
Weaknesses: I am afraid I am not an expert in this ODE area. But still, as a machine learning scientist, I feel the empirical section lacks supportive evidence and ablation studies. Maybe adding more comparisons with other GAN-related generative models could help better support the assumptions in this paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: Could you think of other GAN baselines and evaluation metrics that could strengthen the empirical evidence in this paper? For example, the FID score is a popular metric in this generative modeling scenario. Maybe demonstrating that the ODE leads to a similar FID score would demonstrate the equivalence between this new model and GAN training?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: It seems to me there is no discussion on limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and effort. We would like to provide a more detailed summary of our paper, to clarify any confusions you may have about our work.
We focus on representing the training dynamics of a simplified (linear) GAN model using a system of ODEs, which represent the key states of the training (specifically the interactions between the true and fake subspaces, and the discriminator). We explore using a multi-feature discriminator, which is able to learn significantly faster than a weaker, single-feature (aka sequential) discriminator. The goal is to show that contrary to standard practice in GAN training of using a weak discriminator, it is actually possible to use a discriminator with equal power as the generator. In fact, doing so allows for both faster training as well as a better overall performance. As such, we believe that our results will open new research directions into understanding the training dynamics of more complicated GAN models, and the use of more powerful discriminators.
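As a toy illustration of the ODE viewpoint described above (not the paper's actual coupled generator-discriminator system), gradient descent with a small learning rate can be read as an Euler discretization of the gradient flow $dw/dt = -\nabla L(w)$:

```python
import numpy as np

# Toy loss L(w) = 0.5 * ||w - a||^2, so grad L(w) = w - a, and the
# gradient flow dw/dt = -(w - a) has the closed form
# w(t) = a + (w0 - a) * exp(-t).
a = np.array([1.0, -2.0])
w0 = np.array([5.0, 5.0])
w = w0.copy()
eta, steps = 1e-3, 5000  # Euler step size doubles as learning rate

for _ in range(steps):
    w = w - eta * (w - a)  # one gradient-descent step = one Euler step

w_ode = a + (w0 - a) * np.exp(-eta * steps)
# For small eta, the discrete trajectory closely tracks the ODE solution.
```

The same reasoning, applied to the coupled generator/discriminator states, is what lets an ODE system characterize the GAN training dynamics.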
We provide numerical results showing that the ODEs do actually match the GAN training, and we show that our analysis allows us to gain insight into training on real datasets, through our testing on MNIST. The purpose of using MNIST is not to claim that our approach is the best for this task, but to show that the trends and insights gained from the ODE appear as we would expect. For example, in the Grassmann distance on MNIST diagram (Figure 6), we see the oscillating pattern that is visible in Figure 1, second diagram.
Finally, we aim to position GANs in the category of online subspace learning algorithms, by showing how it learns a subspace compared to existing approaches. Specifically, we are able to empirically show that the features learned by the GAN model are semantically more meaningful than those learned by Oja's. It is possible to learn an arbitrary number of bases for a given subspace, but certain ones are more useful than others, and having features which actually match the true data is one way of seeing this.
FID is a metric used to compare two distributions of images. However, it depends on the model used for extracting the features, and using a model with no exposure to the type of data we are using (specifically grayscale images of digits and/or faces) will mean that the results provided are not very informative. Furthermore, as mentioned above, we do not claim that our model will achieve the best FID, as we do not focus on perceptual quality of the results. Instead, we are focused on metrics such as Grassmann distance, which measures the subspace itself, in terms of the learned features. This is more directly related to subspace learning, which is our setting.
---
Rebuttal Comment 1.1:
Title: Thanks for your rebuttal
Comment: Thanks for the further clarifications from the authors. I do believe your analysis allows us to gain insights into the GAN model using ODEs and contributes to the theoretical support of GAN models. I believe these theoretical contributions can further inspire more thoughts around generative training. I therefore decide to increase my score. | Summary: This work explores the training dynamics of a single-layer GAN model, especially for high-dimensional subspace learning, presented as a novel approach. By connecting GAN models, through this analysis, to subspace learning, this work compares the effectiveness of GAN-based methods with former approaches, e.g., Oja's method and GROUSE, both theoretically and empirically. The findings reveal that GANs demonstrate a remarkable ability to acquire a more informative basis due to their inherent capacity to generate new data samples, or in this case subspaces. The experiments demonstrate that subspace learning tasks are handled more efficiently than by the counterpart methods.
Strengths: - This paper provides a novel perspective on a fundamental problem in subspace learning. The idea of a GAN is employed to learn subspaces with a generator and a discriminator of equal power.
- This work provides some insights on technical issues implementing GAN to learn subspaces with some different hyperparameter settings in P7.
- The intuition and motivation for implementing GAN for subspace learning are adequate with relevant backgrounds and theoretical analysis.
Weaknesses: - Because the subspaces have to be orthonormal, we need to ensure that Eq. 7 maintains this property when performing gradient descents. How can this property be achieved? Or is this not necessary using the approach in this work? Commonly, this desired property can be achieved by geometry aware constraints applied to gradient steps.
- This work borrows the idea of GAN with a minimax objective between the generator and discriminator. One common issue in GAN is the learning instability (e.g., mode collapse) in the learning stage. This work does not discuss this issue in depth. Is this problem not the case for subspace learning?
- The efficacy of the proposed method is demonstrated on MNIST with promising performance. However, the dataset used in this work is relatively small with limited variances. The proposed method would be more insightful to work on a larger scale dataset e.g., a face dataset (MORPH).
- The claim that the learned features are better than Oja's method's in P9 cannot be justified from the depicted figure alone. This claim about the learned features has not been quantified: for what kinds of tasks, and for what purposes?
- The proposed method is limited to the linear setting. Would it be possible to use the GAN design for subspace learning in this method for non-linear subspace learning with more complex types of data? This work would be more insightful to provide relationships with the choice of non-linear functions as well.
Technical Quality: 2
Clarity: 2
Questions for Authors: Please answer the questions and concerns in weaknesses.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: There is no special section for limitations. It is hard to view direct negative impacts of this work as this is a core machine learning problem that can be applied to many applications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the valuable feedback provided by the reviewer.
- *Because the subspaces have to be orthonormal, we need to ensure that Eq. 7 maintains this property when performing gradient descents. How can this property be achieved? Or is this not necessary using the approach in this work? Commonly, this desired property can be achieved by geometry aware constraints applied to gradient steps.*
Regarding the orthonormality of the subspaces, it is required to enforce this property in some way. The easiest way to do this is by explicitly orthonormalizing the matrices after each gradient step. However, an alternative we have also tested is using the approach in [1], which allows for gradient updates which keep the matrix on the Grassmann manifold, thereby preserving the orthonormality condition. The last option is using the orthonormality regularization term $tr(H(W^T W))$, however this does not explicitly force the subspaces to be orthonormal.
[1] A. Edelman, T. A. Arias, and S. T. Smith. The geometry of algorithms with orthogonality constraints. SIAM Journal on Matrix Analysis and Applications, 20(2):303–353, 1998.
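A minimal sketch of the first option, explicit re-orthonormalization after each gradient step via a QR decomposition (the Edelman et al. approach instead retracts updates along the Grassmann manifold, which is not shown here):

```python
import numpy as np

def orthonormalize(W):
    """Re-orthonormalize the columns of W via a QR decomposition.

    The diagonal of R is sign-fixed so that the returned basis varies
    continuously with W (avoiding arbitrary column sign flips between
    consecutive training steps).
    """
    Q, R = np.linalg.qr(W)
    return Q * np.sign(np.diag(R))

# Hypothetical training step: gradient update, then project back.
rng = np.random.default_rng(0)
W = orthonormalize(rng.standard_normal((8, 3)))
grad = rng.standard_normal((8, 3))
W = orthonormalize(W - 0.1 * grad)  # columns are orthonormal again
```

This keeps the iterate exactly orthonormal after every step, unlike the soft regularization option, which only encourages the constraint.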
- *This work borrows the idea of GAN with a minimax objective between the generator and discriminator. One common issue in GAN is the learning instability (e.g., mode collapse) in the learning stage. This work does not discuss this issue in depth. Is this problem not the case for subspace learning?*
(Wang et al. 2018) provides an analysis in the single-feature case of the different types of learning, including mode collapse. They find that mode collapse depends on multiple details, including the relation between the generator and discriminator learning rates, the noise level present in the data, and the covariance matrix chosen for the sampled subspace.
We did find that similar trends hold in the multi-feature learning case. However, since this is not a novel result, and is not directly related to our main goal of understanding the advantages of a multi-feature discriminator compared to a single-feature discriminator, we choose not to focus on this.
- *The efficacy of the proposed method is demonstrated on MNIST with promising performance. However, the dataset used in this work is relatively small with limited variances. The proposed method would be more insightful to work on a larger scale dataset e.g., a face dataset (MORPH). -The claim that the learned features are better than Oja’s method in P9 cannot be justified only from the depicted figure. This claim of learned features has not been quantified in what kinds of tasks and for what purposes.*
The MORPH dataset is a commercial dataset which requires access to a license, which we do not have. Thus, we instead focus on a different, well-known face dataset, the Olivetti Faces dataset. Included in the attached PDF above are results for both the GAN model and Oja's method on this dataset. Further details can be found in our joint response to all reviewers, and in the caption of the figure. The theoretical insights driven by our work are highlighted in this new setting, in addition to the MNIST results.
- *The proposed method is limited to the linear setting. Would it be possible to use the GAN design for subspace learning in this method for non-linear subspace learning with more complex types of data? This work would be more insightful to provide relationships with the choice of non-linear functions as well.*
Non-linear GAN models are not within the scope of this work, and as such, we do not have any results or analysis of such a situation. We do believe that this is possible, but we have not derived the necessary ODEs or proved the relevant theorem to enable such an analysis on a more complicated model. However, future work will seek to extend such an analysis to a situation like you have described.
---
Rebuttal 2:
Title: Acknowledgement of Rebuttal
Comment: I would thank the authors for providing the responses to all concerns.
In particular, the additional experiments do show the significance of the multi-feature discriminator over the single-feature discriminator. Also, we can observe that, compared to Oja's method and the single-feature discriminator, the proposed method leads to faster convergence on the challenging dataset.
I agree with Reviewers 7DBz and N4Fr that the novelty in this work is somewhat incremental against Wang et al. Many equations and sections are borrowed from this source, even though they are explicitly attributed in the paper. The contribution of this work, the multi-feature discriminator GAN, is not highlighted, due to the adoption of major statements and equations from Wang et al. in the main paper. I agree with Reviewer 7DBz's suggestion that the repetitive and adopted information should be moved to the Appendix, so that the authors can keep the essentials of the proposed work in the main paper. Unfortunately, in the current version, insightful and distinctive analyses are located in the Appendix, e.g., the off-diagonal simulation and the comparison between single- and multi-feature GANs.
The position of this work is clear that the adoption from Wang et al. is not meant to reinvent the wheel but to provide backgrounds and further analysis of the proposed method. Specifically, this work extends the work of Wang et al. to multi-feature cases and arbitrary dimensions. Considering that the work is fairly incremental and the manuscript needs improvements, I would be lukewarm about this work and keep my current rating.
---
Rebuttal Comment 2.1:
Comment: We thank the reviewer for their valuable feedback. While we acknowledge and respect their concerns regarding the perceived lack of novelty and the structure of the manuscript, we would like to provide some clarification and context for the work.
- Firstly, we appreciate the recognition that our additional experiments demonstrate the significance of the multi-feature discriminator over the single-feature discriminator. The faster convergence of our proposed method compared to Oja's method and the single-feature discriminator on challenging datasets is a noteworthy contribution that we believe advances the state of the art in this area.
- Regarding the concerns about the novelty of our work in relation to Wang et al., we would like to emphasize that our intention was not merely to replicate their findings but to build upon them in a meaningful way. Specifically, our work extends Wang et al. by exploring the application of multi-feature discriminators in GANs and addressing arbitrary dimensions—a significant extension that, in our view, goes beyond incremental improvement. The adoption of equations and sections from Wang et al. was necessary to provide a clear foundation for our contributions and to ensure that our work was accessible to readers who may not be familiar with the previous work. Additionally, we are the first to explore how these theoretical developments can be uncovered in real-world datasets.
- We acknowledge the suggestions made by reviewer regarding the structure of the manuscript. We agree that the essentials of our proposed work could be more prominently highlighted in the main paper, and we are open to restructuring the manuscript to move some of the background information to the Appendix. This would allow us to better emphasize the novel aspects of our work in the main text.
- Finally, we understand that the perceived incremental nature of the work may affect the overall assessment of the reviewer. However, we believe that the contributions of this work—particularly in extending the multi-feature discriminator to arbitrary dimensions and demonstrating its effectiveness on challenging datasets—are both significant and novel. We hope that with the proposed restructuring and the clarifications provided, the reviewer might reconsider their evaluation on the novelty and impact of this work.
Thank you once again for your constructive feedback. We value your insights and hope to address them satisfactorily in our revised manuscript. | Rebuttal 1:
Rebuttal: Dear Reviewers and Area Chairs,
We appreciate the valuable feedback provided by the reviewers.
We are encouraged that the reviewers found that our paper provides a novel perspective on exploring the precise dynamics of the single-layer GAN model on subspace learning problems. We are also glad to see that the reviewers agreed that our experiments back up our claim that multi-feature discriminators enable improved training performance by jointly learning features in a non-sequential way, unlike the prior results. However, we understand that they may have some concerns about the level of novelty, which we address in our response to all reviewers.
The following points clarify the novelty of the paper with some updates:
- The key contribution is the analysis of multi-feature vs single-feature (sequential) discriminators when it comes to training GANs. By using a multi-feature discriminator, not only is training much faster, but in fact it is possible to learn the true subspace much better. Figure 4 in the Appendix shows the gains made by switching to a multi-feature discriminator in terms of cosine similarity with the true subspace. Additionally, Figure 6 shows this more concretely with a real dataset (MNIST), where we can see not only the much faster convergence, but also the better steady-state results.
- We also introduce a new method for analysis in the cases where the true subspace feature dimension is not known. The common assumption of knowing the exact number of features is restrictive and does not match real-world datasets or scenarios. Our uplifting technique allows for more broad analysis in future works, where it will be possible to analyze learning outcomes depending on whether the number of fake features is smaller or greater than the number of true features. Additionally, our ODEs work using this method, which allows for not just the two extremes of learning a single feature at a time, or all at once, but any possible number of features in between.
- We provide additional results (as requested) in the attached PDF on another dataset, the well-known face dataset Olivetti Faces. The results emphasize our point more clearly, as the top 16 features learned by the GAN model are much more diverse and representative of the entire dataset compared to the features learned by Oja's method. We see that Oja's method is faster in terms of convergence, but all the features learned are similar. Eventually, after approximately 50 timesteps of training, the GAN model outperforms Oja's method in terms of Grassmann distance.
- We position the GAN model within the other subspace learning algorithms, showing how it compares, and highlighting how the learned features are more semantically diverse and meaningful compared to subspace learning algorithms such as Oja's method, and in fact does eventually outperform those methods for a range of learning rates. We use the word meaningful to state that it is more representative of the dataset visually. We see this as the GAN model being forced to learn a basis that matches the dataset better, because it has to learn how to trick the discriminator.
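For context on the baseline discussed throughout, Oja's subspace rule can be sketched as a streaming update; this is the textbook form, not necessarily the exact configuration used in the experiments:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, eta = 10, 2, 0.02

# Stream of samples drawn from a k-dimensional subspace of R^d,
# plus a small amount of isotropic noise.
basis = np.linalg.qr(rng.standard_normal((d, k)))[0]
W = np.linalg.qr(rng.standard_normal((d, k)))[0]

for _ in range(5000):
    x = basis @ rng.standard_normal(k) + 0.01 * rng.standard_normal(d)
    y = W.T @ x
    # Oja's subspace rule: a Hebbian term plus a decay term that keeps
    # W approximately orthonormal while it rotates toward the
    # principal subspace of the data.
    W += eta * (np.outer(x, y) - W @ np.outer(y, y))

# span(W) should now approximate span(basis): projecting the true
# basis onto span(W) leaves it nearly unchanged.
proj = W @ np.linalg.pinv(W) @ basis
```

Note that any rotation of the converged basis spans the same subspace, which is why two methods can reach the same Grassmann distance while learning visually very different features.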
As such, we believe presenting our work at NeurIPS 2024 will significantly contribute to the discussions at the conference.
Pdf: /pdf/c767fab8b4bde83eaa0893b5fc4ec3914ddbba43.pdf | NeurIPS_2024_submissions_huggingface | 2024 |
Boosting the Transferability of Adversarial Attack on Vision Transformer with Adaptive Token Tuning | Accept (poster) | Summary: This paper introduces Adaptive Token Tuning (ATT) to improve the transferability of adversarial examples generated from ViTs. ATT is an improvement over traditional gradient-based algorithms, consisting of three independent methods. The first method reduces gradient variance by rescaling the token gradients computed throughout different modules and layers in the ViT. The second method is a scheduled, semantic-guided patch-out approach to increase the diversity of input during the iterative gradient ascent procedure. The third method is a truncation approach for attenuating the model-specific attention occurring at the deeper layers of the ViT. Experimental results on ImageNet show an improvement over existing transfer-based attacks.
Strengths: **Originality:**
The proposed ATT algorithm includes three strategies that work in combination to improve the transferability of perturbations generated from vision transformers. Each method is developed based on insights from previous work and improves upon them. The proposed method is original.
**Quality/Clarity:**
The presentation is clear, and the methods are well-explained. A thorough evaluation of transferability on a variety of target models, including transformers and both undefended and defended CNNs.
**Significance:**
The proposed work is a valuable addition to methods for generating more transferable perturbations from vision transformers.
Weaknesses: 1. Ln76: Black-box attacks also include those where the target model allows only limited access, i.e., query-based attacks.
2. The statement in ln 294-295 requires further consideration. The analysis itself does not show that the ATT perturbation captured more features than those generated by other algorithms. While the improvement suggests this, there is no analysis proving the statement.
3. Several statements in the paper require references, for instance, Ln218-219.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Do we have results showing the benefit of the adaptive strategy in variance reduction in terms of transferability improvement?
2. In (2), is it possible for the scaling factor to be negative? Wouldn't that change the direction of the gradient, resulting in a less optimal argmax of (1), and potentially making the perturbation less transferable?
3. For the Hybrid Token Gradient Truncation Method, many design decisions are made without clear explanation. For example, why do we have two different types of truncations? I appreciate the authors for including the analysis in A.5 on the number of truncated layers. For the experiment in 4.1, how is $\ell'$ selected for each model? Do we need to run a sweep for each model?
4. For the results in Tables 1 and 2, do all methods, including the baselines and ours, have Patchout?
5. In Ln 271, shouldn't the number of discards be equal between Patchout and SPPO for a fair comparison?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No discussion on limitation of the work. The checklist says the limitation is discussed in Sec 2 4 and Appendix A1, but I could not find any discussion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. **Answer to Weakness 1 -- query-based attacks:** We appreciate your comment on the broader definition of blackbox attacks, including query-based scenarios where the target model has limited accessibility. We will expand our discussion to encompass these types of attacks, providing a more comprehensive overview of blackbox scenarios in the context of adversarial machine learning. This will be incorporated in the introduction and related work sections to clarify the scope and applicability of our Adaptive Token Tuning (ATT) method.
2. **Answer to Weakness 2 -- the ATT perturbation captured more features:** We acknowledge the reviewer's point regarding the need for a more robust analysis to substantiate our claim that ATT perturbations capture more feature information. We refer the reviewer to our explanation under **Questions in review 9KoY**, which can be read as a further illustration of our method from the perspective of experimental results: retaining more feature information improves the transferability of the adversarial attack. Additionally, in response, we will refine our experimental section to include a comparative feature analysis between ATT and other algorithms. This analysis will aim to quantitatively demonstrate how ATT retains more relevant features, thereby enhancing transferability. This addition will address the gap between our observational claims and the empirical evidence provided.
3. **Answer to Weakness 3 -- References required for Ln218-219:** Thank you for highlighting the need for additional references in our manuscript. We recognize the importance of properly citing existing work to validate our statements. Specifically, for the assertions made in Ln218-219, we will ensure that all relevant literature is cited, providing a solid foundation for our claims and aligning with academic standards.
4. **Answer to Questions 1 -- Adaptive Variance Reduced and transferability:** We appreciate the inquiry into the impact of our adaptive strategy on variance reduction. As detailed in Table 4 of the supplementary PDF, our experiments demonstrate two key aspects: **(1)** variance reduction while preserving feature information, and **(2)** adaptive variance reduction that smooths gradient variance across layers.
Our analysis in Supplementary A.2 confirms that our strategy effectively reduces gradient variance, optimizing $\lambda$ to balance the trade-off between fixed and overly penalized scaling, which could hinder the transferability. This optimal setting ensures improved attack efficacy without compromising the adaptive nature of the method.
5. **Answer to Questions 2 -- The scaling factor to be negative?:** In our implementation (referenced in Line 147, as well as the code implementation), we ensure that the scaling factor remains non-negative, thus maintaining the original direction of token gradients. This is crucial for sustaining an effective adversarial attack, as negative values would indeed invert the gradient direction, undermining the attack's efficacy.
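To illustrate this constraint with a toy sketch (not our actual implementation; the function and values below are hypothetical), clamping the scaling factor at zero guarantees that no token gradient component flips sign:

```python
def rescale_token_gradients(grads, scales):
    # Clamp each scaling factor to be non-negative (a hypothetical
    # stand-in for the constraint described above), so the rescaled
    # gradient never inverts the original gradient direction.
    return [g * max(s, 0.0) for g, s in zip(grads, scales)]

grads = [0.4, -1.2, 0.7]
out = rescale_token_gradients(grads, [0.5, -0.3, 1.0])
# The negative factor is clamped to 0, so no component changes sign.
assert all(o * g >= 0 for o, g in zip(out, grads))
```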
6. **Answer to Questions 3 -- Hybrid Token Gradient Truncation Method:** The Hybrid Token Gradient Truncation Method is designed to optimize the utilization of the attention mechanism by adjusting gradients in different modules. Our motivation, grounded in the literature and empirical evidence (see reference [16] in our submitted manuscript), is to understand and enhance the impact of different modules in ViT layers on perturbation transferability.
For each surrogate model tested, we sweep each hyperparameter while keeping all others fixed, facilitating the focused analysis described in Supplementary A.5.
7. **Answer to Questions 4 -- Do Tables 1 and 2 include PatchOut?:** Regarding input enhancement strategies, Tables 1 and 2 deliberately exclude PO/SPPO to isolate the effects of our adaptive gradient and truncation strategies from those of input modifications. This choice was made to clearly demonstrate the independent effectiveness of our proposed methods.
8. **Answer to Questions 5 -- Number of PatchOut:** Our Self-Paced Patch Out (SPPO) approach dynamically adjusts the number of discards, unlike the fixed value used in Patch Out (PO). Since more discards reduce the features that perturbations can learn, we ensure a fair comparison by setting the expected number of discards in SPPO to match or exceed that of PO.
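As a toy sketch of this fairness constraint (the schedule below is an assumption for illustration, not our exact SPPO rule), the self-paced discard count can be floored at the fixed PO budget so its expected value matches or exceeds it:

```python
import random

def sppo_discard_count(iteration, total_iters, fixed_budget):
    # Self-paced schedule (hypothetical): discard few patches early
    # and more later, floored at the fixed PO budget so the expected
    # number of discards matches or exceeds PO's.
    frac = (iteration + 1) / total_iters
    return max(fixed_budget, round(2 * fixed_budget * frac))

def drop_patches(num_patches, k):
    # Randomly choose k patch indices to discard this iteration.
    return set(random.sample(range(num_patches), k))

counts = [sppo_discard_count(i, 10, 8) for i in range(10)]
assert min(counts) >= 8  # never below the fixed PO budget
```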
9. **Answer to Limitations:** We appreciate the reviewer's attention to the limitations of our work. We realize there was an oversight in our citation of the sections discussing limitations, which may have caused confusion. To address this, we will add a dedicated section in our manuscript that explicitly outlines the limitations of our Adaptive Token Tuning (ATT) method. This new section will discuss potential scalability issues, the dependency on hyperparameters, and any constraints related to the types of adversarial settings where our method may not perform optimally. We aim to provide a comprehensive and transparent overview of these aspects to better inform future research and application of our findings.
---
Rebuttal Comment 1.1:
Comment: Thank you for the responses. Most of my concerns have been addressed. For **Answer to Weakness 2 -- the ATT perturbation captured more features**, I strongly suggest that the authors either include additional experiments to support this statement or make the original claim more specific. For **Answer to Questions 2 -- The scaling factor to be negative?**, I also suggest clarifying this in the corresponding section.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 52P8:
Thank you for your suggestions on the various aspects of our work. We will provide a more accurate and detailed description of our methodology in our revised manuscript. We also appreciate your high recognition of our research. | Summary: The paper proposes an Adaptive Token Tuning (ATT) method to enhance the transferability of adversarial attacks on Vision Transformers (ViTs). The method introduces three optimization strategies: adaptive gradient re-scaling to reduce token gradient variance, a self-paced patch out strategy to enhance input diversity, and a hybrid token gradient truncation strategy to reduce the effectiveness of the attention mechanism. Extensive experiments demonstrate the superiority of ATT over existing methods in terms of attack success rate and transferability across various models.
Strengths: 1. The combination of adaptive gradient re-scaling, self-paced patch out, and hybrid token gradient truncation is novel and well-motivated, providing a fresh perspective on improving adversarial attack transferability for ViTs.
2. The experimental results show that ATT significantly outperforms state-of-the-art methods, achieving higher attack success rates and better transferability.
Weaknesses: 1. In the paper's formulation, there are multiple hyperparameters. Do these hyperparameters interact with each other? Why is γ set to 0.5?
2. In Figure 2, is the model used for prediction the same as the target model for the attack? From the figure, it appears that the proposed method indeed makes the model focus on the classification target more quickly. However, how do you explain that as the number of iterations increases, the attention no longer focuses on the classification target? The targets shown in the figure are all dogs, just different types, so the attention should still focus on the dog.
3. Have the effects been tested on defended ViTs?
Technical Quality: 3
Clarity: 3
Questions for Authors: Refer to Weakness section.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. **Answer to Weakness 1 -- Hyperparameters Interaction:**
The hyperparameters in our Adaptive Token Tuning (ATT) method fall into two categories: those related to gradient adjustment and those pertaining to input enhancement.
To mitigate the interaction between hyperparameters, these categories are adjusted independently and alternately.
For the gradient adjustment, the gradient penalty factor $\gamma$ is initially set to 0.5 as a median value to mildly rescale back-propagated token gradients. It will be adaptively adjusted within the ranges of $(0, 0.5]$ and $[0.5, 1)$ based on the gradient variance across consecutive ViT layers.
Our exploratory experiments showed that this value of $\gamma$ was not only a practical but also an effective choice.
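A hypothetical sketch of such an adaptive rule (the ratio-based formula here is only illustrative; the exact update in our method differs) shows how $\gamma$ can start at the median 0.5 and move into $(0, 0.5]$ or $[0.5, 1)$ depending on how gradient variance changes across consecutive layers:

```python
def adaptive_gamma(prev_var, curr_var, base=0.5):
    # Illustrative rule: when variance grows across layers
    # (ratio > 1), shrink gamma into (0, 0.5]; when it shrinks,
    # raise gamma into [0.5, 1). Keep gamma strictly inside (0, 1).
    ratio = curr_var / (prev_var + 1e-12)
    gamma = base / ratio if ratio > 1 else base * (2 - ratio)
    return min(max(gamma, 1e-3), 1 - 1e-3)

assert adaptive_gamma(1.0, 2.0) <= 0.5  # variance grew  -> (0, 0.5]
assert adaptive_gamma(2.0, 1.0) >= 0.5  # variance shrank -> [0.5, 1)
```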
2. **Answer to Weakness 2 -- Model Prediction and Target Focus in Figure 2:**
In our experiments (see Ln333-336), ViT-B served as the surrogate model, with visualizations in Figure 2 conducted on the black-box model CaiT-S/24 using Grad-CAM. The visualization does not specifically set a label; thus, attention is drawn to the category deemed most probable by the model. The shift in focus observed in the figure illustrates a change in the highest probability category, indicating a successful attack, characterized by a rapid and confident misclassification. This dynamic is further elaborated in Figure 6, where attention visualizations are presented for both correct and incorrect labels, highlighting the mechanism of our attack’s effectiveness.
3. **Answer to Weakness 3 -- Testing on Defended ViTs:**
As addressed in response to **Weaknesses 2 from Review zV8Q**, we have expanded our experimental validation to include defended (robust) ViTs, with findings detailed in **Table 3 of the rebuttal PDF**.
This additional testing confirms the effectiveness of our proposed ATT attack against various adversarial defenses, ensuring that our approach is not only theoretically sound but also practically viable against contemporary robust architectures.
---
Rebuttal Comment 1.1:
Comment: My concerns are properly addressed.
Thanks for the rebuttal!
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 38RD:
Thank you for your suggestions regarding the details of our experiments and broader experimental comparisons. We appreciate your feedback and the acknowledgment of the clarifications and new experiments, as well as your positive recognition of our work. | Summary: In this paper, the authors investigate three strategies to boost the transferability of adversarial attacks on Vision Transformers: an adaptive gradient re-scaling strategy, a self-paced patch out strategy, and a hybrid token gradient truncation strategy.
Strengths: 1 The experiments are solid.
2 The soundness of the method is good.
3 This paper is well written.
Weaknesses: 1 The title of the paper might be inappropriate to summarize the techniques proposed by the authors. The overall method is composed of three parts:
- An adaptive gradient re-scaling strategy to reduce the overall variance of token gradients. It is related to the back propagation for crafting adversarial samples.
- A self-paced patch out strategy to enhance the diversity of input tokens. It is related to the adaptive forward propagation for calculating the adversarial loss.
- A hybrid token gradient truncation strategy to weaken the effectiveness of attention mechanism. It is also related to the back propagation for crafting adversarial perturbations.
We can see that none of these points is related to token tuning. (Maybe I am wrong; I hope the authors can point out my mistakes.) As I understand it, token tuning usually refers to optimizing the tokens after tokenizing the input images. Since the designed attacks craft perturbations directly on input images, like most previous attacks, I think it might be inappropriate to summarize the proposed methods with the phrase "Adaptive Token Tuning".
2 In line 72, the authors claim that "robustly trained defense models still fail to meet security requirements". This observation contradicts the findings in [1]. I conjecture this is because the authors chose to attack secure models published 6 years ago. Considering the rapid development in the adversarial community, I recommend attacking recent secure models trained by $l_{\infty}$ AT provided by RobustBench [2].
3 The novelty of this paper is fair. The design principles of the proposed methods originate from previous papers; here, the authors simply make them adaptive. Higher flexibility often brings better transferability, but it also brings additional costs: it increases the difficulty of hyperparameter tuning. In addition, more new insights are needed to guide the community in better leveraging ViTs to attack deep models.
4 The comparison with the baseline methods in Table 1 and Table 2 might be unfair. The proposed method seems to acquire better transferability than other attacks; however, since the attack proposed in this paper combines the power of three attacks (TGR, PNA, and PatchOut), I think the appropriate baseline is TGR+PNA+PatchOut. It would help us more precisely pinpoint the gains brought by the proposed method.
Technical Quality: 3
Clarity: 3
Questions for Authors: In Table 2, the results reveal that the proposed method can better attack the robust CNNs. How about robust ViTs, such as DiT-S in [3], Swin-B in [4], Xcit-S in [5]?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: There is no single section to discuss the limitation and the broader Impact of the paper. Can you explain them with more details?
[1] https://github.com/Trustworthy-AI-Group/TransferAttacks
[2] https://robustbench.github.io/
[3] Are Transformers More Robust Than CNNs? in NeurIPS 2021.
[4] When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture, in NeurIPS 2022.
[5] A Light Recipe to Train Robust Vision Transformers, in SaTML 2023.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. **Answer to Weakness 1 -- Adaptive Token Tuning for our manuscript title:** We appreciate the reviewer's emphasis on the traditional definition of token tuning. In our work, token tuning extends beyond mere feature value processing; it encompasses the modification of feature values, attention weights, gradient information, and layer interaction, all centered around the token as the basic unit.
Our strategies, whether related to gradient tuning or input enhancement, adhere to this broader definition of token tuning. We will clarify this in our revised manuscript to avoid any misunderstanding.
2. **Answer to Weakness 2 -- Testing on Defended ViTs:** Based on the reviewer's suggestion, we conducted additional experiments on robust ViTs, as detailed in **Table 3 of our rebuttal PDF**.
Transferable black-box attacks were tested against three robust ViT models and their normal counterparts using our best strategies and those of comparative methods.
The results confirm that our approach consistently outperforms others in effectiveness.
3. **Answer to Weakness 3 -- Description of our innovations:**
**(1)** Although inspired by existing methods like PNA and TGR, our gradient treatment strategy is new, and its idea of treating the gradient is completely different from PNA and TGR (see our manuscript Ln146-154). On this basis, we introduced a novel adaptive adjustment strategy that aligns gradients from previous layers with those of the deepest layer. This alignment helps smooth out gradient variance and enhances feature information learning, moving beyond mere adaptivity.
**(2)** Our self-paced patch out strategy innovates beyond the random discard of patches. Our strategy introduced the concept of self-stepping learning and meanwhile distinguished between significant and non-significant regions when discarding tokens. This strategy is beneficial for learning the transferable attack pattern and also improves the efficiency of the attack, as demonstrated in Figures 2, 4, and Table 5 of our manuscript.
The relationship between our method and PatchOut is more like the relationship between dropout and its variants.
**(3)** Unlike TGR and PNA, our Hybrid Token Gradient Truncation truncates attention modules in deep ViT layers to learn generic feature patterns and adjusts corresponding truncation factors based on the role of different modules.
This nuanced approach differs significantly from previous methods and offers a tailored strategy for enhancing attack transferability.
In sum, the above-mentioned strategies differ from existing ViT gradient attacks; ours is a **`new`** and **`innovative`** approach to attacking gradients.
4. **Answer to Weakness 4 -- Tables 1-2 were fair:** We apologize for any ambiguity in our manuscript regarding the experimental setups described in Tables 1-3.
The explanation of the experiment’s fairness can be found in our responses to **Weaknesses 1 and 2 from Reviewer 9KoY**.
Regarding the combination of TGR, PNA, and PatchOut, our experimental design in Table 3 of our manuscript considers the synergistic and antagonistic effects of combining these methods.
We found that simply superimposing TGR and PNA could lead to excessive feature information loss, contradicting the suggested "PNA+TGR" approach.
An additional comparison experiment is provided in **Table 1 of our rebuttal PDF**.
Patchout could be used as an enhancement input, so in the comparison experiments, we made such a comparison in our manuscript Table 3.
5. **Answer to Questions -- Testing on Defended ViTs:**
To address this query, we extended our experiments to include robust ViT models like DiT-S, Swin-B, and Xcit-S. These results are presented in the updated **Table 3 in our rebuttal PDF**.
Our method shows comparable effectiveness against these robust ViTs, validating the adaptability and efficiency of our strategies across different robust architectures.
This inclusion provides a comprehensive view of our method's performance against state-of-the-art robust ViTs, in line with current advancements in adversarial machine learning.
6. **Answer to Limitations:**
We thank the reviewer for paying attention to the limitations of our work. We realize that there was an omission in our citation of the sections discussing limitations, which might have led to confusion. To handle this, we will add a dedicated section in our manuscript to clearly outline the limitations of our Adaptive Token Tuning (ATT) method. This new section will discuss potential scalability issues, the dependence on hyperparameters, and any constraints related to the types of adversarial settings where our method may not perform at its best. We will also take into consideration the social implications. We aim to provide a comprehensive and transparent overview of these aspects to better inform future research and application of our findings.
---
Rebuttal 2:
Comment: Thank you for your detailed rebuttal. It indeed addresses most of my concerns. However, the gain shown in Table 1 of your rebuttal seems marginal to me; thus, I will only increase my score marginally to 4.
---
Rebuttal Comment 2.1:
Comment: Thank you for considering our clarifications and additional experiments in the rebuttal. We greatly appreciate the valuable review and your decision to increase the rating. We will incorporate the discussions to our revised manuscript accordingly.
In response to your concern about Table 1 in our rebuttal PDF, we would like to address any potential ambiguity and provide further clarification on our experimental results from three perspectives.
In all the following tables, we consider four ViTs (ViTB/16, PiT-B/16, CaiT-S/24, Visformer-S) as surrogate models. The generated adversarial examples are then used to attack black-box target models, and the average ASR is calculated separately for 8 ViT models, 4 CNN models, 3 Adversarially Trained Defense CNN (a.k.a., Def-CNNs) models. Here, we primarily focus on comparing our proposed ATT attack method with two state-of-the-art methods, PNA [15] and TGR [14].
**(1)** Attacking ViTs via "Token Gradient-based Optimization":
Firstly, we focus solely on the ViT attack using "Token Gradient-Based Optimization." We denote these methods with `(w/o)` to indicate that it is "without input diversity enhancements."
**Tables A**
| Model | Attack | ViTs | CNNs | Def-CNNs | -- | Model | Attack | ViTs | CNNs | Def-CNNs |
|--|--|--|--|--|--|--|--|--|--|--|
| | PNA(w/o) | 62.8 | 38.8 | 26.3 | | | PNA(w/o) | 63.9 | 54.6 | 30.3 |
| ViT-B/16 | TGR(w/o) | 69.8 | 42.7 | 29.3 | | PiT-B/16 | TGR(w/o) | 78.7 | 68.0 | 39.3 |
| | ATT(w/o) | **77.4`+7.6`** | **49.8`+7.1`** | **36.0`+6.7`** | | | ATT(w/o) | **85.9`+7.2`** | **75.3`+7.3`** | **48.0`+8.7`** |
| -- | -- | -- | -- | -- |--| -- | -- | -- | -- | -- |
| | PNA(w/o) | 79.2 | 52.1 | 34.9 | | | PNA(w/o) | 54.9 | 52.1 | 25.5 |
| CaiT-S/24 | TGR(w/o) | 84.8 | 54.0 | 36.1 | | Visformer-S | TGR(w/o) | 62.7 | 62.2 | 30.6 |
| | ATT(w/o) | **91.2`+6.4`** | **68.2`+14.2`** | **50.2`+14.1`** | | | ATT(w/o) | **67.0`+4.3`** | **77.1`+14.9`** | **41.1`+10.5`** |
Specifically, our method ATT(w/o) employs two strategies: adaptive variance reduced token gradient and hybrid token gradient truncation. PNA(w/o) uses attentional skipping without any input diversity enhancement. TGR(w/o) relies solely on gradient regularization, also without input diversity enhancement.
Compared to state-of-the-art methods without input diversity enhancement, our ATT(w/o) shows a significant improvement in attack performance, with the increase in the average ASR highlighted by “`+∆`” in the Table A.
**(2)** Attacking ViTs via "Token Gradient-based Optimization" and "Input Diversity Enhancement":
Secondly, we incorporate the "Input Diversity Enhancement" strategy with "Token Gradient-based Optimization", where Patch Out (PO) introduced in PNA [15] is used as the input diversity enhancement strategy. As a result, we denote these methods with `(PO)` to indicate that it is "with patch out-based input diversity enhancement."
**Tables B**
| Model | Attack | ViTs | CNNs | Def-CNNs | -- | Model | Attack | ViTs | CNNs | Def-CNNs |
|--|--|--|--|--|--|--|--|--|--|--|
| | PNA(PO) | 70.8 | 42.6 | 29.9 | | | PNA(PO) | 73.1 | 57.8 | 32.7 |
| ViT-B/16 | TGR(PO) | 76.0 | 46.7 | 33.3 | | PiT-B/16 | TGR(PO) | 82.3 | 68.9 | 41.3 |
| | ATT(PO) | **77.1`+1.1`** | **51.7`+5.0`** | **37.1`+3.8`** | | | ATT(PO) | **84.2`+1.9`** | **75.2`+6.3`** | **48.4`+7.1`** |
| -- | -- | -- | -- | -- |--| -- | -- | -- | -- | -- |
| | PNA(PO) | 81.6 | 56.6 | 39.3 | | | PNA(PO) | 68.8 | 61.8 | 32.3 |
| CaiT-S/24 | TGR(PO) | 88.8 | 60.5 | 40.5 | | Visformer-S | TGR(PO) | 70.4 | 64.3 | 33.5 |
| | ATT(PO) | **91.1`+2.3`** | **71.9`+11.4`** | **54.3`+13.8`** | | | ATT(PO) | **70.5`+0.1`** | **79.3`+15.0`** | **44.5`+11.0`** |
Specifically, our ATT(PO) method employs three strategies: adaptive variance-reduced token gradient, hybrid token gradient truncation, and patch out. On the other hand, PNA(PO) and TGR(PO) integrate patch out with their respective token gradient-based optimizations. It is worth noting that TGR(PO) is currently the state-of-the-art method for ViT attacks. Additionally, the reviewer suggested TGR+PNA+PO as a baseline, but the differing token gradient mechanisms used by TGR and PNA present a conflict.
It needs to be emphasized that our ATT(PO) method is not the full version for this comparison in Table B, as it uses the same Patch Out enhancement as PNA(PO) and TGR(PO). The Table B above highlights the average ASR improvement of our ATT(PO) method compared to state-of-the-art methods with input diversity enhancement, indicated by “`+∆`”.
**(3)**...
---
Reply to Comment 2.1.1:
Comment: **(3)** Attacking ViTs via "Token Gradient-based Optimization" and "Improved Input Diversity Enhancement":
Thirdly, we further investigate the ViT attack strategy that combines "Token Gradient-based Optimization" and "Improved Input Diversity Enhancement", where our proposed Self-Paced Patch Out (SPPO) is used as the improved input diversity enhancement strategy. As a result, we denote these methods with `(SPPO)` to indicate that it is "with self-paced patch out-based input diversity enhancement."
**Tables C**
| Model | Attack | ViTs | CNNs | Def-CNNs | -- | Model | Attack | ViTs | CNNs | Def-CNNs |
|--|--|--|--|--|--|--|--|--|--|--|
| ViT-B/16 | TGR(SPPO) | 71.6 | 47.4 | 33.1 | | PiT-B/16 | TGR(SPPO) | 81.6 | 72.4 | 47.4 |
| | ATT(SPPO) | **80.3`+8.7`** | **54.1`+6.7`** | **38.7`+5.6`** | | | ATT(SPPO) | **87.7`+6.1`** | **78.0`+5.6`** | **52.0`+4.6`** |
| -- | -- | -- | -- | -- |--| -- | -- | -- | -- | -- |
| CaiT-S/24 | TGR(SPPO) | 86.4 | 64.9 | 48.4 | | Visformer-S | TGR(SPPO) | 65.4 | 80.9 | 45.4 |
| | ATT(SPPO) | **92.6`+6.2`** | **75.4`+10.5`** | **58.3`+9.9`** | | | ATT(SPPO) | **76.4`+11.0`** | **84.4`+3.5`** | **50.3`+4.9`** |
In this comparison, our ATT(SPPO) method represents the full version, incorporating adaptive variance-reduced token gradient, hybrid token gradient truncation, and self-paced patch out. Additionally, we replaced the original Patch Out strategy used in TGR with our SPPO strategy, resulting in TGR(SPPO).
As shown in Tables A, B, and C, our ATT(SPPO) method consistently outperforms across various attack scenarios. Table C also demonstrates that TGR(SPPO) further improves the average ASR compared to TGR(PO) in Table B, highlighting the effectiveness and compatibility of our proposed SPPO strategy.
Thank you again for considering our work. We hope this explanation addresses your concern. | Summary: This paper investigates the enhancement of adversarial attack transferability on Vision Transformers (ViTs) through innovative adaptive token tuning techniques. It addresses the vulnerability of ViTs to adversarial attacks by introducing three main optimization strategies: an adaptive gradient re-scaling method to uniformly reduce gradient variance across ViT layers, a self-paced patch out strategy to increase input diversity and mitigate overfitting by dynamically discarding less important perturbation patches, and a hybrid token gradient truncation method designed to attenuate the attention mechanism's effectiveness, adjusting truncation factors across different modules for optimal balance. Extensive experiments demonstrate that these methods not only improve the transferability of adversarial examples across ViTs and CNNs but also significantly enhance attack success rates by an average of 10.1% compared to state-of-the-art transfer-based attacks. Notably, this approach achieves a remarkable average attack performance of 58.3% on defended CNNs, highlighting the ongoing challenges in securing robustly trained defense models against sophisticated adversarial strategies.
Strengths: 1. This paper significantly enhances the transferability of adversarial attacks on Vision Transformers (ViTs) by introducing three main optimization strategies: an adaptive gradient re-scaling method, a self-paced patch out strategy, and a hybrid token gradient truncation method. It also provides a comprehensive theoretical analysis of the proposed methods, detailing the mechanisms through which each contributes to improving adversarial robustness and effectiveness.
2. Extensive experimental results substantiate the effectiveness of the proposed methods, demonstrating their efficacy across various settings and models.
Weaknesses: 1. This paper enhances the transferability of adversarial attacks on Vision Transformers (ViTs) through adaptive token tuning, introducing three main optimization strategies: an adaptive gradient re-scaling method, a self-paced patch out strategy, and a hybrid token gradient truncation method. However, it lacks ablation experiments for these three strategies.
2. The authors propose an improvement on the PatchOut method, termed Self-Paced Patch Out, which is an input transformation approach that intuitively could enhance adversarial transferability. However, comparing it in Table 1 alongside methods that do not incorporate input transformation, such as TGR, seems unfair.
Technical Quality: 3
Clarity: 3
Questions for Authors: The authors mention that the mild re-scaling strategy can back-propagate some important feature information in the largest token gradient, which is correlated with important feature information, unlike TGR. However, it is unclear how retaining important feature information contributes to enhanced transferability of adversarial attacks. Could the authors elaborate on the connection between these retained features and the observed improvements in transferability?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors suggest that attention modules in certain ViT layers are redundant for image classification and adversarial perturbation generation, potentially leading to overfitting. This assertion raises questions about the underlying mechanisms. Could the authors provide more detailed analysis or empirical evidence on how these redundant attention modules affect the transferability of adversarial attacks?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. **Answer to Weakness 1 -- ablation experiments for these three strategies:** We apologize for any ambiguity in our manuscript regarding the experimental setups described in Tables 1-3, where we conducted an ablation study of the input enhancement strategy including both Patch Out (PO) and Self-Paced Patch Out (SPPO).
To clarify, the input enhancement strategy (PO and SPPO) was not included in Tables 1-2, aligning with the setups used for TGR.
In Table 3, the input enhancement strategy is combined with our other two attack strategies.
Additionally, we provided the ablation experiment of the other two proposed strategies in **Table 2 of our rebuttal PDF**, including the adaptive gradient re-scaling method (see AVR) and the hybrid token gradient truncation method (see ST and HT).
2. **Answer to Weakness 2 -- Table 1-2 were fair:**
To align with the setups used for TGR, we did not include any input enhancement strategies in Tables 1-2, ensuring a fair comparison between our ATT method and the state-of-the-art methods.
On the other hand, we investigated the effect of SPPO combined with other methods (such as TGR and PNA) on attack transferability in **Table 1 of our rebuttal PDF**, further verifying the effectiveness of our proposed SPPO strategy.
3. **Answer to Questions -- Retention of features to improved transferability:** The retention of important feature information is pivotal for the effectiveness of gradient-based adversarial attacks, which often utilize the gradient direction opposite to these critical features to enhance attack transferability. While completely disregarding these features, as seen with TGR, reduces overfitting, it also diminishes the utility of these features in facilitating the transferability of perturbations.
Our experiments, detailed in **Table 4 of the rebuttal PDF**, specifically explore the impact of preserving these important features using our methodology—without truncation or input enhancement strategies—demonstrating their significance in improving adversarial attack transferability.
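To make the intuition concrete (a generic sketch of gradient-sign attacks, not our ATT implementation), a one-step $L_\infty$ attack perturbs each pixel along the sign of the loss gradient, so any feature information preserved in the back-propagated gradients directly shapes the perturbation:

```python
def fgsm_step(x, grad_loss, eps=8 / 255):
    # Generic untargeted FGSM step: move each pixel in the sign
    # direction that increases the classification loss, within an
    # L_inf budget eps, and clip back to the valid range [0, 1].
    sign = lambda v: (v > 0) - (v < 0)
    return [min(max(xi + eps * sign(gi), 0.0), 1.0)
            for xi, gi in zip(x, grad_loss)]

adv = fgsm_step([0.5, 0.5, 0.5], [1.0, -2.0, 0.0])
# Each pixel moves by at most eps; a zero gradient leaves it unchanged.
assert adv[2] == 0.5
```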
4. **Answer to Limitations -- Redundant attention modules affected the transferability:** To address concerns regarding the redundancy of attention modules, we provide a comprehensive analysis in Section A.5 of our Supplementary Material.
This section includes experimental evidence showing how excessive reliance on the attention mechanism can lead to overfitting, thus negatively impacting the transferability of adversarial attacks.
By truncating the attention layer, we demonstrate the potential for improved attack transferability without compromising model accuracy.
Further, references [15-16] conducted a comprehensive analysis of the attention mechanism and experimentally verified the overfitting phenomenon caused by the attention module; they strategically bypassed the attention mechanism to not only maintain but even enhance model performance.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors' responses, our major concerns have been addressed.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 9KoY:
Thank you for your suggestions on experimental integrity and fairness, which help us conduct more rigorous research. We also appreciate your thoughtful feedback and the acknowledgment of the clarifications and new experiments. We sincerely value your careful consideration of our work. | Rebuttal 1:
Rebuttal: 1. In **Table 1** of our rebuttal pdf, three attack methods (including PNA, TGR and ours) are tested under different settings of input enhancement.
**“w / o”** denoted that no input enhancement was added, which is the same as the results shown in Tables 1-2 of our manuscript.
**“PO”** indicated that PatchOut was utilized as input enhancement.
**“SPPO”** indicated that Self-Paced Patch Out was utilized as input enhancement, with the same hyperparameters mentioned in our manuscript.
Due to the low ASR of PNA under the “SPPO” setting, we suspected that our ATT method's hyperparameters were not compatible with PNA; thus, we fine-tuned the relevant hyperparameters and denoted this input enhancement as **“SPPO+”**.
After this adjustment, the ASR of PNA (SPPO+) was significantly improved, which verified our observations.
This experiment validated the superiority of our ATT method compared to SOTAs in various attack settings.
2. Considering that the results in Tables 1-3 in our manuscript could reflect the effect of SPPO on ASR, we followed the suggestion of Reviewer 52P8's question 3.
In **Table 2** of our rebuttal pdf, we performed ablation experiments on three strategies (AVR/ST/HT), where **“AVR”** denoted Adaptive Variance Reduced Token Gradient; **“ST”** denoted soft truncation that we utilized the truncation factor to truncate Attention, QKV, and MLP modules; **“HT”** denoted hard truncation that we could truncate the gradient of selected Attention layers to zero.
3. In **Table 3** of our rebuttal pdf, we added a validation experiment for robust ViTs,
where PNA, TGR, and ATT (ours) use their optimal hyperparameters.
“clean” denoted that the model was utilized to classify a clean dataset and the result was the probability of the classifier classifying the data incorrectly.
The rest of the results were all ASRs.
Meanwhile, we performed the same test on the normal model corresponding to robust ViTs.
Experimental results showed that our ATT method still performs better than SOTAs in advanced normal and robust ViT models.
4. In **Table 4** of our rebuttal pdf, an additional experiment was conducted to study the variance reduction of token gradients, where both input enhancement and gradient strategies are not considered here.
**“Only VR”** denoted that only the variance reduction strategy was utilized in our ATT method, while **“Only AVR”** denoted that only adaptive variance reduction (AVR) was utilized in our ATT method.
As shown in Table 4, our proposed adaptive variance reduction strategy performs better than the fixed variance reduction strategy, leading to better transferability of crafted adversarial examples.
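As a rough, hypothetical illustration of the soft/hard truncation (ST/HT) strategies described in point 2 above (the function name, dictionary layout, and scaling factor are our own assumptions, not the authors' actual implementation):

```python
import numpy as np

def truncate_gradients(grads, hard_layers, soft_factor=0.5):
    """Sketch of the two truncation strategies described above:
    hard truncation (HT) zeroes the gradients of selected Attention
    layers, while soft truncation (ST) scales the remaining module
    gradients by a truncation factor."""
    out = {}
    for name, g in grads.items():
        if name in hard_layers:
            out[name] = np.zeros_like(g)   # HT: gradient set to zero
        else:
            out[name] = soft_factor * g    # ST: gradient attenuated
    return out

# Toy usage with made-up module names.
grads = {"attn.3": np.ones(4), "mlp.3": np.ones(4)}
truncated = truncate_gradients(grads, hard_layers={"attn.3"})
```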
Pdf: /pdf/5de4f89cc3c23ea1e84e53825cbab489f49d73fd.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Semi-supervised Knowledge Transfer Across Multi-omic Single-cell Data | Accept (poster) | Summary: This paper introduces a novel method to address the knowledge transfer challenge in multi-omic single-cell data. Specifically, it focuses on scenarios where annotations are available partially in one modality, namely scRNA-seq data, and aims to infer annotations in another modality, the scATAC-seq data, without requiring paired datasets.
The authors proposed to use a shared encoder to project the two modalities, with additional components including: optimal transport-based dataset expansion for scRNA-seq data, divide and conquer for target scATAC-seq data, and cross-omic multi-sample mixup.
The authors conducted a comprehensive comparison with SOTA methods and showed that the proposed method achieves the best performance.
Strengths: Overall, the proposed approach is innovative, and the methodology section is well-structured. An ablation study confirms that each proposed component contributes significantly to the overall superior performance of the methodology.
Weaknesses: I'm not entirely convinced about the prevalence of the problem setting described by the authors, where only a small subset of the scRNA-seq data is annotated. It may occur in scenarios such as large experiments conducted in batches, where only certain batches are annotated. This raises questions about the experimental setup. Are annotations removed randomly or based on specific conditions or batches? I wonder how the availability of annotated subsets (chosen randomly or by batch) affects the overall study results.
Technical Quality: 3
Clarity: 3
Questions for Authors: I wonder how the methodology comparison is conducted. As the authors argued in the paper, the SOTA methods in the experimental studies do not deal with scenarios where the scRNA-seq data is only partially annotated. Therefore, in the comparison experiment, such as in the "low label ratio" setup, is it the case that only 1% of the scRNA-seq data is used for cross-modality matching? There are popular existing methods such as Seurat and Harmony that provide a potential two-step approach: first learn the full annotation on scRNA-seq, and then transfer to scATAC-seq, which might be a fairer comparison.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have addressed limitations such as only applicable to open-set settings.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are truly grateful for the time you have taken to review our paper and your insightful review. Here we address your concerns in the following.
> Q1. I'm not entirely convinced about the prevalence of the problem setting described by the author, where only a small subset of the scRNA-seq data is annotated.
A1. Thanks for your comment. Label scarcity in scRNA-seq data is a practical concern, as supported by previous works [1,2,3]. These works developed semi-supervised approaches to address label scarcity in scRNA-seq data. We will include the citations in our revised version.
> Q2. It may occur in scenarios such as large experiments conducted in batches, where only certain batches are annotated. It raises questions about the experimental setup. Are annotations removed randomly or based on specific conditions or batches? I wonder how does the availability of annotated subsets (randomly or by batch) affect the overall study results.
A2. Thanks for your question. In the experiments in the manuscript, annotations are removed randomly. We have added an experiment to compare the performance of removing annotations by batches. Specifically, we collect different batches of the MouseAtlas dataset and remove the annotations of some batches. From the following results, we can observe that our approach still outperforms other methods, which verifies the robustness of our approach in different scenarios. We will include this in our revised version.
| Label Ratio | Low | Mid | High |
| ----------- | ----- | ----- | ----- |
| scJoint | 62.36 | 67.75 | 70.02 |
| scBridge | 64.99 | 69.86 | 71.44 |
| scNCL | 61.50 | 69.04 | 71.17 |
| Ours | **70.64** | **72.21** | **72.58** |
>Q3. I wonder how the methodology comparision is conducted. As the authors argued in the paper, the SOTA methods in the experimental studies does not deal with scenarios where the scRNA-seq is only partially annotated. Therefore, in the comparison experiment, such as in the "low label ratio" setup, is it that only the 1% of the scRNA-seq is used for cross-modality matching? There are popular existing methods such as Seurat and Harmony that provided a potential two-step approach: first learn the full annotation on scRNA-seq, and then transfer to scATAC-seq, which might be a more fair comparison.
A3. Thanks for your suggestion. To address your concern, we have included Seurat [4] and Harmony [5] for performance comparison. For these two methods, we first learn the full annotation on scRNA-seq data, and then transfer the knowledge to scATAC-seq data. From the following results on CITE-ASAP and snRNA_10X_v3_A-ATAC, we can find that our method surpasses other methods consistently. We will include this in our revised version.
| Dataset | CI-AS | CI-AS | CI-AS | sn-AT | sn-AT | sn-AT |
| ----------- | ----- | ----- | ----- | ----- | ----- | ----- |
| Label Ratio | Low | Mid | High | Low | Mid | High |
| Seurat | 38.71 | 41.76 | 43.25 | 33.46 | 39.78 | 45.09 |
| Harmony | 39.03 | 39.99 | 44.06 | 32.17 | 39.94 | 44.56 |
| Ours | **45.36** | **46.38** | **47.45** | **50.23** | **51.02** | **51.42** |
**Reference**
[1] Dong et al., Semi-Supervised Deep Learning for Cell Type Identification From Single-Cell Transcriptomic Data, IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2022
[2] Kimmel et al., scNym: Semi-supervised adversarial neural networks for single cell classification, bioRxiv, 2020
[3] Wei et al., CALLR: a semi-supervised cell-type annotation method for single-cell RNA sequencing data, Bioinformatics, 2021
[4] Stuart et al., Comprehensive integration of single-cell data, Cell, 2019
[5] Korsunsky et al., Fast, sensitive and accurate integration of single-cell data with Harmony, Nature Methods, 2019
In light of these responses, we hope we have addressed your concerns, and hope you will consider raising your score. If there are any additional notable points of concern that we have not yet addressed, please do not hesitate to share them, and we will promptly attend to those points. | Summary: This paper proposes a label transfer method from scRNA-seq to scATAC-seq data. Based on the heterogeneity of single-cell data, this work partitions data into several groups and designs effective strategies to tackle them respectively. Experiments demonstrate the effectiveness of the proposed method.
Strengths: 1. The proposed method is technically sound, outperforming existing methods on two scRNA-seq and scATAC-seq integration datasets.
2. Ablation studies demonstrate the effectiveness of each component in the proposed method.
3. The paper is well written and clearly organized in general, which is easy to read and follow.
Weaknesses: 1. In the Introduction, the authors claim that "a small fraction of scRNA-seq data with cell types annotated" aligns more closely with practical scenarios. Do you have any evidence to support the claim? You may provide an example to show that scRNA-seq data are not processed and annotated as a whole, but only a small portion is annotated.
2. The uniform distribution assumption in the optimal transport process could be wrong, as most scRNA-seq data are unbalanced across different cell types.
3. In Eq. 8, the negative cell types are removed with a significant difference from the cell type with the highest probability. However, does a threshold of 1e-3 represent significant differences?
4. What does $S_i$ in Eq. 10 mean?
5. What is the motivation of mixing scRNA-seq data to form scATAC-seq data? Is such an operation biologically reasonable?
6. Typo: In line 253 cBridge -> scBridge
7. Currently the author split the MouseAtlas dataset into several subsets with different combinations for evaluation. I recommend the author provide the experimental results on the full dataset.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to my concerns in the weaknesses section.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No significant limitations of this work are found.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the time you've taken to review our paper and for your insightful comments. Your positive feedback is highly encouraging for us! We'd like to address your concerns in the following response.
> Q1. In the Introduction, the authors claim that "a small fraction of scRNA-seq data with cell types annotated" aligns more closely with practical scenarios. Do you have any evidence to support the claim? You may provide an example to show that scRNA-seq data are not processed and annotated as a whole, but only a small portion is annotated.
A1. Thanks for your comment. Label scarcity in scRNA-seq data is a practical concern, as supported by previous works [1,2,3]. These works developed semi-supervised approaches to address label scarcity in scRNA-seq data. We will include the citations in our revised version.
> Q2. The uniform distribution assumption in the optimal transport process could be wrong, as most scRNA-seq data are unbalanced across different cell types.
A2. Thanks for your comment. In real-world scenarios, we often have no prior knowledge about the distribution of specific cell types, and the uniform distribution is a popular assumption in the absence of such prior information. In future work, we aim to extend our method to imbalanced scenarios where prior distribution information is available. We will include a discussion of potential future work in our revised version.
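For readers unfamiliar with this assumption, here is a minimal entropic-OT sketch with uniform marginals on both sides (our own illustration, not the paper's implementation; the solver choice, regularization value, and all names are assumptions):

```python
import numpy as np

def sinkhorn_uniform(cost, reg=0.1, n_iters=500):
    """Entropic optimal transport where both marginals are uniform,
    i.e., no prior knowledge of cell-type frequencies is assumed."""
    n, m = cost.shape
    a = np.full(n, 1.0 / n)  # uniform marginal over cells
    b = np.full(m, 1.0 / m)  # uniform marginal over cell types
    K = np.exp(-cost / reg)
    u = np.ones(n)
    for _ in range(n_iters):  # Sinkhorn scaling iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]  # transport plan

# The plan's row/column sums recover the uniform marginals.
plan = sinkhorn_uniform(np.random.default_rng(0).random((6, 3)))
```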
> Q3. In Eq. 8, the negative cell types are removed with a significant difference from the cell type with the highest probability. However, does a threshold of 1e-3 represent significant differences?
A3. Thanks for your question. We have included a sensitivity analysis of the threshold $\mu$ by varying it in {0.0005, 0.001, 0.0015, 0.002}. The following results indicate that a threshold of 1e-3 yields good performance. In particular, after the softmax operation, the confidences over all cell types (more than 10 of them) sum to 1, so the individual confidence scores are quite small, and a difference of 1e-3 constitutes a sufficient gap.
| Threshold $\mu$ | 5e-4 | 1e-3 | 1.5e-3 | 2e-3 |
| ------------------- | ----- | ----- | ------ | ----- |
| CITE-ASAP | 46.58 | 47.45 | 47.40 | 47.02 |
| snRNA_10X_v3_A-ATAC | 50.07 | 51.42 | 51.14 | 51.03 |
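To make the scale of this threshold concrete, here is a toy sketch of the filtering rule as we understand it from the description (the function name and exact rule are our assumptions, not Eq. 8 itself): a cell type stays a candidate only if its softmax confidence is within $\mu$ of the top confidence.

```python
import numpy as np

def filter_negative_types(logits, mu=1e-3):
    """Keep cell types whose softmax confidence is within `mu` of the
    maximum; all others are treated as negative cell types."""
    p = np.exp(logits - logits.max())  # numerically stable softmax
    p /= p.sum()
    return p >= p.max() - mu, p

# With many cell types, confidences are small and closely spaced,
# so a gap of 1e-3 already separates candidates from negatives.
keep, p = filter_negative_types(np.log(np.array([0.3, 0.2995, 0.2, 0.2005])))
```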
> Q4. What does $S_i$ in Eq. 10 mean?
A4. Thanks for your question. $S_i$ is a typo and it should be $Y_i$ in Eq. 9. We will correct it in the revised version.
> Q5. What is the motivation of mixing scRNA-seq data to form scATAC-seq data? Is such an operation biologically reasonable?
A5. Thanks for your question. We **do not** generate scATAC-seq data from scRNA-seq data. Instead, we generate virtual scRNA-seq data using the Mixup technique and then reduce the semantic gap between scATAC-seq data and this virtual scRNA-seq data in the embedding space. From a biological perspective, scATAC-seq data is characterized by extreme sparsity. This results in input matrices that are significantly heterogeneous compared to scRNA-seq data. To address the challenges in data integration, our motivation is to reduce the distance between scATAC-seq data and the mixed (virtual) scRNA-seq data in the embedding space.
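As a hypothetical sketch of what a multi-sample Mixup of scRNA-seq cells could look like (the Dirichlet weighting, sample count `k`, and all names are our own illustrative assumptions, not the authors' exact formulation):

```python
import numpy as np

def multi_sample_mixup(X, Y, k=3, alpha=1.0, seed=0):
    """Convexly combine k labeled scRNA-seq cells and their one-hot
    labels into one virtual cell; the virtual cell would then be
    aligned with scATAC-seq data in the embedding space."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=k, replace=False)
    w = rng.dirichlet(np.full(k, alpha))  # mixing weights, sum to 1
    return w @ X[idx], w @ Y[idx]         # virtual profile, soft label

rng = np.random.default_rng(1)
X = rng.random((10, 5))                # 10 cells x 5 genes (toy data)
Y = np.eye(4)[rng.integers(0, 4, 10)]  # one-hot cell-type labels
x_virtual, y_virtual = multi_sample_mixup(X, Y)
```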
> Q6. Typo: In line 253 cBridge -> scBridge.
A6. Thanks for pointing out the typo. We will correct it in the revised version.
> Q7. Currently the author split the MouseAtlas dataset into several subsets with different combinations for evaluation. I recommend the author provide the experimental results on the full dataset.
A7. Thanks for your suggestion. We have included the experimental results on the full dataset. The results below indicate that our approach still outperforms other methods. We will include this in our revised version.
| Label Ratio | Low | Mid | High |
| ----------- | ----- | ----- | ----- |
| scJoint | 62.36 | 67.75 | 70.02 |
| scBridge | 64.99 | 69.86 | 71.44 |
| scNCL | 61.50 | 69.04 | 71.17 |
| Ours | **70.64** | **72.21** | **72.58** |
**Reference**
[1] Dong et al., Semi-Supervised Deep Learning for Cell Type Identification From Single-Cell Transcriptomic Data, IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2022
[2] Kimmel et al., scNym: Semi-supervised adversarial neural networks for single cell classification, bioRxiv, 2020
[3] Wei et al., CALLR: a semi-supervised cell-type annotation method for single-cell RNA sequencing data, Bioinformatics, 2021
Thanks again for appreciating our work and for your constructive suggestions. Please let us know if you have further questions.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed responses which addressed most of my concerns. However, the current evidence of whether the setting aligns with practical scenarios is still weak. It would be more convincing if the authors could provide some real-world examples. Moreover, the experimental results on full datasets are more solid than those on subsets. They should be included in the main paper after the revision.
---
Reply to Comment 1.1.1:
Title: Thank you for your feedback!
Comment: Thank you for your feedback! We are pleased to address your further questions as follows:
>Q1. Thanks for the detailed responses which addressed most of my concerns. However, the current evidence of whether the setting aligns with practical scenarios is still weak. It would be more convincing if the authors could provide some real-world examples.
A1. Thanks for your suggestion. Here we provide some real-world examples.
The annotation of single-cell data is quite challenging and time-consuming. For example, in PanglaoDB [1], the assignment of genes to cell types requires broad expertise to make mappings from gene markers to cell types. The complexity is further amplified by the exhaustive list of cell type markers spanning 11 columns, making the annotation process intricate and less efficient. Moreover, ground truth cell-type annotation often requires experimental validation techniques such as flow cytometry [2] and copy number variations (CNV) estimation method [3,4]. Therefore, our semi-supervised methods can save the experimental and labor costs when it comes to new datasets in the real world.
Based on these examples, we can conclude that label scarcity in scRNA-seq data is a practical problem in real-world scenarios. We will include it in the revised version.
>Q2. Moreover, the experimental results on full datasets are more solid than those on subsets. They should be included in the main paper after the revision.
A2. Thanks for your suggestion. We will definitely include the experimental results in the revised version.
**Reference**
[1] Franzén et al., PanglaoDB: a web server for exploration of mouse and human single-cell RNA sequencing data, Database, 2019
[2] Zhang et al., Regulatory T-cell depletion alters the tumor microenvironment and accelerates pancreatic carcinogenesis, Cancer Discov, 2020
[3] Zhang et al., Single-cell analyses inform mechanisms of myeloid-targeted therapies in colon cancer, Cell, 2020
[4] Kim et al., Single-cell rna sequencing demonstrates the molecular and cellular reprogramming of metastatic lung adenocarcinoma, Nature Communication, 2020
Thank you again for your feedback and effort! We will add the rebuttal contents to the main paper in the final version following your valuable suggestions. Please let us know if you have further questions. | Summary: This paper introduces a semi-supervised knowledge transfer framework called DANCE, designed to effectively transfer cell type annotations from scRNA-seq data to unannotated scATAC-seq data under conditions of label scarcity. It is similar to the unsupervised domain adaptation task in computer vision. DANCE addresses the challenge of heterogeneous multi-omic data by generating pseudo-labels based on optimal transport, employing a divide-and-conquer strategy, and using cross-omic multi-sample Mixup to reduce cell heterogeneity. Extensive experiments demonstrate DANCE's superiority over state-of-the-art methods.
Strengths: 1. The paper is well-motivated and seems to be reproducible.
2. The paper is well-structured and easy to follow.
3. The theory involved in the method seems relatively solid.
4. The experiment fully proves the effectiveness and superiority of the DANCE.
Weaknesses: 1. The compared methods, especially the DA method, are relatively old. Comparison with newer methods (e.g., [a], [b], etc.) helps to understand the performance of the methods.
2. Repeated use of symbols may cause confusion and misunderstanding among readers. Eq. (1), (3), (16), and Theory 3.2 all contain $\lambda$. As far as I understand, the meanings of these symbols may be different.
3. The method includes too many empirical hyperparameters, and the ‘crucial’ parameters selected by parameter analysis seem not comprehensive enough. A more comprehensive description of each $\lambda$ and threshold $\tau$ will help to understand the method and promote future work.
4. The quality of images seems to be low, affecting comprehension. To be specific, the method process is not clearly and comprehensively shown in Figure 1. The text in Figure 2 is too small.
[a] Semi-Supervised Domain Adaptation with Source Label Adaptation
[b] COT: Unsupervised Domain Adaptation with Clustering and Optimal Transport
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the strengths and weaknesses of the paper. In addition, in my opinion, the proposed method seems to be a patchwork of methods in domain adaptation, cross-modal alignment, etc. Maybe its technical innovation is questionable. A more targeted and detailed description is recommended.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The author has explained the limitations in Sec. 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your time in reviewing our paper and your insightful comments. Your positive feedback is incredibly encouraging for us! We'd like to address your concerns in the following response.
> Q1. The compared methods, especially the DA method, are relatively old. Comparison with newer methods (e.g., [1], [2], etc.) helps to understand the performance of the methods.
A1. Thanks for your suggestion. We have included the recommended baselines, i.e., SLA [1] and COT [2] on different datasets for performance comparison. The results are shown below, which demonstrate the superiority of our method.
| Dataset | CI-AS | CI-AS | CI-AS | sn-AT | sn-AT | sn-AT |
| ----------- | ----- | ----- | ----- | ----- | ----- | ----- |
| Label Ratio | Low | Mid | High | Low | Mid | High |
| SLA [1] | 32.05 | 36.99 | 41.74 | 35.36 | 37.48 | 42.41 |
| COT [2] | 31.44 | 35.86 | 39.01 | 35.73 | 36.79 | 43.85 |
| Ours | **45.36** | **46.38** | **47.45** | **50.23** | **51.02** | **51.42** |
> Q2. Repeated use of symbols may cause confusion and misunderstanding among readers. Eq. (1), (3), (16), and Theory 3.2 all contain $\lambda$. As far as I understand, the meanings of these symbols may be different.
A2. Thanks for pointing out this typo. The term in Eq. 1 and Eq. 3 should be $\sigma$, and in Theory 3.2 should be $\beta$. We will correct all typos in the revised version.
> Q3. The method includes too many empirical hyperparameters, and the ‘crucial’ parameters selected by parameter analysis seem not comprehensive enough. A more comprehensive description of each $\lambda$ and threshold $\tau$ will help to understand the method and promote future work.
A3. Thanks for your suggestion. We have added the parameter sensitivity analysis by varying $\lambda$ (Eq. 1) and $\tau$ (Eq. 6). From the results below, we can observe that the performance is not sensitive to the choice of $\lambda$ when we change $\lambda$ from 5 to 20, so we empirically set its value to 10. We also vary the value of $\tau$ with the range of {0.8,0.85,0.9,0.95}. The results below indicate that the method is not sensitive to $\tau$ in the interval [0.8,0.95]. Therefore, we set $\tau$ to 0.9 as the default.
| $\lambda$ | 5 | 10 | 15 | 20 |
| ------------------- | ----- | ----- | ----- | ----- |
| CITE-ASAP | 47.01 | 47.45 | 47.32 | 47.14 |
| snRNA_10X_v3_A-ATAC | 51.16 | 51.42 | 51.08 | 51.17 |
| $\tau$ | 0.8 | 0.85 | 0.9 | 0.95 |
| ------------------- | ----- | ----- | ----- | ----- |
| CITE-ASAP | 46.98 | 47.06 | 47.45 | 47.33 |
| snRNA_10X_v3_A-ATAC | 50.54 | 51.15 | 51.42 | 50.87 |
> Q4. The quality of images seems to be low, affecting comprehension. To be specific, the method process is not clearly and comprehensively shown in Figure 1. The text in Figure 2 is too small.
A4. Thanks for your comment. We have revised the figures according to your suggestions and uploaded the pdf file in the global response section.
> Q5. The proposed method seems to be a patchwork of methods in domain adaptation, cross-modal alignment, etc. Maybe its technical innovation is questionable. A more targeted and detailed description is recommended.
A5. Thanks for your comment. We would like to point out that the innovation of our approach against existing methods is fourfold:
- **Underexplored Practical Problem**. We focus on an underexplored yet practical problem of knowledge transfer across multi-omic single-cell data under label scarcity, while previous works focus on utilizing fully labeled scRNA-seq data.
- **A Holistic Framework**. To address the two challenges of label scarcity and cell heterogeneity, we propose a holistic framework, which consists of OT-based dataset expansion and a divide-and-conquer strategy for multi-omics semantic learning, followed by cross-omic multi-sample Mixup for semantics integration. All these designs are new for single-cell data integration.
- **Theoretical Analysis**. We provide a comprehensive theoretical analysis to support our designs, which makes our framework more solid.
- **Superior Performance**. Our method achieves superior performance on benchmark datasets compared with state-of-the-art methods. In particular, the performance improvement on snRNA_10X_v3_A-ATAC is up to 96.4% compared to the best baseline.
**Reference**
[1] Yu et al., Semi-Supervised Domain Adaptation with Source Label Adaptation, CVPR 2023
[2] Liu et al., COT: Unsupervised Domain Adaptation with Clustering and Optimal Transport, CVPR 2023
Thanks again for appreciating our work and for your constructive suggestions. Please let us know if you have further questions.
---
Rebuttal Comment 1.1:
Comment: I appreciate the effort put into the rebuttal, which addressed some of my concerns. After reading the other reviews and replies, I keep my positive rating.
---
Rebuttal 2:
Title: Thank you for your feedback!
Comment: Thanks for your feedback! We are pleased to know that you keep the positive rating and your support. We will properly include all the rebuttal contents in the revised version, following your valuable suggestions. | Summary: This paper addresses the challenge of knowledge transfer across multi-omic single-cell data under label scarcity. The proposed semi-supervised framework, DANCE, uses optimal transport to generate pseudo-labels and a divide-and-conquer strategy for handling scATAC-seq data. The framework demonstrates superior performance on benchmark datasets, offering a practical solution to label scarcity in multi-omic data.
Strengths: **Pros:**
1. The introduction of optimal transport (OT) into the single-cell domain effectively addresses label scarcity and imbalance issues, supported by ablation studies.
2. The divide-and-conquer strategy for scATAC-seq data and Mixup to alleviate cellular heterogeneity are well-handled.
3. The performance gain is impressive, showing improvements as label availability increases.
4. The paper is well-written and organized, making it enjoyable to read.
Weaknesses: **Cons:**
1. The exclusion of OT in scATAC-seq data due to cellular heterogeneity needs practical examples or quantitative analysis to substantiate its significance.
2. The initial prediction's dependency in the divide-and-conquer strategy could lead to misclassification. Discussion on handling wrongly divided samples or providing statistics is needed.
3. Exploring if DANCE can be conducted in the opposite direction (scATAC-seq with OT and scRNA-seq with divide-and-conquer) would be beneficial.
4. Discussion on the complexity of OT and divide-and-conquer in terms of memory and time compared to existing studies is required.
5. Including a discussion on the use of scCLIP [1] in scenarios with scarce labels would enhance the paper.
[1] https://openreview.net/forum?id=KMtM5ZHxct
Technical Quality: 3
Clarity: 3
Questions for Authors: See the Weaknesses above.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I do not observe any potential negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are truly grateful for the time you have taken to review our paper, and your insightful comments and support. Your positive feedback is incredibly encouraging for us! In the following response, we would like to address your concerns and provide additional clarification.
> Q1. The exclusion of OT in scATAC-seq data due to cellular heterogeneity needs practical examples or quantitative analysis to substantiate its significance.
A1. Thanks for your suggestion. We have added a model variant w/ OT for scATAC-seq to support our point. The comparison results on the CITE-ASAP dataset are shown below. We find that our full model outperforms w/ OT for scATAC-seq, which validates that the OT strategy may not be the optimal choice given the significant cellular heterogeneity inherent in scATAC-seq data.
| Label Ratio | Low | Mid | High |
| ---------------------------- | ----- | ----- | ----- |
| w/ OT for scATAC-seq | 34.90 | 36.87 | 39.34 |
| w/o OT for scATAC-seq (Ours) | **45.36** | **46.38** | **47.45** |
> Q2. The initial prediction's dependency on the divide-and-conquer strategy could lead to misclassification. Discussion on handling wrongly divided samples or providing statistics is needed.
A2. Thanks for your suggestion. We have added the comparison of the pseudo-labeling accuracy between unlabeled scRNA-seq data and source-like scATAC-seq data. The results on the snRNA_SMARTer-snmC dataset are shown below. From the results, we can observe that the accuracy of source-like scATAC-seq data is close to that of unlabeled scRNA-seq data, verifying the effectiveness of our division.
| Pseudo-labeling Acc (%) | Low | Mid | High |
| --------------------------- | ----- | ----- | ----- |
| unlabeled scRNA-seq data | 78.85 | 79.17 | 80.13 |
| source-like scATAC-seq data | 77.12 | 78.04 | 78.56 |
We have also added a model variant w/o divide-and-conquer on snRNA_10X_v3_A-ATAC dataset for performance comparison. As evidenced by the results in the table below, the incorporation of the divide-and-conquer strategy yields positive gains.
| Acc (%) | Low | Mid | High |
| ---------------------------- | ----- | ----- | ----- |
| w/o divide-and-conquer | 46.49 | 47.80 | 48.15 |
| w/ divide-and-conquer (Ours) | **50.23** | **51.02** | **51.42** |
> Q3. Exploring if DANCE can be conducted in the opposite direction (scATAC-seq with OT and scRNA-seq with divide-and-conquer) would be beneficial.
A3. Thanks for your suggestion. We have included an experiment, which transfers cell type knowledge from scATAC-seq data to scRNA-seq data on the CITE-ASAP dataset. From the results below, we can validate the superiority of our method in the opposite direction.
| Label Ratio | Low | Mid | High |
| ----------- | ----- | ----- | ----- |
| scJoint | 28.14 | 45.40 | 49.59 |
| scBridge | 32.24 | 51.55 | 53.62 |
| scNCL | 36.46 | 51.01 | 52.73 |
| Ours | **70.33** | **70.78** | **72.89** |
> Q4. A discussion on the complexity of OT and divide-and-conquer in terms of memory and time compared to existing studies is required.
A4. Thanks for your suggestion. We have included the comparison of memory and time as follows, and we can observe that our method has a competitive computation cost. In particular, the performance of scNCL is much worse than ours (the performance improvement of ours is over 105.9%), while our memory cost only increases slightly with even less training time.
| CITE-ASAP | Memory Cost | Training Time/Epoch | Acc (%) |
| --------- | ----------- | -------------------- | ------- |
| scJoint | 0.7GB | 30s | 22.07 |
| scBridge | 1.4GB | 4s | 23.38 |
| scNCL | 1.0GB | 70s | 22.53 |
| Ours | 1.4GB | 30s | **46.40** |
> Q5. Including a discussion on the use of scCLIP [1] in scenarios with scarce labels would enhance the paper.
A5. Thanks for your suggestion. We will include the discussion in the revised version as follows.
"*scCLIP [1] introduces a contrastive learning approach to integrate multi-omic single-cell data. It aligns the representations of pairwise multi-omic single-cell data without the usage of cell type labels. In contrast, we focus on the cell type knowledge transfer task in label scarcity conditions*."
**Reference**
[1] Xiong et al., scCLIP: Multi-modal Single-cell Contrastive Learning Integration Pre-training, NeurIPS Workshop 2023
Thanks again for appreciating our work and for your constructive suggestions. Please let us know if you have further questions.
---
Rebuttal Comment 1.1:
Comment: Thank you for all the efforts on rebuttal. My concerns have been alleviated, and I would like to increase my score to 7.
---
Reply to Comment 1.1.1:
Title: Thanks for your feedback and increasing your score!
Comment: Thanks for your feedback and increasing your score! We are pleased to know that your concerns have been alleviated. We will properly include all the rebuttal contents in the revised version, following your valuable suggestions. | Rebuttal 1:
Rebuttal: Dear Reviewers,
We thank you for your careful reviews and constructive suggestions. We acknowledge the positive comments such as "effective and well-handled method" (Reviewer W1Pi), "impressive performance" (Reviewer W1Pi), “well-written and organized” (Reviewer W1Pi, Reviewer puzU), "well-motivated and reproducible" (Reviewer kGak), "well-structured and easy to follow” (Reviewer kGak), "solid theory” (Reviewer kGak), "good experiment” (Reviewer kGak), "technically sound method” (Reviewer puzU), “effective ablation studies” (Reviewer puzU, Reviewer L3nP), “innovative approach” (Reviewer L3nP).
We have also responded to your questions point by point. The revised figures are uploaded in the pdf file.
Please let us know if you have any follow-up questions. We will be happy to address them.
Best regards,
the Authors
Pdf: /pdf/b1b0dbbd0ed92ec5a818a30c9cc56712c62e9aae.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies | Accept (poster) | Summary: The paper presents empirical scaling laws for the size of the vocabulary for LLMs. The findings in the paper are:
- Empirically, the vocabulary size minimizing the loss increases when FLOPs are increased (Fig 2, right, Fig 3)
- Through mathematical derivations from scaling laws, the optimal vocabulary size decreases when the embedding size increases. (Fig 4).
- With a given FLOPs budget, a 43K vocab size beats 32K on academic benchmarks like BoolQ (Table 2).
Strengths: - Vocabulary scaling laws are an interesting and novel research direction.
- The paper is well written.
Weaknesses: - The results are probably mostly applicable to a small number of well-funded labs.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The abstract states that “beyond the conventional 32K” – is this really the convention? See e.g. GPT4o.
2. How does Table 2 look if you train on the same number of tokens instead of using the same FLOPs budget?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: na
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### W1: The results are probably mostly applicable to a small number of well-funded labs.
Thanks for pointing this out! We want to clarify that we are not a well-funded lab either. Due to our limited computing resources, we can only afford to train models with up to 3B parameters in our experiments, as the cost of validating scaling law experiments is indeed very high.
However, we believe that our conclusions are beneficial to the general research community, especially for small labs. Our scaling laws with vocabulary provide a compute-optimal allocation suggestion, enabling small labs to train high-performance models without repeatedly trying different vocabulary configurations, thereby saving computing resources.
Even for teams who want to conduct scaling law experiments themselves, our derivative-based method offers a simple and feasible approach based on theoretical derivation. Researchers do not need to run a large number of scaling law experiments to obtain a good vocabulary configuration. This is particularly advantageous for small labs. We will also make all our scaling law experimental results public so that more people can benefit from our work.
### Q1: The abstract states that “beyond the conventional 32K” – is this really the convention? See e.g. GPT4o.
Thank you for your insightful question! We acknowledge that there is no single "conventional" vocabulary size for language models, as it can vary based on the pre-training corpus and the intended use case. A vocabulary size of 32K is widely regarded as a common choice, particularly for models trained on English-centric corpora, such as Llama-1, Llama-2, and Mistral. Since our work primarily utilizes the English-centric SlimPajama corpus for pre-training, we have adopted the 32K vocabulary setting employed by these models as a "conventional" vocabulary size. We will modify the statement in the abstract accordingly to reflect this clarification.
As for GPT4o, we think its vocabulary size is relatively larger because it is designed to handle multiple languages (e.g., Chinese). This also highlights an important consideration for future research: determining the optimal vocabulary size for multilingual models, which we have discussed in Appendix B.4.
Our broader goal is to draw attention to the importance of vocabulary size in training language models and to encourage the appropriate allocation of computational resources for this aspect. Recently, there has been a shift in the industry, with major companies recognizing that their previous allocations for vocabulary were insufficient. For example, Llama has increased its vocabulary size from 32K to 128K, reflecting this evolving understanding. We hope this clarification helps, and will add the discussion in the revised version.
### Q2: How does Table 2 look if you train on the same number of tokens instead of using the same FLOPs budget?
Thanks for your question! As you suggested, we also trained the model using the same number of tokens, i.e., 129B tokens, in addition to the same-FLOPs-budget setting. As shown in the following table, the performance of the model with the suggested vocabulary size of 43K improves further compared to the 32K vocabulary size when using the same number of training tokens. We will add the results in the revised version.
| **$V$** | **$N_v$** | **$D$** | **Winogrande** | **PIQA** | **OBQA** | **Hellaswag** | **BoolQ** | **ARC-E** | **ARC-C** | **Average** |
|---------|-----------|---------|----------------|----------|----------|---------------|-----------|-----------|-----------|-------------|
| 32K (Baseline) | 0.20B | 129B | 55.7 ± 1.4 | 72.6 ± 1.0 | **34.4** ± 2.1 | 55.1 ± 0.5 | 60.1 ± 0.9 | 53.4 ± 1.0 | 29.1 ± 1.3 | 51.5 |
| 43K (Ours with same FLOPs) | 0.27B | 125B | **58.7**±1.4 | **72.7**±1.0 | 33.0±2.1 | 55.7±0.5 | 62.3±0.8 | 55.0±1.0 | 31.5±1.4 | 52.7 |
| 43K (Ours with same Tokens) | 0.27B | 129B | 58.6±1.4 | **72.7**±1.0 | 33.6±2.1 | **55.8**±0.5 | **62.4**±0.9 | **55.5**±1.0 | **31.5**±1.4 | **52.9** |
---
Rebuttal 2:
Title: Any New Comments Would be Greatly Appreciated
Comment: Dear Reviewer zt86,
We are deeply grateful for your detailed review and the insightful suggestions you provided for our paper. We have carefully considered and responded to each of your comments in our rebuttal.
As the Author-Review Discussion period is coming to an end, we want to ensure that all your concerns have been thoroughly addressed. If there are any remaining questions or issues, we would be glad to offer further clarification or make any necessary revisions. Thank you once again for your valuable feedback.
Best regards,
The Authors | Summary: This study primarily explores the role of vocabulary size in scaling large language models (LLMs). Traditional research has focused on model parameters and training data size, often overlooking the impact of vocabulary size. While intuitively larger vocabularies can enable more efficient tokenization by representing sentences with fewer tokens, they also increase the risk of under-fitting representations for rare tokens. By training models ranging from 33M to 3B parameters on up to 510B characters with various vocabulary configurations, we discovered that the optimal vocabulary size is constrained by the computational budget. We propose two methods to determine the optimal vocabulary size: an empirical IsoFLOPs approach and a fast derivative-based approach. Both methods indicate that vocabulary parameters should be scaled slower than non-vocabulary parameters. Nonetheless, vocabulary parameters are critical for performance and are under-allocated in current LLMs. By increasing the vocabulary size beyond the conventional 32K, we trained a better 3B parameter model despite using fewer training tokens. Our work reveals the underestimated role of vocabulary and the necessity of jointly considering vocabulary size, model parameters, and training data for efficient scaling.
Strengths: 1. The study takes a holistic approach by examining the role of vocabulary size in the scaling of large language models (LLMs), addressing a gap in traditional research that often overlooks this aspect.
2. The introduction of two novel methods—an empirical IsoFLOPs approach and a fast derivative-based approach—for determining the optimal vocabulary size showcases the study's innovation and practical contributions.
3. Demonstrating that increasing vocabulary size beyond the conventional 32K can lead to better model performance with fewer training tokens highlights the practical implications and potential for efficiency gains.
Weaknesses: 1. Lacks performance on large-scale models, such as whether increasing the vocabulary size to a greater extent performs better than existing models in the market. Table 2's experiments look a little bit less.
2. The IsoFLOPs method is very sensitive, and the experiments also look insufficient.
Technical Quality: 3
Clarity: 3
Questions for Authors: In Use Case section, the authors mentioned that (2) One has already conducted scaling law experiments following the Chinchilla laws with a fixed vocabulary size (e.g., 244 32K) and aims to estimate the optimal vocabulary size for a given model parameter. In determining the scaling law for the relationship between non-embedding model size and data (such as the Chinchilla law), why is it assumed that the vocabulary size is independent of these two factors
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the authors addressed limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### W1: Lacks performance on large-scale models, such as whether increasing the vocabulary size to a greater extent performs better than existing models in the market. Table 2's experiments look a little bit less.
Thank you for raising this concern. We share the intention to compete with existing powerful models in the market, such as Llama-2-7B. However, training a 7B model on 2 trillion tokens from scratch is far beyond our current computational resources. Nevertheless, we have evaluated our 3B models on more benchmarks to alleviate your concern.
Specifically, we have added new experimental results on the following benchmarks:
- **MMLU**: Massive Multitask Language Understanding benchmark for broad-domain language evaluation.
- **CommonsenseQA**: A multiple-choice QA dataset for measuring commonsense knowledge.
- **CoQA**: Conversational question answering tasks to test dialog understanding.
- **TruthfulQA**: A QA task aimed at evaluating the truthfulness and factual accuracy of model responses.
- **Lambada**: Tasks designed to predict the endings of text passages, testing language prediction skills.
The following table combines the original Table 2 in the paper and the new experimental results. As shown, our prediction enables better model performance by adjusting the vocabulary size within different FLOPs budgets. The 3B model with a 43K vocabulary size outperforms the 32K counterpart on 11 out of 12 tasks using the same FLOPs budget. For example, we improve performance on ARC-C from 29.1 to 31.5. In conclusion, the model using our suggested vocabulary size (i.e., 43K) consistently outperforms its counterpart (i.e., 32K) by a clear margin.
| Tasks | Metric | $V$=32K (Baseline) | $V^{opt}$=43K (Ours) |
|---------------|---------------------|-----------|---------------|
| Winogrande | Normalized Accuracy | 55.7±1.4 | **58.7**±1.4 |
| PIQA | Normalized Accuracy | 72.6±1.0 | **72.7**±1.0 |
| OBQA | Normalized Accuracy | **34.4**±2.1 | 33.0±2.1 |
| Hellaswag | Normalized Accuracy | 55.1±0.5 | **55.7**±0.5 |
| BoolQ | Normalized Accuracy | 60.1±0.9 | **62.3**±0.8 |
| ARC-E | Normalized Accuracy | 53.4±1.0 | **55.0**±1.0 |
| ARC-C | Normalized Accuracy | 29.1±1.3 | **31.5**±1.4 |
| MMLU | Normalized Accuracy | 25.0±0.4 | **25.5**±0.4 |
| CommonsenseQA | Normalized Accuracy | 20.2±1.2 | **21.0**±1.1 |
| CoQA | Exact Match | 32.3±2.0 | **37.4**±2.0 |
| TruthfulQA | BLEU | 30.4±1.6 | **31.3**±1.6 |
| Lambada | Normalized Accuracy | 43.0±0.7 | **44.9**±0.7 |
Another feasible way to compete with models in the market would be continual pre-training with a larger vocabulary size. We believe this is a good topic for discussion, but it involves several non-trivial research challenges not strongly related to the main contributions of this paper, i.e., exploring the optimal compute allocation considering vocabulary sizes. Therefore, we will discuss it in the revised version and leave it as an important future work. The challenges we will discuss include:
- Expanding the vocabulary necessitates changes in the tokenization process, which can lead to inconsistencies in token segmentation.
- Ensuring that these new embeddings are compatible and effectively integrate with the pre-trained embeddings is non-trivial.
- Catastrophic forgetting of old word embeddings when learning new word embeddings.
We will discuss all the above in the revised version. Thank you again for your valuable comments.
### W2.1: IsoFLOPs method is very sensitive.
Thank you for your insightful question. You raise a valid point – the IsoFLOPs-based approach can be sensitive to some extent, depending on the granularity, range, and quality of the fitting data. Since the pioneering work on scaling laws by Kaplan et al. 2020 [1] and Hoffmann et al. 2022 [2], the IsoFLOPs-based approach has become a widely-used tool to study the trend of model performance [3]. We have discussed it in our Appendix B.1, and we will add more details on how to reduce sensitivity, such as outlier data removal and repeated experiments, in our polished version.
To evaluate the goodness of fit, we use relative mean square error (rMSE) and the coefficient of determination (R^2). As shown in the table below (also in Figure 3), the results indicate a good fit, with rMSE < 0.001 and $R^2$ >= 0.89 for all the considered attributes: non-vocabulary parameters ($N_{nv}$), vocabulary parameters ($N_v$), and training characters ($H$). This suggests that these attributes follow a power law with respect to the FLOPs budget.
| | $N_{nv}$ | $N_v$ | $H$ |
|--------|----------|------|-----|
| rMSE | 0.00026 | 0.00051 | 0.00017 |
| $R^2$ | 0.93 | 0.89 | 0.96 |
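To make these goodness-of-fit metrics concrete, here is a small illustrative sketch of fitting a power law in log-log space and computing rMSE and $R^2$. The synthetic data, the log-space fitting procedure, and the exact rMSE definition (mean squared relative residual) are our assumptions for illustration, not the authors' actual fitting code.

```python
import numpy as np

# Synthetic data: parameters roughly following a power law in FLOPs,
# with small multiplicative noise (illustrative only).
flops = np.array([1e18, 3e18, 1e19, 3e19, 1e20])
noise = np.random.default_rng(0).normal(0, 0.02, flops.size)
params = 0.1 * flops ** 0.5 * (1 + noise)

# Fit log(params) = b * log(flops) + log(a), a linear fit in log space.
b, log_a = np.polyfit(np.log(flops), np.log(params), 1)
pred = np.exp(log_a) * flops ** b

# rMSE here: mean squared *relative* residual (one common definition).
rmse = np.mean(((params - pred) / params) ** 2)

# R^2 computed in log space, as is common for power-law fits.
ss_res = np.sum((np.log(params) - np.log(pred)) ** 2)
ss_tot = np.sum((np.log(params) - np.log(params).mean()) ** 2)
r2 = 1 - ss_res / ss_tot

print(f"exponent b={b:.3f}, rMSE={rmse:.2e}, R^2={r2:.4f}")
```

A small rMSE together with an $R^2$ close to 1, as reported in the table above, indicates that the power-law form explains almost all of the variation in the fitted attribute.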
Furthermore, the optimal vocabulary predictions (Table 1) from the IsoFLOPs-based method and the derivative-based method are aligned across small-scale and large-scale models. This independent verification by the derivative-based method validates the predictions from the IsoFLOPs-based method. Therefore, we believe that the IsoFLOPs-based method works well in our case.
[1] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361.
[2] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. 2022. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556
[3] Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. 2021. Scaling language models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446.
---
Rebuttal Comment 1.1:
Comment: Thank you for your thorough response to my questions. I am pleased to see the new results. However, there is room for improvement in this paper, such as Figure 3 (left), where the distribution of model sizes is very uneven. For example, for models below 100M, the basic points of model size are concentrated at 33M and ~90M, which will seriously affect the exponential factor of the fitted curve.
---
Reply to Comment 1.1.1:
Title: Response to reviewer Y8zo
Comment: Dear Reviewer Y8zo,
Thank you for your timely feedback. We're pleased that you found our new results compelling and appreciate your insightful comments on areas for improvement, particularly regarding Figure 3 (left). We would like to offer the following clarifications:
Our study expands on traditional scaling law approaches by incorporating additional parameters: Nnv (non-vocabulary parameters), Nv (vocabulary parameters), and H (training characters). This contrasts with previous work that primarily considered N (total model parameters) and D (training tokens). The inclusion of these additional factors adds complexity to the scaling law fitting process. Given the high computational costs associated with scaling law experiments, we concentrated our efforts on a limited set of non-vocabulary parameters while exploring a broader range of vocabulary parameters. This focus aligns with the primary objective of our research: investigating the impact of vocabulary size on model performance.
To achieve this, we deliberately selected 10 groups with varying vocabulary sizes for each of the 6 fixed groups of non-vocabulary parameters. While this design choice accounts for the uneven distribution seen in Figure 3 (left), it also allows for a more thorough exploration of vocabulary size effects, as demonstrated by the diverse data points in Figure 3 (middle). Our experimental design and results robustly support our main conclusion: larger models benefit from larger vocabularies.
We believe these clarifications provide a more comprehensive understanding of our research methodology and findings. We appreciate your careful review and hope these explanations enhance your confidence in our paper. We look forward to any further feedback you may have.
Best regards,
The Authors
---
Rebuttal 2:
Title: Response to reviewer Y8zo's second part
Comment: ### W2.2: The experiments also look insufficient.
It is noteworthy that we conduct extensive experiments on **1200 models pre-trained from scratch** (6 non-vocabulary parameter settings x 10 vocabulary size settings x 20 training data settings) to fit our vocabulary scaling law. The key contributions of this paper are the findings about how the vocabulary affects model performance and how much compute should be allocated to the vocabulary, based on the two proposed approaches.
Following previous studies [1,2,3], we mainly use the held-out validation loss to evaluate the 1200 trained models. It is a better metric than downstream task performance, as the held-out loss not only provides an unbiased measure of the model's ability to generalize to new data but also enjoys high computational efficiency. In contrast, downstream task performance varies greatly across tasks, making it unsuitable as the main evaluation metric.
The evaluation on downstream tasks is one of the ways we verify our prediction, so we do not devote much space to it in the main paper. For downstream tasks, we conduct more experiments in the answer to your #Q1. The new results will be added in our polished version.
### Q1: In determining the scaling law for the relationship between non-embedding model size and data (such as the Chinchilla law), why is it assumed that the vocabulary size is independent of these two factors.
Thanks for your question! We do not assume that the vocabulary size is independent of parameters and data. Instead, we make some adjustments in the Preliminary section: 1) we break down the total parameters into non-vocabulary parameters and vocabulary parameters; 2) we measure data not in tokens but in training characters.
By doing so, the vocabulary size $V$ is independent of the non-vocabulary parameters $N_{nv}$ and the number of training characters $H$. In an experimental configuration, developers can vary the vocabulary size without affecting the non-vocabulary parameters or training characters.
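As a toy illustration of this decomposition (the `2*V*d` embedding-plus-head formula and the `12*L*d^2` transformer-body estimate below are rough, commonly used approximations we assume for illustration, not the paper's exact parameter accounting):

```python
def vocab_params(V: int, d: int) -> int:
    """Vocabulary parameters: input embedding plus an untied LM head, 2 * V * d."""
    return 2 * V * d

def non_vocab_params(L: int, d: int) -> int:
    """Rough transformer-body estimate: ~12 * L * d^2 (attention + MLP blocks)."""
    return 12 * L * d * d

d, L = 2048, 24  # hypothetical hidden size and layer count
N_nv = non_vocab_params(L, d)

# Varying V changes N_v but leaves N_nv untouched, which is what makes
# V an independent axis of the scaling law.
for V in (32_000, 43_000, 128_000):
    N_v = vocab_params(V, d)
    print(f"V={V}: N_v={N_v/1e6:.0f}M, N_nv={N_nv/1e6:.0f}M")
```

Note also that vocabulary parameters sit in a single embedding layer and a single output head, so they grow only with width ($d$) and vocabulary size ($V$), while the non-vocabulary parameters grow with both depth and width.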
Below, we detail our motivation for considering vocabulary and non-vocabulary parameters separately:
Traditionally, scaling up model parameters in language models has been approached in two ways: increasing depth (i.e., the number of layers) or width (i.e., the hidden size). Current empirical practices often involve expanding both simultaneously [4]. This approach overlooks crucial distinctions in how different parameters benefit from such expansion. Non-vocabulary parameters can benefit from increases in both depth and width, allowing for more complex hierarchical representations and broader feature capture. In contrast, vocabulary parameters, associated with the word embeddings and the language model head, are generally confined to a single layer, limiting their ability to benefit from increases in model depth. This disparity in growth potential between non-vocabulary and vocabulary parameters suggests that, to maintain a balanced growth rate, it is better to consider vocabulary and non-vocabulary parameters separately.
[4] Yi Tay, Mostafa Dehghani, Samira Abnar, Hyung Chung, William Fedus, Jinfeng Rao, Sharan Narang, Vinh Tran, Dani Yogatama, and Donald Metzler. 2023. Scaling Laws vs Model Architectures: How does Inductive Bias Influence Scaling? In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 12342–12364, Singapore. Association for Computational Linguistics.
---
Rebuttal 3:
Title: Any New Comments Would be Greatly Appreciated
Comment: Dear Reviewer Y8zo
We sincerely appreciate your thorough review and the valuable suggestions and comments you provided for our paper. We have carefully considered each of your points and have addressed them in detail in our rebuttal.
With the Author-Review Discussion period nearing its conclusion, we want to ensure that all your concerns have been fully addressed. If there are any questions or unresolved issues, we are eager to provide further clarification or make any necessary revisions.
Thank you again for your thoughtful feedback.
Best regards,
The Authors | Summary: This paper investigates the impact of vocabulary size on the efficiency of large language models (LLMs). Using models with 33 million to 3 billion parameters, it finds that optimal vocabulary size is limited by computational resources. The study introduces two methods to determine the best vocabulary size, showing that vocabulary parameters should scale slower than other parameters. Results highlight the significant yet underestimated role of vocabulary in scaling LLMs effectively, suggesting that larger vocabularies can improve model performance.
Strengths: The paper addresses a unique aspect of language model scaling by investigating the impact of vocabulary size on model performance, a dimension that is often overlooked in LLM research.
The introduction of two novel methods to determine the optimal vocabulary size—empirical IsoFLOPs and a derivative-based approach—provides practical tools for optimizing LLM training and deployment.
The analysis in this paper is quite in-depth, and some conclusions can provide references for subsequent LLM training efforts.
Weaknesses: This paper conducted experiments on language models of various parameter sizes, but the largest model tested was only 3 billion parameters. It would be better if we could further verify models with more than 7 billion parameters. I believe both the industrial and academic communities are eager to know whether the scaling law for vocabulary can generalize to larger models.
Technical Quality: 3
Clarity: 3
Questions for Authors: NA
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This paper discusses the limitations in Appendix B.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### W1: This paper conducted experiments on language models of various parameter sizes, but the largest model tested was only 3 billion parameters.
We acknowledge the importance of evaluating our approach on larger models to establish its scalability. Increasing the model size necessitates pre-training on a larger corpus, which in turn demands more computational resources. For instance, conducting pre-training experiments on 7B models would require an immense computational budget exceeding 10^22 FLOPs, translating to approximately 6 weeks of training time on a cluster with 64 A100 GPUs. However, such a substantial level of computational resources is currently beyond our reach during the rebuttal period. Despite our desire to explore larger model sizes, we are constrained by the practical limitations of our available resources.
Nonetheless, the significance of a scaling law lies in investigating it through experiments at a relatively small scale to help reasonably allocate computational resources when training a large model, thus avoiding wasted compute. Our experiments with 1200 pre-trained models have demonstrated the existence of an optimal vocabulary size under FLOPs constraints, and the predictions from the theoretical analysis (derivative-based approach) and experimental fitting (IsoFLOPs-based approach) agree with each other. In fact, we have also made predictions for the pre-training of a 300B model, where the two approaches align well, so we believe the conclusions should hold for larger models.
Furthermore, the change in vocabulary size from 32K in Llama-2 to 128K in Llama-3, which resulted in performance improvement, can be also seen as a verification of our conclusion regarding increasing the vocabulary size when there are more computational budgets.
In conclusion, we appreciate your suggestions and we will try our best to conduct more experiments on larger models. We hope you can understand our computational limitations. We will also discuss this in the revised version.
---
Rebuttal 2:
Title: Any New Comments Would be Greatly Appreciated
Comment: Dear Reviewer EsaU,
We are truly appreciative of the time and effort you have dedicated to reviewing our paper. Your thoughtful feedback and constructive suggestions are valuable to us. We have carefully addressed your comments in our rebuttal to enhance the quality of our work. As we approach the final days of the Author-Review Discussion period, we would like to ensure that all your concerns have been comprehensively addressed. Should there be any remaining questions or issues, we are willing to provide further clarification or additional revisions. Thank you once again for your insightful contributions to our work.
Best regards,
The Authors | null | null | Rebuttal 1:
Rebuttal: ### General Response
We are grateful for the reviewers' efforts and the recognition of our contributions:
- **Novel Research Topic:** The paper explores the unique impact of vocabulary size on language model performance, an aspect often overlooked in LLM research [EsaU,Y8zo,zt86].
- **Analyses:** We provide in-depth analyses of why there exists an optimal vocabulary size for a given FLOPs budget and why the optimal vocabulary size increases with more FLOPs, with theoretical analyses in Appendix A.1 and demonstration experiments in Sec 3 and Appendix A.2. [EsaU,zt86]
- **Experiments:** The paper includes two effective methods (IsoFLOPs and a derivative-based approach) to predict the optimal vocabulary setting. Extensive experiments on **1200 pre-trained models** (6 non-vocabulary parameter settings x 10 vocabulary size settings x 20 training data settings) are conducted. [EsaU,zt86]
- **Applications:** The study's findings offer two practical tools for optimizing the compute allocation of LLM training, with consideration of the vocabulary size. [EsaU,Y8zo,zt86]
In response to the feedback of reviewers, we have performed additional analyses and experiments to address the raised concerns. Below, we summarize our responses and the improvements made to our paper:
- We evaluate the 3B pre-trained models with different vocabulary sizes on more downstream tasks. The model that uses our suggested vocabulary size outperforms its counterpart by a clear margin.
- We supplement the paper with a new perspective on parameter growth to demonstrate that vocabulary parameters need to be considered separately from the total parameters, and that larger models deserve larger vocabularies.
- We compare the models with the same training tokens instead of the same training FLOPs as asked.
All of the suggestions will be considered in our polished version. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
CultureLLM: Incorporating Cultural Differences into Large Language Models | Accept (poster) | Summary: This paper introduces a pipeline that enhances LLMs' ability on culture-aware tasks (such as hate speech detection and bias detection). The proposed CultureLLM includes three stages: sampling, semantic data augmentation, and fine-tuning. The authors investigate the effectiveness of CultureLLM in nine languages and eight culture-related tasks. Their extensive experiments and analysis demonstrate that CultureLLM significantly improves LLM performance on culture-aware tasks while preventing catastrophic forgetting.
Strengths: 1. The paper presents solid experiments to investigate the effectiveness of the proposed method.
2. The paper is well-organized and easy to read.
Weaknesses: 1. Rely on human-annotated dataset (i.e., WVS). The author used WVS dataset as seed data to augment their fine-tuning dataset. This limits the applicability of the proposed method.
2. Most evaluation downstream tasks are anti-social detection tasks (such as offensive language, hate speech, toxicity and abusive language detections). The results may be biased to such tasks. I also wonder why these tasks are culture-related tasks. Are there any particular examples?
3. Some experiment details need clarification. 1. What are your fine-tuning hyperparameters, like the learning rate and training steps? 2. When you evaluate CultureLLM on the downstream tasks, are the input samples in English or the particular language?
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness.
More questions:
1. Did you check if any fine-tining samples are overlapped with your downstream samples? Or some samples are very similar to downstream tasks?
2. In your ablation studies, WVS+a only used a semantic template. Do you mean that LLMs are fine-tuned on semantic templates?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This proposed method uses the WVS dataset as seed data to augment their fine-tuning dataset. This limits its applicability.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1: Rely on human-annotated dataset (i.e., WVS). The author used WVS dataset as seed data to augment their fine-tuning dataset. This limits the applicability of the proposed method.**
Using human-annotated data to support research is a popular choice in most LLM papers, as shown in [1-7], which all use human-annotated datasets as seed data. On this point, one noteworthy advantage of our approach is that we use only 50 samples, significantly fewer than in the existing literature, demonstrating our lower reliance on annotations.
[1] Wang, Ruida, Wangchunshu Zhou, and Mrinmaya Sachan. "Let's Synthesize Step by Step: Iterative Dataset Synthesis with Large Language Models by Extrapolating Errors from Small Models." arXiv preprint arXiv:2310.13671 (2023).
[2] Chen, Zixiang, et al. "Self-play fine-tuning converts weak language models to strong language models." arXiv preprint arXiv:2401.01335 (2024).
[3] Yu, Longhui, et al. "Metamath: Bootstrap your own mathematical questions for large language models." arXiv preprint arXiv:2309.12284 (2023).
[4] Li, Xian, et al. "Self-alignment with instruction backtranslation." arXiv preprint arXiv:2308.06259 (2023).
[5] Liu, Bingbin, et al. "Tinygsm: achieving> 80% on gsm8k with small language models." arXiv preprint arXiv:2312.09241 (2023).
[6] Singh, Avi, et al. "Beyond human data: Scaling self-training for problem-solving with language models." arXiv preprint arXiv:2312.06585 (2023).
[7] Huang, Yiming, et al. "Key-point-driven data synthesis with its enhancement on mathematical reasoning." arXiv preprint arXiv:2403.02333 (2024).
**W2: Most evaluation downstream tasks are anti-social detection tasks (such as offensive language, hate speech, toxicity and abusive language detections). The results may be biased to such tasks. I also wonder why these tasks are culture-related tasks. Are there any particular examples?**
- First of all, the downstream tasks are used only for evaluation; they are not fine-tuning datasets, and we never train on them. Therefore, the concern that "the results may be biased to such tasks" does not apply to the models themselves.
- Second, it is genuinely challenging to find enough public datasets for evaluating cultural models; these downstream tasks are the ones we could find. Most of them relate to anti-social content, but our models are not limited to such content. In fact, these extreme cases are exactly where cultural misunderstanding is most likely to occur, so they "stress test" our models and verify their real-world applicability.
- Third, the table below shows input-output pairs for the "offensive language detection" task. What counts as offensive differs across cultures. The model must judge whether the input content is offensive and label it "OFF" or "NOT", and the correct label depends on the cultural context.
| Culture | Content | Label |
| -------- | -------- | -------- |
| Arabic | @USER يا حُب يا دُنيا جديدة يا احلى يا احلى ابيات القصيدة♥️. | NOT |
| German | Liebe Freunde, Gauck,das Symbol des Sklaventums! | OFF |
| Portuguese | AVANTE MEDIDAS o Brasil clama por elas pra cima dos corruptos Cadeia neles | NOT |
**W3: Some experiment details need clarification. 1. What are your fine-tuning hyperparamters? Like learning rate and training steps? 2. When you evaluate CultureLLM the downstream tasks, are the input samples in English or the particular language?**
Thanks for the reminder! We will add these details to the next version of the paper.
- Fine-tuning hyperparameters: for gpt-3.5-turbo, the number of training epochs is 3, and we use default settings for the other hyperparameters. For Llama-70b-Chat, the setup details are given in Sec. D.1.
- Language of input samples: the evaluation benchmarks are all in their respective languages.
**Q1: Did you check if any fine-tuning samples are overlapped with your downstream samples? Or some samples are very similar to downstream tasks?**
We checked manually, and the fine-tuning samples do not overlap with the downstream samples. This is easy to understand: the seed data from the World Values Survey (WVS) concern cultural values, which are abstract, whereas the evaluation downstream tasks concern the meaning of specific sentences.
**Q2: In your ablation studies, WVS+a only used a semantic template. Do you mean that LLMs are fine-tuned on semantic templates?**
No. "WVS+a" refers to the output of Step 2, not the output of Step 3, in Fig. 2.
- - -
If you are satisfied with the answers, please consider improving the rating to support our work! If you have more questions, please do not hesitate to let us know:)
---
Rebuttal Comment 1.1:
Comment: Thanks for your response.
I would like to clarify my comment "The results may be biased to such tasks."
I understand that the evaluation datasets are not used to train the model and do not introduce any bias into the model. My concern is whether the evaluation results can comprehensively reflect the model's ability to solve culture-related tasks or just reflect its ability to detect anti-social language. For example, I think that irony and sarcasm are highly culture-related and require the model to understand the implicit expression in the utterances. Can you think about any other tasks that are culture-related?
---
Reply to Comment 1.1.1:
Title: Further Response
Comment: Thanks for your response! We answer your question about culture-related tasks from two aspects: 1) why the tasks in the paper are culture-related; and 2) other tasks.
1) Why the tasks in our paper are culture-related:
- offensive language detect:
Offensive language detection is culture-related because what is considered offensive varies across cultures, influenced by different norms, values, and historical contexts. Effective detection requires understanding the specific cultural context to accurately interpret language and avoid misinterpretations. Irony and sarcasm are also seen as offensive language.
- hate speech detect:
Hate speech detection is culture-related because cultural and historical contexts shape what is considered hateful or discriminatory. For instance, expressions of prejudice that might be seen as hate speech in one country could be viewed as acceptable or less offensive in another, such as comments about national identity or ethnic groups. Additionally, cultural attitudes toward different social groups and historical events influence the interpretation of what constitutes hate speech, making cultural awareness essential for accurate detection.
- stance detect:
Stance detection is culture-related because cultural context shapes how opinions, attitudes, and expressions are interpreted, influencing what is seen as supportive, neutral, or oppositional. To accurately assess stance, it's essential to understand the cultural background and nuances of the language used.
- toxicity detect:
Toxicity detection is culture-related because perceptions of what constitutes harmful or abusive language vary across cultures. For example, a phrase deemed disrespectful in one culture might be seen as a mild critique in another. Additionally, cultural norms around politeness and confrontation can influence how toxicity is expressed and perceived, making cultural context essential for accurate detection.
- threat detect:
Threat detection is culture-related because different cultures have varying thresholds for what is considered threatening or aggressive. For instance, direct confrontation or strong language might be seen as a serious threat in one culture, while in another, it could be interpreted as a standard form of assertiveness or debate. Additionally, cultural attitudes towards authority and conflict can influence how threats are expressed and understood, making it crucial to consider cultural context for accurate detection.
- bias detect:
Bias detection is culture-related because perceptions of bias differ across societies. For example, gender biases might be recognized differently in cultures with varying levels of gender equality, and what is considered a racial stereotype can differ across societies. Understanding these cultural nuances is essential for accurately identifying and addressing bias in language and behavior.
- abusive detect:
Abusive language detection is culture-related because norms of acceptable speech vary across cultures. For example, humor or criticism that might be perceived as harmless in one culture could be seen as abusive in another, such as the use of sarcasm or direct criticism. Additionally, cultural differences in communication styles and social hierarchies affect how abuse is expressed and recognized, making cultural context crucial for accurate detection.
- spam detect:
Spam detection is culture-related because different cultures have varying norms around communication and marketing practices. For example, aggressive promotional tactics that might be considered spammy in one region could be standard business practices in another, such as frequent unsolicited messages. Additionally, cultural attitudes toward privacy and advertising influence how spam is defined and identified, requiring a nuanced understanding of cultural context for effective detection. | Summary: This paper introduces CultureLLM, a novel and cost-effective approach to address the cultural biases in Large Language Models (LLMs) that arise from the dominance of English training data. Traditional solutions like prompt engineering and culture-specific pre-training are either expensive or computationally intensive, and often fail to address the paucity of data from low-resource cultures. CultureLLM leverages the World Value Survey (WVS) as seed data and employs a semantic data augmentation method to generate additional training data. The authors utilized only 50 seed samples from WVS, extending them with augmented data to fine-tune culture-specific LLMs and a unified model, CultureLLM-One, covering 9 diverse cultures, including both rich and low-resource languages.
Strengths: - The paper studies culture understanding which is an important problem.
- The paper proposes an interesting data collection framework through role-playing.
Weaknesses: - __Assumption of Language as Culture__. The paper equates languages with cultures, which is an oversimplification. Cultures are multi-faceted and cannot be fully encapsulated by language alone. There are significant cultural differences within the same language-speaking regions that may not be adequately captured by this approach. Even worse, on line 205, the authors said they picked “representative countries''. But how do you define “representative”? More importantly, this selection method will likely cause the fine-tuned LLMs to be biased towards these “representative” countries for certain languages.
- __Limited Scope of Seed Data__. The methodology relies heavily on a small seed dataset of only 50 samples from the World Value Survey (WVS). While the approach is cost-effective, it may not capture the full breadth and nuances of cultural diversity.
- __Unfair Baseline Comparison__. The paper claims better performance than other culture-specific LLMs, like TaiwanLLM and SeaLLM. However, the comparison is unfair as these LLMs use different architectures and different amounts of pre-training data. The author could try fine-tuning the gpt-3.5-turbo with these baseline models’ fine-tuning/instruction-tuning data. Until then, “cost-effective” cannot be claimed.
Technical Quality: 2
Clarity: 3
Questions for Authors: - I am uncertain about the relevancy between the evaluated tasks and the fine-tuning data. How are values in the World Value Survey/extracted opinions relevant to offensive language/ hate speech/ stance? Can you elaborate?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The assumption of language as culture is a significant limitation of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1: Assumption of Language as Culture. The paper equates languages with cultures, which is an oversimplification. Cultures are multi-faceted and cannot be fully encapsulated by language alone. There are significant cultural differences within the same language-speaking regions that may not be adequately captured by this approach. Even worse, on line 205, the authors said they picked “representative countries''. But how do you define “representative”? More importantly, this selection method will likely cause the fine-tuned LLMs to be biased towards these “representative” countries for certain languages.**
*Why using one language to denote one culture:*
We strongly agree that language is not equal to culture but is only one part of it. However, studying culture through language is feasible for the following reasons:
- Existing literature on culture understanding shows that culture boundaries are fluid, dynamic and uncertain. Delanoy emphasizes that cultures are not homogeneous or static entities but are fluid and dynamic. He critiques essentialist views that rigidly define cultural boundaries and instead promotes a more nuanced understanding that considers the intersections of various cultural factors, such as ethnicity, language, religion, and socio-economic conditions [1]. Appadurai also discusses the fluidity of cultural boundaries and the creation of new cultural forms [2]. Cultural boundaries can be geographical regions, language, religion and so on. Based on above statements, using language as cultural boundaries is reasonable.
- Existing NLP works on culture also use language as culture boundaries. [3] focuses on Arabic and English culture. [4] focuses on 8 different cultures: English, Chinese, French, Russian, German, Arabic, Japanese, and Korean. [5] also uses language to split cultures, working on English, German, Russian, Bengali, Chinese, and Indonesian culture. [6] is a hand-crafted benchmark for evaluating diverse cultures that likewise uses languages as culture boundaries.
- Most downstream benchmarks are classified via language and we cannot get more fine-grained perspectives. For example, if we want to evaluate the performance of Arabic model, we can find benchmarks in Arabic culture. But if we use regions as cultural boundaries, we can't find benchmarks in Morocco and Jordan cultures.
- It is interesting to incorporate more fine-grained cultural differences within the same language-speaking regions into LLMs. Our method is an initial attempt that can generalize to more fine-grained cultures: simply changing the source of seed data achieves this. However, we cannot implement the method for all fine-grained cultures in one paper due to time and resource limits.
*Explanation on "representative countries":*
In terms of “representative countries”, our selection criterion is to choose the country with the *largest population* for each language. We agree that this can bias the fine-tuned LLMs towards the “representative countries”. However, we think this criterion may be the best way to align with the majority of people from a given culture.
Finally, note that the main contribution of the paper is a general algorithm for augmenting LLM culture data, *not* one specific to any cultures or countries. In the future, if more fine-grained culture data become available, our algorithm will work there as well.
[1] Delanoy, Werner. "What is culture." The Cambridge handbook of intercultural communication (2020): 17-34.
[2] Appadurai, Arjun. Modernity at large: Cultural dimensions of globalization. Vol. 1. U of Minnesota Press, 1996.
[3] Naous, Tarek, et al. "Having beer after prayer? measuring cultural bias in large language models." ACL (2024).
[4] Wang, Wenxuan, et al. "Not all countries celebrate thanksgiving: On the cultural dominance in large language models." arXiv preprint arXiv:2310.12481 (2023).
[5] Liu, Chen Cecilia, et al. "Are multilingual llms culturally-diverse reasoners? an investigation into multicultural proverbs and sayings." arXiv preprint arXiv:2309.08591 (2023).
[6] Myung, Junho, et al. "BLEnD: A Benchmark for LLMs on Everyday Knowledge in Diverse Cultures and Languages." arXiv preprint arXiv:2406.09948 (2024).
**W2: Limited Scope of Seed Data. The methodology relies heavily on a small seed dataset of only 50 samples from the World Value Survey (WVS). While the approach is cost-effective, it may not capture the full breadth and nuances of cultural diversity.**
- First, although 50 samples may seem narrow, they carry rich information because they are carefully designed and validated by human experts. The seed data cover 7 topics: social values, security, science and technology, religious values, ethical values and norms, political interest and political participation, and migration. The data are high quality and have been widely used in the literature. This is also supported by the excellent performance of CultureLLM on downstream tasks.
- Second, we conducted pilot experiments on selecting the seed data from WVS, exploring the effect of sample quantity and composition on model performance. The results indicate that these 50 samples perform better than the other settings. A possible reason is that fine-tuned LLMs perform better when the distribution of the fine-tuning data is similar to that of the pre-training data; the distribution of these 50 seed samples is similar to the pre-training data, so we selected them.
- Finally, the *main contribution* of the paper is not the seed data, but our data augmentation algorithm that remains general to any seed data and topics. That being said, in the future, with new topics and seed data available, users can easily develop their own versions of LLMs using our algorithm.
**W3: Unfair Baseline Comparison.[...]Until then, “cost-effective” cannot be claimed.**
We respond to this part in the general response.
---
Rebuttal 2:
Title: Remaining Rebuttal
Comment: **Q1: I am uncertain about the relevancy between the evaluated tasks and the fine-tuning data. How are values in the World Value Survey/extracted opinions relevant to offensive language/ hate speech/ stance? Can you elaborate?**
We analyze the performance for each task and report the WinRate in the table below.
| | offensive detect | hate detect | stance detect | toxicity detect | threat detect | bias detect | abusive detect | spam detect |
| ---------- | ---------------- | ----------- | ------------- | --------------- | ------------- | ----------- | -------------- | ----------- |
| ChatGPT | 0.6143 | 0.5433 | 0.6758 | 0.5280 | 0.4270 | 0.4464 | 0.5889 | 0.5846 |
| CultureLLM | 0.7203 | 0.6197 | 0.7359 | 0.6859 | 0.5172 | 0.5077 | 0.6622 | 0.6451 |
| WinRate | 0.1060 | 0.0764 | 0.0600 | 0.1579 | 0.0903 | 0.0612 | 0.0733 | 0.0605 |
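The WinRate row above appears to be the per-task score gap between CultureLLM and ChatGPT (it matches the difference of the two rows up to rounding in the last digit). A minimal sketch of that arithmetic, with the numbers copied from four columns of the table:

```python
# Sketch: WinRate in the table above equals CultureLLM's score minus ChatGPT's
# score on each task (values copied from the table; matches up to rounding).
chatgpt = {"offensive": 0.6143, "hate": 0.5433, "stance": 0.6758, "toxicity": 0.5280}
culturellm = {"offensive": 0.7203, "hate": 0.6197, "stance": 0.7359, "toxicity": 0.6859}

win_rate = {task: round(culturellm[task] - chatgpt[task], 4) for task in chatgpt}
print(win_rate["offensive"])  # 0.106
print(win_rate["toxicity"])   # 0.1579
```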
The relevance of each task with WVS can be described in the following:
- offensive language detect:
1. Cultural Context and Sensitivity to Offensive Language: The World Values Survey aims to capture cultural values and beliefs across different societies. One aspect of cultural values is the tolerance or acceptance of offensive language. In some cultures, certain words or expressions may be considered highly offensive, while in others they may be more tolerated or even commonly used.
- hate speech detect:
1. Societal Norms and Attitudes: The WVS provides data on societal norms, attitudes towards minorities, and levels of societal trust. This data can help understand the underlying societal conditions that might foster hate speech or, conversely, promote tolerance and inclusivity.
2. Cultural Context: Understanding the cultural context is crucial for effectively detecting and interpreting hate speech. The WVS offers a rich dataset for understanding cultural differences in values and norms, which can inform more nuanced hate speech detection algorithms.
- stance detect:
1. Understanding Contextual Influences on Stance: The WVS can provide the cultural and societal background needed to understand why certain stances are more prevalent in specific regions or among certain demographic groups. This context can be invaluable for interpreting the results of stance detection analyses, especially when comparing stances across different cultures and societies.
- toxicity detect:
1. Reflection of Societal Norms in Online Behavior: The WVS provides insights into the prevailing norms and values within societies, which can indirectly inform the context within which toxic behavior manifests online. Understanding societal attitudes towards diversity, authority, individual freedom, and tolerance can help in interpreting the root causes of toxic behavior and devising appropriate responses.
- threat detect:
1. Understanding Motivations and Behaviors: Insights from the WVS can help understand the cultural and societal contexts that may influence the behavior of individuals or groups posing threats. This knowledge can inform more targeted and effective threat detection and mitigation strategies that consider the root causes of conflict or aggression.
2. Cultural Sensitivity in Security Measures: Incorporating findings from the WVS can lead to more culturally sensitive security practices that respect local values and norms. This is crucial in global operations where misunderstanding cultural nuances can lead to ineffective or counterproductive security measures.
- bias detect:
1. Understanding Societal Norms and Attitudes: Insights from the WVS can help in understanding the cultural and societal norms that underlie biases. By analyzing patterns in global values and beliefs, we can identify prevalent stereotypes, prejudices, and discriminatory attitudes that may need to be addressed in bias detection efforts
2. Injection of more cultural nuances: the WVS data provide valuable context that is sensitive to cultural differences in values and norms, leaving models better equipped to detect and mitigate biases in datasets that reflect cultural nuances and ensuring that AI-driven decisions are fair and equitable across different societal contexts.
- abusive detect:
1. Cultural Contexts of Abuse: The WVS can help identify cultural norms that influence perceptions of what constitutes abusive behavior. This is crucial for developing detection systems that are sensitive to cultural differences, ensuring that they can effectively identify abuse without mistakenly flagging culturally specific but non-abusive interactions.
2. Injection of more cultural nuances: insights from the WVS can inform the development of more nuanced algorithms for detecting abusive behavior by providing context on societal values and norms.
---
Rebuttal 3:
Title: Reviewer's response
Comment: While I acknowledge that prior works have used language as a proxy for culture, the validity of this approach remains debatable. Using language as a cultural boundary can simplify the implementation, but it doesn't fully address the complexity and diversity of cultures that share a common language. Regarding the choice of “representative countries,” the authors mentioned, "we think the criterion may be the best way to align with the majority of people from certain cultures." However, isn’t that exactly the flip side of having a biased model?
In terms of the seed data, your explanations have raised more questions. First, in your pilot study, what other settings did you compare these 50 seed data against? Second, it is confusing when you refer to the seed data as being "similar to that of pre-training data." Are you referring to the pre-training data of GPT-3.5-turbo? If so, how do you know the distribution of proprietary data? Your response implies that you already know the pre-training data of GPT-3.5-turbo, which raises concerns about the methodology.
As for your justification regarding fair comparison, there are also some issues. First, referencing other leaderboards does not justify your experimental setup. Second, acknowledging that a fair comparison cannot be achieved due to data unavailability suggests that it is premature to label your approach as "cost-effective." We need to clarify which costs and what effectiveness metrics you are comparing against. Lastly, the mention of “CulturePark” in your response makes it seem like the responses are being reused for multiple submission, which may come across as unprofessional.
---
Rebuttal Comment 3.1:
Title: Further Response
Comment: We thank reviewer XR5a for your prompt response to our rebuttal. Now we address your further concerns.
> While I acknowledge that prior works have used language as a proxy for culture, the validity of this approach remains debatable.
Agreed. We are certainly not the first work to use language as a proxy for culture, and this is not our contribution. We hope our work is not judged on this point.
> Using language as a cultural boundary can simplify the implementation, but it doesn't fully address the complexity and diversity of cultures that share a common language.
Agreed. We never claimed such proxy can solve complexity and diversity of cultures. This is beyond the scope of the paper.
> Regard "representative countries" [...] isn’t that exactly the flip side of having a biased model?
Good point. Ideally we would use data from both language-rich and language-poor countries. But the bitter reality is that not only we, but most researchers, *cannot* make good use of language-poor countries, since *labeled* data remains extremely scarce. The main point of the paper is not extremely low-resource cultures (though we will target them in the future). Furthermore, we would like to point out that in the LLM world, *any language other than English* should be treated as a "poor" language, since its pre-training volume is significantly less than English's. In this sense, our contribution can be viewed as extending cultural understanding to non-English but not-so-poor languages; extending to poorer languages remains an open problem.
Regarding your comment about a biased model: indeed, bias cannot be overlooked, but our models are less biased than the original, English-dominated ChatGPT. We will add this discussion to the future version of the paper.
> First, in your pilot study, what other settings did you compare these 50 seed data against?
We chose different types of seed data based on different criteria, which ultimately supported our choice of the 50 seed samples.
In summary, there are 294 questions in the World Values Survey, and different seed data bring benefits at different scales. Our pilot study aimed to find the seed data among those 294 questions that bring the most improvement. We randomly selected different question subsets as seed data and evaluated how much improvement each brings on downstream tasks; we finally selected those 50 questions and their corresponding answers. The table below shows the results of the pilot study for Arabic, Bengali, and Chinese cultures. "Avg performance" is the average performance over those three cultures.
| Selection criterion | Avg performance | Min_30.0% Prob | Min_40.0% Prob |
|------------------------------|-----------------|----------------|----------------|
| Random selection of 50 (1) | .4211 | .3846 | .4231 |
| Random selection of 50 (2) | .4815 | .4322 | .4443 |
| Random selection of 100 (1) | .4933 | .4622 | .4513 |
| Random selection of 100 (2) | .4815 | .4312 | .4341 |
| Random selection of 150 (1) | .5233 | .4722 | .4713 |
| Random selection of 150 (2) | .5311 | .4722 | .4842 |
| Ours | .5917 | .4832 | .4954 |
> It is confusing when you refer to the seed data as being "similar to that of pre-training data." [...]
There is a seminal work on detecting the pre-training data of black-box LLMs [1]. Leveraging it, we explored whether the seed data were (probably) part of GPT-3.5-turbo's training set. Our hypothesis is that fine-tuned LLMs perform better when the distribution of the fine-tuning data is similar to that of the pre-training data. To test it, we applied the method to the settings above to estimate whether each seed set appears in the pre-training data; the table above shows the results. "Min_30.0% Prob" and "Min_40.0% Prob" are the method's membership scores for K = 30% and K = 40%, where higher scores indicate the data are more likely to be in the pre-training set. The results show that seed data bring more improvement when they are more likely to be in the pre-training data, which aligns with our hypothesis. We stress, however, that this is only a hypothesis, and we will revise the paper accordingly.
[1] Shi, Weijia, et al. "Detecting pretraining data from large language models." ICLR (2024).
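For concreteness, a minimal sketch of the Min-K% Prob score, under our reading of Shi et al. [1]: the score averages the log-probabilities of the K% least likely tokens of a text, and higher (less negative) scores suggest the text appeared in pre-training. The per-token log-probabilities below are made-up illustrative numbers, not values from our experiments:

```python
def min_k_prob(token_logprobs, k=0.3):
    """Min-K% Prob (after Shi et al., ICLR 2024): average log-probability of
    the k-fraction of tokens with the lowest log-probability. Higher (less
    negative) scores suggest the text was seen during pre-training."""
    n = max(1, int(len(token_logprobs) * k))
    lowest = sorted(token_logprobs)[:n]  # the k% least likely tokens
    return sum(lowest) / n

# Illustrative (made-up) per-token log-probabilities for one seed question.
logprobs = [-0.31, -0.02, -1.5, -0.8, -2.7, -0.1, -0.4, -3.2, -0.6, -0.05]
score_30 = min_k_prob(logprobs, k=0.3)  # averages the 3 lowest: -3.2, -2.7, -1.5
print(round(score_30, 4))  # -2.4667
```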
---
Reply to Comment 3.1.1:
Comment: > referencing other leaderboards does not justify your experimental setup
Agreed. We referenced other leaderboards to show that even the most popular leaderboards in industry and academia cannot guarantee absolute fairness (imagine how competitive they are). Absolute fairness is never easy in the LLM era; all we can do is try our best to provide relative fairness, which we hope the reviewer understands. If the reviewer thinks more ablations or comparisons are needed to ensure further fairness, we are happy to add them as data and hardware resources permit.
> Justification of "cost-effective"
Good question. Now we summarize why our method is "cost-effective":
- In terms of *money cost*, fine-tuning a language-specific LLM costs only $6, far cheaper than building existing models such as SeaLLM and Taiwan LLM.
- In terms of *data cost*, our approach does not require manually collected labeled data; it needs only 50 seed samples from the WVS (or any future survey), far cheaper than approaches that require heavy data annotation.
- In terms of *time cost*, fine-tuning a language-specific LLM takes only 1-2 hours, far less than other culture-specific models that require both pre-training and fine-tuning.
- In terms of *effectiveness*, our fine-tuned models outperform their counterparts by a large margin.
- In terms of *simplicity*, our algorithm is simple, requires only ordinary access to the OpenAI API, and is equitably usable by everyone.
In summary, we believe these five perspectives show that our approach is "cost-effective". We will add this discussion in the future version.
> Regarding the term "CulturePark"
Apologies for this mistake.
- - -
Again, we thank you for your professional feedback, which makes our paper even better! If you think our response has addressed your concerns, please reconsider the rating; otherwise, we are happy to address any further concerns:)
---
Rebuttal 4:
Title: Reviewer's response
Comment: I am unsure why the use of language as a proxy to study culture cannot be judged here, as it is a fundamental assumption of your work.
Regarding the selection of the 50 seed data, there seems to be a contradiction. Initially, you mentioned that "the distribution of those 50 seed data is similar to the pre-training data," implying that Shi et al.’s method was used for selection. However, your experiments with settings other than your 50 seed data suggest that these examples were chosen based on average performance. Could you clarify which criteria were ultimately used for the selection?
Moreover, Shi et al.’s method requires computing token probability, which is not available in GPT-3.5-turbo's output. How did you obtain the probability for each token?
---
Rebuttal Comment 4.1:
Title: Further Response
Comment: > I am unsure why the use of language as a proxy to study culture cannot be judged here, as it is a fundamental assumption of your work.
Indeed, this is an open question without a ground-truth answer. On this debatable problem, we used language as a cultural proxy as suggested by many other works, while agreeing and respecting that the reviewer may think otherwise. This is not the contribution of our work; we simply follow many previous works [1-4].
[1] Naous, Tarek, et al. "Having beer after prayer? measuring cultural bias in large language models." ACL (2024).
[2] Wang, Wenxuan, et al. "Not all countries celebrate thanksgiving: On the cultural dominance in large language models." arXiv preprint arXiv:2310.12481 (2023).
[3] Liu, Chen Cecilia, et al. "Are multilingual llms culturally-diverse reasoners? an investigation into multicultural proverbs and sayings." arXiv preprint arXiv:2309.08591 (2023).
[4] Myung, Junho, et al. "BLEnD: A Benchmark for LLMs on Everyday Knowledge in Diverse Cultures and Languages." arXiv preprint arXiv:2406.09948 (2024).
> Regarding the selection of the 50 seed data, there seems to be a contradiction. [...] Could you clarify which criteria were ultimately used for the selection?
Sorry for the misunderstanding. We chose the 50 seed samples based on their downstream-task performance. Observing the performance differences among seed sets, we hypothesized that "fine-tuned LLMs perform better when the distribution of the fine-tuning data is similar to that of the pre-training data." To verify this hypothesis, we applied Shi et al.'s method to our data, and the results support it.
> Shi et al.’s method requires computing token probability, which is not available in GPT-3.5-turbo's output. How did you obtain the probability for each token?
Actually, the token probability of GPT-3.5-turbo is available according to the OpenAI API documentation [1]:

```python
from openai import OpenAI

client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-3.5-turbo-0125",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
    logprobs=True,    # return log probabilities of the output tokens
    top_logprobs=2,   # also return the top-2 alternative tokens per position
)
print(completion.choices[0].message)
print(completion.choices[0].logprobs)
```
Part of the output:

```json
{
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today?"
      },
      "logprobs": {
        "content": [
          {
            "token": "Hello",
            "logprob": -0.31725305,
            "bytes": [72, 101, 108, 108, 111],
            "top_logprobs": [
              {"token": "Hello", "logprob": -0.31725305, "bytes": [72, 101, 108, 108, 111]},
              {"token": "Hi", "logprob": -1.3190403, "bytes": [72, 105]}
            ]
          },
          {
            "token": "!",
            "logprob": -0.02380986,
            "bytes": [33],
            "top_logprobs": [
              {"token": "!", "logprob": -0.02380986, "bytes": [33]},
              {"token": " there", "logprob": -3.787621, "bytes": [32, 116, 104, 101, 114, 101]}
            ]
          }
        ]
      }
    }
  ]
}
```
Shi et al.'s code was written almost a year ago, so we updated parts of it for the new API version, such as the function `calculatePerplexity_gpt3` in `src/run.py` [2].
[1] https://platform.openai.com/docs/api-reference/chat/create
[2] https://github.com/swj0419/detect-pretrain-code/blob/main/src/run.py
- - -
If our response addresses your concerns, please consider raising the rating! We are also happy to discuss your concerns further. Thanks for your support!
---
Rebuttal 5:
Title: Reviewer's response
Comment: Thank you for providing the OpenAI API call.
However, Shi's method requires obtaining the (log) probability of the input prompt (i.e. the input tokens), while the API you shared only provides probabilities for the output tokens.
Additionally, it appears this API call supports only the top 20 highest probability tokens. Could you please clarify how you obtained the minimum 30% and 40% log probabilities in your experiments?
---
Rebuttal Comment 5.1:
Comment: We thank reviewer XR5a for the detailed comments on Shi's method. Now we answer your further concerns.
>However, Shi's method requires obtaining the (log) probability of the input prompt (i.e. the input tokens), while the API you shared only provides probabilities for the output tokens.
Because this step requires the probability of the input tokens, our strategy is to prompt GPT-3.5-turbo with ```Just Repeat the following instruction: {seed data}```. GPT-3.5-turbo then repeats the seed data and outputs the probability of every input token in the seed data.
>Additionally, it appears this API call supports only the top 20 highest probability tokens. Could you please clarify how you obtained the minimum 30% and 40% log probabilities in your experiments?
There is another parameter, *n*, which determines how many chat completion choices to generate for each input message (note that you are charged based on the number of generated tokens across all of the choices; keep *n* at 1 to minimize costs).
We can adjust *n* and *top_logprobs* to obtain probabilities for more tokens.
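For concreteness, the Min-K% Prob score from Shi et al.'s paper can be sketched as follows. This is an illustrative sketch, not the authors' actual implementation, and the sample log-probabilities are example values in the style of the API output above.

```python
# Illustrative sketch (not the authors' implementation) of the Min-K% Prob
# score from Shi et al.: average the log-probabilities of the k% least
# likely tokens of the input text.

def min_k_percent_prob(logprobs, k=0.3):
    """Mean log-probability of the k% lowest-probability tokens."""
    n = max(1, int(len(logprobs) * k))
    lowest = sorted(logprobs)[:n]
    return sum(lowest) / n

# Example per-token logprobs; the first values mirror the API output above,
# the rest are made up for illustration.
logprobs = [-0.31725305, -0.02380986, -1.3190403, -3.787621, -0.05]
score_30 = min_k_percent_prob(logprobs, k=0.3)  # minimum 30% setting
score_40 = min_k_percent_prob(logprobs, k=0.4)  # minimum 40% setting
```

Roughly, a higher score (fewer low-probability outlier tokens) suggests the text is more likely to have appeared in the pre-training data.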
In fact, Shi's paper has little to do with our work, since it is just used to help us *potentially understand* why the selected 50 seed data are helpful in fine-tuning (such understanding could be wrong, actually). *That paper is not even mentioned in our manuscript.* As we discussed before, we exploited other methods such as different random selection to justify the 50 seed data.
We would like to kindly ask the reviewer to evaluate *our* technical contribution rather than the detailed discussion of only one possible explanation based on another work that is not mentioned in our submission. Our key contributions include: CultureLLM, a cost-effective solution to fine-tune culturally aware LLMs; a data augmentation approach; and strong experimental results on multiple datasets.
- - -
Since the discussion period ends in one day, we would like to ask whether the reviewer is satisfied with our previous responses to the other comments and, if so, to update the rating accordingly. We thank the reviewer for the continuous discussion.
---
Rebuttal 6:
Title: Reviewer's response
Comment: Thank you for your response. The authors have addressed several of my concerns. However, the fundamental assumption of using language to denote culture, along with the issue of unfair comparison, remains challenging to resolve.
Nonetheless, considering the authors’ efforts in their rebuttal, I have decided to increase my rating.
---
Rebuttal 7:
Title: Further Response
Comment: We thank reviewer XR5a for the improved rating. There seem to be two remaining concerns:
> the fundamental assumption of using language to denote culture
As we explained previously, for this open question, we are not the first work to use this assumption. We respect the reviewer's opinion on this point.
> the issue of fair comparison
For this comment, we have answered reviewer XR5a's demand on fine-tuning GPT models on the pre-training data of SeaLLM and Taiwan LLM by stating that their data is not publicly available.
In fact, the major experiments in the paper are conducted under fair comparison: we compare GPT-3.5-turbo with our fine-tuned GPT-3.5-turbo version, ensuring that they are using the same backbone models. If one only cares about absolute performance, we still compare with GPT-4, the most advanced model to date. We hope that the reviewer can acknowledge this.
In the future, with more multilingual data publicly released, we will continue more comparisons using the same backbone models.
- - -
The authors appreciate the multiple rounds of discussion with reviewer XR5a, which have made the paper more sound. We will include all discussion results and analyses in the final version of the paper.
Strengths: This is an interesting problem that faces LLMs and how they are relevant to different locales. The authors work to setup the problem and also highlight their challenges along the way and how they dealt with them. An example is how one deals with limiting the questions and templates for instructions generation and then using augmentation via an LLM but doing a test for relevancy of the generated augmentation. The ablation studies and experiments show improvements of the more geographically language tuned models.
Weaknesses: 1. It is important to heavily note that language is not equal to culture and this tends to cause confusion in this paper. Culture is way more complex than language and it might have been easier to call the language splits as culture for writing but this will introduce misunderstandings that will muddy your message.
2. Augmentation with semantic similarity checks has been explored before for NLP augmentation. How do you deal with the challenge that, because of BERT embeddings, sentences with very dissimilar meanings can still have high cosine similarity and thus pass through your filter?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. A question that comes to mind, which you partly address, is the crosslingual nature of your fine-tuning, instead of translation. Would not having both be even better? And what would the effect be of using more local-language-based LLMs, e.g., LeoLLM for the German language?
2. Augmentation with semantic similarity checks has been explored before for NLP augmentation. How do you deal with the challenge that, because of BERT embeddings, sentences with very dissimilar meanings can still have high cosine similarity and thus pass through your filter?
Note: You refer to the crowdsource study as having details including IRB/Ethics information in Appendix E, but this is not so. You just describe the task questions. Was IRB/Ethics approval sought given that the nature of the questions may for some people find offensive?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have worked to highlight their limitations and also challenges with the approach.
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1: It is important to heavily note that language is not equal to culture and this tends to cause confusion in this paper. Culture is way more complex than language and it might have been easier to call the language splits as culture for writing but this will introduce misunderstandings that will muddy your message.**
We strongly agree that language is not equal to, but only a part of, culture. However, using language to study culture is possible for the following reasons:
- Existing literature on culture understanding shows that cultural boundaries are fluid, dynamic and uncertain. Delanoy emphasizes that cultures are not homogeneous or static entities but are fluid and dynamic. He critiques essentialist views that rigidly define cultural boundaries and instead promotes a more nuanced understanding that considers the intersections of various cultural factors, such as ethnicity, language, religion, and socio-economic conditions [1]. Appadurai also discusses the fluidity of cultural boundaries and the creation of new cultural forms [2]. Cultural boundaries can be geographical regions, languages, religions, and so on. Based on the above statements, using language as a cultural boundary is reasonable.
- Existing NLP works on culture also use language as culture boundaries. [3] focuses on Arabic and English culture. [4] focuses on 8 different cultures: English, Chinese, French, Russian, German, Arabic, Japanese and Korean. [5] also uses language to split different cultures; the authors work on English, German, Russian, Bengali, Chinese, and Indonesian culture. [6] is a hand-crafted benchmark for evaluating diverse cultures, which likewise uses languages as culture boundaries.
- Most downstream benchmarks are classified by language, so we cannot get more fine-grained perspectives. For example, if we want to evaluate the performance of an Arabic model, we can find benchmarks for Arabic culture; but if we used regions as cultural boundaries, we could not find benchmarks for, say, Moroccan or Jordanian culture.
- Note that the main contribution of the paper is to present a general algorithm that augments LLM culture data and is not specific to any particular culture. In the future, if more fine-grained culture data become available, our algorithm will also work well.
[1] Delanoy, Werner. "What is culture." The Cambridge handbook of intercultural communication (2020): 17-34.
[2] Appadurai, Arjun. Modernity at large: Cultural dimensions of globalization. Vol. 1. U of Minnesota Press, 1996.
[3] Naous, Tarek, et al. "Having beer after prayer? measuring cultural bias in large language models." ACL (2024).
[4] Wang, Wenxuan, et al. "Not all countries celebrate thanksgiving: On the cultural dominance in large language models." arXiv preprint arXiv:2310.12481 (2023).
[5] Liu, Chen Cecilia, et al. "Are multilingual llms culturally-diverse reasoners? an investigation into multicultural proverbs and sayings." arXiv preprint arXiv:2309.08591 (2023).
[6] Myung, Junho, et al. "BLEnD: A Benchmark for LLMs on Everyday Knowledge in Diverse Cultures and Languages." arXiv preprint arXiv:2406.09948 (2024).
**W2: Augmentation with semantic similarity checks is something that has been worked on before for NLP augmentaiton. How do you deal with the challenge that even with high cosine similarity because of BERT embeddings, sentences that have very dissimilar meanings will be counted as similar or pass through your filter.**
There are two tricks that work in our study:
- Threshold Tuning and Filtering: Adjust the similarity threshold to balance between false positives and false negatives. We choose a more conservative threshold, which can reduce the likelihood of accepting semantically dissimilar sentences.
- Use of Cultural Contextual Information: BERT embeddings are context-sensitive, but sometimes context nuances are not fully captured. To mitigate this, we consider incorporating more cultural context.
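As a rough illustration of the first trick (a sketch with a hypothetical threshold value, not the exact implementation), the conservative cosine-similarity filter can look like:

```python
import math

# Sketch of a conservative semantic-similarity filter. In practice the
# vectors would come from a BERT-style sentence encoder; here they are
# passed in directly, and the 0.85 threshold is a hypothetical example.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def keep_augmentation(seed_vec, aug_vec, threshold=0.85):
    # A higher (more conservative) threshold rejects more borderline pairs,
    # reducing the chance of accepting semantically dissimilar sentences.
    return cosine_similarity(seed_vec, aug_vec) >= threshold
```

Raising the threshold trades recall (some good augmentations are dropped) for precision (fewer dissimilar sentences slip through), which matches the conservative choice described above.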
**Q1: A questions that comes to mind, that you partly address, is the crosslingual nature of your finetuning, instead of translation. Would not having both even be better? And what would the effect also of using more local language based LLMs be? e.g. LeoLLM for german langauge.**
Nice question! We fine-tuned LeoLLM with our generated data in both English and German versions. Results are shown below:
| Task | hate_check | hate_iwg_1 | hate | hate_off | offensive_eval |
|---------------------|------------|------------|------|----------|----------------|
| LeoLLM | 0.46 | 0.21 | 0.18 | 0.33 | 0.42 |
| LeoLLM+English data | 0.50 | 0.26 | 0.25 | 0.40 | 0.46 |
| LeoLLM+German data | 0.51 | 0.32 | 0.27 | 0.42 | 0.50 |
The results indicate that the fine-tuned model performs better when the language of the fine-tuning data is the same as that of the pre-training data. The English data still helps, which could be because English is much more plentiful in pre-training than the local languages, so it can naturally assist fine-tuning.
**Q2: Augmentation with semantic similarity checks is something that has been worked on before for NLP augmentaiton. How do you deal with the challenge that even with high cosine similarity because of BERT embeddings, sentences that have very dissimilar meanings will be counted as similar or pass through your filter.**
The response is shown in the response of W2.
**Note: You refer to the crowdsource study as having details including IRB/Ethics information in Appendix E, but this is not so. You just describe the task questions. Was IRB/Ethics approval sought given that the nature of the questions may for some people find offensive?**
Thanks for the reminder! We have gotten IRB/Ethics approval for the human study. | Summary: This research addresses cultural bias in large language models (LLMs) caused by training on mostly English data. Existing solutions can be expensive or require a lot of computing power. Here, they propose CultureLLM, a method that uses existing cultural surveys to create more training data and fine-tune LLMs. This method is shown to be effective and efficient for improving the cultural awareness of LLMs for 9 cultures.
Strengths: - The paper is clearly written and easy to follow
- The topic of the paper (culture LLMs) addresses a timely and important issue with the current LLM situation.
- The approach (especially using existing survey data) is simple and reasonable.
- The experiments show the efficacy of the cultureLLM suggested.
Weaknesses: - I don't see major weaknesses
Technical Quality: 3
Clarity: 3
Questions for Authors: - I'm wondering how the performance (or errors) are associated with the coverage of the 50 questions used for fine-tuning.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I think the limitations brought by the authors are adequate.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: I'm wondering how the performance (or errors) are associated with the coverage of the 50 questions used for fine-tuning.**
We analyze the performance for each task and report the WinRate in the table below.
| | offensive detect | hate detect | stance detect | toxicity detect | threat detect | bias detect | abusive detect | spam detect |
| ---------- | ---------------- | ----------- | ------------- | --------------- | ------------- | ----------- | -------------- | ----------- |
| ChatGPT | 0.6143 | 0.5433 | 0.6758 | 0.5280 | 0.4270 | 0.4464 | 0.5889 | 0.5846 |
| CultureLLM | 0.7203 | 0.6197 | 0.7359 | 0.6859 | 0.5172 | 0.5077 | 0.6622 | 0.6451 |
| WinRate | 0.1060 | 0.0764 | 0.0600 | 0.1579 | 0.0903 | 0.0612 | 0.0733 | 0.0605 |
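As a clarifying note (our inference from the numbers, not stated explicitly in the table): WinRate appears to be the per-task score of CultureLLM minus that of ChatGPT, e.g.:

```python
# Hedged sketch: judging from the table above, WinRate seems to be the
# per-task score difference (CultureLLM minus ChatGPT). Three tasks are
# shown for illustration.
chatgpt    = {"offensive": 0.6143, "hate": 0.5433, "toxicity": 0.5280}
culturellm = {"offensive": 0.7203, "hate": 0.6197, "toxicity": 0.6859}

win_rate = {task: round(culturellm[task] - chatgpt[task], 4) for task in chatgpt}
# win_rate == {"offensive": 0.106, "hate": 0.0764, "toxicity": 0.1579}
```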
The relevance of each task with WVS can be described in the following:
- offensive language detect:
1. Cultural Context and Sensitivity to Offensive Language: The World Values Survey aims to capture cultural values and beliefs across different societies. One aspect of cultural values is the tolerance or acceptance of offensive language. In some cultures, certain words or expressions may be considered highly offensive, while in others they may be more tolerated or even commonly used.
- hate speech detect:
1. Societal Norms and Attitudes: The WVS provides data on societal norms, attitudes towards minorities, and levels of societal trust. This data can help understand the underlying societal conditions that might foster hate speech or, conversely, promote tolerance and inclusivity.
2. Cultural Context: Understanding the cultural context is crucial for effectively detecting and interpreting hate speech. The WVS offers a rich dataset for understanding cultural differences in values and norms, which can inform more nuanced hate speech detection algorithms that are sensitive to context and do not inadvertently suppress legitimate expressions of cultural or political dissent.
- stance detect:
1. Understanding Contextual Influences on Stance: The WVS can provide the cultural and societal background needed to understand why certain stances are more prevalent in specific regions or among certain demographic groups. This context can be invaluable for interpreting the results of stance detection analyses, especially when comparing stances across different cultures and societies.
- toxicity detect:
1. Reflection of Societal Norms in Online Behavior: The WVS provides insights into the prevailing norms and values within societies, which can indirectly inform the context within which toxic behavior manifests online. Understanding societal attitudes towards diversity, authority, individual freedom, and tolerance can help in interpreting the root causes of toxic behavior and devising appropriate responses.
- threat detect:
1. Understanding Motivations and Behaviors: Insights from the WVS can help understand the cultural and societal contexts that may influence the behavior of individuals or groups posing threats. This knowledge can inform more targeted and effective threat detection and mitigation strategies that consider the root causes of conflict or aggression.
2. Cultural Sensitivity in Security Measures: Incorporating findings from the WVS can lead to more culturally sensitive security practices that respect local values and norms. This is crucial in global operations where misunderstanding cultural nuances can lead to ineffective or counterproductive security measures.
- bias detect:
1. Understanding Societal Norms and Attitudes: Insights from the WVS can help in understanding the cultural and societal norms that underlie biases. By analyzing patterns in global values and beliefs, we can identify prevalent stereotypes, prejudices, and discriminatory attitudes that may need to be addressed in bias detection efforts
- abusive detect:
1. Cultural Contexts of Abuse: The WVS can help identify cultural norms that influence perceptions of what constitutes abusive behavior. This is crucial for developing detection systems that are sensitive to cultural differences, ensuring that they can effectively identify abuse without mistakenly flagging culturally specific but non-abusive interactions.
2. Injection of More cultural nuances: Insights from the WVS can inform the development of more nuanced algorithms for detecting abusive behavior by providing context on societal values and norms.
3. Evaluating Tolerance Levels: The WVS data can provide insights into societal tolerance levels towards different forms of behavior, including what might be considered abusive. This can help in assessing the urgency and type of interventions needed to address abusive behaviors in various cultural contexts.
- spam detect
1. Cultural Variations in Communication: The WVS can shed light on cultural differences in communication styles and preferences, which can inform more nuanced spam detection algorithms that are better able to distinguish between legitimate mass communications and spam in different cultural contexts.
2. Attitudes Towards Technology and Privacy: Insights from the WVS regarding societal attitudes towards technology use, privacy, and data protection can help in tailoring spam detection efforts to respect cultural norms and expectations. For instance, societies with a high value on privacy might be more receptive to stringent spam filters.
---
Rebuttal Comment 1.1:
Comment: Thank you for the author response. | Rebuttal 1:
Rebuttal: Dear Reviewers and AC,
We want to thank all reviewers for pointing out our strengths, including:
- problem significance: "addresses a timely and important issue with the current LLM situation", "an interesting problem that faces LLMs and how they are relevant to different locales"
- novel method: "is simple and reasonable"
- solid experiments: "the experiments show the efficacy of the cultureLLM suggested", with solid experiments investigating the effectiveness of the proposed method
- writing: "well-organized and easy to read"
Rebuttal for each reviewer has been submitted in respective sections. Here, we would like to clarify two common weaknesses.
Specifically, as raised by reviewer *XR5a* and *tmAM*, there remains one common weakness about the relationship between culture and language, which we aim to address here:
We strongly agree that language is *not* equal to, but only *a part of*, culture. However, using language to study culture is possible for the following reasons:
- Existing literature on culture understanding shows that cultural boundaries are fluid, dynamic and uncertain. Delanoy emphasizes that cultures are not homogeneous or static entities but are fluid and dynamic. He critiques essentialist views that rigidly define cultural boundaries and instead promotes a more nuanced understanding that considers the intersections of various cultural factors, such as ethnicity, language, religion, and socio-economic conditions [1]. Appadurai also discusses the fluidity of cultural boundaries and the creation of new cultural forms [2]. Cultural boundaries can be geographical regions, languages, religions, and so on. Based on the above statements, using language as a cultural boundary is reasonable.
- Existing NLP works on culture also leverage language as culture boundaries. [3] focuses on Arabic and English culture. [4] focuses on 8 different cultures: English, Chinese, French, Russian, German, Arabic, Japanese and Korean. [5] also uses language to split different cultures; the authors work on English, German, Russian, Bengali, Chinese, and Indonesian culture. [6] is a hand-crafted benchmark for evaluating diverse cultures, which likewise uses languages as culture boundaries.
- Most downstream benchmarks are classified by language, so we cannot get more fine-grained perspectives. For example, if we want to evaluate the performance of an Arabic model, we can find benchmarks for Arabic culture; but if we used regions as cultural boundaries, we could not find benchmarks for, say, Moroccan or Jordanian culture.
- Note that the main contribution of the paper is to present a general algorithm that augments LLM culture data and is not specific to any particular culture. In the future, if more fine-grained culture data become available, our algorithm will also work well.
Another thing we want to discuss is *"fair comparison"*. We agree that it would be ideal to fine-tune GPT models on the training data of TaiwanLLM and SeaLLM; unfortunately, their training data is not publicly accessible. In fact, popular LLM leaderboards such as Chatbot Arena and AlpacaEval rank models regardless of their sizes, training data, and post-training, considering only the final performance on the same benchmarks. Moreover, we realize that it is never easy to reach a "fair" comparison: if we fine-tuned the same models on their data, it would be unfair to our approach, since their pre-training data is significantly larger than ours. We would like to claim that, given a limited budget and GPU hardware, CultureLLM remains a cost-effective solution to quickly build a culture-specific LLM for low-resource cultures. This is the main contribution of the paper.
[1] Delanoy, Werner. "What is culture." The Cambridge handbook of intercultural communication (2020): 17-34.
[2] Appadurai, Arjun. Modernity at large: Cultural dimensions of globalization. Vol. 1. U of Minnesota Press, 1996.
[3] Naous, Tarek, et al. "Having beer after prayer? measuring cultural bias in large language models." ACL (2024).
[4] Wang, Wenxuan, et al. "Not all countries celebrate thanksgiving: On the cultural dominance in large language models." arXiv preprint arXiv:2310.12481 (2023).
[5] Liu, Chen Cecilia, et al. "Are multilingual llms culturally-diverse reasoners? an investigation into multicultural proverbs and sayings." arXiv preprint arXiv:2309.08591 (2023).
[6] Myung, Junho, et al. "BLEnD: A Benchmark for LLMs on Everyday Knowledge in Diverse Cultures and Languages." arXiv preprint arXiv:2406.09948 (2024).
- - -
We hope that your concerns can be addressed. Thank you for your hard work.
Authors of CultureLLM | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
HairFastGAN: Realistic and Robust Hair Transfer with a Fast Encoder-Based Approach | Accept (poster) | Summary: This paper proposes a novel framework for hairstyle transfer from single images. As previous works have either suffered from long optimization times or low generation quality, this work introduces a new encoder-based solution that balances both efficiency and quality. The solution decomposes the pipeline into four stages: pose alignment, shape alignment, color alignment, and refinement alignment, with a specialized encoder trained separately for each stage. Detailed experiments demonstrate that this approach outperforms previous methods both quantitatively and qualitatively.
Strengths: 1. The results exhibit a high quality, both qualitatively and quantitatively. Compared to related works, HairFast generates results with a more consistent identity and natural hairstyle.
2. I particularly appreciate the decomposition of hair shape and color, which enhances the flexibility of this method.
3. The experiments conducted are comprehensive, with several baseline methods compared, and the reported results are promising.
4. The write-up is thorough, including all necessary details (as well as the source code) required to reproduce this work.
Weaknesses: Overall, I enjoyed reading this paper and did not identify any significant weaknesses. The proposed method might be somewhat complex, but with the shared source code, reproduction should not pose a major issue. There are a few typos in the paper, but these should be easy for the authors to fix.
Technical Quality: 4
Clarity: 3
Questions for Authors: N/A
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: Limitations and failure cases are well discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer RjWU for their favorable review of our paper. We appreciate the reviewer's recognition of the value in our research and their contribution to the peer review process. The reviewer's support is significant in advancing our field of study. | Summary: The paper introduces HairFast, a model designed to tackle the task of transferring hairstyles from reference images to input photos for virtual hair try-on. This task is notably challenging due to the diverse poses in photos, hairstyle intricacies, and the absence of standardized metrics for evaluation. Existing state-of-the-art methods often rely on slow optimization processes or low-quality encoder-based models operating in StyleGAN's W+ space or using other low-dimensional generators. HairFast addresses these shortcomings by leveraging a new architecture operating in the FS latent space of StyleGAN, enhancing inpainting techniques, and incorporating improved encoders for better alignment and color transfer.
Strengths: 1) HairFast achieves near real-time performance, processing hairstyle transfers in less than a second.
2) The model produces high-resolution results, maintaining quality while operating in the FS latent space of StyleGAN
3) Unlike existing approaches, HairFast effectively addresses challenges related to pose variations and color transfer in hairstyle transfer tasks.
4) The paper compares with the competing methods based on StyleGAN.
Weaknesses: 1) For StyleGAN-based hairstyle editing methods, one prominent limitation is the further editability of the images beyond hair. Methods like Barbershop, HairNet etc overfit on some part of the face to limit further editability. It is difficult to assess if the properties of editability of the underlying GAN are preserved. Can the current method edit the length of the hair, hairstyle (wavy, curly), and pose of the face after the hairstyle transfer is performed?
2) While the paper claims identity preservation, there is less evidence in terms of quantitative analysis. A score based on the Arcface model may help in providing a robust analysis of the method.
Technical Quality: 3
Clarity: 3
Questions for Authors: Can your method edit hair length, hairstyle type (e.g., wavy, curly), and facial pose post-hairstyle transfer, while preserving the editability of other facial features without overfitting?
Have you used ArcFace or similar models to quantitatively assess identity preservation?
How does your method ensure continued editability of facial features after hairstyle transfer, and what measures prevent degradation of initial edits?
Can your method effectively handle varying attributes like hair length, waviness, and curliness? How does its flexibility compare with existing methods?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have discussed the limitations.
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer fS7X for their valuable input and the time they have invested in reviewing our work.
**Can your method edit hair length, hairstyle type (e.g., wavy, curly), and facial pose post-hairstyle transfer, while preserving the editability of other facial features without overfitting?**
Our method is indeed capable of modifying hair shape, including length, through the use of sliders. This functionality is achieved by training the Shape Adaptor to project hair shape attributes into a small number of independent normal distributions. In our work, we take these attributes from another image, but they can also be edited by hand using the sliders. For more detailed information on this approach, we refer to the CtrlHair method [1].
Regarding attributes such as hair waviness, curliness, and facial pose, these can be edited using complementary techniques like StyleFeatureEditor [2]. This method can be applied to our output image as a post-processing step, allowing for further refinement of these specific attributes.
It's important to note that our method maintains the editability of other facial features without overfitting, as the hair editing process is separate from other facial attribute manipulations.
[1] Xuyang Guo, Meina Kan, Tianle Chen, Shiguang Shan, GAN with Multivariate Disentangling for Controllable Hair Editing. ECCV 2022
[2] Denis Bobkov, Vadim Titov, Aibek Alanov, Dmitry Vetrov, The Devil is in the Details: StyleFeatureEditor for Detail-Rich StyleGAN Inversion and High Quality Image Editing. CVPR 2024.
**Have you used ArcFace or similar models to quantitatively assess identity preservation?**
Thank you for your insightful comment. While we did not include this analysis in the original manuscript, we have conducted additional quantitative assessments of identity preservation. These results are available in the rebuttal file on Tab 2, and we intend to incorporate them into the final version of the paper.
For this evaluation, we utilized the SFace model rather than ArcFace, as the latter was employed in training some of our encoders. Our findings demonstrate that our method outperforms most other approaches in the main experiments with respect to identity preservation.
**How does your method ensure continued editability of facial features after hairstyle transfer, and what measures prevent degradation of initial edits?**
While our current work has not specifically addressed this aspect of the problem, we acknowledge its importance and intend to explore it in future research.
**Can your method effectively handle varying attributes like hair length, waviness, and curliness? How does its flexibility compare with existing methods?**
Our method demonstrates considerable effectiveness in modifying hair length. However, similar to existing approaches, it encounters challenges in accurately transferring texture attributes such as waviness and curliness. We have included additional comparisons illustrating changes in these attributes in Figure 2 of the rebuttal document for a more comprehensive evaluation. | Summary: This paper introduces HairFast, a model that addresses the challenge of transferring hairstyles from a reference image to an input photo in near real-time with high resolution and superior reconstruction. Existing methods either suffer from slow optimization processes or low quality due to operating in low-dimensional spaces. HairFast utilizes a new architecture in the FS latent space of StyleGAN, enhanced inpainting, and improved encoders to efficiently handle pose differences and transfer hairstyle shapes and colors in less than a second.
Strengths: +1. The HairFast model achieves both high resolution and near real-time performance, outperforming optimization-based methods that are typically slow. This enables virtual hair try-on experiences that are more fluid and responsive for users.
+2. The model demonstrates superior reconstruction compared to optimization-based hairstyle transfer methods, indicating a higher level of accuracy and fidelity in transferring hairstyles from one image to another. This leads to more realistic virtual hair try-on results.
+3. The HairFast model uniquely addresses the issue of pose differences between the source and target images, which has been a challenge for existing hairstyle transfer methods. By including enhanced inpainting, improved encoders for better alignment and color transfer, and a post-processing encoder, the model can effectively transfer hairstyles even when poses are significantly different, enabling a wider range of virtual hair try-on scenarios.
Weaknesses: -1. The HairFast model operates within the FS latent space of pre-trained generative models like StyleGAN. This reliance restricts its performance when StyleGAN fails to represent certain hairstyles or facial features adequately. Additionally, this dependency limits the model's ability to generalize to different generative models or datasets.
-2. While the HairFast model excels at transferring common hairstyles, it may struggle with extremely complex or unconventional styles (e.g. highly individualized, extremely curly, or straight hair patterns).
-3. Although the HairFast model achieves near-real-time performance on high-performance hardware like the Nvidia V100 GPU, its performance may suffer on devices with limited computational resources (e.g. mobile devices or low-end PCs).
Technical Quality: 3
Clarity: 3
Questions for Authors: -1. Since "Fast" is a key highlight of this work, why is there a lack of in-depth computational complexity comparisons, such as FLOPs and parameter count?
-2. How does the HairFast model compare to other state-of-the-art hairstyle transfer methods based on the diffusion model in terms of accuracy, speed, and user-friendliness?
-3. In line 173, what is the objective basis for setting the value of \alpha to 0.95?
-4. How does the HairFast model handle hairstyles that are not well-represented in its training dataset?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: -1. The HairFast model is likely trained on a limited set of similar datasets. As a result, its performance may degrade when applied to diverse datasets with a wider range of hairstyles, ethnicities, or facial features.
-2. The HairFast model operates within the FS latent space of pre-trained generative models like StyleGAN. This reliance restricts its performance when StyleGAN fails to represent certain hairstyles or facial features adequately. Additionally, this dependency limits the model's ability to generalize to different generative models or datasets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer JffK for their thoughtful comments and questions. Their insightful feedback has provided us with valuable perspectives to improve our paper. We appreciate the time and effort the reviewer has dedicated to this review.
**1. Since "Fast" is a key highlight of this work, why is there a lack of in-depth computational complexity comparisons, such as FLOPs and parameter count?**
Thank you for your comment. We acknowledge the importance of these metrics and have addressed this by calculating the FLOPs and parameter counts for our method and the compared approaches. These values have been included in Table 1 of our general rebuttal document, and we will incorporate them into the final version of the paper.
Our initial decision to omit these comparisons stemmed from the observation that they do not always accurately reflect real-world runtime performance. For instance, the CtrlHair method, despite using significantly fewer FLOPs, actually runs approximately 10 times slower due to its use of Poisson blending in their own inefficient CPU-based implementation. However, we recognize the value of providing this information for a comprehensive evaluation.
It's worth noting that despite these discrepancies between theoretical complexity and practical runtime, the overall ranking of methods in terms of speed remained consistent with our original findings.
**2. How does the HairFast model compare to other state-of-the-art hairstyle transfer methods based on the diffusion model in terms of accuracy, speed, and user-friendliness?**
The task of hairstyle transfer is highly challenging, and until recently, there were no diffusion-based methods addressing this problem. To the best of our knowledge, only one paper on this topic has been published, which appeared on July 19, 2024, significantly after our submission deadline. This paper claims to be the first diffusion-based framework for hairstyle transfer, likely making it the only work to date in this specific domain. To maintain anonymity, we cannot directly cite this work in our response.
As the code for this method is not yet available, our comparison is based on limited information. The method demonstrates effectiveness in transferring complex hairstyles and gradient hair colors. Due to the properties of diffusion models, it also performs well in inpainting tasks while maintaining good reconstruction. The authors conducted a user study comparing their work to ours, and according to their results, our method is only marginally inferior in terms of Accuracy, Preservation, and Naturalness.
However, this approach does not directly address the challenge of transferring hairstyles with significant pose differences. In terms of computational efficiency, their method will be much slower, as it requires running Stable Diffusion v1.5 twice for 30 steps. Additionally, their approach may be less flexible and user-friendly, as it transfers hair shape and color from a single reference, unlike our method which allows for multiple reference inputs.
**3. In line 173, what is the objective basis for setting the value of \alpha to 0.95?**
The value of \alpha=0.95 was determined through a systematic ablation study and manual fine-tuning. In our ablation configurations C and D, we initially set \alpha=0, which resulted in the hair color not being transmitted and even leaking from the target, causing various artifacts. This issue persisted for values up to \alpha=1. We then conducted a visual comparison across different \alpha values, ultimately selecting 0.95 as it provided the best balance between preserving the desired texture in hair reconstruction and effectively transferring the color. This value empirically demonstrated superior performance in maintaining the integrity of the reconstructed hair while still allowing for successful color transfer.
**4. How does the HairFast model handle hairstyles that are not well-represented in its training dataset?**
Our HairFast model, which operates on feature space (FS) image reconstructions, has the capability to store and transfer even highly complex attributes that were not present in the FFHQ dataset used for training StyleGAN and our encoders. The primary challenge in handling unusual hairstyles arises from the BiSeNet segmentation model. When faced with an unfamiliar domain or particularly complex facial images, BiSeNet may incorrectly select regions, leading to inaccurate hair transfer results.
To demonstrate our method's effectiveness in handling such cases, we conducted additional experiments involving cross-domain hair transfer. The results of these experiments are presented in Fig. 1 of the main rebuttal file. These findings illustrate that our method can successfully perform hair transfer even in domains that were not represented in the training data, showcasing its robustness and adaptability to novel hairstyles.
**Weakness discussion**
It is important to note that the identified weaknesses in points 1-3 are not unique to our approach but are common challenges faced by other methods addressing this problem as well. These limitations reflect the current state of the field and highlight areas for future research and improvement across all related techniques.
---
Rebuttal Comment 1.1:
Comment: Thanks to the author for the objective response. After reading the rebuttal, I will raise my rating to Borderline Accept. If the author can further adequately address my second question, I will immediately raise my rating to Weak Accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback and the improved rating. We appreciate your consideration.
In addressing the second question, we aimed to be as comprehensive as possible. We only refrained from naming the specific diffusion-based paper to maintain the anonymity required in the blind review process, given that their work directly references ours.
Could you kindly specify which additional details you feel are necessary for a comprehensive answer? We're fully prepared to elaborate on any points you believe require further clarification or expansion.
---
Rebuttal 2:
Comment: Thank you for the clarification. The diffusion-based model we are talking about uses a two-stage approach:
1) Generation of a bald proxy image using Latent ControlNet
2) Hair transfer utilizing a Hair Extractor based on U-Net, with features injected into the SD model via cross-attention layers
Each stage requires a specific training dataset and network training. While the authors provide comprehensive information, reproducing their work accurately would be time-consuming and resource-intensive.
We aim to include comparisons with this model in our paper's final version, either through our own implementation or by analyzing their published sample images.
Please let us know if you have any further questions or concerns. We're happy to provide additional information.
---
Rebuttal Comment 2.1:
Comment: Thanks for responding to address my concerns, I will raise my rating to Weak Accept! | null | null | Rebuttal 1:
Rebuttal: This rebuttal document contains tables and figures addressing Reviewers' comments. It includes:
1. A table with performance metrics (execution time, efficiency, parameters, memory usage)
2. A table showing identity preservation metrics
3. Visual results demonstrating the method's robustness on cross-domain hair transfer.
4. Comparative visuals for wavy and curly hair compared to baselines.
Pdf: /pdf/41eaa15f053653e8e9a546fbde625206a14c83d6.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
GSGAN: Adversarial Learning for Hierarchical Generation of 3D Gaussian Splats | Accept (poster) | Summary: The paper presents a 3D-aware GAN (3DGAN) framework using a 3D Gaussian Splatting model.
Given a collection of single-view 2D images (e.g., FFHQ, AFHQ), the proposed method can generate 3D-aware images of quality comparable to SOTA NeRF-based methods, but with a much faster rendering speed (up to 100x) during inference, thanks to the efficient 3DGS representation. The paper achieves strict 3D consistency, as all pixels are rendered with 3DGS without relying on 2D super-resolution post-processing (EG3D). This is among the first works (e.g., Gaussian Shell Maps in CVPR 2024) to introduce 3DGS into the 3D GAN framework.
In detail, the paper uses hierarchical Gaussian representations, and the 3D representations are learned in a coarse-to-fine manner.
To regularize the generation of 3D Gaussians, the paper introduces anchor Gaussians, which generate the next-level Gaussians but are themselves used only for regularization. Additional regularization losses are applied to the positions of the coarsest-level Gaussian anchors.
In terms of the generator architecture, the paper borrows structures from StyleGANs, as other 3D GANs do, but introduces some attention-based MLP modules.
The paper also introduces the layerscale layer, which is initialized to be zero to stabilize the positions of the gaussians in the early stage of the training. Coincidentally, the same strategy was also used in GGHead.
Strengths: -The paper presents a 3D GAN framework using a 3D GS representation. The method achieves strict 3D consistency and a significantly improved speed while achieving comparable FID scores to SOTA NeRF methods (e.g., Mimic3D)
-learning 3D representations from a single view in the wild image collection is an important research area and this area has improved significantly due to the improved computational efficiency of 3D representation (e.g., triplane in EG3D). The use of 3DGS in 3DGAN will likely boost the overall quality of 3D learning from in the wild 2D images, and the similar method could be applied in the future to learn other domains (e.g., objects, full body etc).
-The paper seems to have sufficient numerical and qualitative comparisons.
-The paper is easy to follow.
Weaknesses: -The related work section is extremely short and is lacking a lot of relevant citations. Please look at recent papers in the similar area
(e.g., **GGHead: https://arxiv.org/abs/2406.09377**,
**WYSIWYG: https://research.nvidia.com/labs/nxp/wysiwyg/**),
and properly cite all the related works. This is a major issue.
-Gaussian Shell Maps is also a highly relevant work (CVPR 2024), the code is available here: https://github.com/computational-imaging/GSM
-Most technical details are provided but some important details seem missing (see below)
Technical Quality: 3
Clarity: 2
Questions for Authors: -On the most important technical contribution, I did not fully understand why the paper needs to introduce anchor gaussians. Couldn't the same regularizations (if they exist at all) applied to the rendering gaussians themselves?
-In early 3D GAN research (e.g., pi-GAN), they used fully MLP-based architectures to learn unconstrained 3D space, but later EG3D showed triplanes are a more efficient 3D representations. Did the author consider the use of triplane?
-How many Gaussians are used in total with all layers and how sensitive the method is for the number of gaussians? I think this is an interesting question many readers would be interested in.
-it seems that the position regularizations only applied to the coarsest level in Eq (10). What about scale regularizations? What about regularizations to other layers? Please clarify.
-In Eq (4), why there is an additional delta s, which seems to be doing the same thing as s^{hat_l}? How is delta s < 0 enforced in the loss? This detail seems missing.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The limitations and potential negative impacts are explained. Additional limitation could be the generated image quality is still not on par with post 2D super resolution 3DGAN baseline (EG3D).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive review.
---
### “Short length of related work and lack of a lot of relevant citations”
We apologize for the lack of relevant citations, and will carefully revise the related works section including the papers below:
Gaussian Shell Maps (CVPR 2024)
They focus on the task of 3D human generation based on the SMPL template within an adversarial learning framework. Given a human body template, the generator learns to synthesize multiple shell maps containing Gaussian parameters.
We believe that the proposed method is more fundamental research about extending the 3D Gaussian representation in the domain of GANs without any structural priors.
Moreover, we also perform experiments on extending the proposed method with a template such as the FLAME face model; please refer to them.
GGHead (SIGGRAPH 2024)
First of all, please note that GGHead was released on arXiv (Jun 13, 2024) after the NeurIPS submission deadline (May 22, 2024).
GGHead focuses on 3D face generation using the FLAME template. Similar to Gaussian Shell Maps, they synthesize texture maps containing Gaussian parameters and position offsets. They also suggest initializing position offsets to 0, akin to our use of layer-scale in the generator architecture.
Their framework also requires the use of templates with a foreground mask. As mentioned earlier, we believe our work represents valuable and fundamental research of GS on 3D GANs.
WYSIWYG (CVPR 2024)
WYSIWYG is a 3D GAN based on an SDF representation. It first learns volume rendering (VR) at low resolution (LR) with uniform sampling, then distills the LR rendering weights into a high-resolution (HR) volume to reduce the number of sampling points per ray, achieving a feasible rendering speed directly at HR.
They show a prominent FID on FFHQ-512, but ours surpasses them in FID on AFHQ-Cat-512. Furthermore, ours still renders almost 100 times faster than WYSIWYG, which requires ~222 ms to render a single 512-resolution image on an A100 GPU.
---
### “Motivation and importance of anchor Gaussians”
First of all, we clarify that regularizations by loss functions (except adversarial and pose losses) apply only to the positions of the coarsest anchor Gaussians.
We observe that Gaussians used for rendering tend to become overly sharp and elongated, similar to findings in Gaussian Splatting [R#LD2e-1], with scales toward a specific axis approaching zero. This occurs because real-world textures are generally sharp, necessitating sharp Gaussians.
While this is not an issue in typical Gaussian splatting for 3D reconstruction, it poses problems for hierarchical Gaussians by excessively narrowing the possible positions of coarser Gaussians. For example, if a scale for a specific axis becomes zero, coarse Gaussians are limited to a plane rather than a volume. As visualized in Fig. 6 of manuscript, we empirically validate it degrades the quality of positions without the anchor Gaussians. An alternative approach is to prevent Gaussians from becoming too small, but it might cause images to be blurred and degrade visual quality.
Thus, we separate Gaussians for rendering from those used for regularization. Anchors, not involved in rendering, do not become overly sharp and can guide coarser Gaussian positions effectively. As shown in Appendix (lines 557-558), anchors have a higher scale than rendering Gaussians.
Furthermore, this method requires minimal additional computation since we only split the output layers ($\text{toGaus}$), and anchors not being used for rendering incur no extra rendering cost.
[R#LD2e-1] Feng, Yutao, et al. "Gaussian splashing: Dynamic fluid synthesis with gaussian splatting." CVPR 2024.
---
### “Consideration to use triplane representation”
Initially, we considered using triplane representation. Following EG3D, we synthesized triplanes with a StyleGAN2 generator and estimated warping offsets and Gaussian parameters from voxel features. While this approach worked, it underperformed compared to a transformer-based architecture.
We hypothesize that MLPs are more efficient for modeling scenes with Gaussian representations compared to NeRF-based methods, which require modeling density and color for every coordinate, including empty space. In contrast, our approach focuses only on the positions of Gaussians.
Moreover, a key architectural difference is that MLP-based methods such as pi-GAN cannot use attention, since performing attention between every sampled point on the rays is computationally infeasible. In our method, however, Gaussians interact through attention operations.
---
### “The number of Gaussians and the sensitivity of method about the number of Gaussians”
We use a total of 87296 and 349440 Gaussians for 256 and 512 resolution, respectively (i.e., 5 and 6 blocks for 256 and 512 resolution).
We experimented by halving the number of Gaussians in FFHQ-256, reducing the initial Gaussians from 256 to 128 (i.e., a total of 43648 Gaussians). As a result, the model with the reduced Gaussians achieves an FID of 7.99, degraded from the original 6.59, since representational power is reduced. However, we observe that the synthesized results are still visually plausible with multi-view consistency.
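For reference, the reported totals are consistent with each upsampling block quadrupling the Gaussian count, so that $N_0$ initial Gaussians and $L$ blocks yield $N_0(4^L-1)/3$ Gaussians in total. The factor-of-4 branching is our reading of the reported numbers, not something stated explicitly above; a minimal sketch:

```python
def total_gaussians(n_init, n_blocks, branch=4):
    """Total rendering Gaussians when each block splits every
    Gaussian into `branch` children (a geometric series)."""
    return sum(n_init * branch**level for level in range(n_blocks))

print(total_gaussians(256, 5))  # 87296  (256 resolution, 5 blocks)
print(total_gaussians(256, 6))  # 349440 (512 resolution, 6 blocks)
print(total_gaussians(128, 5))  # 43648  (halved-Gaussians ablation)
```

All three values match the counts quoted in the rebuttal, which supports the quadrupling assumption.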
---
### “Details about the regularizations, including position and scale regularizations / reason for $\Delta s$”
We apply regularization losses only on the coarsest-level Gaussians. For the other parameters (scale) or layers, we only use adversarial and pose losses. That is, scale parameters are regularized by architectural design.
For $\Delta s$, we aim to ensure that the scale of a fine Gaussian is smaller than that of its coarser counterpart. To this end, we introduce $\Delta s$, which reduces the scale by a bounded amount. We parameterize it by the image resolution and the number of coarsest-level Gaussians, as noted in Eqn. 13 of the Appendix.
---
Rebuttal Comment 1.1:
Comment: The authors addressed my concerns and assuming there will be major rewriting in the related work (please do check the related work sections of a few papers I mentioned) and improvements on the technical details as explained by the authors in the final revision, I am leaning toward acceptance.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer LD2e
Comment: We sincerely appreciate Reviewer LD2e for their thorough evaluation and insightful comments.
In the final version of the manuscript, we will carefully revise the related work section, ensuring a precise review of the related work sections in the papers mentioned by the reviewer. | Summary: This paper proposes a new 3DGAN based on the 3DGS representation. To maintain stable training and good generation quality, this paper introduces a hierarchical Gaussian to generate the results in a coarse-to-fine manner. Meanwhile, a reasonable transformer-based architecture is proposed to implement the hierarchical Gaussian. Moreover, the proposed Anchor Gaussian is also reasonable. The generation results in both the cat and human domains are good.
Strengths: 1. The whole pipeline is reasonable and interesting. To stable 3DGAN training when using 3DGS, it is smart to utilize a hierarchical architecture to predict the parameters of 3DGS in a coarse-to-fine manner.
2. The Anchor Gaussian can disentangle the rendering and regulation well, which helps the method generate high-quality results.
3. The transformer-based architecture is suitable to implement the Hierarchical Gaussians.
4. The results are good and the FID is also comparable. Meanwhile, since the inherent advantage of 3DGS, the proposed method can generate high-resolution images with low time-consuming.
Weaknesses: 1. The L_pose is not used in the EG3D or other baseline methods, and I want to know how this loss affects the training of the proposed method. Meanwhile, the pose encoder of the camera parameter Is trained simultaneously with GAN?
2. Is there a branch that generates the background regions similar to the Panohead? I find some artifacts in the generation results (i.e., some floaters).
Overall, this is a good paper that introduces a reasonable 3DGS-based GAN. I tend to accept the current version. I run the code, the results are good!
Technical Quality: 3
Clarity: 3
Questions for Authors: see the weakness.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: see the weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive reviews.
---
### “Effect of $L_\text{pose}$ and details about the pose encoder”
$L_{pose}$ is used to provide pose information to the generator. In detail, for real data, the discriminator learns to estimate the pose embedding corresponding to a given image via a contrastive loss. Similarly, for fake data, the generator learns to make the pose embedding of the synthesized image match its camera parameters as estimated by the discriminator. Thus, we can guide the generator to synthesize a proper 3D scene without the conditional adversarial losses that EG3D uses.
The pose encoder is jointly trained with the GAN; it is just a shallow MLP composed of 2 FC layers that transforms the camera parameters into a high-dimensional feature vector.
Besides, these kinds of pose losses are often used for 3D GANs [R#RZRC-1, 2, 3].
[R#RZRC-1] Deng, Yu, et al. Gram: Generative radiance manifolds for 3d-aware image generation. CVPR 2022.
[R#RZRC-2] Xiang, Jianfeng, et al. Gram-hd: 3d-consistent image generation at high resolution with generative radiance manifolds. ICCV 2023.
[R#RZRC-3] Jo, Kyungmin, et al. 3d-aware generative model for improved side-view image synthesis. ICCV 2023.
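A minimal sketch of such a shallow pose encoder, assuming EG3D-style 25-dimensional camera parameters (16 extrinsics + 9 intrinsics); the hidden and output widths, the ReLU nonlinearity, and the weight initialization are all illustrative, not the authors' exact choices:

```python
import numpy as np

rng = np.random.default_rng(0)

class PoseEncoder:
    """Shallow 2-FC-layer MLP mapping camera parameters to a pose
    embedding, as described in the rebuttal (dimensions illustrative)."""
    def __init__(self, in_dim=25, hidden=64, out_dim=256):
        self.w1 = rng.standard_normal((in_dim, hidden)) * 0.02
        self.b1 = np.zeros(hidden)
        self.w2 = rng.standard_normal((hidden, out_dim)) * 0.02
        self.b2 = np.zeros(out_dim)

    def __call__(self, cam):
        h = np.maximum(cam @ self.w1 + self.b1, 0.0)  # FC + ReLU
        return h @ self.w2 + self.b2                  # FC

enc = PoseEncoder()
emb = enc(rng.standard_normal((8, 25)))  # a batch of camera parameters
```

The resulting embedding would then enter the contrastive pose loss against the discriminator's estimate.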
---
### “Background generator and some artifacts such as floaters”
Yes. We deploy a background generator that resembles the architecture of the proposed method, as we want to model the entire 3D scene with Gaussian splats. There are some architectural differences: we use modulated FC layers instead of transformer-like blocks, and the positions of background Gaussians are normalized to lie on a sphere with radius 3, while foreground Gaussians reside in the [-1, 1] cube.
Also, we reduce the channels and the number of upsampling blocks. Other details are available in the Appendix.
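The position normalization described above can be sketched as follows; this is illustrative only (in particular, the tanh squashing for the foreground cube is our assumption, not a detail from the rebuttal):

```python
import numpy as np

def normalize_positions(raw, background=False, radius=3.0):
    """Map raw generator outputs to valid Gaussian positions:
    background Gaussians projected onto a sphere of the given radius,
    foreground Gaussians squashed into the [-1, 1] cube."""
    if background:
        # project each point onto the radius-3 sphere
        return radius * raw / np.linalg.norm(raw, axis=-1, keepdims=True)
    # tanh keeps foreground positions strictly inside the cube
    return np.tanh(raw)

pts = np.random.default_rng(1).standard_normal((10, 3))
bg = normalize_positions(pts, background=True)  # on the sphere
fg = normalize_positions(pts)                   # inside the cube
```

Keeping the two point sets on disjoint supports like this cleanly separates foreground and background geometry during rendering.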
We believe that some artifacts in the generated results, such as floaters, are actually not artifacts but rather high-frequency details that the background generator should synthesize.
However, since we have focused on building a generator architecture optimized for the foreground, we did not tune the background generator architecture much. For example, we use only a single upsampling block with a much smaller number of Gaussians (i.e., a total of 10K Gaussians for the background).
As a result, we observe a tendency for the Gaussians produced by the background generator to have large scales and blurred shapes, failing to model high-frequency details.
We believe this artifact can be resolved by introducing a 2D background generator with alpha-mask prediction as in PanoHead [R#RZRC-4], by carefully tuning the current background generator architecture, or simply by using a foreground mask to decompose the background.
[R#RZRC-4] An, Sizhe, et al. Panohead: Geometry-aware 3d full-head synthesis in 360deg. CVPR 2023. | Summary: The paper proposes a hierarchical Gaussian Splatting for 3D GAN. The authors claim that such structure lead to stable training and fast rendering.
Strengths: * The hierarchical GS structure is interesting.
Weaknesses: * The motivation to propose such a hierarchical GS structure is not clear. As stated in line 119-122, the design is used to mitigate the instability of position/scale in GS training. It is not an intuitive solution and it is not well interpreted in the paper. The examples in Fig. 6 are too simple and confusing, please refine the figures to make it easy to understand why the hierarchical design is reasonable.
* The hierarchical structure of Gaussian Splatting is not a new topic. The literature review about GS and its variants is not sufficient to support the claimed novelty.
* The author claims the rendering speed as their contribution. In my opinion, the fast rendering is the characteristic of GS itself. If the authors claim the proposed hierarchical GS makes it faster, ablation and discussion are needed as evidence. As shown in Tab. 1, the selected methods for rendering time comparison are all papers published before GS.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to "weaknesses".
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No negative social impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive reviews.
---
### “The motivation of hierarchical GS is not clear and Fig. 6 is confusing to understand”
GANs with 3D Gaussians solve a more complex problem than previous GANs for 2D image synthesis, as they must predict various parameters at once, such as the 3D position and shape of Gaussians together with color, whereas previous methods only need to estimate colors at fixed positions.
This high problem complexity makes the solution space sparser (i.e., all Gaussian parameters such as position and shape must be well-matched with each other), which degrades the training stability of GANs, since it is hard for the generator to converge to such a narrow set of desirable solutions.
For example, high-resolution data, which has a sparser feasible solution space, is harder to train on than low-resolution data [R#RZRC-1]. From this point of view, we design the generator to synthesize in a coarse-to-fine manner with regularization that reduces the sparsity of solutions, synthesizing Gaussians from the coarse level (easier to solve) to the fine level, similar to the approach of previous work [R#RZRC-1].
Therefore, we devise a method of regularizing the position and scale while generating Gaussians coarse-to-fine. Detailed rationale of ours is as follows:
1) With the hierarchical architecture, the positions of fine-level Gaussians are restricted by their coarser-level counterparts, so the possible positions of Gaussians are greatly reduced, making the model easier to optimize. Furthermore, since we define a local coordinate system from the parameters of coarser Gaussians, finer Gaussians must reside within their coarser counterparts. Thus, if the scale of the coarsest Gaussians does not diverge or explode, finer Gaussians will also be located within a valid range of the scene (i.e., they are used for rendering and located in the viewing frustum).
2) Along with the hierarchy, we enforce the scale of finer-level Gaussians to be smaller than that of their coarser-level counterparts. Thus, the generator learns to use Gaussians of various scales to build 3D scenes, leading it to depict both coarse structure and fine details. With this regularization, we prevent the generator from converging to shortcuts such as synthesizing only large-scale Gaussians.
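The two constraints above can be sketched as follows. This is a schematic of the idea, not the authors' exact parameterization: we assume child offsets are expressed in the parent's scale-normalized local frame (rotation omitted) and child scales shrink the parent's scale by a sigmoid-bounded factor.

```python
import numpy as np

def split_gaussians(parent_pos, parent_scale, offsets, scale_logits):
    """Derive child Gaussians from a parent so that (1) children lie
    within the parent's extent and (2) child scales are strictly
    smaller than the parent's."""
    # tanh bounds the local offset to (-1, 1), so children stay
    # inside the parent's axis-aligned extent
    child_pos = parent_pos + parent_scale * np.tanh(offsets)
    # sigmoid in (0, 1) guarantees child scale < parent scale
    child_scale = parent_scale / (1.0 + np.exp(-scale_logits))
    return child_pos, child_scale

rng = np.random.default_rng(2)
p_pos, p_scale = np.zeros(3), np.full(3, 0.5)
c_pos, c_scale = split_gaussians(p_pos, p_scale,
                                 rng.standard_normal((4, 3)),
                                 rng.standard_normal((4, 3)))
```

Under this construction, a bounded coarsest level is enough to keep every finer level inside the valid scene volume, which mirrors the argument in point 1).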
In Fig. 6, we demonstrate that the naive generator architecture produces images with artifacts from elongated Gaussians of large scales (left in Fig. 6). Additionally, it generates disorganized, invalid positions outside the cube (right in Fig. 6), instead of being well-aligned with the desired 3D scene. Conversely, our proposed method (bottom-right examples) synthesizes sharp textures without such artifacts and maintains structured positions.
Moreover, models with minimal or no constraints exhibit training instability (Fig. 5). Notably, the diverged model fails to synthesize valid Gaussians, resulting in the disappearance of all Gaussians in the rendered images, as noted in line 50.
We will carefully revise the figures and explanations. (Fig. 4 in PDF)
[R#RZRC-1] Karras et al. Progressive Growing of GANs for Improved Quality, Stability, and Variation. ICLR 2018
[R#RZRC-2] Karnewar et al. Msg-gan: Multi-scale gradients for generative adversarial networks. CVPR 2020
[R#RZRC-3] Zhang et al. Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. CVPR 2017
---
### “Hierarchical structure of GS is not a new topic and insufficient to show novelty”
First of all, we emphasize that the most important contribution of this paper is that it is the first work to extend the application of GS to 3D GANs without any structural priors. To achieve this, we introduce several novel components, such as the hierarchical architecture of GS with anchor Gaussians and a transformer-based generator architecture.
Thank you for pointing us to prior work on the hierarchical structure of GS.
After a careful search of GS-related work, we found the two works below:
- Chen et al. GaussianEditor: Swift and Controllable 3D Editing with Gaussian Splatting. CVPR 2024
- Kerbl et al. A Hierarchical 3D Gaussian Representation for Real-Time Rendering of Very Large Datasets. SIGGRAPH 2024
GaussianEditor is a text-based 3D scene editing method using diffusion-based guidance. Alongside this guidance, they introduce an additional objective that reduces the L2 distance between the position of a parent node and its child nodes after densification.
There are several major differences between our method and theirs: 1) they aim to solve an editing task; 2) they require a well-trained 3D-GS and a densification process; 3) their method only adjusts the objective function; and 4) they do not consider the scale of Gaussians.
The second work addresses 3D reconstruction of very large-scale scenes by building a hierarchy between 3D scenes modeled by GS. They thus pursue a different goal from ours, and they only consider hierarchy between 3D scenes, not between individual Gaussians. Moreover, it was released on arXiv (17 June 2024) after the NeurIPS submission deadline.
Therefore, we believe our work has its own novelty.
---
### “Advantages such as fast rendering are not the contribution of this work / Compared methods are outdated”
First of all, we do not claim that the proposed hierarchical architecture of GS enhances rendering speed. We argue that the application of GS is hard to extend to 3D GANs, as GS requires heuristics and structural priors for training, and training can be unstable in an adversarial framework.
Therefore, the major contribution of our work is the extension of GS to 3D GANs, so the advantages of GS over NeRFs, such as fast rendering and strict 3D consistency, follow naturally.
We also clarify that the proposed method is the first 3D GAN using GS, so there are no papers on GS-based 3D GANs to compare with; we therefore compare ours with Mimic3D (Oct 2023), which was published after GS (Aug 2023).
---
Rebuttal Comment 1.1:
Comment: The authors well addressed my concerns. | Summary: The paper proposes a method to train 3D GANs using 3D Gaussians. A limitation of training 3D Gaussians in an adversarial setup is the instability of the scale optimization, which leads to scale explosion. To address this, the paper proposes hierarchical generation of Gaussians using the StyleGAN architecture. As a result, this approach can generate consistent 3D objects at higher rendering speeds. The results are comparable to the volume rendering approaches while surpassing their generation speeds. The authors evaluate their method on two datasets – FFHQ and AFHQ Cats.
Strengths: 1) The paper addresses the problem of scale explosion when 3D Gaussians are trained under adversarial losses. The paper addresses this problem by a hierarchical approach to the generation of these Gaussians. This would help in scaling future 3D models to higher texture resolutions.
2) The paper comprehensively compares the existing methods based on volume rendering and shows that the quality and consistency of results are comparable. The rendering speed exceeds the volume rendering methods.
3) The paper shows inversion results that have applications in various single image to 3D generation tasks.
Weaknesses: 1) Why not the original resolution of 1024 for the faces? The authors claim that the generator is 100x faster. The paper does not demonstrate results with this resolution since it fixes the scale. GSMs[1] also mention the scale problem that makes it difficult to train high-frequency details. If the current method addresses the problem, does it scale well to higher resolution?
2) There are many works now employing underlying FLAME/SMPL models like GSM[1], GNARF[2] etc to model faces/bodies. This helps in natural articulation and control. This framework only supports generation with limited articulation and editing. A discussion of these frameworks with limitations and advantages would help.
3) Although the authors show the scaled-down Gaussians for visualization, there is no evidence of extraction of geometry from the models. Some works in the Gaussian domain[3] try to extract the geometry. How does it compare with Eg3d? How does running the COLMAP on the outputs look like? Are there more artefacts than the volume rendering methods?
4) There still seem to be some floating artefacts as seen in the supplementary videos.
[1] Abdal, Rameen, et al. "Gaussian shell maps for efficient 3d human generation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
[2] Bergman, Alexander, et al. "Generative neural articulated radiance fields." Advances in Neural Information Processing Systems 35 (2022): 19900-19916.
[3] https://github.com/MrNeRF/awesome-3D-gaussian-splatting
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) What specific challenges were encountered when attempting to train or generate images at the original resolution of 1024?
2) Are there plans to demonstrate results at 1024 resolution in future work, and what optimizations might be needed to achieve this?
3) Can the architecture be adapted or extended to handle high-frequency details better at higher resolutions?
4) How does the current framework compare to models like GSM[1] and GNARF[2] in terms of articulation and control?
5) Can the integration of FLAME/SMPL models enhance the current framework's capabilities, and if so, how might this be implemented?
6) Have you attempted to extract geometry from your models, and if so, what were the results?
7) How does your approach compare with Eg3d regarding geometry extraction and handling artefacts?
8) What does running COLMAP on your outputs reveal about the quality and presence of artefacts compared to volume rendering methods?
9) What are the primary causes of floating artefacts observed in your results?
10) Are there specific scenarios or conditions where floating artefacts are more prominent?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The Authors have addressed the limitations.
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive review.
---
### “Reason for not using original 1024 resolution / Can this method be extended to high resolution such as 1024x1024?”
First of all, please let us clarify that we simply follow the most common benchmark setting used in EG3D, which has a maximum resolution of 512x512. To validate the suitability of the proposed method for higher-resolution data, we perform an additional experiment on the FFHQ-1024 dataset. Due to the limited rebuttal period and computing resources, we train the model until the discriminator sees 5M images, which is only about ⅓ of the total iterations.
As a result, we observe that the proposed method achieves FID20K 10.99, already surpassing our baseline GRAM-HD’s 12.0, indicating successful training on high-resolution data.
Furthermore, we provide qualitative examples of the FFHQ-1024 dataset in the PDF (Fig. 1). As visualized, the proposed method can handle high-frequency details with decent 3D consistency.
---
### “Articulation and control ability of proposed method compared to template-based 3D GANs / Adaptability of the proposed method with templates such as FLAME/SMPL”
We first highlight that the proposed method is fundamental research extending the 3D Gaussian representation to 3D GANs without additional supervision such as templates. This line of research would benefit various research areas. For example, GNARF is also largely based on EG3D, a pure 3D GAN without any use of templates.
In this context, we believe template-based methods are hard to compare directly with the proposed method. Instead, ours has controllability that leverages various editing techniques based on GANs such as drag-based editing [R#Abis-1], text-based editing [R#Abis-2], and other editing using its semantic latent space [R#Abis-3, 4].
Furthermore, we argue that the proposed method can be easily integrated with templates, as the reviewer mentioned. For example, similar to GNARF, we synthesize templates by the proposed method and warp them by the FLAME/SMPL’s motion distribution pre-computed from training dataset before giving it to the discriminator. This framework would encourage the generator to synthesize the 3D template scene that can be manipulated by motions in FLAME/SMPL.
To support our claim, we train ours with FLAME on FFHQ-256. For implementation, we subsample 1K points from the fixed basic FLAME template and substitute them for the anchors at the coarsest level. Since this increases the number of initial points, we reduce the number of levels by removing a generator block. As a result, we validate that it trains successfully, achieving FID 7.32 in a single trial without any hyperparameter tuning, suggesting promising potential for integration with template-based methods.
[R#Abis-1] Pan, Xingang, et al. Drag your gan: Interactive point-based manipulation on the generative image manifold. SIGGRAPH 2023.
[R#Abis-2] Lei, Biwen, et al. DiffusionGAN3D: Boosting Text-guided 3D Generation and Domain Adaptation by Combining 3D GANs and Diffusion Priors. CVPR 2024.
[R#Abis-3] Wang, Tengfei, et al. High-fidelity gan inversion for image attribute editing. CVPR 2022.
[R#Abis-4] Ling, Huan, et al. Editgan: High-precision semantic image editing. NeurIPS 2021.
---
### “Extract geometry from the proposed method / COLMAP on the output of ours”
In the case of the original 3D Gaussian splatting, there is no proper way to extract geometry such as a mesh from trained Gaussians, and it is known that meshes generated by marching cubes show noisy surfaces [R#Abis-5]. We have tried to extract geometry using the marching cubes algorithm following previous work [R#Abis-6], building an opacity field from the Gaussians and applying marching cubes on it. As shown in Fig. 2 of the PDF, the mesh extracted by marching cubes indeed appears noisy.
However, there are recent approaches for improving Gaussian splatting to be mesh-extractable, and we believe these methods can be easily integrated into our framework by incorporating regularization terms [R#Abis-5] or altering the output representation to 2D Gaussians [R#Abis-6].
Meanwhile, as ours synthesizes multi-view consistent renderings, COLMAP on its outputs can reconstruct reliable geometry. We show some examples of COLMAP dense reconstruction of ours in Fig. 3 of the PDF. Moreover, we quantitatively compare ours with EG3D in terms of COLMAP metrics (reprojection error, # matched points from sparse reconstruction, # matched points from dense reconstruction) over 10 random samples. Ours and EG3D achieve (0.440, 3301, 176K) and (0.733, 2360, 156K), respectively, validating the 3D consistency of the proposed method via low errors and a large number of matched points.
Floating artifacts are also reconstructed well, as they are outputs of the generator and are represented by 3D Gaussians, which are themselves 3D consistent.
[R#Abis-5] Guédon et al. Sugar: Surface-aligned gaussian splatting for efficient 3d mesh reconstruction and high-quality mesh rendering. CVPR 2024
[R#Abis-6] Huang et al. "2d gaussian splatting for geometrically accurate radiance fields." SIGGRAPH. 2024.
---
### “Primary causes of floating artefacts and scenarios for these artefacts”
We believe that some elements in the generated results, such as floaters, are not actually artifacts; instead, they may represent high-frequency details that the background generator is expected to synthesize. We expect these artifacts can be removed by proper tuning of the background or by introducing a different background architecture, such as a 2D generator with alpha masks.
We elaborate on it more in the last question of R#Wn5i. | Rebuttal 1:
Rebuttal: Thanks to all reviewers for their constructive reviews.
We made a rebuttal for each reviewer, so please refer to them.
Additionally, we provide a single PDF containing the below contents.
1) Examples from FFHQ-1024.
2) Predicted meshes from the generated 3D Gaussians.
3) COLMAP dense reconstruction examples of the proposed method.
4) Detailed caption of Fig. 6 of the manuscript.
For a detailed explanation of each figure, please refer to the rebuttal for the corresponding reviewer below.
(1, 2, 3) - Reviewer Abis
(4) - Reviewer RZRC
Pdf: /pdf/527abb1655bf1e7dfac237dcd7cc10a7a00896cb.pdf | NeurIPS_2024_submissions_huggingface | 2,024 |
Almost-Linear RNNs Yield Highly Interpretable Symbolic Codes in Dynamical Systems Reconstruction | Accept (poster) | Summary: This paper introduces a new recurrent neural network (RNN) architecture called Almost-Linear (AL)-RNN for reconstructing nonlinear dynamical systems from time-series data. The key innovation of AL-RNN is training parsimonious piecewise linear (PWL) representations of dynamical systems. By combining linear units with a small number of rectified linear units (ReLUs), AL-RNNs can effectively capture the dynamics of complex systems while maintaining interpretability. The authors demonstrate the effectiveness of AL-RNNs on benchmark datasets (Lorenz and Rössler systems) and real-world data (ECG and fMRI), showing that they can discover minimal PWL representations that accurately capture the dynamics of these systems.
Strengths: Novelty: The AL-RNN architecture is a novel contribution to the field of dynamical systems reconstruction. It addresses the limitations of existing methods (PLRNNs and SLDS) that often result in overly complex models.
Interpretability: The AL-RNN's structure, with its minimal use of ReLU units, naturally leads to a symbolic encoding of the dynamics, making the model more interpretable and facilitating mathematical analysis.
Empirical Effectiveness: The paper demonstrates the effectiveness of AL-RNNs on both benchmark and real-world datasets, showing that they can accurately capture the dynamics of these systems.
Weaknesses: Usefulness of symbolic dynamics seems to be limited and can be misleading: Even when the underlying dynamics is deterministic, the extracted symbolic transition dynamics is probabilistic, which can be misleading (Figure 6d).
Lack of guidance on model size: There is no discussion on how to determine the minimum number of ReLU units (P) needed, as well as the number of linear units (M). The authors explored a range of (P) and chose an arbitrary value. The method would be much more useful in practice if it could be regularized to automatically find the minimal (P).
It is crucial that the model dynamics is partly driven by the observation data, e.g. teacher forcing, but this is not mentioned in the main text equations (eq1~5). Teacher forcing is mentioned only as a training method, not for testing.
The paper lacks details on the training process and hyperparameter selection, which could hinder reproducibility.
Theorem 1 seems to be incorrect. As a counterexample, consider a linear dynamical system with a stable orbit (i.e., with 1 subregion, 0 ReLU units). In this case, the symbolic state remains constant, but not all states are fixed points.
Writing needs to be improved. Section 3.2 on symbolic dynamics seems unnecessary. The symbolic partitioning is intuitive to understand in terms of the activation of ReLU units, but the formal definitions of Section 3.2 do not seem to add any further understanding. The theory section also seems unnecessary and could be moved to the appendix. Many of concepts introduced in these sections don't seem to be mentioned afterwards, e.g. shift operator.
Undefined Terms: The paper does not properly define (N), which makes it confusing to understand. Additionally, the term "hyperbolic AL-RNN" is used without a clear definition.
Excessive Use of Acronyms: The paper uses too many acronyms, which can hinder smooth reading. It would be good to reduce their usage. Here is a list of the acronyms used in the paper: AL-RNN: Almost-Linear Recurrent Neural Network BPTT: Backpropagation Through Time DH: Hellinger Distance DS: Dynamical System DSR: Dynamical Systems Reconstruction DST: Dynamical Systems Theory ECG: Electrocardiogram fMRI: Functional Magnetic Resonance Imaging FP: Fixed Point id-TF: Identity Teacher Forcing KL: Kullback-Leibler LDS: Linear Dynamical System MSE: Mean Squared Error ODE: Ordinary Differential Equation PDE: Partial Differential Equation PLRNN: Piecewise-Linear Recurrent Neural Network PWL: Piecewise Linear RADAM: Rectified Adaptive Moment Estimation ReLU: Rectified Linear Unit RC: Reservoir Computer SEM: Standard Error of the Mean SINDy: Sparse Identification of Nonlinear Dynamics SLDS: Switching Linear Dynamical System SOTA: State-of-the-Art STF: Sparse Teacher Forcing TF: Teacher Forcing STSP: ?
Technical Quality: 3
Clarity: 2
Questions for Authors: How does the performance of AL-RNNs compare to other state-of-the-art DSR methods on a wider range of benchmark and real-world datasets?
How does the choice of the number of linear units (M) affect the performance and interpretability of the AL-RNN model?
Is there a principled way to determine the optimal number of ReLU units (P) for a given dataset?
Can the symbolic dynamics approach be modified to better handle deterministic systems, avoiding the misleading probabilistic representation of transitions?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The paper does mention some limitations, such as the challenge of determining whether a topologically minimal and valid reconstruction has been achieved from empirical data.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the referee’s overall positive assessment and the valuable feedback provided!
**Weaknesses**
**W1 (usefulness of symbolic dynamics):** First, please note that the symbolic encoding itself is not probabilistic, i.e., the symbolic sequences (as shown in Figs. 13, 14 or 17) are as deterministic as the underlying system itself. As new Fig. R5 in the provided PDF now further highlights, quantities like the topological entropy obtained from the symbolic encoding correlate highly with quantities obtained directly from the system dynamics, like the maximum Lyapunov exponent, but are much easier to compute. This further confirms that important topological properties of the underlying system can be inferred from the symbolic encoding we used. In general, symbolic dynamics has led to many powerful insights and results about the dynamics of certain systems (certain proofs about chaos or the number of unstable periodic orbits could only be derived symbolically, for instance; see Wiggins 1988, Guckenheimer \& Holmes 1983).
Hence, we think the referee’s point concerns more the representation of symbolic dynamics in the form of *transition graphs*. This type of graph representation is quite standard in symbolic dynamics (see, e.g., the textbook by Lind & Marcus 2021), where arrows between nodes usually represent *admissible* transitions, just like in graph representations of finite state machines or formal languages, for example. The graphs are meant to represent the set of all possible sequences (or ‘syntactically correct’ sentences). We agree, however, that one needs to be clear about the semantics of these graphs and their interpretation, which we will clarify.
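A minimal sketch of how such a symbolic encoding and its transition graph can be read off from a model trajectory (illustrative code in our own notation; the function names are ours, not from the paper): each state is mapped to the on/off pattern of the P ReLU units, and the admissible edges of the graph are the observed consecutive symbol pairs.

```python
import numpy as np

def symbolic_sequence(z, P):
    """Map each state to the index of its linear subregion, given by the
    binary activation pattern of the last P (ReLU) units."""
    active = (z[:, -P:] > 0).astype(int)   # (T, P) binary patterns
    weights = 2 ** np.arange(P)            # binary pattern -> integer symbol
    return active @ weights                # (T,) symbols in [0, 2^P)

def transition_counts(symbols, P):
    """Count observed transitions between subregions; the nonzero entries
    are the admissible edges of the symbolic transition graph."""
    counts = np.zeros((2 ** P, 2 ** P), dtype=int)
    for s, t in zip(symbols[:-1], symbols[1:]):
        counts[s, t] += 1
    return counts
```

A deterministic trajectory yields a deterministic symbol sequence; the graph merely summarizes which transitions occur at all.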
**W2 (guidance on model size):** We chose $P$ according to a grid search as the min. value at which performance started to plateau (because we wanted to obtain topologically minimal representations), so its choice is not arbitrary but related to the kinks (or humps) in the curves in Fig. 3. Likewise, $M$ was determined by systematic grid search, see Appx. Fig. 9. However, we like the referee’s idea of determining an optimal number of linear subregions by regularization. We now implemented this, and find that the numbers of subregions determined this way agree well with those obtained by our prev. crit., see Fig. R6 in PDF.
**W3 (teacher forcing):** Sparse teacher forcing is indeed *only applied during training*, not during testing. Hence it is, correctly, not included as part of those equations. This is in fact crucial: DSR models are supposed to be generative models that, after training, can generate new data with the same geometrical and topological structure as those produced by the observed system. During test time, therefore, the once trained model cannot rely on actual observations, but follows solely its own dynamics. While TF is a broad term, *sparse TF* is different (Brenner et al. 2022, Mikhaeil et al. 2022) and has been introduced specifically in the context of DSR where it is SOTA (see also Tab. R1). We will clarify this.
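The training/testing distinction can be sketched as follows (our own illustrative code, not the paper's implementation; `step` stands in for the trained model's one-step map, and the observation-derived forcing states are assumed given):

```python
import numpy as np

def generate(z0, step, T, observations=None, tau=None):
    """Roll out a latent model. If `observations` and `tau` are given
    (training with sparse teacher forcing), the state is replaced by an
    observation-derived state every `tau` steps; otherwise (testing) the
    model follows solely its own dynamics."""
    z = z0
    traj = [z]
    for t in range(1, T):
        z = step(z)
        if observations is not None and tau is not None and t % tau == 0:
            z = observations[t]   # sparse forcing signal from the data
        traj.append(z)
    return np.stack(traj)
```

At test time the call omits `observations`, so the generated trajectory depends only on the initial state and the learned dynamics.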
**W4 (reproducibility):** Please note that we provided code for reproducibility with the org. submission here: https://anonymous.4open.science/r/SymbolicDSR-1955/README.md
The hyperpar. settings for all runs were further listed under A2: Training Method. In the rev. we will further clarify how exactly these optimal hyperpar. were determined.
**W5 (Th. 1 incorrect):** The theorem is correct, but we see where this misunderstanding comes from: for the referee’s example, the system would need to be non-hyperbolic, i.e., it would need to have a center subspace. However, this case (requiring conjugate pairs of eigenvalues to lie exactly on the unit circle) we explicitly ruled out in our theorems (it is a 0-measure set in parameter space). We mentioned this in the paragraph above the theorem (since it is common to all of them), but for clarity we will make it explicit in the theorems themselves.
**W6 (theoretical sects.):** We are happy to reduce sect. 3.2 and move it partly to the Appx.; the referee is right that some of the concepts are not followed up upon. Others, like that of a shift operator, shift space, or topological partition, are however directly used in the theorems in sect. 4 (for the sake of the theorems, we prefer to be formally precise about certain concepts even if they may be intuitively clear).
**W7 (undefined terms):** N is the observation dim. of the data, and by ‘hyperbolic AL-RNN’ we mean an AL-RNN which is hyperbolic in each of its linear subregions, i.e., such that the transition matrices defined in eq. 2 have no eigenvalues on the unit circle. Will be clarified.
**W8 (use of acronyms):** Agreed, we will remove all acronyms which are not standard or are only rarely used.
**Questions**
**Q1:** Please note that our goal was not to introduce a novel SOTA method for DSR, but rather to introduce an approach for retrieving topologically minimal and symbolically interpretable representations from data. Still, one may ask whether the AL-RNN is at least on par with other SOTA methods for DSR. In Table R1 in the rebuttal PDF we answer this Q. We also included human EEG data \& Lorenz-96 as further benchmarks. As can be seen, the AL-RNN even outperforms most current SOTA methods, which may be due to its simple design making training much more stable and robust (cf. also new Figs. R1-R3).
**Q2:** The number of linear units makes no difference to interpretability, since it does not affect the total number of linear subregions (hence neither the symbolic encoding nor the computation of fixed points or cycles). As shown in Fig. 9, more linear units can, however, still improve performance up to a certain level.
**Q3:** See W2 above.
**Q4:** See response to W1 above. We also would like to note that esp. in math. chaos theory probabilistic approaches to deterministic systems are indeed commonplace, e.g. in the def. of invariant measures (which are probability ms.) and ergodic theory (see, e.g., textbooks by Katok & Hasselblatt 1995, Alligood et al. 1996).
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: The authors have sufficiently addressed the concerns, and the updated result looks great. I'm raising my score to 7.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: We are glad to hear the referee likes our update on results. We very much appreciate the referee's feedback which helped us to see parts of the paper which needed further clarification and support. | Summary: The paper proposes to limit the number of non-linear units in a RNN to facilitate the analysis and hence understanding of inferred dynamical systems. The authors show that even with a limited number of non-linear units, the model is able to explain a large portion of the data for the Rössler and Lorentz systems. Furthermore the proposed model is related to the notion of symbolic codes and it is shown theoretically how these can be used to further analyse properties of the dynamical system under study. Finally, the authors analyse insights obtained by applying the model to real-world data.
Strengths: (Disclaimer: this is not really my area of expertise and although I like the paper a lot and have found no obvious objections in the method or the evaluation, I can not judge the novelty of the paper.)
- Exceptionally well written
- Well motivated and comprehensive introduction and problem motivation
- Evaluation on both simulated data (eg Lorenz 63 and Rössler system) as well as real-world measurements (ECG, fMRI data)
- Theoretical contribution that allows one to infer properties of the underlying dynamical system from the model via symbolic codes
Weaknesses: - A potential weakness of the paper is the lack of comparison to other methods. On the other hand, the paper addresses a relatively niche topic so that I am not sure whether (accessible) baseline methods to compare to are available?
Technical Quality: 3
Clarity: 4
Questions for Authors: - What is $\phi$ in eq (1)?
- I don't quite understand the notation in equation (2): where does the \phi from eq (1) go? What does the subscript $\Omega(t)$ mean, and how can there be $2^M$ configurations for $D_{\Omega(t)}$ if $D_{\Omega(t)}$ is a diagonal matrix? Doesn't the fact that $D_{\Omega(t)}$ is a diagonal matrix imply that there is only a single configuration (modulo different values that they diagonal may take)?
- In equation (6) as well as the lines of text just above it, there is a dot, which I believe may be a decimal point. Is this a common notation? I was at first thrown of by this (but I was also not familiar with the symbolic codes idea before.) If it is not standard (or maybe in general to facilitate understanding), it might be useful to add a note on this notation.
Confidence: 2
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Limitations are well described in a dedicated section in the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the referee for the supportive and positive feedback, we are happy to hear the referee liked our work!
**Weaknesses**
In a sense this is the first study of its kind. We are not aware of any other work in the DSR field (and beyond) making this link to symbolic dynamics, and attempting to reduce model complexity in a way that allows for easy translations into topologically minimal representations. This approach to model interpretability is the major contribution of this work, so it is hard to compare to other methods.
However, one may still ask whether AL-RNNs can at least compete with other DSR methods in terms of the quality of the DS reconstructions they achieve (our examples demonstrate that they are good enough even with a very low number of linear subregions, which in itself is a major advantage for obtaining mechanistic insight into dynamics). New Table R1 which we now added to the rebuttal PDF confirms AL-RNNs are at least on par with - in fact even outperform - most other SOTA methods (which lack our method’s interpretability). We also added new benchmarks to this Table.
**Questions**
**Q1:** $\phi$ is the ReLU nonlinearity, sorry for this oversight. We will clarify this in the updated manuscript.
**Q2:** The ReLU ($\phi$) was absorbed into the $D_{\Omega(t)}$ matrix: Note that we can rewrite eq. 1 equivalently into eq. 2 by placing a 1 on the diagonal of $D_{\Omega(t)}$ for each state $z_{p,t}>0$ at time $t$, and a 0 otherwise (just by definition of the ReLU as $\max[0,z_{p,t}] $). This matrix depends on time $t$, however, because it is determined by the values of all the states at that time. $\Omega(t)$ denotes the subset of states for which we have $z_{p,t}>0$ at time $t$. This should also answer the referee’s last question: Since each entry on the diagonal of the $M \times M$ matrix $D_{\Omega(t)}$ can be either 0 or 1 at any time $t$, we have a total of $2^M$ possible configurations for $D_{\Omega}$.
This may admittedly have been a bit hard to follow, especially as we didn’t make clear the meaning of $\phi$ in the equation above! We will clarify this whole section accordingly.
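The rewriting of eq. 1 into eq. 2 can be checked numerically in a few lines (a sketch of the identity only, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 5
z = rng.standard_normal(M)          # state at time t-1
W = rng.standard_normal((M, M))     # connectivity matrix

# Eq. (1) style: apply the ReLU nonlinearity phi directly.
relu = np.maximum(0.0, z)

# Eq. (2) style: absorb phi into a diagonal 0/1 matrix D_{Omega(t)} whose
# p-th diagonal entry is 1 iff z_p > 0; there are 2^M such configurations.
D = np.diag((z > 0).astype(float))

assert np.allclose(W @ relu, W @ D @ z)
```

The "$2^M$ configurations" thus refer to the possible 0/1 patterns on the diagonal, not to the values a single fixed matrix may take.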
**Q3:** Yes, its interpretation is a bit similar to a decimal point (but note it’s not a decimal system of course): The symbolic sequences correspond to theoretically infinite length trajectories, and the point “separates past from future”, i.e. indicates which symbol corresponds to the current time $t$ along the trajectory. We will clarify this type of notation in the revision (it's indeed standard in symbolic dynamics), valid point, also for the parts above, thank you for your feedback on this!
---
Rebuttal Comment 1.1:
Title: Thank you for the clarification
Comment: I appreciate the authors' response, which has clarified my questions. Considering this as well as the responses to the other reviewers, I recommend acceptance of this paper.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: We are happy to hear we could satisfactorily clarify your points. Thank you for your support! | Summary: The paper proposes to limit the number of non-linear units in a RNN to facilitate the analysis and hence understanding of inferred dynamical systems. The authors show that even with a limited number of non-linear units, the model is able to explain a large portion of the data for the Rössler and Lorenz systems. Furthermore the proposed model is related to the notion of symbolic codes and it is shown theoretically how these can be used to further analyse properties of the dynamical system under study. Finally, the authors analyse insights obtained by applying the model to real-world data.
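The constrained update this summary describes can be sketched as follows (our own illustrative code, not the paper's implementation; a PLRNN-style latent update with a diagonal self-connection matrix, here passed as a vector `A`, is assumed):

```python
import numpy as np

def al_rnn_step(z, A, W, h, P):
    """One AL-RNN update: like a PLRNN step, except that only the last P
    of the M latent units pass through a ReLU. The state space therefore
    splits into at most 2^P (rather than 2^M) linear subregions."""
    phi_z = z.copy()
    phi_z[-P:] = np.maximum(0.0, phi_z[-P:])   # ReLU on P units only
    return A * z + W @ phi_z + h               # A: diagonal, as a vector
```

With P small, each subregion (the sign pattern of the P ReLU units) can be enumerated and analyzed symbolically.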
Strengths: The core idea is neat; although the difference between existing PLRNNs and these new AL-RNNs is more a difference of degree than a difference of nature (just changing the number of units having a ReLu activation), it's great to have thought of taking PLRNNs into that regime and to show empirically that (i) it doesn't compromise DSR quality too much, yet (ii) it gives dynamics that are computationally more amenable to symbolic analysis.
Another strength (for me at least) is that this paper contributes to exposing an audience that has historically primarily cared about dynamical systems reconstruction (e.g. me) to the concept of symbolic dynamics -- indeed I'm glad I reviewed this paper and thus got a useful (even if rudimentary) primer on SD.
Weaknesses: I would very much like to see model recovery experiments; how easy are those almost-linear RNNs to fit, and to fit consistently? I could imagine that the model might settle into a suboptimal set of linear subregions early on during training and then have a hard time snapping out of it. I have to admit I don't have a good intuition for this, but this is something the authors could substantiate numerically by running simple model recovery experiments. On this note, what hard degeneracies do we expect here due to a majority of the state dimensions being unobserved? Can the authors use the symbolic dynamics grounding of section 3 (currently a little disconnected from the rest, I have to say) to derive meaningful measures of how well the ground truth system's topology / symbolic dynamics are recovered despite those degeneracies? (e.g. for small enough P it might be possible to look at all permutations of the transition matrix between linear subregions and conclude that the ground truth has been recovered?).
Re consistency, could the authors comment on whether the error bars (across training runs) in e.g. Figure 5d-f are to be considered small or large? The figure caption says "shows close agreement among different training runs" but judging from these whisker plots, the underlying coefficient of variation seems quite high (which actually triggered the concern I articulated above concerning potentially inconsistent recovery of ground truth PWL dynamics).
Technical Quality: 3
Clarity: 3
Questions for Authors: - You omitted to say that $\phi(\cdot)$ in Equation 1 is the ReLu function -- this is pretty critical!
- In equation 4, why are you over-parameterizing the linear part? It seems that the first $M-P$ columns of $W$ can be absorbed into the first $M-P$ columns of $A$. So you really just have the last $P$ columns of $W$ to learn.
- In Figure 2, why did you write "$P^2$ subregions / symbols"? Did you mean $2^P$? (and accordingly, should $P^4$ in fact read $2^{2P}$)?
- l.185: can you please define "hyperbolic AL-RNNs" (AL-RNNs in which fixed points are all hyperbolic?) and what that implies concretely for A and W?
- Theorems 1-3 make intuitive sense but appear a little ill-phrased to me -- for example, in Theorem 1, the specific $z^\star$ that appears in the first clause of the iff statement is not even referred to in the other clause (and indeed it cannot be uniquely identified from knowing “the corresponding symbolic sequence $a^\star$” -- you don't really say what you mean by “corresponding”, btw; do you mean the symbolic sequence associated with any state space trajectory that contains $z^\star$?). Perhaps this theorem could be rephrased as "if there exists a fixed point $z^\star$ in $U_e$, then $e^\infty$ is a fixed point of the shift map; conversely, if $e^\infty$ is a fixed point of the shift map for some $e$, $U_e$ must contain a fixed point of the $F_\theta$ map.” (?)
Same concern in theorem 2 and 3.
In theorem 2, I think the $k+p$ subscript should be modulo $p$?
- Should we be concerned by the fact that your measures of DSR accuracy in Figure 3 do not decrease monotonically with the number of PWL units? Local minima in the teacher-forcing loss?
- For the ECG and fMRI datasets, I couldn't see a quantitative assessment of model performance; in particular, for the fMRI dataset, I think the authors should discuss whether and how much the addition of a categorical task-stage decoder impairs relevant DSR performance metrics (or perhaps even improves consistency across training runs?).
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Perhaps the authors could discuss the extent to which it really is easier to analyze $2^P$ subregions rather than $2^N$ subregions -- quantitatively I understand that this is a lot fewer subregions, but when $2^P$ is beyond a handful, whether it's 50 or 5 million, it's unclear to me how "easy" it is to analyze/understand these things (or indeed what that even means...).
*post-rebuttal EDIT*:
Having read the rebuttals to all 3 reviewers, I am raising my score to an 8; strong paper likely to have an impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the referee for the enthusiastic support and appreciation of our work!
**Weaknesses**
**W1 (consistency of fits/ model recovery)**: One crucial advantage of AL-RNNs is that they indeed consistently deliver the same model over many training repetitions. The errors in Figs. 5d-f are in fact very small: To put these numbers into context, we now normalized all 3 graphs by proper reference values (Fig. R1 in rebuttal PDF), with normalized numbers substantially below 1. Moreover, we now compared the consistency across training runs to that obtained with vanilla PLRNNs by evaluating the agreement in trajectory point distributions across linear subregions, revealing a much higher agreement among AL-RNN solutions (Fig. R2). The parsimony of AL-RNNs compared to other models leaves them with much less ‘wiggle room’ in finding different solutions, as can also be appreciated from the direct comparison in Fig. R3 (see also Figs. 20 and 21). Finally, as suggested, we also did model recovery experiments, finding that the recovered solutions are virtually identical across repetitions (original: $D_{stsp}=3.14$, $D_H=0.28$; recovered: $D_{stsp}=3.38\pm0.18$, $D_H=0.28\pm0.03$; 3 linear subregions in all cases).
**W2 (degeneracies)**: While these models indeed have the capacity to capture unobserved dimensions in their latent space (e.g. Brenner et al. 2024), we usually still use delay embedding for this (as for the ECG). This both eases the training process itself and enables evaluation of agreement in attractor geometry. While it is possible to harness the symbolic representation to measure the similarity of two reconstructions (overlap in symbolic codes and their graphs, e.g. agreement in adjacency matrices), for empirical data the topologically minimal representation is of course usually unknown, and is precisely what we would like to infer through the AL-RNN (see Discussion). Hence, empirically, measures like $D_{stsp}$ or $D_H$ remain the methods of choice for initially evaluating the quality of DSR in delay-embedding spaces. But we have now checked this idea for evaluating the consistency across training runs, and the symbolic graphs in fact remain identical across different runs.
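To make the adjacency-matrix comparison concrete, here is a minimal illustrative sketch (hypothetical code, not from the paper): it tests whether two subregion-transition graphs over the $2^P$ symbols coincide up to a relabeling of symbols, by brute force over permutations, which is feasible only for small $P$, exactly as the reviewer suggests.

```python
import numpy as np
from itertools import permutations

def same_symbolic_graph(A1, A2):
    """Do two transition graphs over the subregion symbols coincide
    up to a permutation (relabeling) of the symbols? Brute force,
    feasible only when the number of symbols 2^P is small."""
    if A1.shape != A2.shape:
        return False
    k = A1.shape[0]
    for perm in permutations(range(k)):
        # relabel A1 by perm: entry (i, j) becomes A1[perm[i], perm[j]]
        if np.array_equal(A1[np.ix_(perm, perm)], A2):
            return True
    return False
```

For instance, the graphs `[[0,1],[1,1]]` and `[[1,1],[1,0]]` are the same graph with the two symbols swapped.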
**Questions**
*Q1:* Absolutely, thanks for pointing out this oversight!
*Q2:* Yes, true, the full $\mathbf{W}$ matrix is kind of a ‘historical quirk’ from previous formulations of the PLRNN and the respective codebase. For model training and performance it makes no difference ($D_{stsp}$: $t(19)=-1.58, p=0.13$; $D_{H}$: $t(19)=0.68, p=0.51$), but for parsimony the diagonal of $\mathbf{W}$ should be removed for the linear units. We will comment on this in the revision.
*Q3:* Yes, thanks for catching!
*Q4:* With “hyperbolic AL-RNN” we mean the system is hyperbolic in each of its linear subregions, implying that none of the Jacobian matrices $\mathbf{A}+\mathbf{W} \mathbf{D}_{\Omega}$ has eigenvalues exactly on the unit circle (a measure-0 set in parameter space). Will be clarified!
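This condition can be checked numerically over all subregions; the following is an illustrative sketch (hypothetical code using our notation, assuming a parameterization in which only the last $P$ units are rectified and $\mathbf{D}_{\Omega}$ is diagonal with 1 on the linear units).

```python
import numpy as np
from itertools import product

def is_hyperbolic(A, W, P, tol=1e-9):
    """Check (numerically) that no subregion Jacobian A + W @ diag(d)
    has an eigenvalue on the unit circle, where d is 1 on the linear
    units and the ReLU on/off pattern on the last P units."""
    M = A.shape[0]
    for pattern in product([0.0, 1.0], repeat=P):
        d = np.ones(M)
        d[M - P:] = pattern
        eigvals = np.linalg.eigvals(A + W * d)  # W * d == W @ diag(d)
        if np.any(np.abs(np.abs(eigvals) - 1.0) < tol):
            return False
    return True
```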
*Q5:* Thanks for pointing out this source of misunderstanding; we will rephrase the theorems for clarity. The key is the term “corresponding”, by which we meant the mapping from trajectories of the original system onto symbolic sequences. We will precisely define this mapping and the term “corresponding” in our revision.
Re Th. 2: The notation here is correct; this is the standard definition of a p-cycle (otherwise it would just read $\mathbf{z}_k=\mathbf{z}_k$, but we wish to indicate that the map returns after p iterations).
*Q6:* Some of the wiggling of the curves after the initial kink (which is what we are looking for) is likely just noise across different training runs reaching different minima. Also note that we are not directly optimizing for $D_{stsp}$ and $D_{H}$, so while in STF-based algorithms MSE remains a good proxy in general (e.g. Hess et al. 2023), there is no guarantee the relation is strictly monotonic.
However, the hump in the figure for the Lorenz-63 is a bit more suspicious, so we dug a bit deeper: It seems that in this case the first minimum at P=2 indeed indicates the topologically minimal representation (see also Fig. R6), and that when surpassing this optimal point performance first decays again. It is thus more a feature than a bug, and would make identifying the optimal representation even easier.
*Q7:* For the ECG data, the quantitative results were not included in Fig. 7 but in Fig. 3, so perhaps easy to miss; this will be made clearer. For the fMRI data, as requested we have now produced the same type of graph, included as Fig. R4 in the rebuttal PDF. That an additional categorical decoder significantly improves DSR quality on these same fMRI data has been shown before (Kramer et al. 2022; Brenner et al. 2024).
**Limitations**
Since the number of subregions grows exponentially with $P$ and $M$, $P \ll M$ eases model analysis profoundly. One major purpose of DSR, in our view, is to provide mechanistic insight into system dynamics (rather than just forecasting). The thorough understanding of the dynamical mechanisms of chaos in Fig. 5, for example, is only possible because we have less than a handful of linear subregions (likewise for the empirical examples). Besides this important visualization aspect facilitating human interpretation, there are also clear numerical benefits: for instance, in a brute-force search algorithm for fixed points and cycles, the number of iterations needed to find such objects also increases exponentially with $M$. Hence reducing this to $P \ll M$ is always a huge benefit that enables us to dig much deeper into model mechanisms. Of course it is difficult to put an exact threshold on this. However, we find it encouraging that we were able to capture even rather complex ECG and fMRI dynamics with just a few linear subregions ($\leq 8$). Whether this will always or mostly be the case in empirical settings is an interesting and open question, which will be discussed.
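For concreteness, the brute-force fixed-point search mentioned above can be sketched as follows (an illustrative toy version in our notation, assuming only the last $P$ units are rectified): in each of the $2^P$ subregions the map is linear, so the candidate fixed point is obtained by solving one linear system and then checking that it actually lies in that subregion.

```python
import numpy as np
from itertools import product

def fixed_points(A, W, h, P):
    """Enumerate fixed points of a piecewise-linear map
    z_{t+1} = (A + W @ diag(d_Omega)) z_t + h over all 2^P
    ReLU on/off patterns (sketch, not the paper's exact code)."""
    M = A.shape[0]
    fps = []
    for pattern in product([0.0, 1.0], repeat=P):
        d = np.ones(M)
        d[M - P:] = pattern          # linear units are always 'on'
        J = A + W * d                # W * d == W @ diag(d)
        try:
            z = np.linalg.solve(np.eye(M) - J, h)
        except np.linalg.LinAlgError:
            continue                 # singular: no isolated fixed point here
        # keep z only if it actually lies in subregion U_Omega
        on = z[M - P:] >= 0
        if np.all(on == np.array(pattern, dtype=bool)):
            fps.append((pattern, z))
    return fps
```

With $P \ll M$ the loop runs over $2^P$ patterns instead of $2^M$, which is exactly the saving discussed above.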
---
Rebuttal Comment 1.1:
Comment: Thanks for the thorough response. I think that (i) the inherent interpretability of the model, (ii) the model recovery experiments showing consistency of training, as well as (iii) the response to Reviewer 5E4v with favourable comparison to SOTA, show that this is a very practical model likely to have a substantial impact in the field. I am raising my score to an 8.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Thank you very much for the appreciation of our paper and rebuttal, and for engaging so constructively and thoughtfully with our work! | null | null | Rebuttal 1:
Rebuttal: **General reply**
We thank all three referees for their thorough reading and their constructive and helpful feedback on our manuscript. We are happy to see that all referees provided a generally supportive and positive assessment of our work. We hope the remaining concerns are addressed in the detailed point-by-point replies in the individual rebuttals below and in the additional new material provided in the rebuttal PDF.
In brief, the rebuttal PDF contains the following new results and figures:
* Table R1 provides a systematic comparison of AL-RNN performance to many other state-of-the-art DS reconstruction models, incl. additional benchmark systems for testing (human EEG and high-dim. chaotic Lorenz-96 system). While the idea behind the AL-RNN was not to provide a new SOTA for DSR, it does indeed outperform most other techniques when trained with sparse teacher forcing (potentially due to its simple and parsimonious design).
* Figs. R1-R3 highlight that a particular feature of the AL-RNN, especially in comparison to the standard PLRNN, is the consistency of inferred models across multiple training runs, i.e. very similar or identical model solutions are obtained across many different parameter initializations.
* Fig. R4 provides the same type of figure as Fig. 3 of the paper, but for the fMRI data.
* Fig. R5 shows that topological properties can be cheaply computed from the symbolic encoding.
* Fig. R6 illustrates selection of the optimal number of piecewise-linear units through a regularization approach.
**Cited References**
K. T. Alligood, T. D. Sauer, and J. A. Yorke, Chaos: An Introduction to Dynamical Systems, Springer-Verlag, New York, 1996.
M. Brenner et al. 2022, Tractable Dendritic RNNs for Reconstructing Nonlinear Dynamical Systems, Proceedings of the 39th International Conference on Machine Learning (ICML 2022)
M. Brenner et al. 2024, Integrating Multimodal Data for Joint Generative Modeling of Complex Dynamics, Proceedings of the 41st International Conference on Machine Learning (ICML 2024)
J. Guckenheimer and P. Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields, Springer-Verlag, New York, 1983.
F. Hess et al. 2023, Generalized Teacher Forcing for Learning Chaotic Dynamics, Proceedings of the 40th International Conference on Machine Learning (ICML 2023)
A. Katok and B. Hasselblatt, Introduction to the Modern Theory of Dynamical Systems, Cambridge University Press, Cambridge, 1995.
D. Kramer et al. 2022, Reconstructing Nonlinear Dynamical Systems from Multimodal Time Series, Proceedings of the 39th International Conference on Machine Learning (ICML 2022)
D. Lind and B. Marcus, An Introduction to Symbolic Dynamics and Coding, Cambridge University Press, 2nd edition, 2021.
J. Mikhaeil et al. 2022, On the difficulty of learning chaotic dynamics with RNNs, Advances in Neural Information Processing Systems 35 (NeurIPS 2022)
S. Wiggins, Global Bifurcation and Chaos, Springer-Verlag, New York, 1988.
Pdf: /pdf/c023d56101e10ef64d09c7e9c30d0cbedce4e230.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Generalizability of Memorization Neural Network | Accept (poster) | Summary: The paper studies the generalization capabilities of memorization networks, specifically networks that achieve optimal memorization capacity (i.e. O(sqrt(N)) parameters for memorizing N samples). The authors present a memorization algorithm that is based on the construction of Vardi et al. Next, they show that memorization with fixed width or a small number of parameters cannot generalize well. They also provide upper and lower bounds for the sample complexity of learning with a memorization algorithm, while also providing an efficient memorization algorithm.
Strengths: - I think this research direction is novel and interesting. It puts a spotlight on memorization results, which may be efficient in terms of expressiveness but their generalization capabilities are unclear.
- The presentation of the paper is good and it is easy to follow, while also providing sufficient proof intuitions.
- Section 5 is particularly interesting since it shows the limitations of efficient memorization results, and specifically that memorization networks cannot generalize well.
- Section 6 is also very extensive, and provides tight sample complexity bounds on memorization learning, that connects the size of the memorization network to the number of required samples for learning.
Weaknesses: I think this paper is very nice overall, however, there are some issues with specific results which I would be happy to see the author’s response about:
- Proposition 3.8 - where is the proof? Also, it is not clear to me why the constructions in [54,48] are probabilistic.
- There is something unclear about the proof of Theorem 5.3. The theorem statement is that for every distribution D and any test set D_tr there exists a memorization network that doesn’t generalize. However, in the proof, the authors begin with a given set D_tr, and then define D based on this training set, i.e. D is defined by some set S which in itself depends on D_tr. I think the statement should be that there exists a distribution D such that for all training set D_tr there exists a memorization network that doesn’t generalize. Note that the current proof doesn’t show it, because D is constructed such that it depends on D_tr.
- The comparison between interpolation and memorization learning is unclear. Specifically, in what cases do the results on interpolation learning already provide generalization bounds for memorization learning?
Some minor comments:
- Results and comparisons on implicit bias toward margin maximization of neural networks do not appear (e.g. https://arxiv.org/abs/1906.05890, https://arxiv.org/abs/2006.06657). They also study a regime where the optimization converged to 100% accuracy on the dataset and provide generalization guarantees based on margin arguments.
- In the main contribution, point 3, it is better to write lower bound is \Omega rather than O.
Technical Quality: 3
Clarity: 4
Questions for Authors: - How large can S_D be? It seems that for any continuous distribution on [0,1]^n, S_D is already exponential in n, hence having such a large sample complexity makes this bound vacuous. Can the authors provide some examples where S_D and N_D are not very far apart?
- What is the size (num of parameters, width, depth) of the network defined in Theorem 7.3?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: The authors discuss the limitations of their results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The authors thank the reviewer for the valuable and insightful questions and hope that we have answered these questions satisfactorily.
Question 1. Proposition 3.8 - where is the proof? Also, it is not clear to me why the constructions in [54,48] are probabilistic.
We deduce Proposition 3.8 from the proofs of the theorems in these papers; since Proposition 3.8 follows directly from those proofs, a separate proof was not given. In the revised version of the paper we will include a proof sketch for Proposition 3.8. The randomness comes from a standard memorization technique: in the construction of a memorization network, we require a special vector to compress the data into one dimension, and this vector is not easy to compute directly. However, each randomly sampled vector has a non-zero (0.5) probability of being the required vector, and the randomness comes from this random selection. Similar arguments appear in the proof of Theorem 4.1, including lines 220-223 and 648-649 of the paper (and the proof above lines 648-649).
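The flavor of this retry-until-success argument can be sketched as follows (an illustrative toy version only, not the actual construction of [54, 48]; the function name and margin criterion are hypothetical): draw random directions until one compresses the $N$ points into pairwise well-separated scalars; since each draw succeeds with constant probability, few retries are needed in expectation.

```python
import numpy as np

def find_projection(X, margin, max_tries=100, rng=None):
    """Toy sketch: find a direction w such that the 1-D projections
    X @ w of all data points are pairwise more than `margin` apart."""
    rng = rng or np.random.default_rng(0)
    N, n = X.shape
    for _ in range(max_tries):
        w = rng.standard_normal(n)
        p = X @ w
        gaps = np.abs(p[:, None] - p[None, :])
        np.fill_diagonal(gaps, np.inf)   # ignore self-distances
        if gaps.min() > margin:
            return w
    raise RuntimeError("no suitable direction found")
```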
Question 2. There is something unclear about the proof of Theorem 5.3.
The $D_{tr}$ and $D$ used in our proof in Appendix E4 are the $D_{tr}$ and $D$ of Theorem 5.3: they can be any distribution $D$ and any dataset $D_{tr}$ sampled from it, and we will clarify this point. Our proof holds for any distribution $D$. The core of the proof is in lines 1060-1065, and the argument there applies to any distribution; Lemma E2 also holds for any distribution $D$. Therefore the whole proof is indeed for an arbitrary distribution. We will make this clear in the revised version of the paper.
Question 3. The comparison between interpolation and memorization learning is unclear. Specifically, in what cases do the results on interpolation learning already provide generalization bounds for memorization learning?
Generalization bounds for interpolation learning are only established in linear or approximately linear situations. Our sample complexity results hold for all memorization networks and hence are valid in all cases.
Question 4. Results and comparisons on implicit bias toward margin maximization of neural networks do not appear.
Thank you for pointing this out. There are many articles that analyze margin maximization or generalization under gradient descent. These works prove results under assumptions such as the NTK regime, Lipschitz conditions, or two-layer networks, which are very good. In comparison, our results cover all memorization algorithms without such assumptions, so we believe our work has certain advantages. We will add these comparisons in the revised version of the paper.
Question 5. The size of $N_D$ and $S_D$.
First, in most cases we have $N_D<S_D$, and $S_D$ is more susceptible to the influence of the separation bound. Second, in the worst case they are exponential in n, and this is inevitable, as shown in Corollary 6.4, where a lower bound $2^{O(n)}$ for $N_D$ is given; also, in line 1425, an upper bound $n^{O(n)}$ for $S_D$ is given. Third, for some ``good'' distributions, we believe that $N_D$ and $S_D$ are reasonably small, judging from the good practical performance of deep learning.
For some simple situations, we can estimate their values. We use linear data as a simple example to calculate $N_D$ and $S_D$.
If the distribution $D$ on $R^n$ is linearly separable and defined as follows: $x\in[0,1]^n$ has label $1$ if $\langle\mathbf{1},x\rangle>199n/200$, and $x\in[0,1]^n$ has label $-1$ if $\langle\mathbf{1},x\rangle<n/200$, where $\mathbf{1}$ is the vector whose entries are all 1.
Then $N_D=O(n)$ for any $c$, because $D$ is linearly separable.
And $S_D=2$: the distance between points with different labels is at least $0.49\sqrt{n}$; the distance between a point with label 1 and the point $\mathbf{1}$ is at most $\sqrt{n/50}$; and the distance between a point with label -1 and the origin is at most $\sqrt{n/50}$. Since $0.49/3.1>\sqrt{1/50}$, letting $S=\\{0,\mathbf{1}\\}$, by Definition 7.1 we know that $S$ is a nearby set of $D$, so $S_D=2$.
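As a quick numerical sanity check of this toy example (illustrative code only; the sampler is a hypothetical way of drawing points from the two regions, here for $n=200$), one can sample points from both label regions and verify the stated distance bounds:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 100

def coordinate_deficits(k):
    # vectors u in [0,1]^n with sum(u) < n/200 (the budget of the example)
    v = rng.random((k, n))
    scale = rng.random((k, 1)) * (n / 200) / v.sum(axis=1, keepdims=True)
    return v * scale

pos = 1.0 - coordinate_deficits(k)   # label +1: <1, x> > 199n/200
neg = coordinate_deficits(k)         # label -1: <1, x> <  n/200

# distance between differently labeled points is at least 0.49*sqrt(n)
d2 = ((pos[:, None, :] - neg[None, :, :]) ** 2).sum(-1)
print(d2.min() >= 0.49**2 * n)                   # True
# every point lies within sqrt(n/50) of its corner (all-ones or origin)
print(((1 - pos) ** 2).sum(1).max() <= n / 50)   # True
print((neg ** 2).sum(1).max() <= n / 50)         # True
```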
Question 6. What is the size of the network defined in Theorem 7.3?
The size of the network is affected by the data distribution. In the worst case, the network has depth 3, width N, and $O(N^2)$ parameters. As said in lines 103-107, to ensure generalization we abandoned many classic techniques, which resulted in an increased number of parameters. We will add this information in the revised version.
---
Rebuttal Comment 1.1:
Comment: Thank you for your answer, my questions have been answered and I will keep my score.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your nice insights! | Summary: This work studies the generalization properties of neural network memorization algorithms. Several results are proved: (1) Construction of a memorization network for a sampled dataset, with an optimal number of parameters. (2) There exists a constant such that for all datasets sampled from a distribution, there exists a memorization network with at most the constant number of parameters. (3) Sample complexity lower and upper bounds for memorization algorithms. (4) An efficient memorization algorithm with a generalization guarantee.
Strengths: Solid mathematical paper with several novel results in DL theory. Mostly clearly written.
Weaknesses: - The significance of the theoretical results are not very clear because no estimates of $N_\mathcal{D}$ and $S_\mathcal{D}$ are given.
- Writing can be improved, e.g.: “is also a challenge problem” line 408, “which is more challenging compare to linear model” line 44, “and shows that even if there is enough data” line 63, “In other word” line 87.
Technical Quality: 3
Clarity: 3
Questions for Authors: Can estimates of $N_\mathcal{D}$ and $S_\mathcal{D}$ be provided even for simple toy examples (e.g., linearly separable data)?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: For Reviewer QwyD:
The authors thank the reviewer for the valuable and insightful questions and hope that we have answered these questions satisfactorily. Also, thank you for pointing out the typos in our writing; we will correct them in future versions.
Question 1. No estimates of $N_D$ and $S_D$ are given. Can $N_D$ and $S_D$ be provided even for simple toy examples (e.g., linearly separable data)?
(1): The meaning of $N_D$ and $S_D$ in terms of generalization. $O(N^2_D)$ represents the minimum amount of data required for any memorization algorithm to achieve generalization, while $O(S_D)$ represents an upper bound on the amount of data required by efficient memorization algorithms to achieve generalization. We mainly point out the existence of these two values; estimating them is not the core of the paper and can be future work.
(2): How big are $N_D$ and $S_D$?
First, in most cases we have $N_D<S_D$, and $S_D$ is more susceptible to the influence of the separation bound. Second, in the worst case they are exponential in n, and this is inevitable, as shown in Corollary 6.4, where a lower bound $2^{O(n)}$ for $N_D$ is given; also, in line 1425, an upper bound $n^{O(n)}$ for $S_D$ is given. Third, for some ``good'' distributions, we believe that $N_D$ and $S_D$ are reasonably small, judging from the good practical performance of deep learning, as also shown in the following simple example.
(3): Can we calculate $N_D$ and $S_D$? Unfortunately, because these two values are closely related to the properties of the distribution $D$, calculating them for an arbitrary distribution is not easy, as we said in lines 405 to 406. But for some simple situations we can estimate their values. We use linear data as an example to calculate $N_D$ and $S_D$.
If the distribution $D$ on $R^n$ is linearly separable and defined as follows: $x\in[0,1]^n$ has label $1$ if $\langle\mathbf{1},x\rangle>199n/200$, and $x\in[0,1]^n$ has label $-1$ if $\langle\mathbf{1},x\rangle<n/200$, where $\mathbf{1}$ is the vector whose entries are all 1.
Then $N_D=O(n)$ because $D$ is linearly separable.
And $S_D=2$: the distance between points with different labels is at least $0.49\sqrt{n}$; the distance between a point with label 1 and the point $\mathbf{1}$ is at most $\sqrt{n/50}$; and the distance between a point with label -1 and the origin is at most $\sqrt{n/50}$. Since $0.49/3.1>\sqrt{1/50}$, letting $S=\\{0,\mathbf{1}\\}$, by Definition 7.1 we know that $S$ is a nearby set of $D$, so $S_D=2$.
Strengths: Overall, the paper is well-written, the results are novel and comprehensive, and it makes a clear contribution to the theoretical community.
Weaknesses: There are some comments, shown in the Questions below, but I recommend accepting this paper.
Technical Quality: 4
Clarity: 4
Questions for Authors: Major Comments:
1. I think the hypothesis class D(n,c) is simple to learn. I guess, for fixed (c, n), if the density on the support of X is bounded from below and above, then the minimax (worst D) error rate over D is the parametric rate? Hence this is a relatively easy class of functions to analyze, and there is almost no noise, which may weaken the results of this paper.
2. The existence result in Theorem 4.3 is nice. But it should be clear whether the memorization algorithm L is dependent on D, or at least c. It will be good if such an algorithm can obtain memorization agnostic to the knowledge of c.
3. The upper bound and lower bound do not exactly match in their current form in terms of epsilon. I guess the lower bound is not tight, because it does not rule out the possibility that a fixed number of samples can achieve exact recovery (eps=delta=0), which is relatively loose. Can it be improved?
Minor comments:
1. I think the introduction part can make a clear, or informal definition of what is generalization under the well-separated classification task given it lists all the theoretical results.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: See the Questions above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The authors thank the reviewer for the valuable and insightful questions and hope that we have answered these questions satisfactorily.
Question 1. Why consider the distribution D(n,c) defined in Definition 3.1?
D(n,c) is indeed a distribution with relatively good properties, as we explain this in Remark 3.2 of the paper, there are three main reasons and advantages to use such a distribution:
(1): Proposition 3.3 shows that there exists a distribution D not in D(n,c) such that no network is generalizable over D, so if we consider general distributions, we cannot have a general generalization theory at all.
(2): D(n,c) is suitable for most real-world distributions, as in image classification.
(3): D(n,c) is abstracted from previous related work. Achieving memorization with networks with a sub-linear number of parameters requires the data to have a positive separation bound, as in [48] and [54]. So when studying distributions in $D(n,c)$, many related techniques and classic results, such as those of [48] and [54], can be used for memorization; on the other hand, we need to point out that the existing results cannot directly yield our theorems.
For the relationship between the parametric rate and the error rate you mentioned: if we knew which regions the network classifies correctly, we could indeed estimate the error rate using the density. However, based on our analysis we only know that the network has memorized a dataset, and we cannot directly determine which regions the network predicts correctly.
Research on more complex distributions or noisy distributions can be a future research direction.
Question 2. In Theorem 4.3, whether the memorization algorithm L is dependent on D, or at least c. It will be good if such an algorithm can obtain memorization agnostic to the knowledge of c.
In the proof, such an algorithm depends on $c$. But it is easy to remove this dependence: we add one step to the memorization algorithm that first computes the shortest distance $c'$ between two samples with different labels in the input dataset; clearly $c'\ge c$, and we then use $c'$ in place of $c$ in the proof of Theorem 4.3. After this change, the algorithm depends only on the input dataset, and for any distribution there is an upper bound on the number of parameters of the output network.
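The extra preprocessing step described above fits in a few lines; here is an illustrative sketch (hypothetical helper name, assuming labels in {+1, -1}):

```python
import numpy as np

def separation_bound(X, y):
    """Smallest distance between two samples with different labels:
    the c' that replaces c in the algorithm (sketch of the extra step)."""
    pos, neg = X[y == 1], X[y == -1]
    d = np.linalg.norm(pos[:, None, :] - neg[None, :, :], axis=-1)
    return d.min()
```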
This is a good suggestion, and we will make modifications in future versions, thank you.
Question 3. The upper bound and lower bound does not exactly match in its current form in terms of epsilon.
You are right that this lower bound is only tight in terms of $N^2_D$; there is still room to improve the factor $(1-2\epsilon-\delta)$, but we have not achieved this yet.
Question 4. I think the introduction part could give a clear, or at least informal, definition of generalization under the well-separated classification task.
We will do this in the revised version.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification, I'll keep the score.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your nice review! | Summary: - The authors study the memorization capacity of ReLU networks and its generalization theory for i.i.d. datasets over a compact domain, under a binary classification setting.
- They propose two different memorization capacity upper bound of ReLU networks. One bound depends on the size $N$ of the training dataset ($O(\sqrt{N}\ln{N})$) and the other bound is independent of $N$. Notably, the second bound implies that even with an arbitrarily large dataset sampled i.i.d., a constant number $N_{\mathcal{D}}$ of the parameters is enough to memorize the dataset, where $N_{\mathcal{D}}$ depends only on the true data distribution $\mathcal{D}$.
- They show that memorization is not enough for generalization with concrete examples. First, a generalizable memorization network must have a width of at least $n$ (the data dimension). Second, for almost every data distribution, there exists a non-generalizable memorization network with $O(\sqrt{N}\ln{N})$ parameters ($N$: size of training dataset).
- They study the sample complexity for $(\\epsilon,\\delta)$-PAC-generalizable memorization networks. First, they provide a lower bound of sample complexity for general memorization networks. Second, they provide an upper bound of sample complexity for particular memorization networks with at most $N_{\mathcal{D}}$ parameters. Both bounds are tight in terms of $N_{\mathcal{D}}$ up to logarithmic factors.
- Lastly, they study the algorithm constructing a PAC-generalizable memorization network in poly$(N)$ time.
Strengths: - S1. As far as I know, this is the first work studying the generalizability of the memorization ReLU network, under the binary classification task.
- S2. In terms of ReLU networks, they discover two novel complexity terms $N_{\mathcal{D}}$ (smallest number of parameters to memorize i.i.d. dataset (of any size) from $\mathcal{D}$) and $S_{\mathcal{D}}$ (”efficient memorization sample complexity”; minimum size of the nearby set of $\mathcal{D}$) which depend only on the true data distribution.
- S3. The paper is well-written and easy to follow.
Weaknesses: - W1. Compactness of the input domain
- Line 47 of the introduction says the paper will consider the open domain $\mathbb{R}^n$ of input. However, only the compact input domain $[0,1]^n$ is considered throughout the paper. It is unclear whether the results on the compact domain can be easily extended to the $\mathbb{R}^n$ case. In the approximation literature, it is important whether the input domain is compact or not. Thus, the authors should mention whether the extension to a larger domain $\mathbb{R}^n$ is easy. Or, it would be better to replace $\mathbb{R}^n$ with a compact input domain in the introduction.
- W2. Randomness in Theorem 4.1
- Theorem 4.1 contains a probabilistic argument. However, it is unclear where the randomness comes from, with the theorem statement alone.
- W3. Efficiency of memorization algorithm in Theorem 4.3
- It would be meaningful to mention whether the memorization algorithm in Theorem 4.3 is efficient.
- W4. Parameter complexity of efficient memorization algorithm in Theorem 7.3
- It would be meaningful to clarify how many parameters the network constructed by the efficient memorization algorithm in Theorem 7.3 should have.
- W6. Minor comments on notations/typos (There are so many typos…)
- Title: “Generalizablity” → “Generalizability”
- Line 60: “…, (2) of Theorem 1.1…”
- Please use $\overline{O}$ for upper bounds, $\overline{\Omega}$ for lower bounds, and $\overline{\Theta}$ for tight bounds (up to logarithmic factors).
- Section 3.1: “$X_0 = x$” and “$n_0 = n$” must be explicitly mentioned.
- As far as i know, $\mathbb{R}\_{+}$ and $\mathbb{Z}\_{+}$ usually include zero. To exclude zero, $\mathbb{R}\_{++}$ (or $\mathbb{R}\_{>0}$) and $\mathbb{Z}\_{++}$ (or $\mathbb{Z}\_{>0}$) are better to use.
- Line 250: “generaizable” → “generalizable”
- Line 274: “The density of distribution $\mathcal{D}$”
- Line 279: “$A_{\mathcal{D}} (\mathcal{F}) \le 0.51$”
- Line 291: “generazable” → “generalizable”
- Line 309: “generalizability,.” → “generalizability.”
- Line 321: “$N_D$” → “$N_{\mathcal{D}}$” (calligraphic D)
- Line 325: “bpund” → “bound”
- Line 337: “$\mathcal{D}_{\rm tr} \sim \mathcal{D}^N$
- Lines 340 and 472: “Proof Idea.” is not in a bold font.
- Line 367: What does it mean by “a subset of $(x,y)\sim \mathcal{D}$”?
- Line 382: Is “$S_{\mathcal{D}} \ge N_{\mathcal{D}}^2$” really true? I guess it should be “$S_{\mathcal{D}} \gtrsim N_{\mathcal{D}}^2$”, ignoring logarithmic factors.
- Line 619: “Uisng” → “Using”
- Step Three in the proof of Theorem 4.3 (Appendix C): Please make clear which theorem of [61] you refer to.
- Line 1052: “Tree” → “Three”
- Section F: a very non-standard notation is used here: “$C^m_n$” means a binomial coefficient “$\binom{n}{m}$”! This makes the proof really hard to read. Please consider changing the notation.
- Part 2 in the proof of Theorem 6.1: I guess $12v_1 \le N_{\mathcal{D}} \le 4v_1 \sqrt{k} \ln(\sqrt{k})$ is required for the later use, rather than $3 \le N_{\mathcal{D}} \le 4v_1 \sqrt{k} \ln(\sqrt{k})$. This can be fixed with minor changes.
- Lines 1148-1149: Make clear that $S(\mathcal{D}) \subset [k]^q$.
- Equation (4) and the rest of the proof of Theorem 6.1: I think $q^k$ should be replaced with $k^q$. See lines 1166, 1167, 1195, and 1196.
- Line 1189: “Because” → “This is because”
- Lines 1189-1192: several “$S_{ss}(\mathcal{D},Z)$”s are written in wrong symbols.
- Line 1213: “Here, $E\_{\mathcal{D}}(\mathcal{F}) = \mathbb{E}\_{(x,y)\sim \mathcal{D}} [I(\operatorname{Sgn}(\mathcal{F}(x))=y)]$, $E\_{\mathcal{D}\_{\rm tr}}(\mathcal{F}) = \frac{1}{N} \sum_{(x,y) \in \mathcal{D}\_{\rm tr}} I(\operatorname{Sgn}(\mathcal{F}(x))=y)$ ”.
- Line 1224: “$n$” → “$m$”
- Line 1249: “similarto” → “similar to”
- and many more…
Technical Quality: 2
Clarity: 3
Questions for Authors: - Q1. Architectural constraint: ReLU network
- A clear limitation of this work is that it is limited to a fully-connected ReLU network.
- Can you imagine how the change in activation function will affect the memorization capacity, $N_{\mathcal{D}}$, and $S_{\mathcal{D}}$?
- Q2. Sample complexity bound gap of memorization algorithms in Section 6
- Currently, there is an $\epsilon^{-2}$ gap between the upper and lower sample complexity for the generalization of memorization networks. I guess this is quite huge. Do you think this gap can be reduced?
- Q3. Proof of Corollary 6.4
- The proof is too short. How did you get to the statement of Corollary 6.4 by plugging $k=2^{[\tfrac{n}{\lceil c^2 \rceil}]}$? It seems unclear to me.
- Q4. Algorithm in Theorem 7.3 as an initialization
- Empirically, what if you run SGD (or other standard optimizers) with logistic loss starting from the network parameters constructed by the algorithm of Theorem 7.3? Will it be better than the result of SGD from random initialization?
- Q5. Generalizability of memorization v.s. non-memorization networks
- I know this question is somewhat out of topic, but can you discuss the generalizability of memorization and non-memorization networks? Can a non-memorization network generalize better than some memorization networks? If so, when can it happen?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors have discussed the limitation in Section 8 (Conclusion). The major limitation is that it is unclear how to compute the numerical complexities $N_{\mathcal{D}}$, $S_{\mathcal{D}}$.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable and insightful questions and hope that we have answered them satisfactorily. Thank you also for pointing out the typos in our writing; we will correct them in future versions.
1. Question (W1): Compactness of the input domain.
The current proofs are for a compact input domain. We will change $\mathbb{R}^n$ to $[0,1]^n$ in the Introduction. After re-examining the proofs, we find that only Theorems 5.1, 5.3, and 6.1 still hold for distributions on $\mathbb{R}^n$. On the other hand, extending to the larger domain $\mathbb{R}^n$ is not a good idea, because networks with finitely many parameters cannot approximate many distributions over $\mathbb{R}^n$, so it is impossible to establish guaranteed generalization theorems on such an unbounded domain; in particular, Theorems 4.1, 4.3, 6.5, and 7.3 fail on $\mathbb{R}^n$. Thus, to study distributions on $\mathbb{R}^n$, additional constraints on the distributions are needed.
2. Question (W2): Where does the randomness in Theorem 4.1 come from?
The randomness comes from a technique used in the memorization construction: we require a special vector to compress the data into one dimension, and this vector is not easy to compute directly. However, each time we select a vector at random, there is a non-zero probability (0.5) of obtaining a suitable vector, and the randomness comes from this random selection. We describe this in lines 220-223 and 648-649 of the paper (and in the proof above lines 648-649). We will make this clearer in the revised version of the paper.
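As a loose illustration of this "resample until a suitable vector appears" idea (the success criterion below is a stand-in, not the paper's exact condition): a randomly drawn direction almost surely gives distinct one-dimensional projections of finitely many distinct points, so a retry loop terminates quickly.

```python
import numpy as np

rng = np.random.default_rng(0)

def distinct_projection(X, rng, max_tries=10):
    """Resample a random direction until all 1-D projections are distinct.

    Illustrative sketch only: the criterion 'all projections distinct'
    is a hypothetical stand-in for the paper's condition on the
    compressing vector.
    """
    for _ in range(max_tries):
        v = rng.standard_normal(X.shape[1])
        proj = X @ v
        if len(np.unique(proj)) == len(proj):
            return v
    raise RuntimeError("no suitable direction found")

X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
v = distinct_projection(X, rng)
proj = X @ v
assert len(np.unique(proj)) == len(proj)
```

With a continuous (Gaussian) direction, each try succeeds with probability 1 for distinct points, matching the spirit of the constant success probability in the rebuttal.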
3. Question (W3): Efficiency of memorization algorithm in Theorem 4.3?
Theorem 4.3 is an existence result whose main purpose is to establish the existence of $N_D$. The efficiency of memorization algorithms is addressed in Theorems 6.5 and 7.3. If $para(L(D_{tr}))\le N_D$, then (2) of Theorem 6.5 shows that such memorization algorithms are not efficient. If $N>O(S_D)$, then Theorem 7.3 shows that an efficient memorization algorithm exists.
4. Question (W4): Parameter complexity of efficient memorization algorithm in Theorem 7.3.
The number of parameters is affected by the data distribution; in the worst case, the network has $O(N^2)$ parameters. As stated in lines 103-107, to ensure generalization we abandoned many classic techniques, which increased the number of parameters. We will add this information in the revised version.
5. Question (Q1): Architectural constraint: ReLU network
We consider the fully connected ReLU network because this type of network is classic and has been widely used in related research. For networks with other structures, such as CNNs, or with other activation functions, the conclusions in this paper may differ. The main results are based on the expressive power of the fully connected ReLU network (e.g., its VC dimension and its ability to fit discrete points), so after changing the network structure, the ideas in the paper can still be used to re-establish $N_D$, $S_D$, and so on. This is certainly an important direction for future research, and we will mention it in the revised version of the paper.
6. Question (Q2): There is an $\epsilon^{-2}$ gap between the upper and lower bounds.
You are right. We note that the lower and upper bounds have the same order of magnitude in terms of the number of samples $N^2_D$. We believe there is still room to improve the factor $(1-2\epsilon-\delta)$ in the lower sample complexity, but we have not achieved this yet.
7. Question (Q3): How to get the proof of Corollary 6.4?
According to Lemma E.2, for any $n,c$, we can find $2^{n/[c^2]}$ points with pairwise distance $c$ in $[0,1]^n$. We then take suitable $n,c$ and $k=2^{n/[c^2]}$ to make the conditions in lines 1131-1133 of the proof of Theorem 6.1 valid. Finally, following parts 2-4 of the proof of Theorem 6.1, and using lines 1159-1162 together with the beginning of part 4, we can prove Corollary 6.4. We will add more details about the proof in the future.
8. Question (Q4): Algorithm in Theorem 7.3 as an initialization.
At present we can only provide some simple experiments. With a small sample size, using our method as an initialization for training may be helpful; with a large sample size, training from random initialization already reaches a very high level, so using our method does not bring significant improvement.
We consider binary classification on the CIFAR-10 dataset via 5 pairwise problems: samples with labels $2i$ and $2i+1$, for $i=0,1,2,3,4$. We randomly select 1000 samples from each category as the training set. We compare four methods: training a CNN (VGG-16), training a DNN, our method alone, and training with our method as initialization. The accuracies are given below.
|Pair of Labels|CNN|DNN|Ours|Ours+Training|
|---|---|---|---|---|
|0,1|0.95|0.85|0.76|0.90|
|2,3|0.84|0.73|0.59|0.75|
|4,5|0.87|0.72|0.65|0.77|
|6,7|0.98|0.84|0.69|0.84|
|8,9|0.92|0.82|0.68|0.84|
It is easy to see that:
(1): Our method alone cannot outperform standard training.
(2): When the dataset is relatively small, using our method as an initialization can help improve accuracy.
(3): Since the paper focuses on fully connected network structures, its performance is inferior to that of CNNs on image classification tasks.
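The pairwise binary-classification setup described above can be sketched as follows; `make_pair_dataset` is a hypothetical helper, and the toy arrays stand in for the CIFAR-10 images and integer labels:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_pair_dataset(X, labels, i, per_class=1000, rng=rng):
    """Build the binary task for pair i: label 2i -> +1, label 2i+1 -> -1.

    Mirrors the experimental setup in the rebuttal: sample per_class
    points from each of the two classes and relabel them as +/-1.
    """
    out_X, out_y = [], []
    for cls, sign in ((2 * i, 1), (2 * i + 1, -1)):
        idx = np.flatnonzero(labels == cls)
        take = rng.choice(idx, size=min(per_class, idx.size), replace=False)
        out_X.append(X[take])
        out_y.append(np.full(take.size, sign))
    return np.concatenate(out_X), np.concatenate(out_y)

# Toy stand-in data: 10 classes, 5 samples each, 2 features.
labels = np.repeat(np.arange(10), 5)
X = rng.standard_normal((labels.size, 2))
Xb, yb = make_pair_dataset(X, labels, i=0, per_class=5)
print(Xb.shape, yb.shape)  # shapes of the binary pair dataset
```

Running this for $i=0,\dots,4$ with `per_class=1000` on the real CIFAR-10 labels reproduces the five binary tasks in the table.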
9. Question (Q5): Generalizability of memorization v.s. non-memorization networks.
We believe a non-memorization network can generalize better than some memorization networks mainly in the following situation: the dataset contains significantly bad data, such as outliers. A weakness of memorization algorithms is that they must fit all the data, so the network learns spurious features from outliers, which harms generalization. In this situation, a non-memorization algorithm can, in a certain sense, ignore the bad data and thus work better.
---
Rebuttal Comment 1.1:
Title: Remaining Questions
Comment: Thank you for your kind reply. I am happy with most of the answered questions/comments.
I have to take some time to digest the responses for W1, W3, W4, and Q3. I will come back again after carefully checking these responses.
Unfortunately, the two questions in my original review are not answered yet, mainly inside the "minor typo/comment" part. Let me bring them here again:
1. Line 367: What does it mean by “a subset of $(x,y)\sim \mathcal{D}$”?
2. Line 382: Is “$S_{\mathcal{D}} \ge N_{\mathcal{D}}^2$” really true? I guess it should be “$S_{\mathcal{D}} \gtrsim N_{\mathcal{D}}^2$”, ignoring logarithmic factors.
I am looking forward to further responses to these unanswered questions. Also, if there are missing details for the response above due to the space limit, please leave them as a comment here, then I'll happily read them all.
---
Reply to Comment 1.1.1:
Comment: We are glad you replied. Taking this opportunity, we would like to provide some more detailed responses.
For Question (W3): The proof of Theorem 4.3 uses the method of [61] and gives an effective algorithm with a large $N'_D$. However, obtaining an algorithm with a smaller $N'_D$ is quite difficult, and the problem is NP-complete when $N'_D=N_D$.
For Question (Q4): Due to formatting issues, the table format is not easy to read, we show it at here:
|Pair of Labels|CNN|DNN|Ours|Ours+Training|
|---|---|---|---|---|
|0,1|0.95|0.85|0.76|0.90|
|2,3|0.84|0.73|0.59|0.75|
|4,5|0.87|0.72|0.65|0.77|
|6,7|0.98|0.84|0.69|0.84|
|8,9|0.92|0.82|0.68|0.84|
For line 367: We mean a subset composed of samples from the support of the distribution. Strictly speaking, if the distribution $D$ is defined on $A\subset [0,1]^n\times\{-1,1\}$, then it means a subset composed of samples in $A$. For example, if $D$ is defined by $P_{(x,y)\sim D}((x,y)=(x_0,-1))=P_{(x,y)\sim D}((x,y)=(x_1,1))=0.5$, then $A=\\{(x_0,-1),(x_1,1)\\}$.
For line 382: You are right; we mainly wanted to express a possible relationship between $N_D$ and $S_D$. Our expression was not rigorous, and your recommendation is correct.